The library is written in C++, where all the matrix operations are performed using Eigen. One of the advantages of using Eigen is that it can be used on any hardware platform (even microcontrollers), as it has no library dependencies. This allowed us to easily test the code on our laptops and then use it directly in the Android application via the Android NDK toolset. As usual the code is available on Github: FaceRecognitionLib.
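To give a flavour of what this looks like, below is a small hypothetical Eigen sketch (not code taken from FaceRecognitionLib) of the kind of matrix operations involved, here centering a data matrix of image columns and taking its SVD:

```cpp
// Hypothetical sketch, not code from FaceRecognitionLib: the kind of
// dependency-free matrix work Eigen makes easy on any platform.
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Each column is one (flattened) image vector; 4 "pixels", 3 images here.
    Eigen::MatrixXf images(4, 3);
    images << 1, 2, 3,
              4, 5, 6,
              7, 8, 9,
              1, 1, 1;

    // Subtract the mean image from every column (a common preprocessing step).
    Eigen::VectorXf mean = images.rowwise().mean();
    Eigen::MatrixXf centered = images.colwise() - mean;

    // A thin SVD of the centred data, e.g. as a building block for PCA.
    Eigen::JacobiSVD<Eigen::MatrixXf> svd(centered, Eigen::ComputeThinU);
    std::cout << "Singular values: " << svd.singularValues().transpose() << std::endl;
    return 0;
}
```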
A few screenshots of the Android application can be seen below:
The application is available on Google Play and the source code is provided at the following link.
If you have any questions leave them below or open up an issue on Github.
As part of the Robot Vision course at the Institute of Electronic Systems at Aalborg University, we had to develop a vision-based LEGO DUPLO stacker using a Universal Robots UR5 robot, a webcam and MATLAB. Given the robot cell shown below, the task was to develop a system capable of stacking randomly placed LEGO DUPLO bricks in a certain order.
A detailed description of the project and the development of both the image processing and robot control software is contained within the project report available here: Universal Robots vision-based LEGO stacker.pdf
Conceptual robot cell layout
Using several image processing techniques, including color segmentation, thresholding, BLOB analysis and feature extraction, the system is capable of extracting the color, location and orientation of the DUPLO bricks currently present in the camera image. This allows a Universal Robots UR5 robot arm to pick up the bricks and stack them in a color-ordered scheme of red, green, blue, yellow and orange.
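The project itself was implemented in MATLAB, but as a rough, hypothetical illustration of that kind of pipeline, an OpenCV/C++ sketch that segments one colour and extracts blob centroids and orientations could look like this (the HSV thresholds and minimum blob size are made-up values):

```cpp
// Hypothetical OpenCV sketch of the processing chain described above
// (the actual project was done in MATLAB): colour segmentation by
// thresholding in HSV, BLOB extraction and centroid/orientation from moments.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    cv::Mat bgr = cv::imread("bricks.png");
    if (bgr.empty()) return 1;

    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // Example HSV range for red-ish bricks; the thresholds are assumptions.
    cv::inRange(hsv, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), mask);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        cv::Moments m = cv::moments(c);
        if (m.m00 < 100) continue;                      // skip tiny blobs (noise)
        double cx = m.m10 / m.m00, cy = m.m01 / m.m00;  // centroid
        // Orientation from the central second-order moments.
        double angle = 0.5 * std::atan2(2.0 * m.mu11, m.mu20 - m.mu02);
        std::printf("Brick at (%.1f, %.1f), orientation %.1f deg\n",
                    cx, cy, angle * 180.0 / CV_PI);
    }
    return 0;
}
```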
The webcam is mounted on the robot arm to avoid having to fix it anywhere else and to tie it to the tool position. This allows easy calibration between the robot tool frame and the camera frame by using the free-drive mode of the Universal Robots arms, which lets the user grab the robot arm and move it manually to a calibration spot.
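As a hypothetical illustration (not project code) of why this is convenient: once the fixed tool-to-camera transform has been calibrated, a brick detected in the camera frame can be chained straight into the robot base frame, e.g. like this:

```cpp
// Hypothetical Eigen sketch (not project code) of chaining the transforms:
// with the camera rigidly mounted on the tool, a brick seen in the camera
// frame maps straight into the robot base frame. All values are made up.
#include <Eigen/Geometry>
#include <iostream>

int main() {
    // Pose of the tool in the base frame, as reported by the UR5 controller.
    Eigen::Isometry3d base_T_tool = Eigen::Isometry3d::Identity();
    base_T_tool.translate(Eigen::Vector3d(0.4, 0.1, 0.3));

    // Fixed camera offset relative to the tool, found during the calibration step.
    Eigen::Isometry3d tool_T_cam = Eigen::Isometry3d::Identity();
    tool_T_cam.translate(Eigen::Vector3d(0.0, 0.05, 0.02));

    // A brick detected 0.25 m in front of the camera.
    Eigen::Vector3d brick_in_cam(0.0, 0.0, 0.25);

    // Chain the transforms to get the pick position in the robot base frame.
    Eigen::Vector3d brick_in_base = base_T_tool * tool_T_cam * brick_in_cam;
    std::cout << "Brick in base frame: " << brick_in_base.transpose() << std::endl;
    return 0;
}
```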
A video demonstration of the project including the calibration procedures is shown below.
The purpose of this project was to come up with an interactive demonstration for the Pygmalion Festival 2016 at UIUC. The end result was a demo where an Android device was handed to the visitors, and each visitor could then draw any continuous path on it. The x,y-coordinates would then be uploaded to the cloud and a trajectory based on Bézier curves would be generated using a Python script. Finally ROS was used to control a small drone. Camera software was then used to highlight the brightest light in the scene, in this case an LED on the drone. This resulted in the path being visualised in 3D space.
An overview of the project can be seen in the figure below. The Android application is used as a simple user interface. The drawn path is then uploaded to Dropbox and a trajectory is generated using a Python script.
Project overview
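The trajectory generation itself was done by the Python script, but as a rough sketch of the underlying maths, evaluating a cubic Bézier curve from four control points looks like this (hypothetical C++, with made-up control points):

```cpp
// Hypothetical C++ sketch of cubic Bézier evaluation; the project generated
// the trajectory with a Python script, but the underlying maths is the same.
#include <cstdio>

struct Point { double x, y; };

// Evaluate a cubic Bézier curve B(t) for control points p0..p3, t in [0, 1].
Point bezier(Point p0, Point p1, Point p2, Point p3, double t) {
    double u = 1.0 - t;
    double b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}

int main() {
    // Made-up control points standing in for four of the drawn x,y samples.
    Point p0{0, 0}, p1{1, 2}, p2{3, 2}, p3{4, 0};
    for (int i = 0; i <= 10; ++i) {
        Point p = bezier(p0, p1, p2, p3, i / 10.0);
        std::printf("t=%.1f -> (%.2f, %.2f)\n", i / 10.0, p.x, p.y);
    }
    return 0;
}
```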
Finally the drone flies the trajectory. A short video of the project can be seen below:
In the 4th semester of my bachelor at Aalborg University, my project partner and I became part of a new research project, UAWorld ("drones move indoors with Danish technology"). The project aims to develop a new infrastructure and a set of drones capable of being used in indoor industrial environments with dynamically changing obstacles (and layout) and human beings likely to walk around. The drones within the project are intended to carry assembly-line goods around an assembly hall and into a warehouse, where they will be autonomously offloaded.
UAWorld use case
The main research group within the project had already taken several decisions regarding the type of drone, which indoor positioning system and which wireless communication link to use. But since the drone depends on these systems (positioning and wireless link) to reliably navigate a mission-critical environment, making sure that it never drops the goods or crashes into a human being, even in emergency situations, is just as important a task as making the quadcopter navigate safely.
For download links to the report and source code, please scroll to the bottom of the post. Further videos of the project during development can also be found at the bottom of the post. Read more…
Some time ago I had a course dealing with image analysis, e.g. image segmentation, moments, colour detection, object recognition etc. As part of the course everyone had to make a project that showcased the theory we had been learning throughout the course. We were allowed to use OpenCV as the backbone for accessing the camera etc., but not allowed to use any of the built-in filters. Instead the goal was to implement the different algorithms ourselves.
One day one of my friends was playing the smartphone game ZomBuster. A screenshot of the gameplay can be seen below:
ZomBuster gameplay
The goal of the game is to tap the lane with the zombie in it, in order to kill it. As the zombies are green and the humans are blue, I thought it would be a fun challenge to build a robot for the course that could play the game autonomously.
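The vision part essentially boils down to the following hypothetical sketch (the lane count and the "green" thresholds are assumptions, and no OpenCV filters are used, in keeping with the course rules): split the frame into lanes, count green pixels per lane and tap the lane with the most.

```cpp
// Hypothetical sketch of the core idea: split the frame into lanes and find
// the lane with the most "green" (zombie) pixels, without using any OpenCV
// filters, in keeping with the course rules. Thresholds and lane count are assumptions.
#include <cstdint>
#include <cstdio>
#include <vector>

int zombieLane(const std::vector<uint8_t>& rgb, int width, int height, int lanes = 4) {
    std::vector<int> greenCount(lanes, 0);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            const uint8_t* px = &rgb[3 * (y * width + x)];
            uint8_t r = px[0], g = px[1], b = px[2];
            if (g > 100 && g > r + 40 && g > b + 40)   // crude "green" test
                greenCount[x * lanes / width]++;
        }
    }
    int best = 0;
    for (int i = 1; i < lanes; i++)
        if (greenCount[i] > greenCount[best]) best = i;
    return best;  // index of the lane to tap
}

int main() {
    // A tiny dummy frame: 8x2 pixels, all black except one green pixel in lane 2.
    std::vector<uint8_t> frame(8 * 2 * 3, 0);
    frame[3 * 5 + 1] = 200;  // green channel of pixel (x=5, y=0) -> lane 5*4/8 = 2
    std::printf("Tap lane %d\n", zombieLane(frame, 8, 2));
    return 0;
}
```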
This also allowed me to use the 3D printer I had just bought at the time. For that reason I created a 3D model with all the needed components:
As a part of my electronic engineering degree I have decided to look into the world of Software Defined Radios, a complicated but very powerful technology.
A Software Defined Radio, SDR for short, is a software-based radio platform, making it possible to program the RF transmission schemes and update them on the fly if necessary, a bit similar to what we in the digital world know as FPGAs. This allows end products to redefine their radio needs, for instance when sending a satellite into orbit, where it would be impossible to update the RF hardware platform to support other radio protocols and schemes.
USRP N200 module
To get familiar with SDRs I decided to work with a basic USRP N200 module, which is supported by LabVIEW and other tools, e.g. GNU Radio, and to write a detailed report about my progress and discoveries (see the bottom of the post for a link to the report).
The N200 module is controlled over an Ethernet interface, which is also used to exchange (transmit and receive) the so-called IQ samples after they have been converted by the analog RF frontend.
In the video below I demonstrate the use of a Software Defined Radio setup with two USRP N200 modules programmed in LabVIEW with an AM modulation and demodulation scheme.
The modules are programmed and tested through LabVIEW, where a graphical interface allows me to transmit a single-tone signal or an audio file from one SDR unit to another.
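Conceptually, the host side only has to hand the SDR a stream of baseband IQ samples; the RF frontend takes care of the upconversion. As a hypothetical C++ sketch (the project used LabVIEW, and the sample rate, tone and modulation depth below are made-up values), AM at baseband looks like this:

```cpp
// Hypothetical sketch of AM at baseband, the way an SDR host prepares IQ
// samples (the project itself used LabVIEW); the USRP's analog RF frontend
// performs the actual upconversion to the carrier frequency.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const double PI = 3.14159265358979323846;
    const double fs = 250e3;       // assumed IQ sample rate
    const double fTone = 1e3;      // 1 kHz message tone
    const double modIndex = 0.8;   // modulation depth

    std::vector<std::complex<float>> iq(1000);
    for (std::size_t n = 0; n < iq.size(); ++n) {
        double msg = std::sin(2.0 * PI * fTone * n / fs);
        // AM at baseband: the envelope goes on I, Q stays zero.
        iq[n] = std::complex<float>(static_cast<float>(1.0 + modIndex * msg), 0.0f);
    }

    // Receiver side: envelope detection recovers the message (minus the DC offset).
    for (std::size_t n = 0; n < 5; ++n)
        std::printf("sample %zu: envelope %.3f\n", n, std::abs(iq[n]));
    return 0;
}
```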
Currently the flight controller supports several different modes, including acro/rate mode, self-level mode, heading hold and altitude hold. Below is a series of videos demonstrating the different modes:
I would really recommend anyone who is interested in this sort of thing to read through it for a deeper understanding of the fundamental theory and of how it is implemented on a flight controller in practice.
It consists of three parts. The first part presents a theoretical model and the equations used to estimate the attitude and altitude of the quadcopter. The second part describes how the system is implemented on the microcontroller and lists the hardware used for the project.
The final part measures the performance of the flight controller by logging data in real time. This data is then compared to results from the theoretical model simulated in Simulink.
Flight modes
In total there are four different flight modes supported by the flight controller. The first one is acro/rate mode, which only uses the gyroscope to stabilise the quadcopter. This mode is mainly used by advanced pilots and for acrobatic manoeuvres. In this mode the aileron and elevator stick inputs indicate the desired rotation rate of the quadcopter. Thus, if the user wants the quadcopter to rotate quickly clockwise about its roll axis, the aileron input can be pushed all the way to the right.
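As a hypothetical sketch (not the actual flight controller code, and with made-up gains and scaling), the rate loop boils down to a PID controller tracking the stick's desired rotation rate using only the gyro measurement:

```cpp
// Hypothetical sketch of the rate (acro) loop described above: the stick sets
// a desired rotation rate and a PID controller tracks it using only the gyro.
// Gains, scaling and names are made up; this is not the actual controller code.
struct RatePid {
    float kp, ki, kd;
    float integral, prevError;

    // setpoint and measurement in deg/s, dt in seconds; returns a motor correction.
    float update(float setpointRate, float gyroRate, float dt) {
        float error = setpointRate - gyroRate;
        integral += error * dt;
        float derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

int main() {
    RatePid rollPid{1.2f, 0.05f, 0.01f, 0.0f, 0.0f};
    float stickRate = 200.0f;    // full right aileron mapped to e.g. +200 deg/s (assumed)
    float gyroRollRate = 150.0f; // current roll rate measured by the gyroscope
    float correction = rollPid.update(stickRate, gyroRollRate, 0.0025f);
    (void)correction;            // would be mixed into the motor outputs with the throttle
    return 0;
}
```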
In this blog post I will describe an IoT (Internet of Things) vending machine that I built quite some time ago with a friend of mine, Sigurd Jervelund Hansen.
Sigurd's dorm got hold of an old vending machine free of charge, as it did not work. We quickly decided that we wanted to get it working and give it an overhaul as well. In the end we enabled it to take both RFID/NFC cards and coins and to post funny Twitter updates about it.
The video below gives a short overview of how it works.
As mentioned, we reused some shift registers, relays and voltage regulators on the original mainboard. One Arduino Pro Mini is connected to the mainboard and takes care of reading and lighting up the buttons (a button lights up if the relevant slot is not empty), controlling the 7-segment LED display, reading the output from the coin validator and returning money if the user requests it by pressing a dedicated button.
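As a hypothetical, Arduino-style illustration (pin numbers and the inventory check are assumptions, not the actual firmware), driving the button lights through one of the shift registers could look roughly like this:

```cpp
// Hypothetical Arduino-style sketch of lighting the buttons through a shift
// register, similar in spirit to what the Pro Mini on the mainboard does.
// Pin numbers and the slot-status source are assumptions.
#include <Arduino.h>

const int dataPin = 2, clockPin = 3, latchPin = 4;

bool slotNotEmpty(int slot) {
    // Placeholder: in the real machine this would come from the slot sensors/inventory.
    return slot % 2 == 0;
}

void setup() {
    pinMode(dataPin, OUTPUT);
    pinMode(clockPin, OUTPUT);
    pinMode(latchPin, OUTPUT);
}

void loop() {
    byte lights = 0;
    for (int slot = 0; slot < 8; slot++) {
        if (slotNotEmpty(slot))          // light the button only if the slot is not empty
            lights |= (1 << slot);
    }
    digitalWrite(latchPin, LOW);
    shiftOut(dataPin, clockPin, MSBFIRST, lights);
    digitalWrite(latchPin, HIGH);
    delay(50);
}
```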
As some of you might know, I have been studying at San Francisco State University for the last semester. For that reason I have not done as much development as I usually do, due to all my equipment being back in Denmark and also because I prioritised being social over just sitting behind my desk coding all night 😉
Anyway, I did not fully stop working. I actually started working on my own flight controller, written from scratch, in one of my courses. Below is the result so far:
To make our robots even more autonomous we would like to investigate the world of laser range finding using LIDAR technology. Unfortunately, for users who want to try out LIDAR it is a very expensive technology to get your hands on.
Throughout the years, though, vacuum cleaner robots have evolved a lot, both in the algorithms getting better and in the use of more advanced sensors. Recently the Neato XV-11 All Floor Robotic Vacuum System included a short-range (0.2 m to 6 m) LIDAR with 1-degree precision and a resolution of a couple of centimetres. As this vacuum cleaner only costs around $400, it is a bargain way to get hold of a LIDAR, if only you could disassemble the robot and use just the LIDAR.