
Universal Robots vision-based LEGO DUPLO stacker

As part of the Robot Vision course at the Institute of Electronic Systems at Aalborg University we had to develop a vision-based LEGO DUPLO stacker using a Universal Robots UR5 robot, a webcam and MATLAB. Given the robot cell shown below, the task was to develop a system capable of stacking randomly placed LEGO DUPLO bricks in a certain order.

A detailed description of the project and the development of both the image processing and robot control software is contained in the project report, available here: Universal Robots vision-based LEGO stacker.pdf

Conceptual robot cell layout


Using several image processing techniques, including color segmentation, thresholding, BLOB analysis and feature extraction, the system is capable of extracting the color, location and orientation of the DUPLO bricks currently present in the camera image. This allows a Universal Robots UR5 robot arm to pick up the bricks and stack them in a color-ordered scheme: red, green, blue, yellow, orange.

The webcam is mounted on the robot arm to avoid having to fixate it elsewhere in the cell and to link its pose to the tool position. This allows easy calibration between the robot tool frame and the camera frame using the free-drive mode of the Universal Robots arms, which lets the user grab the robot arm and move it manually to a calibration spot.
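
With the camera rigidly mounted on the tool, the camera pose in the robot base frame follows from the tool pose through one fixed transform. Below is a minimal MATLAB sketch of this frame chain; the numeric offsets and the name T_tool_cam are placeholders for illustration, not values from the project:

    % Fixed hand-eye transform found during calibration (placeholder values)
    T_tool_cam  = [eye(3), [0.05; 0.00; 0.08]; 0 0 0 1];
    % Current tool pose in the robot base frame (example values)
    T_base_tool = [eye(3), [0.30; 0.20; 0.40]; 0 0 0 1];
    % Camera pose in the base frame follows from the chain of transforms
    T_base_cam  = T_base_tool * T_tool_cam;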

A video demonstration of the project including the calibration procedures is shown below.

The project is Open Source and all resources, including both MATLAB code and the URScript, are available online in the GitHub repository: https://github.com/mindThomas/URRobot-LEGOstacker


The development of the project has been separated into two parts:

  • Image processing
  • Robot control

After the system has been calibrated, the task of the image processing part is to grab a picture of the randomly placed LEGO DUPLO bricks and generate a list of the available bricks, including their color, location in pixels and orientation (a sketch of such a list follows after the image below).
The robot control part converts the pixel locations from this list into Cartesian coordinates in the base frame of the UR robot arm, such that the bricks can be picked up. The task is then to pick up the bricks in the desired order, resulting in a stacked set of ordered bricks as shown below.

Color-ordered DUPLO bricks getting stacked
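
The list handed from the image processing part to the robot control part could look like the following MATLAB sketch; the field names and values are assumptions for illustration, not the project's actual format:

    % Hypothetical brick list: color, pixel centroid and orientation per brick
    bricks(1) = struct('color', 'red',  'centroid', [812, 433],  'angle', 27.5);
    bricks(2) = struct('color', 'blue', 'centroid', [1190, 601], 'angle', -12.0);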

The image processing part performs a BLOB analysis on a full HD picture from a Logitech C920 webcam. The RGB image is converted into a normalized RGI image, from which a calibrated background RGI image is subtracted. A color segmentation is then performed to split the resulting RGI image into several binary images, one for each color. These binary images are refined through morphology using a closing and an opening operation. The BLOB extraction is carried out with a grass-fire algorithm, which detects the individual BLOBs within each binary image and gives them a unique identifier. For each BLOB the mass, center of mass and rotation are calculated, and small BLOBs are removed to reduce noise. Based on all detected BLOBs within each binary image and their individual features (location and rotation), the brick list is generated, which is visualized in the image below.

Result of image processing algorithm with BLOB analysis
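
The pipeline described above can be approximated with built-in MATLAB Image Processing Toolbox functions. The sketch below is a simplified single-color version; the file names, the threshold values and the use of bwlabel in place of the report's grass-fire labelling are all assumptions:

    % Normalize both the webcam frame and the calibrated background to RGI
    img = im2double(imread('bricks.png'));            % webcam frame (file name assumed)
    bg  = im2double(imread('background.png'));        % background frame from calibration
    s   = sum(img, 3) + eps;   sb = sum(bg, 3) + eps;
    rgi   = cat(3, img(:,:,1)./s, img(:,:,2)./s, s/3);  % r, g and intensity planes
    bgRGI = cat(3, bg(:,:,1)./sb, bg(:,:,2)./sb, sb/3);

    % Background subtraction followed by a crude single-color segmentation
    diffRGI = abs(rgi - bgRGI);
    mask = diffRGI(:,:,1) > 0.10;                     % threshold value is an assumption

    % Morphological refinement: closing followed by opening
    mask = imopen(imclose(mask, strel('disk', 5)), strel('disk', 5));

    % Connected-component labelling and feature extraction per BLOB
    lbl   = bwlabel(mask);
    blobs = regionprops(lbl, 'Area', 'Centroid', 'Orientation');
    blobs = blobs([blobs.Area] > 500);                % drop small noise BLOBs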

The robot control part then takes over and converts these pixel coordinates into camera frame Cartesian coordinates using the pinhole camera model. These coordinates are then transformed into the base frame of the UR5 robot, as shown in the frame layout of the robot cell used in this project.

Frames of the robot cell including the UR5 robot
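
As a minimal sketch of this conversion, assuming the bricks lie in a plane at a known distance Z from the camera, a pixel (u, v) can be back-projected through the pinhole model and then transformed into the base frame. The intrinsics, the distance Z and the pose T_base_cam below are placeholder values, not the project's calibration results:

    fx = 1400; fy = 1400; cx = 960; cy = 540;     % assumed intrinsics at 1920x1080
    Z  = 0.40;                                    % assumed camera-to-table distance [m]
    u  = 812;  v  = 433;                          % example brick centroid in pixels
    Xc = (u - cx) * Z / fx;                       % camera frame X [m]
    Yc = (v - cy) * Z / fy;                       % camera frame Y [m]
    T_base_cam = [eye(3), [0.30; 0.20; 0.50]; 0 0 0 1];  % placeholder camera pose
    pBase = T_base_cam * [Xc; Yc; Z; 1];          % brick position in the UR5 base frame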


The UR5 robot arm has been programmed through the PolyScope teach pendant to receive commands over a TCP/IP connection, such that the computer running MATLAB can connect to the teach pendant over Ethernet and request the robot arm to move, close the gripper, enable free-drive etc.
With the brick locations given in the base frame of the UR5 robot, the robot control MATLAB script sends a command to the teach pendant of the UR5 robot over the TCP/IP connection, requesting the robot arm to move to the location of the first brick. From there on the stacking begins.
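
A minimal sketch of this command exchange from the MATLAB side is shown below; the IP address, port number and ASCII message format are assumptions, as the actual protocol is defined by the URScript in the GitHub repository:

    % Hypothetical MATLAB TCP/IP client (Instrument Control Toolbox)
    t = tcpip('192.168.1.10', 30000);             % robot IP and port are assumptions
    fopen(t);
    fprintf(t, 'MOVE 0.30 0.20 0.05\n');          % hypothetical ASCII move command
    reply = fscanf(t);                            % wait for the robot's acknowledgement
    fclose(t);  delete(t);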

  1. Dhananjaya BM
    February 19th, 2018 at 12:16 | #1

    Hi

    Thank you for a wonderful project. I downloaded the MATLAB files and found the readrobotmsg(t) function missing; can you please share the missing file? Thank you in advance.

  2. February 19th, 2018 at 14:01 | #2

    @Dhananjaya BM
    Nicely spotted, thank you.
    I have now uploaded the missing file so you should be able to run the project now.
    Good luck.

  3. jethro
    October 17th, 2019 at 06:10 | #3

    Hi there

    I'm currently doing a project with the same concept, which involves picking up and stacking bolts/nuts using a web camera and a UR robot, but I'm only allowed to use Python…

    Do you have any references I can read? I'm stuck and unable to move on.

    Thank you so much

  4. Bala
    March 23rd, 2021 at 14:54 | #4

    Hello.
    Can you tell me step by step how you linked MATLAB and PolyScope?

    Thank you

  5. April 2nd, 2021 at 16:49 | #5

    @Bala
    Please read section 4.3 of the report from: http://www.tkjelectronics.dk/uploads/Universal%20Robots%20vision-based%20LEGO%20stacker.pdf
    As highlighted in the report we decided to write our own URScript for the PolyScope to connect to our MATLAB script over a TCP/IP socket. The URScript can be found here: https://github.com/mindThomas/URRobot-LEGOstacker/tree/master/URScript
