The goal of the project is to implement a fetch-and-delivery behavior for the Tiago robot.
In the simulated environment (Gazebo) there are two rooms separated by a wall, plus some obstacles placed around the environment:
In the first room there is a table with the objects to be fetched on top of it (red, blue and green). Tiago has to grasp the correct
objects in a defined order without colliding with the unwanted ones, move to the second room where the
coloured tables are located, identify the table associated with the object's colour, move towards it, place the object, and come back to the previous
room. The goal is considered achieved when Tiago has correctly placed the objects on the right tables in the wanted
order.
Note: the blue lines represent what the laser sensor scans.
Program Structure
To accomplish this task we decided to exploit the modularity and scalability of ROS, implementing the structure
with different service nodes and action server nodes that are all called upon by the main central node.
In the block diagram are present:
Rounded blocks: they represent the main nodes, in the form of service/action client-server pairs, which manage the deliberative layer at a higher level.
Square blocks: they represent the components that perform the sub-tasks demanded by the main nodes.
If you are interested in the details of the program flow described by the block diagram, you can check the Report in the GitHub repo.
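As a minimal sketch of this pattern, the central node can delegate a sub-task to a service node through a synchronous call. The '/pick_object' service name and the generic std_srvs/Trigger type below are illustrative placeholders, not the project's actual interface:

```python
#!/usr/bin/env python
# Minimal sketch of the central-node pattern: the main node delegates a
# sub-task to a dedicated service node and waits for the outcome.
# '/pick_object' and std_srvs/Trigger are illustrative placeholders.
import rospy
from std_srvs.srv import Trigger

def call_pick_service():
    rospy.wait_for_service('/pick_object')              # block until the service node is up
    pick = rospy.ServiceProxy('/pick_object', Trigger)
    response = pick()                                   # synchronous request
    if not response.success:
        rospy.logwarn('Pick service failed: %s', response.message)
    return response.success

if __name__ == '__main__':
    rospy.init_node('main_node')
    call_pick_service()
```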
Main steps
These are the main steps of Tiago's routine:
Move to table room
Tiago has to move from its starting position to the objects table and position itself in front of the object to be picked.
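In a typical ROS setup this step is a navigation goal sent to move_base. Below is a minimal sketch of that call; the target coordinates are placeholders, not the actual poses used in the project:

```python
#!/usr/bin/env python
# Sketch: sending a navigation goal to move_base so Tiago drives to a pose
# in front of the objects table. Coordinates are placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

def move_to(x, y, yaw):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)
    goal.target_pose.pose.orientation.x = qx
    goal.target_pose.pose.orientation.y = qy
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('navigation_sketch')
    move_to(1.5, -2.0, 1.57)   # placeholder pose in front of the table
```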
Detect object's Pose
Once Tiago has reached the target pose (position + orientation) in front of the table, it has to tilt its head in order
to have a clear view of the objects, and thus be able to detect the objects' AprilTags (e.g. 'tag_6' and 'tag_2' in the pics).
This phase is essential in order to obtain the object's pose: the precise coordinates that Tiago has to approach with its arm.
Detect object's Pose phase screenshots
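A sketch of how this phase can be done with the standard Tiago and apriltag_ros interfaces follows; the point-head target, camera frame, and topic names are the usual defaults and are assumptions about this project's configuration:

```python
#!/usr/bin/env python
# Sketch: tilt the head towards the table top, then read the AprilTag
# detections published by apriltag_ros. Frame and topic names follow the
# usual Tiago/apriltag_ros defaults and may differ in the actual project.
import rospy
import actionlib
from control_msgs.msg import PointHeadAction, PointHeadGoal
from apriltag_ros.msg import AprilTagDetectionArray

def look_at_table():
    client = actionlib.SimpleActionClient(
        '/head_controller/point_head_action', PointHeadAction)
    client.wait_for_server()
    goal = PointHeadGoal()
    goal.target.header.frame_id = 'base_link'
    goal.target.point.x = 0.9   # placeholder: a point on the table top
    goal.target.point.z = 0.6
    goal.pointing_frame = 'xtion_rgb_optical_frame'  # Tiago's RGB-D camera
    goal.pointing_axis.z = 1.0  # point the camera's optical (z) axis
    goal.min_duration = rospy.Duration(1.0)
    client.send_goal(goal)
    client.wait_for_result()

def get_tag_poses():
    msg = rospy.wait_for_message('/tag_detections', AprilTagDetectionArray)
    # Map tag id -> pose in the camera frame (to be transformed with tf)
    return {det.id[0]: det.pose.pose.pose for det in msg.detections}

if __name__ == '__main__':
    rospy.init_node('detection_sketch')
    look_at_table()
    rospy.loginfo('Detected tags: %s', list(get_tag_poses().keys()))
```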
Collision objects
Collision objects are created in order to prevent the robot's arm from crashing into unwanted objects.
Collision objects
In the picture above the computed collision objects are represented, in the simulated environment, as
green shapes: the box-shaped one represents the table, and the cylindrical one is the collision object of
the obstacle present on the table (the 'tag_6' object: the hexagonal prism).
Note: the 'tag_2' object has no collision object since it has to be picked and not avoided.
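With MoveIt this is typically done through the planning scene. The sketch below adds a box for the table and a cylinder that encloses the hexagonal obstacle; all sizes, poses, and frame names are placeholder assumptions, not the project's actual values:

```python
#!/usr/bin/env python
# Sketch: adding the table and the obstacle as collision objects in the
# MoveIt planning scene so that motion planning keeps the arm away from
# them. Sizes, poses and frame names are placeholders; in practice they
# would be derived from the detected AprilTag poses.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

def add_collision_objects():
    scene = moveit_commander.PlanningSceneInterface()
    rospy.sleep(1.0)                     # give the scene interface time to connect

    table = PoseStamped()
    table.header.frame_id = 'base_footprint'
    table.pose.position.x = 0.9
    table.pose.position.z = 0.4          # centre of the table box
    table.pose.orientation.w = 1.0
    scene.add_box('table', table, size=(0.9, 0.9, 0.8))

    # A cylinder that fully encloses the hexagonal prism is a simple and
    # safe approximation of the 'tag_6' obstacle.
    obstacle = PoseStamped()
    obstacle.header.frame_id = 'base_footprint'
    obstacle.pose.position.x = 0.8
    obstacle.pose.position.y = 0.2
    obstacle.pose.position.z = 0.9
    obstacle.pose.orientation.w = 1.0
    scene.add_cylinder('tag_6_obstacle', obstacle, height=0.2, radius=0.08)

if __name__ == '__main__':
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('collision_objects_sketch')
    add_collision_objects()
```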
Pick phase
Once the robot has computed the pose of the object to be picked and the collision objects of the unwanted ones, it can proceed
to actually pick the wanted one.
The pick phase consists of the following sub-phases (a minimal sketch follows the example figure):
Positioning the arm so that it is easier to grip the object
Grasping the object
Attaching the object to the gripper and closing it
Moving the arm back to the position of the first point
Placing the arm in a safe configuration for movement within the environment
Example of pick phase
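The sketch below walks through these sub-phases with MoveIt's Python interface. 'arm_torso' and 'gripper' are Tiago's usual planning groups; the approach offset and the named targets ('close', 'home') are placeholders for the configurations actually used in the project:

```python
#!/usr/bin/env python
# Sketch of the pick sub-phases using MoveIt's Python interface.
# Offsets and named targets are placeholders.
import sys
import copy
import rospy
import moveit_commander

def pick(object_pose):                      # object_pose: geometry_msgs/PoseStamped
    arm = moveit_commander.MoveGroupCommander('arm_torso')
    gripper = moveit_commander.MoveGroupCommander('gripper')

    # 1. Pre-grasp: approach pose slightly above the object
    pre_grasp = copy.deepcopy(object_pose)
    pre_grasp.pose.position.z += 0.15
    arm.set_pose_target(pre_grasp)
    arm.go(wait=True)

    # 2. Grasp: descend onto the object
    arm.set_pose_target(object_pose)
    arm.go(wait=True)

    # 3. Close the gripper; in simulation the object is attached to the
    #    gripper (e.g. via gazebo_ros_link_attacher, not shown here)
    gripper.set_named_target('close')       # placeholder named target
    gripper.go(wait=True)

    # 4. Move back to the pre-grasp pose
    arm.set_pose_target(pre_grasp)
    arm.go(wait=True)

    # 5. Tuck the arm into a safe configuration for navigation
    arm.set_named_target('home')            # placeholder named target
    arm.go(wait=True)
    arm.stop()
    arm.clear_pose_targets()

if __name__ == '__main__':
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('pick_sketch')
```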
Automatic docking routine
Problem: recognize which is the correct table, and thus define the correct target position.
Solution (a sketch of the segmentation step follows the screenshots):
The robot moves into the table room
It looks towards the coloured tables (Tiago's POV)
The image is thresholded to segment the right table
The first non-black pixel defines the target position (in terms of right/center/left)
1. Tiago's POV - 2. Segmented image
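A sketch of the segmentation heuristic with OpenCV is shown below. The camera topic follows Tiago's usual default, and the HSV bounds are placeholder values rather than the tuned thresholds used in the project:

```python
#!/usr/bin/env python
# Sketch of the docking heuristic: threshold the camera image on the target
# table's colour, then use the column of the first non-black pixel to decide
# whether the table is to the left, centre, or right.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

def table_direction(colour_lower, colour_upper):
    msg = rospy.wait_for_message('/xtion/rgb/image_raw', Image)
    img = CvBridge().imgmsg_to_cv2(msg, desired_encoding='bgr8')

    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, colour_lower, colour_upper)  # segmented image

    points = cv2.findNonZero(mask)          # None if the table is not visible
    if points is None:
        return None
    first_x = int(points[0][0][0])          # column of the first non-black pixel

    third = mask.shape[1] / 3.0
    if first_x < third:
        return 'left'
    elif first_x < 2 * third:
        return 'center'
    return 'right'

if __name__ == '__main__':
    rospy.init_node('docking_sketch')
    # Placeholder HSV range for the blue table
    print(table_direction(np.array([100, 100, 50]), np.array([130, 255, 255])))
```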
Place phase
The place phase consists of the following sub-phases (a minimal sketch follows the example figure):
Positioning the arm so that it is easier to place the object
Placing the object
Opening the gripper and detaching the object
Moving the arm back to the position of the first point
Placing the arm in a safe configuration for movement within the environment
Example of place phase
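The place sub-phases mirror the pick sketch above. The detach call below assumes the gazebo_ros_link_attacher plugin is loaded in Gazebo; the model and link names, offsets, and named targets are all placeholders:

```python
#!/usr/bin/env python
# Sketch of the place sub-phases, mirroring the pick sketch. The detach
# call assumes the gazebo_ros_link_attacher plugin; names are placeholders.
import sys
import copy
import rospy
import moveit_commander
from gazebo_ros_link_attacher.srv import Attach, AttachRequest

def place(place_pose):                      # place_pose: geometry_msgs/PoseStamped
    arm = moveit_commander.MoveGroupCommander('arm_torso')
    gripper = moveit_commander.MoveGroupCommander('gripper')

    # 1. Pre-place pose above the target table
    pre_place = copy.deepcopy(place_pose)
    pre_place.pose.position.z += 0.15
    arm.set_pose_target(pre_place)
    arm.go(wait=True)

    # 2. Lower the object onto the table
    arm.set_pose_target(place_pose)
    arm.go(wait=True)

    # 3. Open the gripper and detach the object in Gazebo
    gripper.set_named_target('open')        # placeholder named target
    gripper.go(wait=True)
    detach = rospy.ServiceProxy('/link_attacher_node/detach', Attach)
    req = AttachRequest()
    req.model_name_1, req.link_name_1 = 'tiago', 'arm_7_link'   # placeholders
    req.model_name_2, req.link_name_2 = 'object', 'link'        # placeholders
    detach(req)

    # 4./5. Back to the pre-place pose, then to the safe navigation configuration
    arm.set_pose_target(pre_place)
    arm.go(wait=True)
    arm.set_named_target('home')            # placeholder named target
    arm.go(wait=True)

if __name__ == '__main__':
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('place_sketch')
```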
Repeat
The previous points clarify each main step of the routine, so once an object has been successfully placed,
Tiago returns to the other room to repeat the routine for the remaining objects.
Once all the coloured pieces are successfully placed in the 'place room', the routine is complete and
the goal is achieved.
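Put together, the central node's top-level loop amounts to iterating the fetch-and-deliver cycle over the ordered object list. The sketch below only shows this control flow; the helpers are stubs standing in for the steps above, and the delivery order is a placeholder:

```python
#!/usr/bin/env python
# Sketch of the central node's top-level loop, tying the previous sketches
# together. The helpers are stubs so the control flow is self-contained.
import rospy

OBJECT_ORDER = ['blue', 'green', 'red']     # placeholder delivery order

def fetch(colour):
    rospy.loginfo('Fetching the %s object...', colour)
    # navigation, head tilt + AprilTag detection, collision objects, pick

def deliver(colour):
    rospy.loginfo('Delivering to the %s table...', colour)
    # navigation, colour-based docking, place

if __name__ == '__main__':
    rospy.init_node('main_node')
    for colour in OBJECT_ORDER:
        fetch(colour)
        deliver(colour)
    rospy.loginfo('All objects placed: goal achieved.')
```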