Fish Achievement 5

Fish 4.2

Our robot has been a bit shy up until now and has not shown much of its capabilities to the world. This week it decided to complete achievement 5.

Achievement 5

In achievement 5 the goal is to grasp an unknown brick, placed randomly somewhere on the workspace, based on its color. After the issues we had recognising small objects with our previous FRCNN model, we decided to create a bigger dataset that includes objects with smaller dimensions. Due to lack of time we decided to start by training on a subset of colors, so for this achievement we are able to recognise 'green', 'blue', 'red' and 'white' bricks (of various shapes and sizes).

Our algorithm logs all the bricks it recognises in an image, but chooses to approach and grasp only the closest brick of the predefined color. This allows us to specify the color of the brick in a JSON file and change the robot's goal color in real time. The command has the following form:

{"stateM": {"object_class":"red","box_id":"box_1"},"sleep": {"t": 1}, "spot_box": {"camera": "", "box_identifier": "box_1"}}

IR sensor and obstacle avoidance

We made an attempt to mount the IR sensor, which is connected to the Teensy, in order to know when our robot has approached a brick and it is within the gripper's reach, but this cannot work because a required kernel module is not compiled into the Jetson's kernel. Since we only have two more days until the end of the competition, we decided to focus on the tasks at hand and not create new issues by recompiling the kernel. We also considered using the Lego IR sensor, but it is too bulky and heavy to be mounted on our gripper, which needs to move up and down to release objects into the boxes.

Clustering

Our approach to the clustering task is based on the following paper. Similarly to the work done by Kilinc et al., we remove a layer from the network that we use for classification and add an extra layer that performs the clustering and provides a latent annotation for each brick. This layer basically consists of a number of softmax nodes, each corresponding to an individual annotation. The architecture can be seen in the following image, taken from the aforementioned paper.
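As a rough illustration of this idea, here is a minimal PyTorch-style sketch of such a clustering head; the feature size, the number of softmax nodes and the number of outputs per node are assumptions made for the example, not our actual network.

import torch
import torch.nn as nn

FEATURES = 512        # assumed size of the penultimate feature vector
NUM_HEADS = 4         # assumed number of parallel softmax nodes
NUM_CLUSTERS = 10     # assumed outputs per softmax node

class ClusteringHead(nn.Module):
    def __init__(self):
        super().__init__()
        # One linear + softmax per latent annotation, replacing the
        # classification layer that was removed.
        self.heads = nn.ModuleList(
            nn.Linear(FEATURES, NUM_CLUSTERS) for _ in range(NUM_HEADS)
        )

    def forward(self, features):
        # Each head emits a softmax distribution; together they form
        # the latent annotation of one brick.
        return [torch.softmax(h(features), dim=-1) for h in self.heads]

features = torch.randn(1, FEATURES)   # stand-in for the backbone output
annotations = ClusteringHead()(features)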

Next, we will use this information to distinguish between the different brick categories and let our robot perform the task of moving the bricks to different boxes. Our network is currently training, so I will provide an appropriate image as soon as this completes.
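Conceptually, the routing step could then be as simple as the following sketch, where the box names and the pick-the-most-confident-cluster rule are hypothetical placeholders:

# Hypothetical routing step: map a brick's cluster assignment to a box.
BOXES = ["box_1", "box_2", "box_3"]

def box_for(annotation_probs):
    """Pick a box from the most confident cluster assignment."""
    cluster = max(range(len(annotation_probs)), key=annotation_probs.__getitem__)
    return BOXES[cluster % len(BOXES)]

print(box_for([0.1, 0.7, 0.2]))   # -> "box_2"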