Our research group's flagship project, Robot Compiler (RoCo), has come a long way since our lab launched. But can it be improved in terms of user experience?

RoCo is an online tool that allows people without technical skills to create complex robots easily. Because we want everyone to be able to use and navigate RoCo, we strive to make it as accessible, simple, and user-friendly as possible. To start building a robot, the user can choose from electrical, mechanical, and software components pre-defined in RoCo. The platform is powered by an API written in Python and is visualized using Google Blockly and a web-based CAD framework.
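
To give a feel for what building from pre-defined components might look like in code, here is a minimal, purely illustrative sketch. None of the class or function names below come from RoCo's actual Python API; they only convey the idea of composing a design out of preset electrical, mechanical, and software parts.

```python
# Hypothetical sketch only -- this is NOT RoCo's real Python API.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str  # "electrical", "mechanical", or "software"

@dataclass
class RobotDesign:
    name: str
    components: list = field(default_factory=list)

    def add(self, component: Component):
        self.components.append(component)

# Compose a design from preset parts instead of modeling them from scratch.
finger = RobotDesign("rectangular_finger")
finger.add(Component("servo_motor", "electrical"))
finger.add(Component("finger_fold_pattern", "mechanical"))
finger.add(Component("grip_controller", "software"))
# A real build would also generate the 2D unfolding for the CAD view
# and the Blockly program for the electronics.
```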

When I joined the lab a few months ago, RoCo was separated into two online interfaces: one for the 3D CAD design of the robot's outer body and one for designing its electrical functionality. Our members are currently working on combining both interfaces so that they work on a single website. While I find it very useful to have these two interfaces, they may still be challenging for users who are not experienced in CAD design. For example, I would like my 60-year-old dad, who is a surgeon without any robotics-related experience, to be able to operate RoCo and create a robot that can help him during surgery. However, he is likely to struggle to navigate the current RoCo interface, which can be seen in the figure below: the mechanical design interface shows the 3D representation of a rectangular finger and its generated unfolding, while the electrical and software design interface consists of colored, interlocking blocks powered by Google's Blockly (as seen in MIT Scratch). I have therefore been looking for an alternative, more intuitive user experience and recently came up with the following UI idea.

My ultimate idea is a compiler that can construct a robot (or give the user instructions on how to construct it) when told what kind of task the robot is supposed to accomplish. I propose that the user simply drag and drop blocks from a given set of components (related to the electronics and mechanics of the robot) onto a blank area to create an action list. These blocks representing robot components are similar to Blockly's interlocking blocks. Rather than having the user design the outer body and the 3D mechanical structure of the robot via a CAD tool, this approach only asks for the desired function(s), such as locomotion, communication with other devices, or image and sound processing. Based on this input, RoCo then generates an optimal design that can accomplish these tasks; the 3D structure of the robot is assembled automatically from preset components.
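
To make the idea more concrete, here is a rough sketch of what such a task-based "action list" might look like if it were written down in Python. Everything here, from the dictionary keys to the preset table, is hypothetical; it only illustrates how requested functions could be mapped onto preset components.

```python
# Hypothetical sketch of the proposed task-based input -- none of these
# names exist in RoCo today; they only illustrate the "action list" idea.
action_list = [
    {"category": "mechanical", "function": "locomotion"},
    {"category": "software",   "function": "image_processing", "target": "mug"},
    {"category": "mechanical", "function": "gripping"},
]

def compile_robot(actions):
    """Pick preset components that cover every requested function
    and return a rough parts list the preview pane could render."""
    presets = {
        "controller":       ["arduino_uno"],
        "locomotion":       ["servo_motor", "servo_motor", "wheel_fold"],
        "gripping":         ["servo_motor", "fishing_line", "gripper_fold"],
        "image_processing": ["android_phone_camera"],
    }
    parts = presets["controller"][:]
    for action in actions:
        parts += presets.get(action["function"], [])
    return parts

print(compile_robot(action_list))
```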

I created mockups and a prototype for this proposed UI using a prototyping tool called Marvel. The prototype can be accessed at https://marvelapp.com/28c0176. Start by clicking the “Move” block and continue as instructed.


The circle icons on the left represent component categories such as mechanical components and electrical components. If one is chosen, the respective components are shown. These component blocks are dragged and dropped onto the blank area in the middle, called the “action list”. Once enough components are inserted in the action list, a preview of what the robot will roughly look like is shown on the right-hand side. A list of components that the user will need in order to assemble the robot is also shown below the preview image. In the prototype given above, the robot is supposed to move and grip a mug if it “sees” one on its way. For this to work, the user will need an Arduino UNO, 3 servo motors, an Android smartphone, and a fishing line. The phone is needed to recognize objects using its camera. Our group is currently working on paper robots controlled by a phone app, and one of our aims is to enable robots to perform image recognition. I believe a larger audience will be able to use RoCo if we can provide such a user experience.
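
Continuing the hypothetical sketch from above, the counted parts list shown under the preview could be produced by simply tallying the preset components that the requested functions pull in. Again, the part names are only illustrative.

```python
# Hypothetical sketch: turn the flat parts list from the previous snippet
# into the counted bill of materials shown under the preview image.
from collections import Counter

parts = ["arduino_uno", "servo_motor", "servo_motor", "servo_motor",
         "android_phone_camera", "fishing_line"]

for part, count in Counter(parts).items():
    print(f"{count} x {part}")
# e.g. "3 x servo_motor", matching the prototype's parts list.
```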

Again, you can find the prototype here: https://marvelapp.com/28c0176

M. Doga Dogan
