The ML and Gazebo simulation layers of RoCo were developed independently; my work was on the communication between them. The layers depend on each other at certain stages: for each simulation step, Gazebo needs fresh control parameters from the ML layer, and the ML layer in turn must wait for Gazebo to finish the step before it can proceed.

We built this communication mostly on socket IO. Sockets worked quite well because there is a constant two-way stream of information between the layers: the ML layer provides parameters for the simulation, and Gazebo sends back results.

We also had to settle on a format for these messages. At first we considered some kind of JSON format, but we ultimately decided to mimic the interface of Gym, which simply exchanges parameter lists and a result value. Interpreting the parameter lists is left to the Gazebo side.

In the end, the communication works quite well, and the major bottleneck is the Gazebo simulation itself, as we expected. We are now trying to speed up the simulations through performance optimizations and parallelization.
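To make the lock-step exchange concrete, here is a minimal sketch of the pattern described above: an ML-side client sends a Gym-style parameter list over a socket, then blocks until the simulation side replies with a result value. Everything here is illustrative, not the actual RoCo protocol: the function names, the newline-delimited JSON framing, and the `sum(params)` stand-in for a Gazebo step are all assumptions made for the example.

```python
import json
import socket
import threading

HOST = "127.0.0.1"

def mock_gazebo_server(server_sock):
    # Stand-in for the Gazebo side: read a parameter list, "simulate",
    # and send back a single result value (here just a placeholder sum).
    conn, _ = server_sock.accept()
    with conn, conn.makefile("rwb") as stream:
        for line in stream:
            params = json.loads(line)      # Gym-style: a flat parameter list
            if params is None:             # sentinel: the ML layer is done
                break
            result = sum(params)           # placeholder for one simulation step
            stream.write((json.dumps(result) + "\n").encode())
            stream.flush()

def run_episode(steps):
    # ML-side loop: for each step, send parameters and block on the reply,
    # mirroring how the ML layer must wait for Gazebo to finish each step.
    server = socket.create_server((HOST, 0))   # port 0: OS picks a free port
    port = server.getsockname()[1]
    t = threading.Thread(target=mock_gazebo_server, args=(server,))
    t.start()
    results = []
    with socket.create_connection((HOST, port)) as sock, \
            sock.makefile("rwb") as stream:
        for params in steps:
            stream.write((json.dumps(params) + "\n").encode())
            stream.flush()                             # send control parameters
            results.append(json.loads(stream.readline()))  # wait for the result
        stream.write((json.dumps(None) + "\n").encode())   # signal end of episode
        stream.flush()
    t.join()
    server.close()
    return results

print(run_episode([[1.0, 2.0], [0.5, 0.5]]))  # → [3.0, 1.0]
```

Newline-delimited JSON keeps message framing trivial over a stream socket; the important property is the blocking `readline`, which enforces that neither layer runs ahead of the other.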