For the past six weeks I've been working on random dataset generation for CoLo, our multi-robot localization simulation for developing localization algorithms. The generated dataset is built to resemble, in form, the UTIAS Multi-Robot Cooperative Localization and Mapping (MRCLAM) datasets, which CoLo uses as a base for its general simulations.

The random dataset generator produces its data at runtime, seeded for reproducibility. The generated data must contain three components. First, the odometry must be known. Second, we need the groundtruth of each robot's location, free of noise from the sensors and measurements. Finally, the dataset contains the measurements taken by the robot, which implicitly contain the map of the landmarks and their locations. These elements compose an MRCLAM dataset, and they are necessary for the random dataset as well. Although they are generated at runtime and there is no practical need to write them to a file, they are still output for diagnostic purposes, and the tests run on the random data operate on these output files independently of the rest of the simulation.
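A minimal sketch of how those three components could be generated together for one robot. The field layouts loosely mimic the MRCLAM .dat files (time, forward velocity, angular velocity for odometry; time, x, y, orientation for groundtruth; time, subject, range, bearing for measurements), but the function name, landmark map, and velocity ranges here are all hypothetical, not CoLo's actual code:

```python
import math
import random

def generate_random_dataset(num_steps, seed=0, dt=0.1):
    """Generate odometry, groundtruth, and measurement records for one
    robot. Illustrative sketch only; CoLo's real output format may differ."""
    rng = random.Random(seed)                    # seeded for reproducibility
    landmarks = {6: (2.0, 1.0), 7: (-1.0, 2.0)}  # hypothetical landmark map
    t, x, y, theta = 0.0, 0.0, 0.0, 0.0
    odometry, groundtruth, measurements = [], [], []
    for _ in range(num_steps):
        v = rng.uniform(0.0, 0.5)    # forward velocity [m/s]
        w = rng.uniform(-0.3, 0.3)   # angular velocity [rad/s]
        odometry.append((t, v, w))
        groundtruth.append((t, x, y, theta))     # noise-free pose
        # Measurements to each landmark implicitly encode the map.
        for subject, (lx, ly) in landmarks.items():
            rng_to_lm = math.hypot(lx - x, ly - y)
            bearing = math.atan2(ly - y, lx - x) - theta
            measurements.append((t, subject, rng_to_lm, bearing))
        # Integrate a unicycle motion model to get the next groundtruth pose.
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        t += dt
    return odometry, groundtruth, measurements
```

Because the generator is driven by a seeded `random.Random`, the same seed reproduces the same dataset exactly, which is what makes the diagnostic output files comparable across runs.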

The random dataset generation first generates the odometry, along with a corresponding groundtruth, for each robot. This design makes it most convenient for the user to specify a movement pattern for each robot. A Robot class controls the movement decisions, making its choices based on the current location, velocity, angular velocity, and any other parameters the user chooses to pass to it. Users can extend or even replace the Robot class to achieve different movement patterns. As of now, the Robot class does not let a user specify these choices from the top level of the simulation; they must be configured in the source code. That functionality is forthcoming.
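The extension point described above might look something like this. The class and method names here are hypothetical stand-ins, not CoLo's actual Robot interface:

```python
import random

class Robot:
    """Decides the next motion command from the robot's current state.
    Hypothetical sketch of the movement-decision extension point."""

    def __init__(self, max_v=0.5, max_w=0.3, seed=None):
        self.max_v = max_v              # forward velocity limit [m/s]
        self.max_w = max_w              # angular velocity limit [rad/s]
        self.rng = random.Random(seed)  # seeded for reproducibility

    def next_command(self, x, y, theta, v, w):
        # Default behavior: a bounded random walk.
        return (self.rng.uniform(0.0, self.max_v),
                self.rng.uniform(-self.max_w, self.max_w))

class CircleRobot(Robot):
    """Example of replacing the movement pattern by subclassing:
    constant forward and angular velocity trace a circle."""

    def next_command(self, x, y, theta, v, w):
        return (0.3, 0.1)
```

A user who wants a different movement pattern overrides only `next_command`; the dataset generator keeps calling the same interface regardless of which subclass is plugged in.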

The successes of the dataset generation are its models of physical events. The odometry, measurements, and groundtruth have all been tested and verified. So any part of the simulation that depends on robots being where they say they are, when they say they are, and seeing objects at the location and bearing they report is currently working.

There are two current limitations of the random dataset generator. The first is that it lacks a response function that would allow the localization algorithms to use the generated dataset to make localization decisions. Localization algorithms that want to use the measurement data to update the other robots on the collective location cannot do so without this function, so running a localization algorithm before this function is complete only makes the robots' estimated locations less accurate. The second is a problem caused by lackluster use of version control. After large changes to the existing-dataset module, which imports a set of MRCLAM files as an existing dataset, there are compatibility issues between the simulation and the random dataset generator. These can be resolved easily, and I will keep refactoring the code so that both the existing dataset and the random dataset inherit from a single dataset class. This will ensure that simple interface changes, like the ones made to the existing dataset generator, no longer invalidate a version of the random dataset generator that has not been touched.
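One way that planned refactor could look, sketched with hypothetical class and method names (CoLo's actual interface will differ). The point is that the simulation talks only to the shared base class, so an interface change lands in one place instead of silently breaking one subclass:

```python
class Dataset:
    """Shared interface for all dataset sources. Hypothetical sketch of
    the planned refactor, not CoLo's real class hierarchy."""

    def get_odometry(self, robot_id):
        raise NotImplementedError

    def get_groundtruth(self, robot_id):
        raise NotImplementedError

    def get_measurements(self, robot_id):
        raise NotImplementedError

class ExistingDataset(Dataset):
    """Would parse MRCLAM .dat files from disk (parsing omitted here)."""

    def __init__(self, directory):
        self.directory = directory

class RandomDataset(Dataset):
    """Would generate its records at runtime from a seed."""

    def __init__(self, seed=0):
        self.seed = seed
        self._odometry = {}  # robot_id -> list of (t, v, w) records

    def get_odometry(self, robot_id):
        return self._odometry.setdefault(robot_id, [])
```

With both sources behind one interface, a change to how the simulation reads odometry forces an update to the base class, and any subclass that falls out of step fails loudly with `NotImplementedError` rather than drifting incompatibly.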

The reason for these limitations was a divide between my expectations and reality. I understood the physical model, so it is implemented completely and correctly. However, I didn't have a good sense of the requirements of a localization algorithm, and I had a similarly weak grasp of the simulation framework. When I tried to implement functionality that depended on these, I was frustrated by the lack of documentation on how the simulation works and what it requires to simulate localization. That lack of documentation for CoLo is being addressed right now.
