CoLo is a portable simulation environment for cooperative localization that lets users create and test estimation algorithms, comparing the accuracy of the estimated robot locations against the groundtruth locations supplied by the UTIAS real-world dataset.
In the past week, I finished one of CoLo's key features: an animated, frame-by-frame analysis of robot locations, layered as insights on top of the existing robot animation. Using matplotlib, I display the graphs on the same screen and animate the demo and insight plots in lockstep over time with matplotlib's FuncAnimation. The insights include each robot's instantaneous root mean square (RMS) location error (how far the estimate is from the actual location) and its state variance (the square of the standard deviation). I also display the current simulation time in the bottom-right corner of the screen, so the user can pinpoint when a robot's estimated location deviates from the groundtruth (taken from the UTIAS real-world dataset) or spot any other insight of interest for a specific algorithm.
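To make the setup concrete, here is a minimal sketch of a synchronized demo-plus-insights animation with FuncAnimation. The arrays `est_xy`, `gt_xy`, and `var` are hypothetical stand-ins for CoLo's estimated positions, UTIAS groundtruth, and state variance; the real CoLo data structures may differ.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 100)                         # simulation time [s]
gt_xy = np.stack([np.cos(t), np.sin(t)], axis=1)    # placeholder groundtruth path
est_xy = gt_xy + rng.normal(0, 0.05, gt_xy.shape)   # placeholder noisy estimate

# Instantaneous RMS location error at each timestep: distance between
# the estimated and groundtruth positions.
rms = np.linalg.norm(est_xy - gt_xy, axis=1)

fig, (ax_demo, ax_ins) = plt.subplots(1, 2, figsize=(8, 4))
path_line, = ax_demo.plot([], [], "b.-", label="estimate")
rms_line, = ax_ins.plot([], [], "r-", label="RMS error")
time_text = ax_ins.text(0.95, 0.05, "", transform=ax_ins.transAxes, ha="right")
ax_demo.set_xlim(-1.5, 1.5); ax_demo.set_ylim(-1.5, 1.5)
ax_ins.set_xlim(0, 10); ax_ins.set_ylim(0, rms.max() * 1.1)

def update(i):
    # Redraw both panels for frame i so the demo and insight graphs stay in sync.
    path_line.set_data(est_xy[:i + 1, 0], est_xy[:i + 1, 1])
    rms_line.set_data(t[:i + 1], rms[:i + 1])
    time_text.set_text(f"t = {t[i]:.1f} s")
    return path_line, rms_line, time_text

anim = FuncAnimation(fig, update, frames=len(t), interval=50, blit=True)
```

A variance panel would follow the same pattern: one more axes object updated inside the same `update` callback, so all insights advance on the same clock.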
What remains for this component is potential new features for deeper, more complex analysis of localization algorithms as we continue work on other parts of CoLo. For now, the component will be maintained and refined with an eye toward backward compatibility. The next step beyond this component is to build a Flask or Django server backed by Firebase (a real-time database) to deliver real-time updates on robot locations, along with special notifications raised by the localization algorithms.
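As a rough sketch of what the planned server side could look like, here is a minimal Flask app exposing location updates over HTTP. The routes, payloads, and the in-memory dict standing in for Firebase are all assumptions for illustration; the actual Firebase sync would require credentials and the firebase_admin SDK.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
latest_locations = {}  # in-memory stand-in for the Firebase real-time database

@app.route("/locations/<robot_id>", methods=["POST"])
def update_location(robot_id):
    # The simulator would POST each new estimate here; a real deployment
    # would also push the update to Firebase for real-time clients.
    latest_locations[robot_id] = request.get_json()
    return jsonify(status="ok")

@app.route("/locations/<robot_id>", methods=["GET"])
def get_location(robot_id):
    # Clients poll (or, with Firebase, subscribe to) the latest estimate.
    return jsonify(latest_locations.get(robot_id, {}))
```

Special notifications from the localization algorithms could be posted to an analogous endpoint and fanned out to subscribers through Firebase's listener API.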