To obtain a live ground-truth reading of an agent's location we need sensors. We chose OpenCV to detect the robot by color. So far the results have been good for a robot with a single solid color, but the detector struggles to identify a multicolored robot. We considered combining multiple color masks to identify the robot, but this would slow the system considerably and there is no evidence the approach would work. Looking forward, another system is needed. ArUco tags look like a promising option, since the tag also lets the system determine the robot's orientation in space.
The image shows the correct identification and localization of the yellow sheet of paper.