In the past few days, I have been working on integrating my visual processing work with what the rest of the Arnhold team has done so far. AprilTags will be integral to our demonstration next week, so I have been trying to determine the resolution that lets the camera see the most ground area while still detecting the AprilTags. I had been zooming in to view AprilTags at longer range, but that means only a small area is visible at a time; the goal is to see most of the field at once and still detect the tags. The first video shows the more zoomed-in version (a resolution of 240x240 pixels). It detects the tags well, but covers a smaller area.

The second video shows the largest frame that can still be used to detect the tags: anything bigger and processing slows down considerably, since the camera does not have enough memory to maintain a high frame rate. At this resolution (320x240 pixels), AprilTags are detected at roughly the same range as before, with very little loss of accuracy, and the area visible at one time is larger.
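As a rough illustration, here is a minimal sketch of what the camera-side setup might look like using OpenMV's MicroPython API, assuming grayscale QVGA capture; the commented windowing line shows how the tighter 240x240 crop could be configured. The actual script used in the videos may differ.

```python
# Minimal OpenMV-side sketch (MicroPython).
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # AprilTag detection runs on grayscale
sensor.set_framesize(sensor.QVGA)       # 320x240: largest size that stays fast
# sensor.set_windowing((240, 240))      # uncomment for the zoomed-in 240x240 crop
sensor.skip_frames(time=2000)           # let the sensor settle

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    for tag in img.find_apriltags():    # defaults to the TAG36H11 family
        print(tag.id(), tag.cx(), tag.cy(), clock.fps())
```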

In addition, in both of these videos the OpenMV camera is being controlled by the ESP featherboard: the featherboard calls the AprilTag detection function, and the camera returns the tags it detects. This is also a step toward autonomy. The camera can only return information for two tags per function call, because passing any more data in one call leads to processing slowdowns (likely caused by either communication issues or data-unpacking issues). To work around this, the featherboard requests information for two tag IDs at a time and cycles through all the IDs it is interested in.
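For illustration, here is a hedged sketch of the featherboard-side polling logic. The function name `request_tags` and the ID list are hypothetical stand-ins for the project's actual remote call, but the two-IDs-per-call cycling matches the scheme described above.

```python
# Hypothetical featherboard-side loop: request_tags and TAG_IDS are
# illustrative placeholders, not the project's actual API.
TAG_IDS = [0, 1, 2, 3, 4, 5]      # IDs of interest (example values)

def poll_all_tags(request_tags):
    """Request every tag of interest, two IDs per call, and merge results."""
    seen = {}
    for i in range(0, len(TAG_IDS), 2):
        pair = TAG_IDS[i:i + 2]   # the last pair may contain a single ID
        # request_tags is assumed to accept up to two IDs and return a
        # dict mapping tag ID -> whatever data the camera reports back
        for tag_id, data in request_tags(*pair).items():
            seen[tag_id] = data
    return seen
```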
