Relative Position Detection Algorithm for High-Level Strategies
Shahrul Kamil bin Hassan (Kamil)

Tags: mesh network, blimp, autonomy, visual processing, i2c, cars, feedback-controlled

In the past two weeks, I worked on developing an algorithm for a blimp equipped with a downward-facing camera. With the camera, we can detect the field of interest and control the robots within it. The current algorithm can drive any vehicle with differential control, including two-wheel cars, boats, and the blimp itself in the xy-plane. The vehicle needs some form of identification, in this case an AprilTag, so we can determine its position and orientation relative to the camera. Once we have that, we can direct the vehicle to any trackable object within the field, such as a primitive shape, another AprilTag, a colored blob, and more.

The algorithm also scales to multiple vehicles. This means we can control several vehicles at the same time, or do high-level planning such as redirecting the nearest vehicle to a target. Of course, these capabilities are bounded by the hardware. Some of the limitations are:
- The camera needs to be within a certain distance above the field, relative to the size of the smallest object that must be tracked.
- The barrel distortion of the camera lens introduces relative position error when objects are near the edge of the frame.
- Sending commands over the mesh network introduces delays in the main control loop.
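To make the idea concrete, here is a minimal sketch of the core computations described above: turning a tag's camera-frame pose into a differential-drive command toward a target, and picking the nearest vehicle for high-level planning. All function names, gains (`k_turn`, `k_fwd`), and data shapes are my own illustrative assumptions, not the exact implementation.

```python
import math

def heading_to_target(tag_x, tag_y, tag_yaw, target_x, target_y):
    """Heading error (radians) from a vehicle's AprilTag pose to a target,
    both expressed in the camera frame. Wrapped to [-pi, pi]."""
    desired = math.atan2(target_y - tag_y, target_x - tag_x)
    err = desired - tag_yaw
    return math.atan2(math.sin(err), math.cos(err))  # angle wrap

def differential_command(heading_err, dist, k_turn=1.0, k_fwd=0.5, max_cmd=1.0):
    """Left/right wheel (or thruster) commands from heading error and
    distance to the target. Forward speed is scaled down when the vehicle
    is pointed away from the target."""
    forward = min(k_fwd * dist, max_cmd) * math.cos(heading_err)
    turn = k_turn * heading_err
    left = max(-max_cmd, min(max_cmd, forward - turn))
    right = max(-max_cmd, min(max_cmd, forward + turn))
    return left, right

def nearest_vehicle(vehicles, target):
    """High-level planning helper: pick the vehicle closest to the target.
    vehicles: {vehicle_id: (x, y, yaw)}, target: (x, y)."""
    return min(vehicles, key=lambda vid: math.hypot(
        vehicles[vid][0] - target[0], vehicles[vid][1] - target[1]))
```

The same loop then runs per vehicle: read the tag pose from the camera, compute the command, and send it over the mesh, which is where the loop-delay limitation above comes in.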
Here are the tests we ran for this algorithm.