Thursday, November 16, 2023

Story: Using AprilTags for calibration

AprilTag

The GitHub page of AprilTag [1] says it “is a visual fiducial system popular in robotics research”. At its core are barcode-like tags with a number encoded in such a way that you can also determine the location and orientation of the tag.

The image above shows a sample AprilTag and its human-readable text.

AprilTag comes in different families; the 36h11 family shown here is one of the recommended ones, with a square of 6x6=36 data bits at the centre. More data bits mean more data can be encoded, but they also make each bit smaller, so larger tags are required for detection as the distance from the camera to the tag increases.

The data bits are surrounded by a black border which, in turn, is surrounded by a white border. The outer black border shown is not part of the tag; it only prevents me from cutting off the mandatory white border when trimming the tag. It took me most of a Sunday to find out that a missing white border was why my tags were detected sometimes (in retrospect: when the background was bright enough) but mostly not…

The number 11 after the h is the Hamming distance, the tolerance to incorrect bits. Not all bits of a tag may be correctly identified in a picture, e.g., due to lighting conditions. In such cases, the Hamming distance prevents reporting a tag with the wrong ID, or, more importantly, reporting some random, non-tag pattern in an image as a valid tag.

A higher Hamming distance means more robustness but, as a consequence, a lower percentage of the 36 bits can be used to encode values. The 36h11 family allows for 587 distinct IDs to be encoded.

And finally, as you might have guessed from the text, this tag is #20.

The library

The AprilTag library locates tags in an image and returns their properties.

AprilTag is developed by the APRIL Robotics Laboratory at the University of Michigan [5], which also provides the reference implementation, written in C [1].

Several Python wrappers for this library are available. We decided to try pupil-apriltags: Python bindings for the Apriltags3 library [2].

So, let’s install this and use it? Well… no. It turns out that the latest pupil-apriltags release [2] depends on functions introduced in numpy 1.20, while the current numpy version in Raspberry Pi OS is 1.19.5.

The obvious thing to do is update numpy, but since numpy is at the core of OpenCV (and probably a lot of other libraries), a change of version might introduce numerous dependency issues. So, based on the release date of numpy 1.20, pupil-apriltags version 1.0.4 was selected as the most likely candidate to work with numpy 1.19.5. And it works 😊
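For the curious, a minimal sketch of the pinned install and a basic detection loop could look like this (the image file name and the detector options are placeholders for illustration, not our actual code):

```python
# Pin the wrapper to the version that still works with numpy 1.19.5:
#   pip install pupil-apriltags==1.0.4

import cv2
from pupil_apriltags import Detector

# Detector for the 36h11 family used for our tags.
detector = Detector(families="tag36h11")

# Illustrative image path; in practice the frame comes from the Pi camera.
frame = cv2.imread("calibration_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detection works on a single-channel (grayscale) image.
for det in detector.detect(gray):
    # Each detection carries the tag ID, its centre and its four corners
    # in image (pixel) coordinates.
    print(f"tag {det.tag_id}: centre {det.center}, corners {det.corners}")
```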

Problem to the solution

So, we have tags and software that can locate those tags on camera images on the Pi. That’s cool but shouldn’t we be working on the Pi Wars challenges? 

Yes, we should. And we are. 

In our story on the Real Time controller we mentioned that the base robot will be using odometry. This means it keeps track of its own position while moving, which allows the robot to drive autonomously to a given position.

Of course, this won’t be perfect: the estimated position will drift from the real one due to imperfections in the odometry, e.g., wheel slip. But our expectation is that it will let us move faster, with enough accuracy, over short tracks. That is, faster than relying only on feedback from the camera, with the lag of processing images and the communication between the Raspberry Pi and the Real Time controller.

But our main sensor is the camera, which will be used, for instance, to locate the barrels in the Eco-Disaster challenge. To effectively use odometry-based movements, we need to translate the location of a barrel on a camera image into a location on the floor.

Such a translation is called a homography and is supported by OpenCV [3]. The math looks quite intimidating (at least, to me), but using the OpenCV functions is not that complex [4] if you have 4 reference points on an image and their corresponding positions on the floor.

This is where the AprilTags come in: 4 tags are placed at known locations on the floor and the AprilTag library is used to determine the location of each tag in an image. With those point pairs – image location and floor location – the transformation matrix is generated by OpenCV.

With this matrix, coordinates of objects on an image can be translated into coordinates on the floor, relative to the robot.
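A minimal sketch of this calibration step (the pixel and floor coordinates below are made-up placeholders, not our real layout):

```python
import cv2
import numpy as np

# Image coordinates (pixels) of the four detected tag centres,
# e.g. det.center from pupil-apriltags, ordered by tag ID.
image_points = np.array([[412.0, 301.0],
                         [958.0, 295.0],
                         [352.0, 520.0],
                         [1010.0, 515.0]], dtype=np.float32)

# Corresponding floor coordinates (mm, relative to the robot) of the
# same four tags; these values are placeholders only.
floor_points = np.array([[-200.0, 600.0],
                         [ 200.0, 600.0],
                         [-200.0, 300.0],
                         [ 200.0, 300.0]], dtype=np.float32)

# Compute the 3x3 homography (transformation) matrix.
H, _ = cv2.findHomography(image_points, floor_points)

# Translate an arbitrary image point (e.g. the centre of a barrel)
# into floor coordinates relative to the robot.
barrel_image = np.array([[[640.0, 410.0]]], dtype=np.float32)
barrel_floor = cv2.perspectiveTransform(barrel_image, H)
print("barrel on floor (x, y):", barrel_floor[0, 0])
```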


Image: four tags, set up for calibration. The number of each tag is shown next to a green cross identifying its centre. A custom-printed banner from an old project is used to put the tags at the designated positions, relative to the robot.

One practicality is ‘on the floor’. For calibration, our AprilTags need to be (mostly) upright to be visible to the camera given the distance to the tags (see image). The upright position raises the centre of each tag to 75mm. This could, of course, be compensated for in software, but, intimidated by the math, we chose to raise the robot during calibration instead.

Image: The robot raised for calibration with tag #2 in the background. 

Bonus

Now that we have AprilTag running, we can use it for other purposes. I guess we can’t put tags on the zombies or above the blue and yellow areas of the Eco-Disaster challenge during the actual runs. But we might use them as references during testing. Or to launch the code of the different challenges based on the tag number held in front of the camera, as sketched below...
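Purely as an illustration of that last idea – the tag IDs and challenge names here are hypothetical – such a dispatcher could be as small as:

```python
from pupil_apriltags import Detector

# Hypothetical mapping from tag ID to the challenge code to launch.
CHALLENGES = {
    1: "eco_disaster",
    2: "lava_palava",
    3: "zombie_apocalypse",
}

detector = Detector(families="tag36h11")

def challenge_for_frame(gray_frame):
    """Return the challenge for the first known tag in view, or None."""
    for det in detector.detect(gray_frame):
        if det.tag_id in CHALLENGES:
            return CHALLENGES[det.tag_id]
    return None
```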


[1] https://github.com/AprilRobotics/apriltag

[2] https://pypi.org/project/pupil-apriltags

[3] https://docs.opencv.org/4.x/d9/dab/tutorial_homography.html

[4] https://learnopencv.com/homography-examples-using-opencv-python-c/

[5] https://april.eecs.umich.edu/software/apriltag.html

