Friday, May 3, 2024

The Finale - Sunday, 21st April 2024

 Wow

What an event Pi Wars is, inspiring for all participants.

On Saturday we saw many young people transform from anxious in the morning into happy, shining people in the afternoon. And on Sunday this repeated with a different, slightly older, crowd.

There was so much interaction between contestants, everyone interested in each other’s creations and willing to share and help others out.

Many thanks to Mike, Tim and Dave for organizing such a super friendly event with an amazing relaxed atmosphere during the whole weekend.

Of course we also need to thank all other volunteers and sponsors who make this event possible.

Time to put the robot where the challenge is…


Below are the videos of all our challenges. Challenge definitions can be found via https://piwars.org/2024-disaster-zone/challenges/.

Technical (11th) & Artistic Merit (15th) & Most Disastrous (11th)

For these three challenges, we needed to submit a video before the event, to give the judges time to view and score these.



The Temple of Doom (not ranked)

We started with The Temple of Doom, which packs many great and fun challenges into one.

Here it really showed we didn’t spend enough time practicing driving the robot…

As one of our supporters summed it up: What a drama


Pi Noon (not ranked)

The video says it all.



Minesweeper (Autonomous 5th)

The first autonomous challenge of the day for us. It was great to see the strategy we defined (see: https://dutch-rescue-team.blogspot.com/2024/04/story-strike-pose.html) come to life. We ‘reused’ our Eco Disaster extension to make sure we covered four tiles at once.



The Zombie Apocalypse (Remote Control 4th)

In the preparation room we checked whether our laser cross was still pointing in the right direction and discovered that we needed to turn off the lights to be able to see it.



During testing, a week before we left for the competition, we had become aware of this possible issue, but decided to take the chance. However, on the actual course the vertical line of the laser cross could be seen, but the horizontal line was completely invisible. Luckily we managed to get a few hits.


Lava Palava (Autonomous 1st)

With our extension providing the balance required to take on the sleeping policeman, and our camera with OpenCV, we managed three smooth runs.



Eco Disaster (Autonomous 2nd)

Our personal pinnacle. Lots of work has gone into polishing this challenge.



Escape Route (Autonomous shared 2nd)

Before the robot starts moving, it knows exactly which route to take, because the configuration and start pose are known. From the picture taken by our camera, the colors of the first two blocks are determined, as well as the position of our robot relative to the first block.
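
As an illustration of that color check, here is a minimal OpenCV sketch that classifies a region of the start picture by its dominant color. The HSV ranges, the candidate colors and the regions of interest are placeholders, not our actual values.

import cv2
import numpy as np

# Hypothetical HSV ranges; real values depend on the camera and lighting.
COLOR_RANGES = {
    'red':   ((0, 120, 70), (10, 255, 255)),
    'green': ((40, 80, 60), (85, 255, 255)),
}

def block_color(image_bgr, region):
    """Classify the block inside region (x, y, w, h) by the dominant color."""
    x, y, w, h = region
    hsv = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    counts = {name: cv2.countNonZero(cv2.inRange(hsv, np.array(lo), np.array(hi)))
              for name, (lo, hi) in COLOR_RANGES.items()}
    return max(counts, key=counts.get)

frame = cv2.imread('start_pose.jpg')               # picture taken at the start pose
print(block_color(frame, (200, 150, 120, 120)))    # first block (placeholder ROI)
print(block_color(frame, (360, 150, 120, 120)))    # second block (placeholder ROI)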


Blogging (4th)

We hope this blog may inspire others to get interested in robotics and maybe even to participate in Pi Wars.

Wednesday, April 10, 2024

We got far … (April 2024)

 




On June 1, 2023, the return of Pi Wars was announced: Pi Wars 2024 - Disaster Zone

Within a week, we formed a team of autonomous robot hobbyists hoping to be able to compete.

Why did we start this?

As described in the  first blog of this series (We want to go far ...), we discussed our goals for the journey ahead:

  • have lots of fun
  • have a great weekend with like-minded people in the UK
  • learn new skills
    • Even though no one has experience with OpenCV, it is on everyone's to-do list.
  • satisfaction

So, where are we now?

Less than two weeks before the competition, we can safely say that we met (or will meet) all these goals.

We had lots of fun. Not only with all kinds of (crazy) ideas for approaching each challenge, but also testing together, seeing progress made by others in shared videos, finding solutions for issues together and regularly mocking each other.

Most of us (5 out of 6) gained OpenCV skills.











But other skills were also newly acquired or enhanced individually, like:

  • Python.
  • Running multithreaded Python on the robot.
  • Making molds for and casting silicone rubber 'all-terrain wheels' (for The Temple of Doom).
  • Defining and managing services using Systemd.
  • Getting a much better understanding of installing and using servos, after the spirit (smoke) left one of those in the nerf gun provided by another team member.

We are very satisfied that we managed to compete in every challenge at a level we can be happy with. The only sacrifice we had to make for that was to reduce our scope:

  • We decided to use remote control instead of autonomous for The Zombie Apocalypse challenge to have sufficient time to finish all other challenges.
  • We didn’t publish blogs as frequently as we had planned, but prioritized the challenges.

Even more satisfying is the way we were able to do this as a team. We have definitely seen the proverb ‘If you want to go fast, go alone. If you want to go far, go with others.’ hold true for this journey. It is great to work with a group of team players, where everybody contributes according to their possibilities, in both available time and skills.

What is still ahead?

Regarding the great weekend with like-minded people in the UK: That can’t go wrong as long as we manage to get there. We love talking robotics and admiring other people’s creations.

Although our journey can’t fail anymore, it would of course be the icing on the cake if our robot can shine in the challenges.


Tuesday, April 9, 2024

Story: Strike a Pose

Minimal Viable Product

At the end of June 2023, during our first team meeting, we defined the Minimal Viable Product for the Minesweeper challenge.



 MVP:

Info: Walls 30 cm high

Approach

In July we found quite a minimal approach, using only our wheel encoders and the Raspberry Pi Camera Module 3 Wide as sensors.



The wide camera has a diagonal field of view of 120 degrees and a horizontal field of view of 102 degrees. So we realized that if we were able to position the robot in the right pose, we might be able to:

  1. Cover four squares at once.
  2. See all 12 other squares that we’re not covering.

The name Magic Pose was coined immediately.

Only 4 Magic Poses, each near a corner of the arena, would be sufficient.

Determine current pose

A key element of the approach is being aware of the current pose relative to the arena at all times.

Although our odometry based on the wheel encoders is quite accurate, it may drift away from reality with every move we make. Plus we don’t know the exact pose at our start position.

So, every time we arrive at a Magic Pose, OpenCV’s Hough lines function is used, combined with the expected position of the center of the arena based on our odometry, to figure out the current actual pose.
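
A minimal sketch of what such a Hough lines step could look like; the Canny and Hough parameters are assumptions, and how the resulting line angles are combined with the odometry estimate is left out.

import cv2
import numpy as np

def arena_line_angles(image_bgr):
    """Return the angles (degrees) of the grid line segments visible in the image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    if lines is None:
        return []
    return [np.degrees(np.arctan2(y2 - y1, x2 - x1)) for x1, y1, x2, y2 in lines[:, 0]]

# Comparing these angles (and the detected arena center) with the pose expected
# from odometry gives the correction for the current actual pose.
print(arena_line_angles(cv2.imread('magic_pose.jpg')))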



At the start we simply assume the robot to be in the middle of the ‘bottom left’ corner square, facing the opposite wall, and drive to the closest Magic Pose, where the actual pose is determined for the first time.

Detect lit square

Do we need to be able to detect which square is lit?

Since the robot covers four squares, we don’t need to detect whether one of those lights up; we can just stay put.

Furthermore, we only have three possible destinations to drive to, the other three Magic Poses. So, after determining the current pose, a mask is created for each possible destination that covers the four tiles of that Magic Pose.
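
To illustrate the idea, here is a minimal sketch: count red pixels inside the precomputed mask of each candidate Magic Pose and drive to the one that lights up. The HSV ranges, the pixel threshold and the way the masks are built are assumptions.

import cv2
import numpy as np

def red_count(hsv, mask):
    """Count red pixels inside one destination mask (red wraps around hue 0)."""
    low = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 120, 80), (180, 255, 255))
    return cv2.countNonZero(cv2.bitwise_and(cv2.bitwise_or(low, high), mask))

def pick_destination(image_bgr, destination_masks, min_pixels=500):
    """destination_masks: {pose_name: binary mask covering its four tiles}."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    counts = {name: red_count(hsv, m) for name, m in destination_masks.items()}
    name = max(counts, key=counts.get)
    return name if counts[name] > min_pixels else None   # None: our own tiles lit, stay put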

Testing

No matter how good an idea, a reliable implementation needs to be proven by a lot of testing to validate assumptions and circumstances.

Of course we had to validate the right positions for the Magic Poses.

But we also did testing with:

  • Different environmental lighting.
    • It proved good to have some light on the robot as well.
    • Bright light may make (parts of) the black lines look white on the camera image.
  • Different red colors for the squares.
    • The detection should also not be set too sensitively.
  • Different types of lights for the squares.
    • Incredible how many shades of red exist.
  • Different floors for the arena.
    • The real white floor showed quite a few red pixels in the camera image when no square was lit.

Result

A month ago, this was the status:


Quite a few improvements have been implemented since then.

Although the execution seems reliable now, even last Saturday, two weeks before the competition, strange behavior during testing led to finding a bug in the code. Furthermore, the challenge arena and lighting in Cambridge will be a surprise for us.

So, you’re invited to join us on Sunday April 21 at Pi Wars 2024 – Disaster Zone in Cambridge to see the result.


Monday, April 8, 2024

Story: The Zombie Apocalypse - Shoot the undead! (Part 2)

After finalizing the nerf-gun Proof of Concept (POC1) around September last year (see blog entry Raspberry Pie...), it was time for some hardware upgrades. Since we are building 3 robots, we needed 3 nerf guns!

Learnings so far:

Nerf sizes:

While testing POC1, sometimes a nerf dart was activated by the trigger servo, but it did not shoot at all… Somehow the trigger servo range was too short, or, wait, looking at this particular dart, the dart was too short to reach the flywheels. Time to measure about 60 darts and determine the average length.

After some math, redrawing the trigger system in CAD and removing the darts that were too short or too long, this problem should not arise anymore.

Nerfs (not) falling down in the cartridge:

Since nerf darts are very light, small cavities and burrs inside the cartridge easily prevent them from falling down to the bottom. And when there is no dart at the bottom, the trigger mechanism is not firing… There you stand during the challenge… A tested, shooting nerf gun system, fully loaded, but the 2nd dart is not popping out… Oh nooo!

When you look at different systems, there are various options for forcing the darts down in the magazine. Of course there are the (commercial) spring-loaded devices, and there are also ‘contra mass’ (counterweight) solutions. Since a spring also damages the darts, the latter is the system we are using.

Another aspect is the 3D-printed magazines. Having an opening to see the darts is (almost) obvious, and it is handy for filling the magazine too. Although the edges were ground down a bit, there were still cavities and burrs inside, which could be enough to keep a dart from dropping. That’s why the next cartridges will be laser-cut from acrylic plates (and still deburred). This gives a smooth surface towards the darts.

Quick release mechanism:

For quickly swapping the nerf gun in and out of the robot, a quick-release mechanism was built:



Tilt & Pan…

Since we have a moving (and rotating) robot, in theory the gun would not need pan functionality. However, the robot was expected to rotate with an accuracy of about +/-1.5 [deg], which is still a huge range for a small target 2 meters away (at 2 m, 1.5 degrees corresponds to roughly 50 mm, comparable to the 62 mm width of a playing-card target). So a pan function might be handy. In this case the nerf gun can pan about +/-7 [deg], and together with the robot movements we should be able to shoot 360 [deg] around.

2nd CAM

Our fixed camera points downwards for all other challenges, so we decided to add a second CAM on the nerf gun, moving around with the gun.

Laser pointer

It’s just fun having a laser pointer onboard. It will definitely help during autonomous shooting, so at least there is some visual feedback for us as developers.

Maybe the laser pointer could be used for calibrating the gun, but let’s see where we end up (in time).

Nerf gun v2

So now everything is combined in this new design, this should work like a… 

 And after creating some more parts and assembling (Sept. 2023)…  

 Besides the original (green) test version, there are now three V2’s: 

 That looks awesome!


And ready for dispatch (Dec. 2023), within our team: 

 (The team did expand a little, so also the POC1 found a new home.)

Electronics

The new nerf gun was tested again with the temporary Arduino setup. This manual setup basically has some potentiometers, buttons and servo pins. Enough for playing around and testing the nerf gun in manual mode:

Time for some laser calibration!


Actuators/components used so far:

  • FVT LITTLEBEE Little bee BLHeli-s 30A ESC (ali)
  • RS2205 2300kv Motor (ali)
  • Tilt & pan: ES3302 servo (ali)
  • Trigger mechanism: Tower Pro SG92R servo


So it’s time to upgrade the electronics to enable autonomous shooting. For this setup, all the servos are now controlled by an Adafruit 16-channel servo board. This servo board is still driven by an Arduino, over I2C. There is a basic API on the Arduino, controlling the nerf gun and reporting tilt & pan positions if desired. Using a serial protocol, the RPi4 is able to communicate with the Arduino.
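
For illustration, a minimal sketch of what that serial link could look like from the Pi side using pyserial. The port name, baud rate and the ASCII commands ('PAN', 'TILT', 'FIRE', 'POS?') are made-up placeholders, not the actual API of our Arduino.

import serial   # pyserial

arduino = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)   # port and baud rate assumed

def send(command):
    """Send one ASCII command line and return the Arduino's reply line."""
    arduino.write((command + '\n').encode('ascii'))
    return arduino.readline().decode('ascii').strip()

send('PAN 5')        # hypothetical command: pan 5 degrees
send('TILT -2')      # hypothetical command: tilt 2 degrees down
send('FIRE')         # hypothetical command: shoot one dart
print(send('POS?'))  # hypothetical command: report current tilt & pan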

Turret like testing setup

Picture of nerf gun test setup, including the RPi:

For the final robot, the servo wires are connected to the robot’s motor driver board via a magnetic connector, for easy installation.

Testing environment

Pictures from different zombie sets. The Pi Wars zombies only became available later.

Zombies from PI-Wars

Zombies used for testing:

Around January 2024 it was time for some more testing. A box with a hole is an ideal testing platform for getting better insight into the shooting accuracy. It was placed about 2 meters away. If the nerf dart enters the hole, it hits an angled flap and (almost) every time drops down into the box. Shooting 5 darts per round, for over 20 rounds, a score of >80% was achieved. This gives some confidence.

Software

Finally some free time in February to play around with OpenCV and the nerf gun setup. There are different ways of detecting potential zombie pictures. An AI algorithm seemed a bit too far away, so let’s try something different.

The approach below seems to work: find ‘distortions’ in the field, then filter out all elements that are out of scope:

Step 1: take picture & scale 50%

Step 2: greyscale:



Step 3: Canny

Step 4: Dilation & Erosion



Step 5: Blobs generator

Step 6: Display targets found, with range of nerf gun & with minimum size
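
A minimal sketch of these six steps, with findContours standing in for the blob step; all thresholds, kernel sizes and the minimum target size are assumptions, not our tuned values.

import cv2

def find_targets(path, min_area=400):
    img = cv2.imread(path)
    img = cv2.resize(img, None, fx=0.5, fy=0.5)                    # step 1: scale 50%
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                   # step 2: greyscale
    edges = cv2.Canny(gray, 50, 150)                               # step 3: Canny
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    blobs = cv2.dilate(edges, kernel, iterations=2)                # step 4: dilation
    blobs = cv2.erode(blobs, kernel, iterations=1)                 #         & erosion
    contours, _ = cv2.findContours(blobs, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)        # step 5: blobs
    targets = [cv2.boundingRect(c) for c in contours
               if cv2.contourArea(c) > min_area]                   # step 6: filter by size
    for x, y, w, h in targets:                                     # display targets found
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return img, targets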

So recognition seems to work, and after some more coding the nerf gun could also be controlled manually via Python shell commands.

This also enables easy calibration. Since we lack a distance sensor, the calibration only works with a fixed distance from the target.
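
One possible way to express such a fixed-distance calibration, as a sketch: convert the horizontal pixel offset of a detected target into a pan angle, assuming a pinhole camera with a known horizontal field of view, plus a calibration offset found by test shooting (valid only at the calibration distance, since camera and gun are not co-located). The numbers are placeholders and this is not necessarily how our code does it.

import math

IMAGE_WIDTH = 1280        # pixels (placeholder)
HORIZONTAL_FOV = 66.0     # degrees (placeholder, depends on the camera)

def pan_angle_for(target_x, pan_offset=0.0):
    """Pan angle (degrees) to aim at a target detected at pixel column target_x.

    pan_offset is the calibration constant found by test shooting at a fixed
    distance, compensating for the camera/gun misalignment at that distance.
    """
    # Pinhole model: the angle from the image centre follows from the focal length in pixels.
    focal_px = (IMAGE_WIDTH / 2) / math.tan(math.radians(HORIZONTAL_FOV / 2))
    return math.degrees(math.atan((target_x - IMAGE_WIDTH / 2) / focal_px)) + pan_offset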

Together with the test team at home, the verification of the autonomous version:


After every shot, a screen dump is made. Not very interesting for the current test objectives, but when shooting real zombies, you know whether they are hit or not.

Upgrades V2.1

The 2nd CAM did not have the right resolution and did not add much value, so it was rejected.

The pan system had some play, which was noticeable when turning on the spinners: the gun turned slightly off target. This was solved with a rubber band between the pan servo and the fixed attachment point. Problem solved.

The cheap Tower Pro SG92R servo jitters a lot and sometimes the measured angle seems off; it is to be replaced with an ES3302 servo!

The pan angle was still very small and was upgraded to a +/-25 [deg] version, so the robot does not have to move at all. Although the new system still has too much play…

The final two working robots have been updated with these latest upgrades and are being prepared for Pi Wars 2024.

Manual wireless controller, for manual shooting via RPi.

Final thoughts

Personally, it has been a long way from the brainstorm session (June 2023) to a working autonomous version (February). I’m happy we succeeded, in some way, in shooting some zombies (and in my first RPi and OpenCV programming). The amount of work was not the issue; the real challenge was finding enough hobby time.

Within the team, we decided to drop the autonomous part for this challenge and go for manual shooting. There are still too many uncertainties, like:

  • the influence of the final background used, incl. lighting conditions,
  • training with the right zombie set,
  • the robustness of the code (image recognition would probably be better),
  • the bigger pan mechanism has too much play, resulting in vibrations during shooting,
  • (auto) calibration at the final scene; actually the system needs a distance sensor too…

Nevertheless, it was a nice experience and fun having these nerf guns shooting around!




Thursday, November 16, 2023

Story: Using AprilTags for calibration

 Apriltag

The github page of AprilTag [1] says it “is a visual fiducial system popular in robotics research”. At its core are barcode-like tags with a number encoded in such a way that you can also determine the location and orientation of the tag. 

The image above shows a sample AprilTag and a human-readable text. 

AprilTag has different families, and the 36h11 family shown is one of the recommended families, with a square of 6x6=36 data bits at the centre. More data bits mean more data can be encoded, but also make each bit smaller; hence, larger tags are required for detection as the distance from the camera to the tag increases.

The data bits are surrounded by a black border which, in turn, is surrounded by a white border. The outer black border shown is not a part of the tag but it prevents me from cutting the mandatory white border from the tag. It took me most of a Sunday to find out the missing white border was why my tags were detected sometimes (in retrospect: when the background is bright enough) but mostly not… 

The number 11 after the ‘h’ is the Hamming distance, the tolerance to incorrect bits. Not all bits of a tag might be correctly identified in a picture, e.g., due to lighting conditions. In such cases, the Hamming distance prevents reporting a tag with the wrong ID. Or, more importantly, reporting some random, non-tag pattern in an image as a valid tag.

A higher Hamming distance means more robustness but, as a consequence, a lower percentage of the 36 bits can be used to encode values. The 36h11 family allows 587 distinct tags to be encoded.

And finally, you might have guessed from the text, this tag is #20. 

The library

The AprilTag library locates tags in an image and returns their properties.

AprilTag is developed by the APRIL Robotics Laboratory at the University of Michigan [5], with their reference implementation coded in C [1].

Several python wrappers for this library are available. We decided to try pupil-apriltags: python bindings for the Apriltags3 library [2].

So, let’s install this and use it? Well… no. It turns out that the latest pupil-apriltags release [2] depends on functions introduced in numpy 1.20, while the current numpy version in Raspberry Pi OS is 1.19.5.

The obvious thing to do is update numpy, but since numpy is at the core of OpenCV (and probably a lot of other libraries), a change of version might introduce numerous dependency issues. So, based on the release date of numpy 1.20, pupil-apriltags version 1.0.4 was selected as the most likely candidate to work with numpy 1.19.5. And it works 😊
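
A minimal detection sketch with pupil-apriltags; the image path is a placeholder and the camera capture is left out.

import cv2
from pupil_apriltags import Detector

detector = Detector(families='tag36h11')

gray = cv2.cvtColor(cv2.imread('calibration.jpg'), cv2.COLOR_BGR2GRAY)
for tag in detector.detect(gray):
    cx, cy = tag.center
    print(f'tag #{tag.tag_id} at image position ({cx:.0f}, {cy:.0f})')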

Problem to the solution

So, we have tags and software that can locate those tags on camera images on the Pi. That’s cool but shouldn’t we be working on the Pi Wars challenges? 

Yes, we should. And we are. 

In our story on the Real Time controller it was mentioned that the base robot will be using odometry. This means it keeps track of the position of the base robot while moving, which allows the robot to move autonomously to a given position.

Of course, this won’t be perfect, since the estimated position will drift from the real one due to imperfections in odometry, e.g., due to wheel slip. But our expectation is that this will allow us to move faster with enough accuracy on short tracks. That is, faster than with only the feedback from the camera, with the lag of processing images and the communication between the Raspberry Pi and the Real Time controller.

But our main sensor is the camera, which will be used, for instance, to locate the barrels in the Eco-Disaster challenge. To effectively use odometry-based movements, we need to translate the location of a barrel on a camera image to its location on the floor.

Such translations are called homography and are supported by OpenCV [3]. The math looks quite intimidating (at least, to me), but using the OpenCV functions is not that complex [4] if you have 4 reference points on an image and their related positions on the floor.

This is where AprilTags come in: 4 tags are placed on known locations on the floor and the AprilTag library is used to determine the location of the tags on an image. And with those point pairs – image location and floor location – the transformation matrix is generated by OpenCV.

With this matrix, coordinates of objects on an image can be translated into coordinates on the floor, relative to the robot.
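
A minimal sketch of that calibration step: the four detected tag centres on the image, paired with their known floor positions, give the matrix, after which any image point can be mapped to floor coordinates. All coordinates below are placeholders.

import cv2
import numpy as np

# Four point pairs: tag centres in the image (pixels) and on the floor
# (mm, relative to the robot). The numbers are placeholders.
image_points = np.array([[412, 388], [861, 392], [203, 598], [1089, 604]], dtype=np.float32)
floor_points = np.array([[-200, 600], [200, 600], [-200, 300], [200, 300]], dtype=np.float32)

H, _ = cv2.findHomography(image_points, floor_points)

def image_to_floor(x, y):
    """Translate an image coordinate into a floor coordinate using the matrix H."""
    pt = np.array([[[x, y]]], dtype=np.float32)
    fx, fy = cv2.perspectiveTransform(pt, H)[0][0]
    return fx, fy

print(image_to_floor(640, 500))   # e.g. the pixel where a barrel touches the floor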


Image: four tags, set up for calibration. The number of each tag is shown next to a green cross identifying its centre. A custom-printed banner from an old project is used to put the tags on the designated positions, relative to the robot.

One practicality is ‘on the floor’. For calibration our AprilTags need to be (mostly) upright to be visible on the camera, given the distance to the tags (see image). The upright position raises the centre of the tag to 75 mm. This could of course be compensated in software, but, intimidated by the math, we chose to raise the robot during calibration.

Image: The robot raised for calibration with tag #2 in the background. 

Bonus

Now that we have AprilTag running, we can use it for other purposes. I guess we can’t put tags on zombies or above the blue and yellow areas of the Eco-Disaster during the challenges. But we might use them as references during testing. Or to launch the code of the different challenges based on the tag number in front of the camera...


[1] https://github.com/AprilRobotics/apriltag

[2] https://pypi.org/project/pupil-apriltags

[3] https://docs.opencv.org/4.x/d9/dab/tutorial_homography.html

[4] https://learnopencv.com/homography-examples-using-opencv-python-c/

[5] https://april.eecs.umich.edu/software/apriltag.html

Sunday, October 1, 2023

Story: Script for initializing GIT on Raspberry Pi

 When we need new software on our Raspberry Pi, we create a new image containing that software.

However, the image doesn’t contain the required GIT configuration to access our code for the Raspberry Pi as stored on GitHub, because we want each team member to use his own account for committing changes.

Although configuring GIT manually is not a huge task, it is a repeating task...

So, we created a script (roughly sketched after the list below) that:

  • Configures GIT on the Raspberry Pi with settings used on the Windows computer.
    • git user name and email
  • Generates an ssh-key on the Raspberry Pi and adds the public key to the GitHub account used on the Windows computer.
  • Clones the GitHub repository with our code for the Raspberry Pi to the Raspberry Pi.
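
For illustration, a rough Python sketch of the Pi-side steps; the real script is a PowerShell/SSH combination run from the workstation, and the user details, key path and repository URL below are placeholders.

import subprocess
from pathlib import Path

# Placeholders; the real script takes these from the Windows git/GitHub setup.
GIT_NAME = 'Team Member'
GIT_EMAIL = 'member@example.com'
REPO = 'git@github.com:Dutch-Rescue-Team/robot-code.git'   # placeholder repository
KEY = Path.home() / '.ssh' / 'id_ed25519'

def run(*cmd):
    subprocess.run(cmd, check=True)

run('git', 'config', '--global', 'user.name', GIT_NAME)
run('git', 'config', '--global', 'user.email', GIT_EMAIL)
if not KEY.exists():
    run('ssh-keygen', '-t', 'ed25519', '-f', str(KEY), '-N', '')
# The public key is then registered on the GitHub account from the workstation,
# e.g. with the GitHub CLI: gh ssh-key add id_ed25519.pub --title drt-pi
run('git', 'clone', REPO, str(Path.home() / 'drt'))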

Some preparation on the Raspberry Pi

The installation of the packages ‘expect’ and ‘pwgen’ on the Raspberry Pi is done as part of our image generation.

Preparation Windows development workstation

Quite some preparation is required on the Windows development workstation, because you need passwordless SSH access to your GitHub repositories and a passwordless GitHub CLI.



Disclaimer

It sounds better than it is at this moment, because sometimes the script hangs or fails for unknown reasons. However, in general it just works and takes care of configuring git on the Raspberry Pi and making our repository available in less than 20 seconds, with one command. If it takes more time, we just stop the PowerShell script with CTRL-C and rerun.

Improving this will probably not reach the top of our priority list until after Pi Wars 2024.

Resources 

Story: Generating the DRT Raspberry Pi image

 

Why?

Because we are building three ‘identical’ robots, we not only have to align the components used and their assembly, but also the software we run. If tests on each robot were done with different Python libraries installed on the Raspberry Pi, we might get unpredictable side effects when running the same code, due to differences in the installed software.

So, to ensure we really run the same code, we decided to maintain bespoke DRT SD card images, containing the operating system and all other software, except for the configuration and code maintained in our git repository. Loading a specific DRT image version on an SD card, followed by cloning a specific git commit of our repository to it, should result in the exact same Raspberry Pi configuration every time.

How?

We’ve chosen not to create our images from an existing SD card, but to use the pi-gen tool. Although in hindsight it would probably have been more sensible to just clone pi-gen and modify it, we made a ‘wrapper’ with only our configuration and a script that loads the pi-gen code, replaces the standard build configuration with our own and just runs pi-gen.



The secrets used in our configuration are managed using KeePassXC and stored in KDBX 4 format.
Current secrets stored:

  • Default username and password for our image.
  • The credentials for the wifi networks often used by the team members.
The password file and its passphrase are not stored in our git repo, but the configuration file containing the location of those is.

A DRT image can be written to an SD card using the Raspberry Pi Imager, optionally using the advanced options.



What?

We use the standard first four stages of pi-gen to generate the Normal Raspberry Pi OS image.

The standard stage 5 is replaced by our own stage 5 to take care of the specific DRT packages and configuration, like:

  • enabling SSH access
  • enabling VNC Server
  • enabling RDP Server
  • Git Cola
  • Thonny
  • OpenCV
  • Numpy
  • Matplotlib
  • python_json_config
  • pupil-apriltags
  • wifi networks used by team members

Resources

Tool used to create the official Raspberry Pi OS images

https://github.com/RPi-Distro/pi-gen

DRT pi-gen wrapper with bespoke configuration

https://github.com/Dutch-Rescue-Team/drt-gen

Introduction to Raspberry Pi Imager

https://www.raspberrypi.com/news/raspberry-pi-imager-imaging-utility/


Story: Tooling - Dynamometer…

Since some of us lack experience with our custom real-time controller (RTC) board and also with working via the Raspberry Pi, it’s time to discover the RTC’s capabilities directly from the PC’s USB terminal. Now there is a stable connection and some long wires…

Driving around on a desk is not very handy, so the next obvious step is building a dynamometer test bench.

The dynamometer

Goal: wheel calibration.

The basic thing to test is driving straight forward and measuring the distance traveled by both wheels. And if the distance and the time it took are known, the speed profile can also be derived…

Way of measuring:

So the wheels need to be placed on encoders that directly read the wheel rotations, while the robot should not drive off the test bench. Keeping the robot wheels always in the same location on top of the encoder requires a second idler bearing. Now the wheels can rotate without the robot moving off the test bench.


Degrees of freedom:

In this way, forward motion (X) and rotation in the horizontal plane (rZ) of the robot are eliminated. One idler bearing should be replaced with a v-grooved version, so side motion (Y) is also fixed. (But this setup seems to work fine too.)

Electronics:

Ideally, incremental encoders would be used: just reading pulses quickly enough tells the traveled distance. It’s basically the same as what our robot does. Unfortunately, such encoders were not lying around.

But we found some absolute encoders inside well-designed feeders from an Ultimaker printer.

And the good thing about that: next to the encoders, there are also nicely machined filament gripper wheels and bearings inside those feeders, which are just wide enough to fit our wheels. This is even more interesting since these gripper wheels have a good grip pattern too. The encoder reads changes in magnetic orientation from the gripper wheel, so the encoder itself is decoupled from the gripper wheel.

Since there is already some code available for reading these AMS-AS5048B encoders (via arduino), this project is a piece of cake.

Encoder types:

The handy thing about incremental encoders in this case: after one encoder rotation, you can simply continue reading pulses and calculate the actual position. Basically, the number of encoder turns does not matter at all.

With absolute encoders, after every full rotation the ‘current’ angle starts at zero again, which generates a ‘saw-tooth’ graph. And suppose you have jitter right around this zero point: what will your code report?

This AMS encoder reports angle information over I2C and also provides the number of rotations via an extra ‘PWM’ pin. But I have no way to read out the PWM signal, so let’s solve this problem in software.

Response rates & actual positions:

For this project an Arduino Mega is used, sending ‘current time’ and 2x ‘position’ information back to the PC. At first the numpy.unwrap() function was used, which converts the saw-tooth angle information into a continuous line. For that to work, you need enough data points within one rotation.
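
For illustration, roughly what that unwrap step looks like on the PC side; the readings and the encoder wheel diameter are made-up values.

import numpy as np

# Saw-tooth angle readings from the absolute encoder, in degrees (made-up data).
angles_deg = np.array([350.0, 355.0, 359.0, 3.0, 8.0, 12.0])

# unwrap() works in radians: it removes the jumps at the 360 -> 0 crossings.
continuous_deg = np.degrees(np.unwrap(np.radians(angles_deg)))

ENCODER_WHEEL_DIAMETER = 12.0   # mm, placeholder
distance_mm = (continuous_deg - continuous_deg[0]) / 360.0 * np.pi * ENCODER_WHEEL_DIAMETER
print(distance_mm)   # traveled distance per sample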

Since the gripper wheels are relatively small compared to our robot wheels, the encoder rotates about 8 times faster. Sending these three numbers over USB every time slows down the maximum reading speed of the Arduino a lot, and with it the maximum testing speed of the robot. So it works, but it is not ideal.

Instead of using the unwrap() function, it was time to rewrite the code. The Arduino now directly calculates the traveled distance. Once the dynamometer is calibrated, this ‘should’ be a constant anyway. To speed up the readings, the Arduino measures as fast as possible, but reports new positions to the PC only every 20 [ms]. This helps a lot when reading maximum travel speeds. To solve the jitter issue around the zero point, a technique similar to incremental encoders is used.

So rotating from/to angle:

  • 359 → 1 degree: positive rotation, so: total_rotations += 1
  • 1 → 359 degree: negative rotation, so: total_rotations -= 1

This logic seems simple, but, for instance, looking at a positive rotation across zero (359 → 1), how does the Arduino know? After all, “PreviousAngle > CurrentAngle” is also true for almost every ordinary negative step!

So the encoder range is ‘divided’ into quadrants, where the 1st quadrant runs from 0 to 90 degrees and the 4th quadrant from 270 to 360 degrees, or, in the example below, using the 14-bit encoder output:

long CompensateFullRotation(long CurrentAngle, long PreviousAngle) {
  // Positive rotation across zero: previous reading in the 4th quadrant,
  // current reading in the 1st quadrant.
  if (PreviousAngle > 12288 and CurrentAngle < 4096) {
    return 1;
  // Negative rotation across zero: the other way around.
  } else if (PreviousAngle < 4096 and CurrentAngle > 12288) {
    return -1;
  } else {
    return 0;
  }
}

Finally the position calculation for wheel A is straight forward:

  • Angle_A = CurrentAngleA / 4096
  • Rotations_A += CompensateFullRotation( CurrentAngleA, PreviousAngleA )
  • Distance_wheel_A = ( Rotations_A + Angle_A ) * Pi * Diameter_Encoder_Wheel

Again, this assumes the angle never changes by more than one quadrant between two readings, which seems to be the case. Instead of quadrants, a three-zone approach would most likely also work, which would increase the maximum reading speed a bit…

Calibration dynamometer:

Hmm, now there are 2 devices to calibrate… For the dynamometer a one-meter ruler is used, which is gently pushed across the gripper wheel; this is repeated a couple of times to get an averaged output. The result is somewhat different from the measured gripper wheel diameter, so the calculated Diameter_Encoder_Wheel is tuned a little. (Which was expected.)

Testing the robot:

Since the dynamometer is assumed to be correct, it’s time to test the robot.

The command DriveXY: 1000 mm in forward direction, at a speed of 200 mm/sec.

 

This graph shows both left & right wheel displacements and also the speed in green.

Some remarks:

  • The robot did ‘move’ around 1000 [mm]
  • There is a slight difference shown between the left & right wheel, which is correct, since both wheels do not have the exact same diameter and in this case, it was not yet calibrated for that.
  • There is a nice speed ramp up and ramp down shown.
  • The average speed is calculated by: numpy.diff(distance) / numpy.diff(time)
    This gives some (expected) noise, but the average is around 300 mm/s…
    Somehow, the speed does not match the commanded input value. (A sketch of this PC-side analysis follows below.)
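
A sketch of the PC-side analysis that produces such a graph, assuming the Arduino lines ‘time [ms], left distance [mm], right distance [mm]’ were logged to a CSV file; the file name and format are assumptions.

import numpy as np
import matplotlib.pyplot as plt

# Logged lines from the Arduino: time [ms], left and right distance [mm] (placeholder file).
log = np.loadtxt('dyno_run.csv', delimiter=',')
t, left, right = log[:, 0] / 1000.0, log[:, 1], log[:, 2]

speed = np.diff(left) / np.diff(t)   # average speed between samples [mm/s]

plt.plot(t, left, label='left wheel [mm]')
plt.plot(t, right, label='right wheel [mm]')
plt.plot(t[1:], speed, label='speed [mm/s]')
plt.xlabel('time [s]')
plt.legend()
plt.show()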

Conclusion:

Calibration is an interesting topic and I’m just starting to understand a bit more about odometry and the way our RTC works, although we do realize this dynamometer will not solve all issues. Every robot has some systematic and non-systematic errors. Most likely, this dynamometer can be used to reduce (some?) systematic errors.

For instance, when just moving in a straight line, calibrating wheel diameter differences might be a good option. Driving on different floor types will result in different non-systematic errors, so the final calibration will always depend on the floor and the speed you drive.

This project is not finished, and after solving the speed issue it’s time to do some more dynamometer test runs. Besides that, it’s now also possible to drive the robot wirelessly, so it’s a good moment to check the calibrated values on the floor. To be continued.

Tools... (August 2023)

This month we worked on assembling the first prototype of our robot, for all 3 instances. For alignment we organised a physical meeting on August 19 for a craft/testing day with the whole team, resulting in three similar-looking robots...


Further we continued discussions on topics like:

  • Servo control for our nerf gun.
  • How to connect extensions to our robot.
  • How to split up and store configuration.
    • Overall configuration.
    • Robot specific configuration based on calibration for each specific robot.
  • Line detection for Lava Palava.


With our Realtime Controller available, we’ve been busy testing it further, for some of us exploring it, and of course testing controlling it from the Raspberry Pi.

Further quite some time was spent on the tools described in the following stories.

Wednesday, September 20, 2023

Story: The Team

The Dutch Rescue Team consists of 5 members spread across The Netherlands, all autonomous robot hobbyists.

Skills

Our combined skills:

  • mechanical
  • electronics
  • 3D print design
  • robotics
  • programming

Per team member, per skill, the experience varies both in time (0 to +20 years) and level (novice to expert). As mentioned in the previous blog, none of us has OpenCV skills, but we’re all eager to acquire those. Further we have many knowledgeable contacts in The Netherlands and Belgium able and willing to advise us and review our designs.

Team is so abstract ...

You will probably not see all our faces online, but if admitted all five of us will be present at Pi Wars 2024 to get to know many other attendees.

A few of us during an online meeting on Pi Wars.

 



Other facts

  • We’re not living close to each other.
  • Physical meetings are limited to approximately once per month.
  • Online meetings are more frequent.
  • Every team member has the option to build their own DRT robot.
  • As a result, three identical robots are being built with the same components and will run the same code.
  • Having the three robot instances also increases testing capacity.
  • Most coding and testing will be done at home.
  • Our physical meetings are mostly used for discussions, demos and diagnosing together.

Story: The Zombie Apocalypse - Shoot the undead!

 Aim of the challenge

Zombies have taken over the Pi Wars Tower! It is up to you to rid the building of the "differently mortal".

Shoot all the zombies in the Tower and save the inhabitants. The targets will be at a number of different levels and you must use projectiles as your "anti-zombie device" to knock them over.


What a fun challenge!

I’ve always wanted to hack an automated shooting turret, so this is the perfect opportunity to do so. It’s still a bit unknown what the exact challenge will look like, but somehow we need to shoot the undead…


The arena

Targets:

  • Targets will be no smaller than a standard playing card, 62 mm wide by 88 mm long

Shooting rounds:

  • 3 rounds
  • every round 5 shots
  • 5 minutes maximum


Shooting methods

Using nerf darts was unanimously chosen as our favorite way of shooting projectiles. But what options do you have for shooting nerf darts?

Watching previous editions of Pi Wars and doing some research on the internet, we identified three different ways of shooting nerf darts:


Although compressed air might be the best option, generating compressed air is not the easiest to accomplish. There are also ways of shooting with pre-tensioned springs, but these might be less optimal than spinning motors. So spinning motors are our favorite way.


Proof of Concept (POC1)

After some tinkering, a first proof of the pudding was created. It is a mobile shooting device with 2 spinners, a trigger mechanism and a 3D-printed 5-dart magazine. It shoots quite far, at least 15 meters, which is not really needed.

User acceptance test

Okay, our little secret is our test team. Here we did a shooting test, shooting darts in the garden. It was an obvious hit!


First shooting series




First performance test

Without going into too much detail today, first impressions of precision seem hopeful…


There are (of course) still some minor issues to be solved, which might be the subject of another post.

To be continued…
