Overengineering-squared/Overengineering-squared-RoboCup

This is the repository for the German team Overengineering² (Georg-Büchner-Gymnasium, Seelze), which competed in the RoboCup Junior sub-league Rescue-Line.

About the Competition

"The land is too dangerous for humans to reach the victims. Your team has been given a challenging task. The robot must be able to carry out a rescue mission in a fully autonomous mode with no human assistance. The robot must be durable and intelligent enough to navigate treacherous terrain with hills, uneven land, and rubble without getting stuck. When the robot reaches the victims, it has to gently and carefully transport each one to the safe evacuation point where humans can take over the rescue. The robot should exit the evacuation zone after a successful rescue to continue its mission throughout the disaster scene until it leaves the site. Time and technical skills are essential!"

[source: https://junior.robocup.org/wp-content/uploads/2024/04/RCJRescueLine2024-final-1.pdf]

As this scenario introduction from the official 2024 Rescue-Line rules suggests, the goal of the competition is to develop a fully autonomous robot that performs various tasks on a series of different obstacle courses. The team whose robot completes the course with the fewest errors and human interventions wins. Along the course, the robot has to follow a black line while handling gaps, intersections (green markers indicate which direction the robot should turn), speed bumps, large obstacles, debris, ramps, and seesaws. In addition, the robot must recognize the silver reflective strip at the entrance to the so-called evacuation zone (130 cm x 90 cm, walls at least 10 cm high), drive into it, first pick up two silver balls (live victims) and place them in the green evacuation point (6 cm high), then pick up a black ball (dead victim) and place it in the red evacuation point. Finally, the robot should find the exit of the evacuation zone (marked with a black line), follow the black line from there, and stop at a red line, remaining stationary.

There are very few restrictions on the robot. In general, it only needs to be small enough to pass under tiles that form bridges over other tiles: the bridges are 25 cm high and supported by pillars at the corners of the tiles, leaving an entry/exit width of 25 cm. Beyond that, there are only a few part restrictions, which concern components developed solely for a specific competition task (e.g., ready-to-use line-following cameras).

About the project

To accomplish all the tasks mentioned above, we decided on a completely camera-based approach built around a Raspberry Pi. Development of this design started in March 2022 and has been iterated on continuously since then (through 2024).

With this latest iteration of the robot design, we achieved first place (Individual Team) at the RoboCup Junior World Championship 2024 in Eindhoven. (See the results here.)


As required by the competition rules, and in addition to this GitHub repository, we documented the robot and our development process in detail in an Engineering Journal, a Team Description Paper, and a Poster (all three of which earned the highest possible score at the 2024 World Championship in Eindhoven).

Components

Aside from the custom 3D-printed robot chassis and wheels customized to fit our mounting hubs, we mostly used off-the-shelf components for their reliability and active support. Our latest iteration used the following parts:

Software

Main Program

Our main program is written in Python, using primarily OpenCV and NumPy for the image processing while following the black line. Different parts of the program, such as the communication with one of the Arduino Nanos that gathers sensor measurements over USB serial, are split into separate files so they can run simultaneously via Python's multiprocessing and make full use of the available resources. Some parts of the image processing are additionally accelerated by the just-in-time compiler Numba. For more information about our image processing, see our Engineering Journal, TDP, or Poster.
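To illustrate this structure, here is a minimal, hypothetical sketch of one process reading sensor values from an Arduino Nano over USB serial (via pySerial) and handing them to the main loop through a multiprocessing queue. The port name, baud rate, and message format are assumptions for illustration, not our actual protocol:

    import multiprocessing as mp
    import serial  # pySerial

    def sensor_worker(queue):
        # Continuously read newline-terminated readings from the Arduino
        # Nano over USB serial and forward them to the main process.
        with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
            while True:
                line = port.readline().decode(errors="ignore").strip()
                if line:
                    queue.put(line)

    if __name__ == "__main__":
        readings = mp.Queue()
        mp.Process(target=sensor_worker, args=(readings,), daemon=True).start()
        while True:
            # The image-processing loop would run here; sensor data is
            # consumed without blocking the camera pipeline.
            if not readings.empty():
                print("sensor:", readings.get())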

GUI

We also used CustomTkinter to create a visually appealing graphical user interface (GUI) for the display on our robot. The GUI (see the image below) shows the feeds from the two cameras with the applied image processing, readings from the installed sensors, timers tracking the total run time and the time spent in the evacuation zone, and various other debugging information. It also contains a detailed rotating model of our robot, displayed using 5580 images that were pre-rendered in Blender and are selected according to the real-time rotation values obtained from the gyro sensors.
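As a sketch of how such a pre-rendered model display can work, the snippet below maps a yaw angle from the gyro to one of N pre-rendered frames. The frame count, file naming, and single-axis mapping are simplifying assumptions (our actual model uses 5580 renders):

    # Minimal sketch: select a pre-rendered Blender frame from the
    # current yaw angle. Frame count and file naming are illustrative.
    N_FRAMES = 360  # e.g., one frame per degree of yaw

    def frame_for_yaw(yaw_degrees: float) -> str:
        index = int(yaw_degrees % 360.0 / 360.0 * N_FRAMES) % N_FRAMES
        return f"renders/robot_{index:04d}.png"

    # In a CustomTkinter GUI, an image label would then be updated with
    # the frame matching the latest gyro reading, e.g.:
    #   label.configure(image=preloaded_images[frame_for_yaw(gyro_yaw)])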

AI

We chose to switch to AI models for the following tasks because we believe this is the most efficient way to solve them. Our previous image processing methods (such as OpenCV's Hough Circle Transform) had many flaws, and we believe AI models solve these tasks with less effort and better reliability.

Victim Detection

The victim detection inside the evacuation zone relies on a self-trained YOLOv8 model. The model was trained on a dataset of 3145 images (available here) using Google Colab. Together with a wide-angle camera and a Coral USB Accelerator, the model detects the victims with high accuracy at a high frame rate. Combining everything, we can reliably complete the evacuation zone in under two minutes, or in 52.36 seconds when optimized for speed ;). (See the video here.)
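For reference, running such a detector with the Ultralytics API looks roughly like the sketch below. The weights file, camera index, class names, and confidence threshold are illustrative assumptions, and the Edge TPU export needed for the Coral USB Accelerator is omitted here:

    import cv2
    from ultralytics import YOLO

    # Hypothetical weights file and camera index; the Edge TPU export
    # required for the Coral USB Accelerator is omitted in this sketch.
    model = YOLO("victim_detector.pt")
    camera = cv2.VideoCapture(0)

    ok, frame = camera.read()
    if ok:
        result = model.predict(frame, conf=0.5, verbose=False)[0]
        for box in result.boxes:
            label = result.names[int(box.cls)]     # e.g., "silver_ball"
            x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding-box corners
            print(label, (x1, y1, x2, y2), float(box.conf))
    camera.release()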

Evacuation Zone Entrance Detection

Just like the victim detection, the detection of the silver reflective strip at the entrance of the evacuation zone is handled by a YOLOv8 model, but one using the classification task instead. After collecting 10998 images for the dataset (available here) and training the model using Google Colab, it detects the silver strip with extremely high accuracy and has so far never failed us. The model is exported with the Ultralytics export mode to ONNX for increased performance and runs on the CPU of our Raspberry Pi 5.
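The export and the CPU inference can be sketched as follows; the file names and camera index are assumptions for illustration:

    import cv2
    from ultralytics import YOLO

    # One-time export (sketch): convert the trained classifier to ONNX.
    # The weights filename is an illustrative assumption.
    YOLO("entrance_classifier.pt").export(format="onnx")

    # At runtime, the ONNX model is loaded the same way and runs on the
    # Raspberry Pi's CPU.
    model = YOLO("entrance_classifier.onnx", task="classify")
    ok, frame = cv2.VideoCapture(0).read()
    if ok:
        result = model.predict(frame, verbose=False)[0]
        # Top-1 class (e.g., "silver" vs. "no_silver") and its confidence.
        print(result.names[result.probs.top1], float(result.probs.top1conf))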




Our robot's capabilities are probably best demonstrated by a "perfect run" (without human intervention or error) at the 2024 World Championship in Eindhoven. (See the video below.)


Why open source?

We decided to open-source our code because we think it can be a great learning resource for teams wanting to try out camera-based robots, as well as for current teams looking for resources on them. We also think it's important to share knowledge, and we encourage you to do the same, whether by uploading videos of your scoring runs to YouTube, open-sourcing your own code, or simply chatting with other teams at competitions.

About the team

The team was originally founded in 2017 by two former members and began with a Lego Mindstorms NXT robot. The current team (Tim & Marius) switched to a camera-based robot around March 2022, when Tim joined the robotics club. Besides preparing for our graduation (Abitur) this year, we kept iterating on the robot's design until now (2024) and managed to win first place (Individual Team) at the 2024 World Championship in Eindhoven. Our progress can be traced through the many YouTube videos we have uploaded over the years. More information about the team can be found in the Engineering Journal, the TDP, or the Poster.

We will not participate in any RoboCup Junior competitions in the future, as we will both be university students in 2025.

Achievements

  • 1st @ Local School Qualifying Tournament 2022
  • 22nd @ German Open Kassel 2022
  • 2nd @ Qualifying Tournament Hanover 2023
  • 2nd @ German Open Kassel 2023
  • 3rd @ European Championship Varaždin 2023
  • 1st @ Qualifying Tournament Hanover 2024
  • 1st @ German Open Kassel 2024
  • 1st @ World Championship Eindhoven 2024

Links

License

This project is licensed under the GNU GPLv3 License - see the LICENSE file for details.
Additionally, you can read up on the license here.

We will answer questions whenever we can, but please don't expect active support for this repository.