Our hard work finally paid off, as we were crowned the Best Overall and 1st Place champions of the Droid Racing Challenge 2023! Thank you so much to the QUT Robotics Club for hosting such an awesome event and for allowing us to attend!
The winning algorithm we ended up using is located in woflydev/odyssey_lsd. Take a look if you're interested!
Visit our website to learn more about us and the team. We are a passionate group of individuals developing a custom-made robot designed to compete with universities around Australia in the Droid Racing Challenge. The DRC is held annually in July, with teams from all around the country flying to Brisbane's QUT Gardens Point campus to compete. Last year, we finished an honourable 7th out of 15 universities nationwide, beating both the University of New South Wales (UNSW) and the University of Sydney (USYD), and became the youngest team ever to compete in DRC history.
We operate under the GPLv3 license, which means that if you distribute a product built on our code, it must also be open-source under the same license and available to the general public.
- NVIDIA Jetson Nano for all computations
- GPIO -> motor encoders for motor control
- TensorFlow (Python) for AI
- Inference accelerated with TensorRT
- Autoencoders with symmetric skips for semantic segmentation (sketched below)
- Node.js for web panels / real-time visualisation
- OpenCV + NumPy (Python) for image processing
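For context, here is a minimal sketch of what an encoder-decoder with symmetric skip connections can look like in TensorFlow/Keras. The layer widths, input size, and class count are illustrative placeholders, not our actual segmentation model:

```python
# Illustrative autoencoder with symmetric skip connections for
# semantic segmentation; all sizes are placeholders, not the real model.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_segmenter(input_shape=(128, 256, 3), num_classes=2):
    inputs = layers.Input(shape=input_shape)

    # Encoder: each stage halves the spatial resolution
    e1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(e1)
    e2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(e2)

    # Bottleneck
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and add back the symmetric encoder feature map
    d2 = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(b)
    d2 = layers.Add()([d2, e2])  # symmetric skip
    d1 = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(d2)
    d1 = layers.Add()([d1, e1])  # symmetric skip

    # Per-pixel class probabilities
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(d1)
    return Model(inputs, outputs)
```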
| Repo | Description |
|---|---|
| woflydev/odyssey_nnn | New and refreshed implementation of Project Odyssey's CNN driver. |
| woflydev/odyssey_lsd | New Lane Segment Detection implementation for Project Odyssey. |
| woflydev/odyssey_data | Unity simulation to generate virtual road scenes for training the AI. |
| woflydev/odyssey_img | Data exported from woflydev/odyssey_data. |
If you do plan on running this on a Jetson Nano, be aware that you will need to:
- rebuild OpenCV with GStreamer support (a quick check for this is shown below)
- install TensorFlow with GPU support enabled (much, much harder than it seems)
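A quick way to verify the first point: OpenCV reports its compile-time flags at runtime, so you can confirm GStreamer support before going further:

```python
import cv2

# Print the GStreamer line from OpenCV's build information;
# it should read "GStreamer: YES" on a correctly rebuilt install.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line.strip())
```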
If you plan on using our API and code, be aware that:
- some of the documentation below is outdated
- we do not plan on updating the documentation here
Git (recommended):

```bash
git clone https://github.com/woflydev/odyssey_cnn.git
cd odyssey_cnn
pip install -r requirements.txt
```

GitHub CLI:

```bash
gh repo clone woflydev/odyssey_cnn
cd odyssey_cnn
pip install -r requirements.txt
```
Usage of our custom motor controller is as follows:

```python
from driver.driver import move, off

# 'move' takes left and right speed values between -100 and 100,
# and an optional timeout value in ms.
move(50, 50, 1000)
move(10, 10)

# 'off' is shorthand for move(0, 0)
off()
```
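For a rough idea of what sits behind this API, below is a hypothetical sketch of a `move()`/`off()` pair built on Jetson.GPIO PWM. The pin numbers, PWM frequency, and omitted direction handling are placeholders, not our actual `driver.py`:

```python
# Hypothetical motor driver sketch on top of Jetson.GPIO;
# pins and frequency are placeholders, not the real wiring.
import time
import Jetson.GPIO as GPIO

LEFT_PIN, RIGHT_PIN = 32, 33  # PWM-capable header pins on the Nano
GPIO.setmode(GPIO.BOARD)
GPIO.setup([LEFT_PIN, RIGHT_PIN], GPIO.OUT)

left_pwm = GPIO.PWM(LEFT_PIN, 100)   # 100 Hz carrier
right_pwm = GPIO.PWM(RIGHT_PIN, 100)
left_pwm.start(0)
right_pwm.start(0)

def move(left, right, timeout=None):
    """Set wheel speeds in [-100, 100]; optionally stop after timeout ms."""
    # Direction-pin handling for negative speeds is omitted for brevity.
    left_pwm.ChangeDutyCycle(min(100, abs(left)))
    right_pwm.ChangeDutyCycle(min(100, abs(right)))
    if timeout is not None:
        time.sleep(timeout / 1000)
        off()

def off():
    move(0, 0)
```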
Usage of our motor throttle calculation algorithm is as follows:

```python
from throttle_calc import throttle_calc  # assuming the callable shares the module's name

r = 10     # where 'r' is base speed out of 100
theta = 0  # where 'theta' is desired heading, with 0 being straight

# throttle_calc returns a (left, right) tuple. do with that what you will.
left_speed, right_speed = throttle_calc(r, theta)
```
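The algorithm itself lives in the repository, but as an illustration of the general idea, a plausible differential-drive formulation (not necessarily the shipped one) looks like this:

```python
# Illustrative differential-drive throttle calculation; one plausible
# formulation, not the repository's actual throttle_calc.
def throttle_calc(r, theta, gain=1.0):
    """Map base speed r (0-100) and heading theta (degrees, 0 = straight)
    to a (left, right) wheel-speed tuple clamped to [-100, 100]."""
    def clamp(v):
        return max(-100.0, min(100.0, v))
    # Positive theta steers right: speed up the left wheel, slow the right.
    return clamp(r + gain * theta), clamp(r - gain * theta)

left_speed, right_speed = throttle_calc(10, 15)  # gentle right turn
```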
More functions can be found below.
```python
from z_opencv_driver import z_opencv_driver  # assuming the callable shares the module's name

z_opencv_driver(source, time_delay, log_level)
```

You can also call it from the command line:

```bash
python z_opencv_driver.py source time_delay log_level
```
| Parameter | Type | Description |
|---|---|---|
| `source` | string/int | Required. Lane detection source material. |
| `time_delay` | int/float | Slows down playback for debug analysis. |
| `log_level` | string | DEBUG, INFO, ERROR, or CRITICAL levels can be selected. |
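For example, to replay the bundled test footage slowly with verbose logging (the argument values here are just an illustration):

```python
from z_opencv_driver import z_opencv_driver

# Replay the bundled test clip at reduced speed with verbose logging.
z_opencv_driver("data/TestTrack.mp4", 0.05, "DEBUG")
```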
```python
from y_data_extractor import y_data_extractor  # assuming the callable shares the module's name

y_data_extractor(file_path, file_name, output_dir)
```

Alternatively, call from the command line:

```bash
python y_data_extractor.py file_path file_name output_dir
```
| Parameter | Type | Description |
|---|---|---|
| `file_path` | valid system path | Required. Dashcam footage path for data extraction. |
| `file_name` | valid filename | Required. Dashcam footage filename. |
| `output_dir` | valid system path | Required. Directory in which extracted data should be written. |
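For example, to pull frames out of the bundled dashcam clip (the output directory here is a placeholder):

```python
from y_data_extractor import y_data_extractor

# Extract training data from the bundled dashcam footage.
y_data_extractor("data/", "lane_dashcam.mp4", "data/extracted/")
```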
Please check woflydev/odyssey_lsd for our winning CNN Lane Segment Detection.
```bash
git clone https://github.com/woflydev/odyssey_lsd.git
cd odyssey_lsd
pip install -r requirements.txt
sudo chmod +x permissions.sh  # required for Arduino port access
```
For more ways of using the LSD repository, click here.
```
📦 odyssey_cnn/
├── README.md
├── data
│   ├── TestTrack.mp4
│   ├── img
│   │   ├── 40deg dep.jpg
│   │   ├── 45dep.jpg
│   │   ├── depth_correction.jpg
│   │   ├── lane_dashcam_hsv.png
│   │   ├── school_tape.jpg
│   │   ├── school_tape2.jpg
│   │   ├── school_tape3.jpg
│   │   ├── school_tape4.jpg
│   │   ├── school_tape5.jpg
│   │   ├── school_tape6.jpg
│   │   ├── self_car_data_hsv.png
│   │   ├── test_lane_video2_hsv.png
│   │   ├── test_white.jpg
│   │   └── video_extract.png
│   ├── lane_dashcam.mp4
│   ├── models
│   │   ├── nav
│   │   │   └── train.ipynb
│   │   └── obj
│   │       └── object_model_placeholder
│   ├── self_car_data.mp4
│   └── test_lane_video.mp4
├── dependencies.sh
├── requirements.txt
├── utils
│   ├── camera_tools
│   │   ├── v_show_video.py
│   │   ├── w_depth_correct.py
│   │   ├── w_newChessCalibrator.py
│   │   ├── w_new_u_turn.py
│   │   ├── w_pickTransform.py
│   │   ├── w_plot.py
│   │   └── y_test_image_processing.py
│   └── motor_lib
│       ├── BTS7960.pdf
│       ├── README.md
│       ├── controller.py
│       ├── driver.py
│       ├── driver_test.py
│       ├── old_controller.py
│       └── pair.sh
├── y_data_extractor.py
├── y_hsv_picker.py
├── z_cnn_driver.py
└── z_opencv_driver.py
```
- OpenCV
- TechRule's implementation of NVIDIA's DAVE-2 Convolutional Neural Network
- TensorFlow for Python
- The entire Python project
While we are all for contributing to open-source projects, we will not be accepting any outside contributions due to the nature of the competition. However, you are welcome to fork the code and make your own modifications as usual.
Before making modifications in your own cloned repo or fork, make sure to run `pip install -r requirements.txt` and update `.gitignore` to your own needs.
That said, if you really really really want to contribute, open a pull request and we'll review it.
See `z_opencv_driver.py` for the pure OpenCV implementation of our custom lane detection and motor calculation algorithm. Our convolutional neural network's trained model has not yet been uploaded to the repository, but the basis for using the model to drive can be found in `z_cnn_driver.py`. Training data was extracted using `y_data_extractor.py`. To pick the HSV values accurately for the `OpenCV_Driver` class, you can use the `y_hsv_picker.py` tool.
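Once you have picked your bounds, applying them is a standard OpenCV masking step; a small example (with placeholder threshold values) looks like this:

```python
import cv2
import numpy as np

# Apply HSV bounds (as picked with y_hsv_picker.py) to mask lane markings;
# the threshold values below are placeholders.
frame = cv2.imread("data/img/test_white.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 180])     # placeholder lower HSV bound
upper = np.array([179, 40, 255])  # placeholder upper HSV bound
mask = cv2.inRange(hsv, lower, upper)
lane_only = cv2.bitwise_and(frame, frame, mask=mask)
```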
`x_test.py` is only used for testing in a Windows environment when the robot is not available; it simulates the robot's movements through text prompts.
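As a flavour of that approach, a stub like the following (purely hypothetical; the real `x_test.py` may differ) can stand in for the motor driver on Windows:

```python
# Hypothetical stand-in for the motor driver when the robot is unavailable;
# the real x_test.py may look quite different.
def move(left, right, timeout=None):
    suffix = f" for {timeout} ms" if timeout is not None else ""
    print(f"[SIM] move: left={left}, right={right}{suffix}")

def off():
    print("[SIM] motors off")

move(50, 50, 1000)
off()
```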
If you have any suggestions for our project and the competition, you can reach us through our website here. Alternatively, you can open an issue on our repository.
This software is provided 'AS-IS', with absolutely no warranty. We are not responsible for any damage, thermonuclear war, or job firings resulting from your use of this software. We will not be providing support for issues that arise within the code. This project was written in Python 3.9.13.