License: CC BY-NC-SA 4.0 | Python 3.7

Deep Learning Obstacle Detection and Avoidance for Powered Wheelchair

Figure: Overview of the developed system.

The system presented in the paper uses a MobileNetV2 SSD model fine-tuned on a dataset we built ourselves, captured on the sidewalks of the Hasan Kalyoncu University campus. Annotation and fine-tuning were done with Edge Impulse, and the overall code is Python-based. The object detection model passes the obstacle position to a control law that computes the linear and angular speeds required to push the center of the detected bounding box towards the sides of the image; the control law is image-based. Our aim in this project is to build an obstacle detection and avoidance system for a smart wheelchair using a camera only. The hardware consists of a Raspberry Pi 4, a Raspberry Pi camera, and a Sabertooth motor driver.
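The exact control law and gains are given in the paper; the snippet below is only a minimal illustrative sketch (the function name, gains, and thresholds are hypothetical) of how such an image-based law can map the detected bounding-box center to linear and angular speed commands.

```python
# Minimal illustrative sketch of an image-based avoidance law (hypothetical
# gains and names, not the exact law from the paper): steer so that the
# bounding box center is pushed towards the nearest image side, and slow
# down as the obstacle grows larger in the frame.

IMG_WIDTH = 320          # model input width in pixels (assumed)
V_MAX = 0.4              # nominal forward speed in m/s (assumed)
K_ANG = 1.5              # proportional steering gain (assumed)

def avoid_obstacle(bbox):
    """bbox = (x, y, w, h) of the detected obstacle in pixels."""
    x, y, w, h = bbox
    cx = x + w / 2.0                      # horizontal center of the obstacle

    # Normalised horizontal offset of the obstacle from the image center:
    # -1 (left edge) .. 0 (center) .. +1 (right edge).
    offset = (cx - IMG_WIDTH / 2.0) / (IMG_WIDTH / 2.0)

    # Turn away from the obstacle: if it sits left of center, steer right,
    # and vice versa. The closer it is to the image center, the harder we
    # turn, so the box is pushed out towards a side of the frame.
    angular = -K_ANG * (1.0 - abs(offset)) * (1.0 if offset < 0 else -1.0)

    # Reduce forward speed as the obstacle fills more of the image width.
    linear = V_MAX * max(0.0, 1.0 - w / IMG_WIDTH)
    return linear, angular
```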

Authors

  • Yahya Tawil
  • Abdul Hafez Abdul Hafez (supervisor).

Demo

The demo video of our system is available on YouTube.

ASYU 2022 Conference Paper Presentation

Available on YouTube.

Repository Contents:

Directory | File | Info
code | detection_avoidance.py | Main code: loads the .eim Edge Impulse model, runs object detection, draws on frames with OpenCV, and implements the control law (see the inference sketch below the table).
code | augmentation.py | Applies 4 types of augmentation to the original images (gamma, contrast, quality, noise); the script should be executed inside the images directory (see the augmentation sketch below the table).
code/model | model_without_augmentation.eim (lite) | The resulting model after fine-tuning MobileNetV2 SSD with our dataset.
code | labels.py | Modifies the bounding_boxes.labels file exported from Edge Impulse to add new images to it.
Documentation/control_law_samples | - | Logs of 3 experiments, including frames, velocity logs and videos.
Documentation/diagrams | - | Source files of the diagrams/artwork used in the paper.
Documentation | PR-Data.xlsx | Precision-recall data of the fine-tuned model.
Documentation | velo_curves.ods | Velocity (angular and linear) curves for one of the experiments.
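For readers who want to reproduce the inference part of detection_avoidance.py, the sketch below shows a typical way to load an exported .eim model and read back bounding boxes with the edge_impulse_linux Python SDK; the model path, test image, and confidence threshold are placeholders, and the script in code/ remains the reference implementation.

```python
# Sketch of running the exported .eim object detection model with the
# Edge Impulse Linux Python SDK (pip install edge_impulse_linux).
# Model path, input image and threshold below are placeholders.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "code/model/model_without_augmentation.eim"
CONF_THRESHOLD = 0.5

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()                                   # load model metadata
    frame = cv2.imread("sample_frame.jpg")          # or a Pi camera frame
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # SDK expects RGB input

    # Convert the frame into model features (resized/cropped as needed).
    features, cropped = runner.get_features_from_image(rgb)
    result = runner.classify(features)

    for bb in result["result"].get("bounding_boxes", []):
        if bb["value"] < CONF_THRESHOLD:
            continue
        x, y, w, h = bb["x"], bb["y"], bb["width"], bb["height"]
        cv2.rectangle(cropped, (x, y), (x + w, y + h), (0, 255, 0), 2)
        print(bb["label"], bb["value"], (x, y, w, h))
```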
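augmentation.py is the actual implementation of the four augmentation types listed above; the snippet below only illustrates, with TensorFlow image ops (the augmentation tool listed under Software Development Tools), what gamma, contrast, JPEG-quality and noise augmentation can look like. The factor values and the helper name are placeholders.

```python
# Illustrative sketch of the four augmentation types (gamma, contrast,
# JPEG quality, noise) using TensorFlow image ops; the factor values are
# placeholders and augmentation.py in code/ is the actual implementation.
import tensorflow as tf

def augment(path):
    img = tf.io.decode_image(tf.io.read_file(path), channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)

    gamma    = tf.image.adjust_gamma(img, gamma=1.5)                # gamma
    contrast = tf.image.adjust_contrast(img, contrast_factor=1.4)   # contrast
    quality  = tf.image.adjust_jpeg_quality(img, jpeg_quality=30)   # quality
    noisy    = tf.clip_by_value(                                    # noise
        img + tf.random.normal(tf.shape(img), stddev=0.05), 0.0, 1.0)

    return gamma, contrast, quality, noisy
```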

Dataset Download Link:

Please fill in this form to download our dataset WODD: form link.

The Edge Impulse project used to annotate the dataset and train the model is publicly accessible here: Project

Software Development Tools:

  • Edge Impulse: annotation, training and versioning
  • TensorFlow: dataset augmentation
  • OpenCV: frame processing
  • Python: development language
  • GIMP: cropping and resizing dataset images

Paper Preparation Tools:

  • Overleaf: collaborative LaTeX editing
  • draw.io: diagrams
  • Kdenlive: video editing
  • PlantUML: UML diagrams

Hints

  • Installing OpenCV on the Raspberry Pi is a little tricky. This guide seems to be the best one. The swap area may need to be increased if the installation gets stuck.
  • If you intend to use hardware other than the Raspberry Pi (e.g. a Jetson Nano), you need to open the Edge Impulse project and rebuild the model for your new hardware.

Cite this

Y. Tawil and A. H. A. Hafez, "Deep Learning Obstacle Detection and Avoidance for Powered Wheelchair," 2022 Innovations in Intelligent Systems and Applications Conference (ASYU), 2022, pp. 1-6, doi: 10.1109/ASYU56188.2022.9925493.
