# Pothole Detection Model using YOLOv8

This repository contains a pothole detection model trained using YOLOv8, an object detection algorithm. The model is designed to detect and localize potholes in images or videos, providing a valuable tool for road maintenance and safety.

## Model Architecture

The pothole detection model is built on YOLOv8, a state-of-the-art single-stage object detector. YOLOv8 uses a single neural network to predict bounding boxes and the classes of the objects inside them in one forward pass.
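
As a quick orientation, here is a minimal sketch of loading a pre-trained YOLOv8 model with the `ultralytics` package; the checkpoint name `yolov8n.pt` is an assumed placeholder rather than a file from this repository.

```python
# Minimal sketch: instantiate a YOLOv8 model via the `ultralytics` package
# (pip install ultralytics). "yolov8n.pt" is a placeholder checkpoint name,
# not a file shipped with this repository.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # nano variant, pre-trained on COCO
model.info()                # prints a layer/parameter summary of the architecture
```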

## Dataset

The model was trained on a diverse dataset of images and videos containing various road conditions and types of potholes. The dataset was carefully annotated with bounding boxes around the potholes to provide ground truth labels for training.
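
This section does not spell out the annotation layout, but datasets trained with YOLOv8 are commonly stored in the YOLO text format: one label file per image, with one normalized bounding box per line. The helper below is a hypothetical illustration of that format, not code from this repository.

```python
# Hypothetical helper illustrating the YOLO label format commonly used with YOLOv8:
# each line reads "class x_center y_center width height", all normalized to [0, 1].
def yolo_line_to_pixels(line: str, img_w: int, img_h: int):
    cls, xc, yc, w, h = line.split()
    xc, w = float(xc) * img_w, float(w) * img_w
    yc, h = float(yc) * img_h, float(h) * img_h
    # Convert the centre/size representation to corner coordinates.
    return int(cls), (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)

# Example: a pothole (class 0) covering 20% x 10% of a 1280x720 frame.
print(yolo_line_to_pixels("0 0.5 0.5 0.2 0.1", 1280, 720))  # (0, (512.0, 324.0, 768.0, 396.0))
```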

## Training

The model was developed and evaluated through the following steps:

- **Data Preprocessing:** Images were resized to a fixed input size, pixel values were normalized, and data augmentation techniques such as random flipping and rotation were applied to improve generalization.

- **Model Initialization:** The YOLOv8 network was initialized with weights pre-trained on the COCO dataset, giving it meaningful starting representations and speeding up convergence.

- **Training:** The model was optimized against a combination of loss functions, including classification and bounding-box regression losses, using backpropagation and stochastic gradient descent. A minimal training sketch follows this list.

- **Hyperparameter Tuning:** Hyperparameters such as the learning rate, batch size, and other training settings were tuned through experimentation on a separate validation set to achieve the best performance.

- **Model Evaluation:** The trained model was evaluated with metrics such as mean average precision (mAP) and intersection over union (IoU) on both the training and validation sets.

- **Inference:** The trained model takes images or videos as input and outputs bounding boxes around detected potholes together with their confidence scores; see the inference sketch below the training example.
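
To make the steps above concrete, here is a minimal training and evaluation sketch using the `ultralytics` API. The dataset config name `data.yaml`, the epoch count, image size, and batch size are illustrative assumptions, not the exact settings used to produce this repository's weights.

```python
# Minimal training/evaluation sketch with the `ultralytics` API (assumed settings).
from ultralytics import YOLO

# Start from COCO pre-trained weights, as described in the steps above.
model = YOLO("yolov8n.pt")

# "data.yaml" is a placeholder dataset config pointing at the train/val images
# and declaring the single "pothole" class; epochs/imgsz/batch are example values.
model.train(data="data.yaml", epochs=100, imgsz=640, batch=16)

# Evaluate on the validation split; the metrics object exposes mAP values.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)  # mAP@0.5 and mAP@0.5:0.95
```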
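
A matching inference sketch follows, again with placeholder file names (`best.pt`, `road_video.mp4`); `predict` saves annotated output when `save=True`, and the returned results can also be inspected programmatically.

```python
# Inference sketch: run the trained detector on an image or video.
# "best.pt" and "road_video.mp4" are placeholder paths, not files in this repository.
from ultralytics import YOLO

model = YOLO("best.pt")

# conf filters out low-confidence detections; save=True writes annotated frames.
results = model.predict(source="road_video.mp4", conf=0.25, save=True)

# Each result holds the detected boxes with coordinates, confidence, and class id.
for r in results:
    for box in r.boxes:
        print(box.xyxy.tolist(), float(box.conf), int(box.cls))
```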

## Demo

*(demo GIFs)*

## Acknowledgements