
🚗 Self-Driving Car Object Detection using YOLOv5

This project focuses on real-time vehicle detection for self-driving applications using the YOLOv5 object detection framework.
The model identifies five vehicle classes — Car, Truck, Bus, Motorcycle, and Ambulance — using the Vehicles OpenImages dataset.

Developed as part of the Master’s in Management Information Systems / Machine Learning program at the University of Arizona.


Project Overview

The goal of this project is to train and compare different variants of the YOLOv5 model — Nano, Small, Medium, and Medium (Frozen Layers) — to analyze their trade-offs between speed, accuracy, and computational efficiency.

The models were fine-tuned on a custom dataset, evaluated on unseen data, and tested on real-world images and videos for autonomous vehicle perception.


YOLOv5 Model Performance Summary

| Model | Parameters (M) | Precision | Recall | mAP@0.5 | mAP@0.5–0.95 | Training Time (hrs) |
|---|---|---|---|---|---|---|
| YOLOv5n (Nano) | 1.77 | 0.441 | 0.466 | 0.441 | 0.295 | ~1.0 |
| YOLOv5s (Small) | 7.02 | 0.489 | 0.615 | 0.540 | 0.364 | ~1.9 |
| YOLOv5m (Medium) | 20.9 | 0.606 | 0.647 | 0.623 | 0.467 | ~0.12 |
| YOLOv5m (Frozen 15 Layers) | 20.9 | 0.645 | 0.606 | 0.648 | 0.466 | ~0.10 |

Best Performing Model: YOLOv5m (Frozen 15 Layers) — achieved the best balance between accuracy, speed, and stability.


Results Overview

🔹 Validation Predictions

YOLOv5 models were validated on unseen data to measure performance across classes:

*(Validation inference examples — prediction images omitted here.)*

Project Workflow

1️⃣ Dataset Preparation

  • Used Vehicles OpenImages dataset (627 images, 1194 annotations)
  • Split into train, valid, and test sets
  • Converted annotations to YOLO label format and defined dataset paths and class names in a YAML configuration file
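The dataset YAML follows YOLOv5's standard format; the paths below are illustrative (the actual file ships with the Roboflow export):

```yaml
# Dataset configuration consumed by YOLOv5's train.py (illustrative paths)
train: ../Vehicles-OpenImages/train/images
val: ../Vehicles-OpenImages/valid/images
test: ../Vehicles-OpenImages/test/images

nc: 5  # number of classes
names: ['Ambulance', 'Bus', 'Car', 'Motorcycle', 'Truck']
```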

2️⃣ Model Training

Trained multiple YOLOv5 models using Google Colab:

  • YOLOv5n.pt → Nano (lightweight baseline)
  • YOLOv5s.pt → Small (balanced model)
  • YOLOv5m.pt → Medium (best accuracy)
  • YOLOv5m (Frozen) → Transfer learning, 15 layers frozen

Each model was trained for 15 epochs with TensorBoard monitoring.
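The training runs can be sketched with YOLOv5's standard CLI; the image size, batch size, and run names below are illustrative, not the exact settings used:

```shell
# Clone YOLOv5 and install its dependencies (as done in Colab)
git clone https://github.com/ultralytics/yolov5
cd yolov5 && pip install -r requirements.txt

# Fine-tune each variant for 15 epochs on the custom dataset
python train.py --img 640 --batch 16 --epochs 15 --data data.yaml --weights yolov5n.pt --name yolov5n_vehicles
python train.py --img 640 --batch 16 --epochs 15 --data data.yaml --weights yolov5s.pt --name yolov5s_vehicles
python train.py --img 640 --batch 16 --epochs 15 --data data.yaml --weights yolov5m.pt --name yolov5m_vehicles

# Transfer learning: freeze the first 15 layers of the Medium model
python train.py --img 640 --batch 16 --epochs 15 --data data.yaml --weights yolov5m.pt --freeze 15 --name yolov5m_frozen
```

The `--freeze` flag keeps the early backbone layers fixed so only the later layers are updated, which is what makes the frozen run converge faster.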

3️⃣ Validation and Visualization

  • Plotted metrics: precision-recall, F1, and confusion matrices
  • Visualized predicted bounding boxes on validation data
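Validation metrics and curves can be reproduced with YOLOv5's `val.py`; the checkpoint path assumes the run name from training and is an illustration:

```shell
# Evaluate a trained checkpoint on the validation split
python val.py --weights runs/train/yolov5m_frozen/weights/best.pt --data data.yaml --img 640

# Inspect loss curves and metrics logged during training
tensorboard --logdir runs/train
```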

4️⃣ Inference

  • Performed inference on unseen images and videos
  • Saved detection results to the runs/detect/ directory
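Inference uses `detect.py`, which writes annotated outputs to `runs/detect/exp*`; the weight and source paths here are illustrative:

```shell
# Run detection on unseen images and a video
python detect.py --weights runs/train/yolov5m_frozen/weights/best.pt --source inference_data/images --conf 0.25
python detect.py --weights runs/train/yolov5m_frozen/weights/best.pt --source inference_data/video.mp4 --conf 0.25
```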

Conclusion

  • Increasing model size improved detection accuracy and recall.
  • The YOLOv5m (Frozen Layers) model achieved the best trade-off between training time and precision.
  • Transfer learning and selective freezing improved convergence speed and model stability.

Final Takeaway

The YOLOv5m (Frozen) model is ideal for real-world self-driving systems, balancing efficiency and reliability for real-time object detection.


Future Scope

Model Optimization

  • Apply pruning or quantization for edge deployment.
  • Perform advanced hyperparameter tuning to boost accuracy.

Dataset Expansion

  • Add new classes (e.g., pedestrians, bicycles).
  • Include nighttime and adverse weather conditions.

Inference Pipeline

  • Add GPU acceleration and object tracking (e.g., DeepSORT) for real-time video analysis.

Transfer Learning & Explainability

  • Extend YOLOv5 to new domains (e.g., agriculture or healthcare).
  • Apply Explainable AI (XAI) methods such as Grad-CAM or SHAP.

Tech Stack

| Category | Tools |
|---|---|
| Language | Python 3.10 |
| Framework | PyTorch, Ultralytics YOLOv5 |
| IDE / Environment | Google Colab |
| Visualization | Matplotlib, TensorBoard |
| Dataset | Vehicles OpenImages (via Roboflow) |
| Version Control | Git, GitHub |

Repository Structure

```
self-driving-yolov5-vehicles/
│
├── .gitignore
├── README.md
├── requirements.txt
│
├── data/
│   ├── Vehicles-OpenImages.zip
│   └── inference_data_yolov5.zip
│
├── notebooks/
│   ├── Self_Driving_Car_Object_Detection_Using_YOLOv5.ipynb
│   └── self_driving_car_object_detection_using_yolov5.py
│
└── runs/ (optional)
```


Author

Ajibola Jeremiah Dedenuola
Master’s in MIS/ML, University of Arizona


Acknowledgments

  • Ultralytics for the YOLOv5 framework
  • OpenImages Dataset for high-quality vehicle images
  • Google Colab for GPU/TPU compute resources

License

This project is distributed under the MIT License.
