Team Leader Email - shreyashmohadikar@gmail.com
This repository contains the code and resources for an advanced object detection model for autonomous vehicles. The prototype is designed to excel in extreme weather conditions such as fog and snow, thereby improving the safety and reliability of autonomous driving systems.
The dataset used in this project was obtained from Roboflow, a platform for computer vision data management. The dataset can be accessed using the following link: Roboflow - Self-driving Car Object Detection
In most frames, the car's field of view (FOV) is concentrated around the centre of the image. The heatmap above illustrates this: obstacles such as cars and pedestrians cluster near the image centre, which is why the heatmap is densest there.
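A heatmap like this can be produced with a simple 2D histogram of bounding-box centres. The snippet below is only a sketch and assumes the annotations have already been parsed into `(x, y)` centre coordinates in pixels; the notebooks may compute it differently.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_center_heatmap(centers, img_w=480, img_h=480, bins=48):
    """Plot a 2D histogram of bounding-box centres.

    `centers` is assumed to be an iterable of (x, y) pixel coordinates,
    e.g. parsed from the Roboflow annotation files.
    """
    xs, ys = zip(*centers)
    heatmap, _, _ = np.histogram2d(xs, ys, bins=bins,
                                   range=[[0, img_w], [0, img_h]])
    # Transpose so rows correspond to y and columns to x, matching image layout.
    plt.imshow(heatmap.T, origin="upper", cmap="hot",
               extent=[0, img_w, img_h, 0])
    plt.colorbar(label="object count")
    plt.title("Density of object centres across the frame")
    plt.show()
```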
To prepare the dataset for training, we utilized the preprocessing and augmentation capabilities offered by Roboflow. The following steps were applied to the images (a rough PyTorch equivalent is sketched after the comparison table below):
- Auto-orientation of pixel data, including EXIF-orientation stripping.
- Resizing the images to a resolution of 480x480 pixels using a stretch method.
- Random brightness adjustment of the images within a range of -50% to +50%.
- Random brightness adjustment of the images within a range of -25% to +25%.
- Random Gaussian blur of between 0 and 2 pixels.
- Salt-and-pepper noise applied to 10% of the pixels.
Before Preprocessing | After Preprocessing |
---|---|
*(sample image)* | *(preprocessed image)* |
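For reference, the snippet below is a rough torchvision equivalent of the Roboflow settings listed above (salt-and-pepper noise, which torchvision does not provide, is sketched separately in the learnings section). It is an approximation, not the exact code Roboflow runs.

```python
from torchvision import transforms
from PIL import Image, ImageOps

# Approximation of the Roboflow preprocessing/augmentation settings above.
preprocess = transforms.Compose([
    transforms.Lambda(ImageOps.exif_transpose),                # auto-orient via EXIF
    transforms.Resize((480, 480)),                             # stretch to 480x480
    transforms.ColorJitter(brightness=0.25),                   # ±25% brightness
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # blur of up to ~2 px
    transforms.ToTensor(),
])

image = preprocess(Image.open("sample.jpg").convert("RGB"))
```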
After preprocessing the dataset, we employed the DETR (Detection Transformer) transfer learning technique to train our advanced object detection model. By leveraging this state-of-the-art approach, we aimed to enhance the model's ability to accurately detect and classify objects in real-time, even in challenging weather conditions.
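As an illustration of the transfer-learning setup, a COCO-pretrained DETR can be loaded from torch.hub and its classification head replaced for the dataset's classes. The class count and learning rates below are placeholders; the full training loop, loss, and hyperparameters live in the notebooks.

```python
import torch

NUM_CLASSES = 11  # hypothetical: must match the number of classes in the Roboflow dataset

# Load a COCO-pretrained DETR (ResNet-50 backbone) from the official hub repo.
model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)

# Replace the classification head so it predicts our classes (+1 for "no object").
hidden_dim = model.class_embed.in_features
model.class_embed = torch.nn.Linear(hidden_dim, NUM_CLASSES + 1)

# Fine-tune with a lower learning rate for the pretrained backbone.
param_groups = [
    {"params": [p for n, p in model.named_parameters() if "backbone" in n], "lr": 1e-5},
    {"params": [p for n, p in model.named_parameters() if "backbone" not in n], "lr": 1e-4},
]
optimizer = torch.optim.AdamW(param_groups, weight_decay=1e-4)
```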
Model training was optimized on Intel DevCloud using the Intel Extension for PyTorch (IPEX) and Intel Neural Compressor. Training and inference times decreased drastically, enabling faster model iteration and deployment.
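A minimal sketch of how IPEX can slot into a PyTorch training step is shown below. The exact dtype, flags, and Neural Compressor configuration used on DevCloud may differ, and `images`, `targets`, and `criterion` are placeholders for the DETR batch and set-prediction loss.

```python
import torch
import intel_extension_for_pytorch as ipex

def optimize_for_intel(model, optimizer):
    """Apply IPEX weight/graph optimizations once, before the training loop."""
    model.train()
    return ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

def training_step(model, optimizer, images, targets, criterion):
    """One bf16 mixed-precision step on CPU; all arguments are placeholders
    for the DETR batch and loss used in the notebooks."""
    with torch.cpu.amp.autocast(dtype=torch.bfloat16):
        outputs = model(images)
        loss = criterion(outputs, targets)  # assumed to return a single scalar
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```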
For the deployment of the trained prototype, we used Flask. We developed a web application that provides an intuitive interface for interacting with the prototype: users upload images for object detection, and the detected objects are visualized and displayed, providing valuable insights and enhancing the capabilities of autonomous driving systems.
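The actual application lives in `Deployment/app.py`; the sketch below only illustrates the general shape of such an endpoint and returns detections as JSON to keep the example short. The route name, form field, confidence threshold, and checkpoint format are hypothetical.

```python
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

NUM_CLASSES = 11  # hypothetical: must match the fine-tuned checkpoint

app = Flask(__name__)

# Rebuild the fine-tuned DETR and load the weights saved as Deployment/vehicle_det.pth
# (assuming the checkpoint is a plain state_dict).
model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=False)
model.class_embed = torch.nn.Linear(model.class_embed.in_features, NUM_CLASSES + 1)
model.load_state_dict(torch.load("vehicle_det.pth", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([transforms.Resize((480, 480)), transforms.ToTensor()])

@app.route("/predict", methods=["POST"])  # hypothetical route name
def predict():
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    with torch.no_grad():
        outputs = model(preprocess(image).unsqueeze(0))
    # Keep predictions whose best non-background class score exceeds a threshold.
    probs = outputs["pred_logits"].softmax(-1)[0, :, :-1]
    keep = probs.max(-1).values > 0.7
    return jsonify({
        "labels": probs[keep].argmax(-1).tolist(),
        "boxes": outputs["pred_boxes"][0, keep].tolist(),  # normalised cx, cy, w, h
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```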
- `Deployment/`: Contains the deployment of the prototype.
- `notebooks/`: Jupyter notebooks showcasing the data preprocessing, model training, and evaluation processes.
- `README.md`: The file you are currently reading, providing an overview of the project.
Note: Please download the model file from here and save it in the `Deployment/` directory as `vehicle_det.pth` before use.
- Roboflow: For image input and preprocessing.
- oneDAL: For data analysis of images and drawing conclusions.
- oneDNN: For framework optimization.
- oneVPL: For video processing tasks.
- Google Colaboratory: For collaborative coding and communication between team members.
- PyTorch: Deep learning framework used for model training and inference.
- Flask: For deployment of the solution.
- Open a bash terminal (Linux) or Command Prompt (Windows).
- Clone the repository:
git clone https://github.com/SneakyTurtIe/intel-oneAPI.git
- Change the working directory to the deployment directory:
cd intel-oneAPI/Deployment
- Download the model file as mentioned here.
- Install the project requirements with:
pip install -r requirements.txt
- Run the deployment on localhost using the command:
python app.py
The prototype should now be running at http://localhost:5000
During the course of our project on object detection for autonomous vehicles, we gained valuable insights and experience. The key points are elaborated below:
Limitations of existing models: We have identified several limitations in existing object detection models, particularly in extreme weather conditions such as fog, mist, and camouflage. These conditions can significantly affect the accuracy and reliability of object detection algorithms, making it essential to address these challenges.
Specialized models for extreme weather: Recognizing the limitations, we have emphasized the importance of developing specialized models specifically tailored for extreme weather conditions. These models incorporate advanced techniques and algorithms that can handle the challenges posed by weather-related factors, enabling more accurate and robust object detection.
Data augmentation for weather conditions: To tackle the limitations caused by extreme weather, we applied data augmentation techniques. By augmenting the dataset to resemble various weather conditions, such as fog and mist, we aimed to train the object detection models to be more resilient and adaptable in adverse weather scenarios.
Pre-processing steps: As part of the data preparation, we applied pre-processing steps to each image. These steps included auto-orientation of pixel data with EXIF-orientation stripping and resizing the images to a standardized size of 480x480 pixels, allowing consistent input for the object detection models.
Augmentation techniques: To create diverse variations of each source image, we employed augmentation techniques. This involved randomly adjusting the brightness of the images within a range of -25% to +25%, applying random Gaussian blur with a range of 0 to 2 pixels, and introducing salt and pepper noise to 10% of the pixels. These augmentations aimed to increase the variability of the dataset and improve the model's generalization capabilities.
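torchvision has no built-in salt-and-pepper transform, so noise along these lines can be added with a small custom function. This is a sketch of the idea, not the exact augmentation Roboflow applies.

```python
import numpy as np
from PIL import Image

def salt_and_pepper(image: Image.Image, fraction: float = 0.10) -> Image.Image:
    """Corrupt roughly `fraction` of the pixels with black or white values."""
    pixels = np.array(image)
    h, w = pixels.shape[:2]
    n_noisy = int(fraction * h * w)
    ys = np.random.randint(0, h, n_noisy)
    xs = np.random.randint(0, w, n_noisy)
    # About half the selected pixels become "salt" (white), half "pepper" (black).
    values = np.random.choice([0, 255], size=n_noisy)
    pixels[ys, xs] = values[:, None] if pixels.ndim == 3 else values
    return Image.fromarray(pixels)
```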
Leveraging Intel's oneAPI oneDNN: Throughout our project, we explored the capabilities of Intel's oneAPI oneDNN library. By utilizing this deep neural network library, we were able to optimize and accelerate the performance of our object detection code. Leveraging hardware acceleration and parallel computing, we achieved improved efficiency and speed in our models.
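PyTorch's CPU backend already dispatches convolutions and matrix multiplications to oneDNN (exposed as `mkldnn`). A quick sanity check, together with the oneDNN-friendly channels-last layout, looks roughly like the sketch below; a plain ResNet-50 is used as a stand-in for the DETR backbone.

```python
import torch
import torchvision

# PyTorch's CPU backend routes convolutions and matmuls through oneDNN (mkldnn),
# so running the model on an Intel CPU already exercises oneDNN kernels.
print("oneDNN available:", torch.backends.mkldnn.is_available())

# Channels-last memory layout generally maps better onto oneDNN's vision kernels.
backbone = torchvision.models.resnet50(weights=None).eval()
backbone = backbone.to(memory_format=torch.channels_last)
dummy = torch.randn(1, 3, 480, 480).to(memory_format=torch.channels_last)
with torch.no_grad():
    _ = backbone(dummy)
```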
Overall, our project focused on addressing the limitations of existing object detection models in extreme weather conditions. Through specialized model development, data augmentation, and the use of advanced tools such as Intel's oneAPI oneDNN, we aimed to improve the accuracy, reliability, and robustness of object detection for autonomous vehicles in challenging weather scenarios.