Multi-model approach for autonomous driving 🚗🤖: a holistic exploration of traffic sign detection 🛑🚦, vehicle detection 🚗📡, and lane detection 🛣️📸, powered by deep learning 🧙‍♂️ in the Udacity Self-Driving Car Simulator 🚀🎮.

AutoDrive Vision

Multi-Model Autonomous Driving

🚗 Building the Future of Autonomous Vehicles with Deep Learning 🤖

Welcome to the Multi-Model Autonomous Driving project! Here we advance self-driving car perception using deep learning and sensor fusion.

Project Overview

This repository explores multiple deep learning models for the core perception tasks of autonomous driving: detecting and classifying traffic signs, spotting vehicles and other obstacles, and identifying lanes. The goal is safer, smarter, and more efficient autonomous vehicles.

Key Achievements

  • Multi-Model Approach: 📊 We've developed and tested various deep learning models to detect and classify traffic signals, spot obstacles, and identify lanes.

  • Comparative Study: 📈 We've conducted in-depth comparative studies using prominent models such as Mask-RCNN, ResNet50, InceptionV3, and MobileNet in realistic simulated environments (a minimal sketch of this kind of backbone comparison appears right after this list).

  • 3D Data Visualization: 🌐 Our work includes KITTI 3D data visualization, which plays a pivotal role in understanding the vehicle's surroundings.

  • Algorithm Implementation: 🤖 We've worked with various cutting-edge algorithms, including FCNN, DeepSort, MTAN, SFA 3D, UNetXST, and ViT, to improve vehicle perception.

  • Real Autonomous Vehicle: 🚀 We've applied this work to a physical autonomous vehicle built around a Jetson Nano, an Arduino, and ultrasonic sensors. It detects lanes, avoids obstacles, and responds to traffic signals through deep learning and image segmentation (see the second sketch below).
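
For context on the comparative study above, here is a minimal sketch (not the repository's training code) of how ImageNet-pretrained ResNet50, InceptionV3, and MobileNet backbones can be compared on the same traffic-sign dataset with Keras. The dataset paths, image size, class count, and training schedule are illustrative assumptions.

```python
# Minimal sketch: compare ImageNet-pretrained backbones on a traffic-sign dataset.
# Paths, image size, class count, and epochs below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50, InceptionV3, MobileNet

NUM_CLASSES = 43            # e.g. GTSRB-style traffic-sign classes (assumption)
IMG_SIZE = (224, 224)       # all three backbones accept 224x224 RGB input

def build_classifier(backbone_cls):
    """Attach a small classification head to a frozen, pretrained backbone."""
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=IMG_SIZE + (3,), pooling="avg")
    backbone.trainable = False                      # transfer learning: freeze features
    model = models.Sequential([
        backbone,
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Hypothetical layout: one sub-folder per traffic-sign class.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/traffic_signs/train", image_size=IMG_SIZE, batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/traffic_signs/val", image_size=IMG_SIZE, batch_size=32)

    for backbone_cls in (ResNet50, InceptionV3, MobileNet):
        model = build_classifier(backbone_cls)
        model.fit(train_ds, validation_data=val_ds, epochs=5, verbose=2)
        _, accuracy = model.evaluate(val_ds, verbose=0)
        print(f"{backbone_cls.__name__}: validation accuracy = {accuracy:.3f}")
```

Each backbone in tf.keras.applications also ships a matching preprocess_input routine; it is omitted here to keep the sketch short.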

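And here is a second minimal sketch, under stated assumptions rather than the project's actual control loop, of how the Jetson Nano could read ultrasonic distance readings streamed by the Arduino over USB serial (using pyserial) and command a stop when an obstacle is too close. The serial port, baud rate, message format, and the STOP/GO commands are all hypothetical.

```python
# Minimal sketch of the obstacle-stop loop on the Jetson Nano side.
# Assumes the Arduino streams one distance reading in cm per line ("<distance>\n")
# and understands hypothetical "STOP"/"GO" commands for the motor driver.
import serial  # pyserial

PORT = "/dev/ttyUSB0"        # hypothetical Arduino port on the Jetson Nano
BAUD = 9600
STOP_DISTANCE_CM = 30.0      # stop if an obstacle is closer than this

def main():
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue                     # read timed out; keep polling
            try:
                distance_cm = float(line)
            except ValueError:
                continue                     # skip malformed readings
            if distance_cm < STOP_DISTANCE_CM:
                link.write(b"STOP\n")        # hypothetical command handled by the Arduino
            else:
                link.write(b"GO\n")

if __name__ == "__main__":
    main()
```

In the full system, the camera-based lane and traffic-signal models would feed the same decision loop alongside the ultrasonic readings.
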
Getting Started

Ready to dive into the world of autonomous driving and deep learning? Check out our project's code, data, and documentation:

  • Code - Explore the deep learning models and code used in the project.

  • Data - Access the datasets and data preprocessing scripts.

  • Documentation - Dive into our project documentation to understand the algorithms, models, and implementation details.

License

This project is open source under the MIT License. Feel free to use, modify, and contribute to our work.

Acknowledgments

We'd like to express our gratitude to the open-source community, researchers, and developers who have paved the way for advancements in autonomous driving and deep learning.

Happy Coding and Safe Driving! 🛣️👨‍💻🚗
