Welcome to my collection of perception-based projects using LiDAR, Camera, and Radar data. These projects explore how autonomous vehicles detect obstacles, track objects, and estimate movement using real-world sensor data and algorithms like RANSAC, Kalman Filters, and Euclidean Clustering.
In this project, I worked with LiDAR point cloud data to detect obstacles like vehicles and roadside objects.
The pipeline includes the following steps (a code sketch follows the list):
- Voxel Grid & ROI Filtering
- 3D RANSAC Plane Segmentation
- KD-Tree-Based Euclidean Clustering
- 3D Bounding Box Visualization
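As a minimal sketch of the filtering and plane-segmentation stages, the snippet below uses PCL's built-in voxel grid, crop box, and RANSAC plane segmentation. The leaf size, ROI bounds, and distance threshold are illustrative values, and PCL's built-in routines may differ from the project's own implementation:

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/crop_box.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>

using PointT = pcl::PointXYZI;

// Downsample, crop to a region of interest, and split the cloud into the
// road plane and obstacle points. Parameter values are illustrative.
void segmentCloud(const pcl::PointCloud<PointT>::Ptr& input,
                  pcl::PointCloud<PointT>::Ptr& roadCloud,
                  pcl::PointCloud<PointT>::Ptr& obstacleCloud)
{
    // 1. Voxel grid filter: keep roughly one point per 0.2 m cube.
    pcl::PointCloud<PointT>::Ptr downsampled(new pcl::PointCloud<PointT>);
    pcl::VoxelGrid<PointT> voxel;
    voxel.setInputCloud(input);
    voxel.setLeafSize(0.2f, 0.2f, 0.2f);
    voxel.filter(*downsampled);

    // 2. Region-of-interest crop: discard points far from the ego vehicle.
    pcl::PointCloud<PointT>::Ptr cropped(new pcl::PointCloud<PointT>);
    pcl::CropBox<PointT> roi(true);
    roi.setInputCloud(downsampled);
    roi.setMin(Eigen::Vector4f(-10.0f, -6.0f, -2.0f, 1.0f));
    roi.setMax(Eigen::Vector4f( 30.0f,  6.0f,  1.0f, 1.0f));
    roi.filter(*cropped);

    // 3. RANSAC plane segmentation: the dominant plane is taken as the road.
    pcl::SACSegmentation<PointT> seg;
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.2);
    seg.setMaxIterations(100);
    seg.setInputCloud(cropped);
    seg.segment(*inliers, *coeffs);

    // 4. Split the cloud: plane inliers -> road, everything else -> obstacles.
    pcl::ExtractIndices<PointT> extract;
    extract.setInputCloud(cropped);
    extract.setIndices(inliers);
    extract.setNegative(false);
    extract.filter(*roadCloud);
    extract.setNegative(true);
    extract.filter(*obstacleCloud);
}
```

The obstacle cloud is then grouped into individual objects with KD-tree-based Euclidean clustering, and each cluster is wrapped in a 3D bounding box.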
Here's a sample output where the segmented road plane is shown in green and detected obstacles are enclosed in red bounding boxes:
👉 More details in Lidar_Obstacle_Detection
This project tracks a leading vehicle using both camera images and LiDAR data to estimate Time-to-Collision (TTC).
Key highlights (a TTC code sketch follows the list):
- Keypoint Detection & Matching (e.g., SIFT, ORB)
- Bounding Box Association between frames
- TTC Estimation from both Camera and LiDAR
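As a rough illustration, the LiDAR-based TTC can be derived from a constant-velocity model and the distance to the preceding vehicle in two consecutive frames. Using the median forward distance to suppress outlier points is an assumption of this sketch, not necessarily the project's exact heuristic:

```cpp
#include <algorithm>
#include <vector>

// Constant-velocity TTC from LiDAR: compare the distance to the preceding
// vehicle in the previous and current frames. The median x-distance is used
// here (an assumed choice) to reduce sensitivity to outlier points.
double computeLidarTTC(std::vector<double> prevX,  // x-distances, previous frame [m]
                       std::vector<double> currX,  // x-distances, current frame [m]
                       double frameRate)           // frames per second
{
    auto median = [](std::vector<double>& v) {
        std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
        return v[v.size() / 2];
    };

    const double dT = 1.0 / frameRate;  // time between frames [s]
    const double d0 = median(prevX);    // distance in previous frame [m]
    const double d1 = median(currX);    // distance in current frame [m]

    // Constant-velocity model: d1 = d0 - v*dT  =>  TTC = d1 * dT / (d0 - d1)
    return d1 * dT / (d0 - d1);
}
```

The camera-based TTC follows the same constant-velocity idea, but is typically derived from the ratio of keypoint distances inside the tracked bounding box rather than from absolute distances.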
See it in action — the green box tracks the vehicle, and TTC estimates are shown at the top:
👉 More details in Project-2D-Feature_Matching
👉 More details in Project-3D-Object-Tracking
Using FMCW radar simulation, this project identifies a moving target by estimating its range and velocity.
Core steps (a CFAR sketch follows the list):
- FMCW Signal Simulation
- 2D FFT to extract Range & Doppler
- CFAR Detector for target identification
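For illustration, here is a sketch of the cell-averaging CFAR idea applied to one row of the range-Doppler map; the training/guard cell counts and the dB offset are assumed values, not the project's tuned parameters:

```cpp
#include <cmath>
#include <vector>

// 1D cell-averaging CFAR over one slice of the range-Doppler map
// (signal power per bin, linear scale). Training/guard cell counts and
// the offset in dB are illustrative assumptions.
std::vector<bool> caCfar1D(const std::vector<double>& signal,
                           int numTrain = 8, int numGuard = 4,
                           double offsetDb = 6.0)
{
    const int n = static_cast<int>(signal.size());
    std::vector<bool> detections(n, false);

    for (int cut = numTrain + numGuard; cut < n - numTrain - numGuard; ++cut) {
        // Average the noise over the training cells on both sides of the
        // cell under test (CUT), skipping the guard cells next to it.
        double noiseSum = 0.0;
        int count = 0;
        for (int i = cut - numTrain - numGuard; i <= cut + numTrain + numGuard; ++i) {
            if (std::abs(i - cut) > numGuard) {
                noiseSum += signal[i];
                ++count;
            }
        }
        const double noiseAvg = noiseSum / count;

        // Raise the adaptive threshold by a fixed offset in dB and compare.
        const double thresholdDb = 10.0 * std::log10(noiseAvg) + offsetDb;
        const double cutDb = 10.0 * std::log10(signal[cut]);
        detections[cut] = cutDb > thresholdDb;
    }
    return detections;
}
```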
Output below: The peak in the plot shows a detected object ~81 m away, moving at ~-20 m/s.
This builds on the lecture exercise implementing sensor fusion with radar.
👉 More details in Radar/project/README.md
Each vehicle (except the ego car) is tracked with an Unscented Kalman Filter (UKF) that fuses LiDAR and radar measurements.
Highlights (a sigma-point sketch follows the list):
- LiDAR & Radar Integration
- Real-Time Position & Velocity Estimation
- Predict-Update Cycles of UKF
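As a structural sketch, the snippet below shows the augmented sigma-point generation that begins each UKF prediction step, assuming the usual CTRV state [px, py, v, yaw, yaw rate]; the dimensions and process-noise variances are illustrative, not the project's tuned values:

```cpp
#include <cmath>
#include <Eigen/Dense>

// Sketch of the sigma-point generation for a LiDAR/radar UKF tracker.
// State: [px, py, v, yaw, yaw_rate] (CTRV model). Values are illustrative.
struct UkfSketch {
    static constexpr int n_x = 5;       // state dimension
    static constexpr int n_aug = 7;     // state + two process-noise terms
    double lambda = 3 - n_aug;          // sigma-point spreading parameter

    Eigen::VectorXd x = Eigen::VectorXd::Zero(n_x);            // state mean
    Eigen::MatrixXd P = Eigen::MatrixXd::Identity(n_x, n_x);   // state covariance

    // Generate 2*n_aug + 1 augmented sigma points from the current mean
    // and covariance (noise variances below are assumed values).
    Eigen::MatrixXd generateSigmaPoints() const {
        Eigen::VectorXd x_aug = Eigen::VectorXd::Zero(n_aug);
        x_aug.head(n_x) = x;

        Eigen::MatrixXd P_aug = Eigen::MatrixXd::Zero(n_aug, n_aug);
        P_aug.topLeftCorner(n_x, n_x) = P;
        P_aug(5, 5) = 0.9;   // longitudinal acceleration noise variance (assumed)
        P_aug(6, 6) = 0.6;   // yaw acceleration noise variance (assumed)

        Eigen::MatrixXd A = P_aug.llt().matrixL();   // matrix square root
        Eigen::MatrixXd Xsig(n_aug, 2 * n_aug + 1);
        Xsig.col(0) = x_aug;
        for (int i = 0; i < n_aug; ++i) {
            Xsig.col(i + 1)         = x_aug + std::sqrt(lambda + n_aug) * A.col(i);
            Xsig.col(i + 1 + n_aug) = x_aug - std::sqrt(lambda + n_aug) * A.col(i);
        }
        return Xsig;
    }
};
```

Each cycle then propagates these sigma points through the CTRV process model (predict) and folds in the next LiDAR or radar measurement (update).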
👉 More details in Project_Unscented_Kalman_Filter