ROS2 Camera-Based Autonomous Rover is a simulation-first mobile robotics project built using ROS 2 (Jazzy) and Gazebo Sim (gz-sim).
The project focuses on vision-based perception, obstacle reasoning, and clean ROS 2 architecture, with an emphasis on explainable autonomy rather than black-box navigation.
This repository is structured to scale from pure visualization → perception → decision-making → control.
This project was built to:
- Understand camera-based perception pipelines
- Practice clean ROS 2 package separation
- Develop obstacle avoidance without LiDAR
- Visualize what the robot “sees” and “decides”
- Build autonomy incrementally instead of end-to-end black boxes
The rover operates in a simulated environment and detects obstacles purely from RGB camera input, an approach that transfers directly to low-cost real-world robots without LiDAR or depth sensors.
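Getting those frames into OpenCV takes only a small rclpy node. Here is a minimal sketch, assuming the simulated camera publishes `sensor_msgs/Image` on `/camera/image_raw` (the topic name is illustrative; check the topics your simulation actually advertises):

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2


class CameraViewer(Node):
    def __init__(self):
        super().__init__('camera_viewer')
        self.bridge = CvBridge()
        # Topic name is an assumption; adjust to match your camera plugin.
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)

    def on_image(self, msg):
        # Convert the ROS image message to an OpenCV BGR frame.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('camera', frame)
        cv2.waitKey(1)


def main():
    rclpy.init()
    node = CameraViewer()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```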
- 🎥 RGB Camera Integration
- 👁️ Camera-Based Obstacle Detection
- 📐 Region-of-Interest (ROI) Processing
- 🧱 Edge-Based Obstacle Reasoning
- 🧠 Left / Center / Right Obstacle Classification
- 🖼️ Real-Time Perception Visualization
- 🧪 Fully Simulated in Gazebo Sim
🚧 Motion control and FSM-based navigation are intentionally disabled for now so the perception pipeline can be validated in isolation.
The obstacle avoidance logic follows a transparent vision pipeline:
1. RGB Image Acquisition
2. ROI Cropping (Front View)
3. Grayscale Conversion
4. Gaussian Blur
5. Canny Edge Detection
6. Spatial Analysis (Left / Center / Right regions)
7. Obstacle Decision Output: `LEFT`, `CENTER`, `RIGHT`, or `NONE`
This approach prioritizes interpretability and debugging visibility.
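Stages 2 through 5 map almost one-to-one onto OpenCV calls. A minimal sketch follows; the crop fraction, kernel size, and Canny thresholds are illustrative values, not the project's tuned parameters:

```python
import cv2
import numpy as np


def detect_edges(frame_bgr: np.ndarray) -> np.ndarray:
    """Crop a front-view ROI and return its Canny edge map."""
    h = frame_bgr.shape[0]
    roi = frame_bgr[int(h * 0.5):, :]             # keep the lower half as the front-view ROI
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # Gaussian blur to suppress noise
    return cv2.Canny(blurred, 50, 150)            # binary edge map for spatial analysis
```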
The project uses a modular ROS 2 design:
- Simulation environment
  - Gazebo Sim (gz-sim)
  - Custom rover URDF/Xacro
  - RGB camera sensor
- `four_wheel_description`
  - Robot model
  - Sensors
  - Gazebo integration
- `four_control`
  - Camera visualization
  - Obstacle perception logic
  - Decision reasoning (no control yet)
- Visualization
  - OpenCV windows (camera + ROI edges)
  - Gazebo Sim GUI
  - RViz2 (optional)
This separation ensures:
- Easy debugging
- Clear responsibility boundaries
- Smooth transition to real hardware later
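For orientation, a plausible workspace layout consistent with the packages above (subdirectory names are assumptions; only the package names and launch files are taken from this README):

```text
ros2_ws/src/
├── four_wheel_description/    # robot model, sensors, Gazebo integration
│   ├── urdf/                  # rover Xacro + RGB camera sensor (assumed location)
│   └── launch/                # rover.launch.py
└── four_control/              # perception and decision reasoning (no control yet)
    ├── four_control/          # Python nodes: camera visualization, obstacle logic
    └── launch/                # visualization.launch.py
```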
Obstacle detection is performed using image structure, not distance sensors.
Decision logic:
- `CENTER` obstacle → highest priority
- `LEFT` obstacle → free space likely on the right
- `RIGHT` obstacle → free space likely on the left
- `NONE` → clear path
The result is visualized directly on the processed image to ensure decision correctness before control is enabled.
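In code, that rule reduces to a few comparisons over per-region edge densities. A minimal sketch (the equal three-way split and the density threshold are assumptions, not the project's exact logic):

```python
import numpy as np


def decide(edges: np.ndarray, threshold: float = 0.08) -> str:
    """Map per-region edge density to an obstacle decision."""
    left, center, right = (
        np.count_nonzero(region) / region.size
        for region in np.array_split(edges, 3, axis=1)
    )
    if center >= threshold:
        return 'CENTER'   # highest priority: obstacle dead ahead
    if left >= threshold and left >= right:
        return 'LEFT'     # free space likely on the right
    if right >= threshold:
        return 'RIGHT'    # free space likely on the left
    return 'NONE'         # clear path
```

Chaining the two sketches, `decide(detect_edges(frame))` turns a raw camera frame into one of the four decision labels.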
- ROS 2 Jazzy
- Gazebo Sim (gz-sim)
- Python (rclpy)
- OpenCV
- cv_bridge
- URDF / Xacro
- RViz2
```bash
colcon build
source install/setup.bash

# launch the simulation
ros2 launch four_wheel_description rover.launch.py

# in a second sourced terminal, launch perception/visualization
ros2 launch four_control visualization.launch.py
```