This project is a hands-on implementation to explore and understand the concepts behind Simultaneous Localization and Mapping (SLAM). It uses a simulated laser sensor to detect obstacles and create a map of the environment in real-time. The system continuously updates the map based on sensor data and accounts for uncertainty in the measurements, with added features for line segment extraction using seeded region growing.
*(Demo animations: Lidar Mapping Sim, and Line Segment Extraction using the Seeded Region Growing Algorithm)*
The SLAM system works by simulating a laser sensor that scans the environment, detects obstacles, and generates a point cloud. The key features of this implementation are:
- Sensor Integration: A simulated laser sensor scans the environment and returns data about obstacles. The laser sensor adds Gaussian noise to simulate uncertainty in real-world sensor measurements.
- Feature Detection: The system detects line segments in the point cloud using a seeded region growing algorithm, allowing it to map not only individual obstacles but also linear structures such as walls and straight edges.
- Data Collection: The sensor gathers distance and angle measurements for obstacles detected in the sensor's range. This data is used for mapping and for segment extraction.
- Line Segment Extraction: The system applies a seed-based region-growing method to extract line segments from the point cloud, improving the map by detecting continuous features in the environment.
- Map Update: The map is updated dynamically in real-time by adding red dots at detected obstacles and visualizing extracted line segments in green.
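Each (distance, angle) reading from the simulated laser can be converted into a Cartesian map coordinate relative to the sensor's position. A minimal sketch of that conversion (the function and parameter names here are illustrative, not taken from the project's code):

```python
import math

def polar_to_cartesian(distance, angle, sensor_pos):
    """Convert a (distance, angle) laser reading into a map pixel coordinate.

    `sensor_pos` is the (x, y) position of the simulated sensor on the map;
    coordinates are truncated to ints because the map is a pixel grid.
    """
    x = sensor_pos[0] + distance * math.cos(angle)
    y = sensor_pos[1] + distance * math.sin(angle)
    return (int(x), int(y))
```

A reading of 10 units straight ahead from the origin lands at `(10, 0)`; the same reading at 90 degrees lands at `(0, 10)`.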
Component | Description |
---|---|
Sensor Integration | A simulated laser sensor scans the environment and returns data about obstacles. |
Feature Detection | A seeded region-growing algorithm is applied to detect line segments in the point cloud. |
Data Collection | The laser sensor gathers distance and angle measurements from obstacles detected in its range. |
Uncertainty Handling | Measurements are affected by uncertainty, modeled using a Gaussian distribution to simulate sensor noise. |
Distance Calculation | The distance to obstacles is calculated using the Euclidean distance formula: sqrt((x2 - x1)^2 + (y2 - y1)^2) . |
Data Storage | Detected obstacle positions and line segments are stored in a point cloud, ensuring no duplicate points. |
Map Update | The map is updated with red dots for obstacles and green lines for extracted segments, based on sensor data. |
Visualization | The map is rendered with Pygame: obstacles appear as red dots and extracted line segments as green lines. |
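The distance calculation and duplicate-free point cloud storage described in the table can be sketched as follows (function names are illustrative):

```python
import math

def euclidean_distance(p1, p2):
    """Distance between two map points: sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    return math.sqrt((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2)

def store_point(point, cloud):
    """Append a detected obstacle position, skipping exact duplicates."""
    if point not in cloud:
        cloud.append(point)
```

Checking membership with `point not in cloud` is O(n) per insert; a `set` of tuples would scale better if the point cloud grows large.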
- Line Segment Extraction with Seeded Region Growing: The system includes a seeded region growing algorithm for extracting line segments from LiDAR data, allowing more complex features, such as walls or edges, to be detected and mapped in the environment.
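A simplified sketch of the seeded region growing idea, assuming an ordered scan of 2D points: fit a line to a small seed window of consecutive points, then absorb subsequent points while they stay within a threshold of the fitted line. The function names, seed size, and threshold below are illustrative choices, not the project's exact parameters:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit; returns (point_on_line, unit_direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # principal direction of the centered points via SVD
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def point_line_distance(p, centroid, direction):
    """Orthogonal distance from point `p` to the fitted line."""
    diff = np.asarray(p, dtype=float) - centroid
    return np.linalg.norm(diff - np.dot(diff, direction) * direction)

def grow_segment(points, seed_start, seed_len=5, eps=0.5):
    """Grow a line segment from a seed window of consecutive scan points."""
    end = seed_start + seed_len
    segment = list(points[seed_start:end])
    while end < len(points):
        centroid, direction = fit_line(segment)
        if point_line_distance(points[end], centroid, direction) > eps:
            break  # next point breaks the linear trend; stop growing
        segment.append(points[end])
        end += 1
    return segment
```

Run on a scan that turns a corner, the segment grows along the first wall and stops where the second wall begins.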
- Real-Time Map Updates: The map is continuously updated with sensor data; obstacles are visualized as red dots and detected line segments are drawn in green.
- Dynamic Sensor Control: The laser sensor's data collection can be toggled based on mouse focus, providing real-time interaction during the simulation.
- Uncertainty Model: The distance and angle measurements are perturbed using a Gaussian distribution, implemented as `uncertainty_add()` in the code. This adds randomness to the measurements, simulating real-world sensor noise:

```python
import numpy as np

def uncertainty_add(distance, angle, sigma):
    mean = np.array([distance, angle])
    covariance = np.diag(sigma ** 2)
    distance, angle = np.random.multivariate_normal(mean, covariance)
    return [distance, angle]
```
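A quick sanity check of the noise model (restating the function so the snippet is self-contained): averaged over many samples, the noisy readings should recover the true measurement, with a spread matching `sigma`. The test values below are arbitrary:

```python
import numpy as np

def uncertainty_add(distance, angle, sigma):
    # perturb the (distance, angle) pair with zero-mean Gaussian noise
    # whose per-component standard deviations are given by `sigma`
    mean = np.array([distance, angle])
    covariance = np.diag(sigma ** 2)
    distance, angle = np.random.multivariate_normal(mean, covariance)
    return [distance, angle]

np.random.seed(0)  # reproducible demo
sigma = np.array([0.5, 0.01])  # distance noise is much larger than angle noise
samples = np.array([uncertainty_add(100.0, 1.57, sigma) for _ in range(2000)])
```

Each call draws one noisy `(distance, angle)` pair; the diagonal covariance means the two noise terms are independent.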
This project was inspired by the YouTube channel Hobby Coding, whose tutorials provided valuable insight into the concepts behind this implementation.