| EXPERIMENT TYPE | BAG TO USE |
|---|---|
| real | real_08-Dec_21_48_13 |
In this exercise, you will implement Extended Kalman Filter (EKF)–based localization for a robot using ROS 2.
You will use sensor fusion from odometry, IMU, and landmark detection to improve localization accuracy.
The system will first be tested in Gazebo simulation and then on a real TurtleBot3 robot.
- Develop a ROS 2 package that runs an EKF to estimate the robot’s pose.
- Localize the robot using landmark measurements.
- Fuse measurements from odometry and IMU for improved estimation accuracy.
- Concepts from lectures: motion models, sensor models, and EKF.
- Packages:
  - `turtlebot3_simulations`
  - `turtlebot3_perception`
- Install the perception package with `git clone https://github.com/SESASR-Course/turtlebot3_perception.git`, then follow the instructions in its `README.md` file to complete the installation.
Implement probabilistic motion and measurement models as standalone Python functions.
- Velocity-based motion model (sampling)
  - Input: current state `x`, command `u`, and noise parameters.
  - Output: new pose `x'` as a NumPy array.
  - Sample 500 poses from an initial state using two different noise settings:
    - One emphasizing angular uncertainty.
    - One emphasizing linear uncertainty.
  - Plot the sample distribution and compute the Jacobian matrices `G` and `V`.
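One way to sketch the velocity-model sampler is the standard sampling algorithm from the probabilistic robotics literature. The `alpha` noise parameters, the time step `dt`, and the example command below are illustrative assumptions, not values prescribed by the assignment:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_velocity_motion_model(x, u, alpha, dt):
    """Sample a new pose [x, y, theta] from the velocity motion model.

    x: current pose [x, y, theta]
    u: command [v, w]
    alpha: six noise parameters scaling the velocity variances
    dt: integration time step
    """
    v, w = u
    # Perturb the commanded velocities with zero-mean Gaussian noise
    v_hat = v + rng.normal(0.0, np.sqrt(alpha[0] * v**2 + alpha[1] * w**2))
    w_hat = w + rng.normal(0.0, np.sqrt(alpha[2] * v**2 + alpha[3] * w**2))
    gamma = rng.normal(0.0, np.sqrt(alpha[4] * v**2 + alpha[5] * w**2))

    theta = x[2]
    if abs(w_hat) > 1e-6:
        # Exact circular-arc integration
        r = v_hat / w_hat
        x_new = x[0] - r * np.sin(theta) + r * np.sin(theta + w_hat * dt)
        y_new = x[1] + r * np.cos(theta) - r * np.cos(theta + w_hat * dt)
    else:
        # Straight-line limit for w -> 0
        x_new = x[0] + v_hat * dt * np.cos(theta)
        y_new = x[1] + v_hat * dt * np.sin(theta)
    theta_new = theta + w_hat * dt + gamma * dt
    return np.array([x_new, y_new, theta_new])

# Sample 500 poses; here the alpha values emphasize angular uncertainty
x0 = np.array([0.0, 0.0, 0.0])
u = np.array([0.5, 0.3])
alpha_ang = [0.001, 0.001, 0.1, 0.1, 0.01, 0.01]
samples = np.array([sample_velocity_motion_model(x0, u, alpha_ang, 1.0)
                    for _ in range(500)])
```

Swapping `alpha_ang` for a setting that inflates `alpha[0]`/`alpha[1]` instead reproduces the linear-uncertainty case; the resulting `samples` array can be scatter-plotted directly.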
- Landmark-based measurement model (sampling)
  - Input: state `x`, measurement `z`, and noise parameters.
  - Output: NumPy array `[range, bearing]` of detected landmarks.
  - Sample 1000 poses and plot their distribution.
  - Compute and report the Jacobian matrix `H`.
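A minimal sketch of the range–bearing sampler, assuming additive Gaussian noise on both components; the landmark position, pose, and noise levels below are example values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_landmark_measurement(x, m, std_range, std_bearing):
    """Sample a noisy [range, bearing] observation of landmark m from pose x.

    x: robot pose [x, y, theta]; m: landmark position (mx, my).
    """
    dx, dy = m[0] - x[0], m[1] - x[1]
    r = np.hypot(dx, dy) + rng.normal(0.0, std_range)
    phi = np.arctan2(dy, dx) - x[2] + rng.normal(0.0, std_bearing)
    phi = np.arctan2(np.sin(phi), np.cos(phi))  # wrap bearing to (-pi, pi]
    return np.array([r, phi])

# Draw 1000 samples for one pose/landmark pair, e.g. for a scatter plot
x = np.array([0.0, 0.0, 0.0])
m = (2.0, 1.0)
samples = np.array([sample_landmark_measurement(x, m, 0.05, 0.02)
                    for _ in range(1000)])
```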
Implement an EKF node that estimates the robot’s position from landmark measurements.
- State: `[x, y, θ]`, the robot pose in the global frame.
- Prediction step (20 Hz):
  - Use the velocity motion model with control inputs `(v, ω)` from `/odom`.
  - Implement prediction using a timer callback.
- Update step:
  - Landmarks are published on `/landmarks` (or `/camera/landmarks` on the real robot) with message type `landmark_msgs/msg/LandmarkArray`.
  - For each landmark, perform the EKF update inside the subscription callback.
  - Publish the estimated state on `/ekf` (`nav_msgs/msg/Odometry`).
- Landmark coordinates are provided in `turtlebot3_perception/config/landmarks.yaml`.
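The per-landmark correction inside the subscription callback can be sketched as a standard EKF update for the 3-state pose with a range–bearing model. Here `Q` is assumed to be the 2×2 measurement-noise covariance and `m` the known landmark position from `landmarks.yaml`:

```python
import numpy as np

def ekf_update_landmark(mu, Sigma, z, m, Q):
    """One EKF correction: z = [range, bearing], m = landmark (mx, my)."""
    dx, dy = m[0] - mu[0], m[1] - mu[1]
    q = dx**2 + dy**2
    # Expected measurement from the current estimate
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    # Jacobian of the measurement model w.r.t. the state [x, y, theta]
    H = np.array([
        [-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
        [ dy / q,          -dx / q,          -1.0]])
    S = H @ Sigma @ H.T + Q            # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S) # Kalman gain
    innov = z - z_hat
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap bearing
    mu_new = mu + K @ innov
    Sigma_new = (np.eye(3) - K @ H) @ Sigma
    return mu_new, Sigma_new
```

In the node, this function would be called once per landmark in the `LandmarkArray` callback, with the result published on `/ekf`.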
Extend the EKF state to include linear and angular velocities: `[x, y, θ, v, ω]`

- Create new prediction and Jacobian functions `g(u, x)`, `G(u, x)`, and `V(u, x)`.
- The update step now includes:
  - Landmark model (`ht_landmark`)
  - Wheel encoder model (`ht_odom`) using `/odom`
  - IMU model (`ht_imu`) for updating `ω`
- Compute the Jacobians `Ht_landmark`, `Ht_odom`, and `Ht_imu`.
- Define and tune the noise parameters (`std_odom`, `std_imu`) to set the covariance matrix `Q`.
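One possible structure for the extended-state prediction, under the assumption that the pose is propagated with the velocities stored in the state (straight-line approximation) and the velocity states are held constant between updates; `u` is kept in the signatures for interface consistency but is unused in this particular sketch:

```python
import numpy as np

def g(u, x, dt):
    """Constant-velocity prediction for the extended state [x, y, theta, v, w]."""
    px, py, theta, v, w = x
    return np.array([
        px + v * dt * np.cos(theta),
        py + v * dt * np.sin(theta),
        theta + w * dt,
        v,   # velocity states held constant; corrected by /odom and IMU updates
        w])

def G(u, x, dt):
    """Jacobian of g with respect to the state (5x5)."""
    _, _, theta, v, _ = x
    Gm = np.eye(5)
    Gm[0, 2] = -v * dt * np.sin(theta)
    Gm[0, 3] = dt * np.cos(theta)
    Gm[1, 2] = v * dt * np.cos(theta)
    Gm[1, 3] = dt * np.sin(theta)
    Gm[2, 4] = dt
    return Gm

# The velocity observations are linear in this state, so their Jacobians
# are constant selection matrices:
Ht_odom = np.array([[0.0, 0.0, 0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0, 0.0, 1.0]])  # /odom observes (v, w)
Ht_imu = np.array([[0.0, 0.0, 0.0, 0.0, 1.0]])   # IMU gyro observes w
```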
Run your EKF implementation (Task 1 or 2) on the real TurtleBot3 robot.
- Launch camera and landmark detection:
  - `ros2 launch turtlebot3_perception camera.launch.py`
  - `ros2 launch turtlebot3_perception apriltag.launch.py`
- Landmarks are published on `/camera/landmarks` at ~6 Hz.
- Record the `/odom` and `/ekf` topics for later analysis and comparison.
- Provide a short explanation of your ROS 2 system architecture (nodes, topics, parameters).
- For Task 0:
  - Include motion and measurement model plots.
  - Discuss the effects of different uncertainty parameters.
  - Report the computed Jacobians and their linearized forms.
- For Task 1:
  - Compare the `/ground_truth`, `/odom`, and `/ekf` data.
  - Plot each state component (`x`, `y`, `θ`) over time.
  - Plot the 2D trajectory and compute RMSE and MAE.
- For Task 2:
  - Repeat the analysis and compare the results with Task 1.
- For Task 3:
  - Compare `/odom` vs `/ekf` on the real robot and discuss the differences.
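For the error metrics, a small helper like the following can be used, assuming the estimated and ground-truth trajectories have already been time-aligned into arrays of equal length:

```python
import numpy as np

def rmse_mae(est, gt):
    """Per-component RMSE and MAE between estimate and ground truth.

    est, gt: arrays of shape (N, k) holding time-aligned samples
    (e.g. k = 3 for [x, y, theta]).
    """
    err = est - gt
    rmse = np.sqrt(np.mean(err**2, axis=0))
    mae = np.mean(np.abs(err), axis=0)
    return rmse, mae
```

The same function can be applied to both the `/odom` and `/ekf` trajectories against `/ground_truth` to produce the comparison tables for the report.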
Run the Gazebo simulation (with both `turtlebot3_simulations` and `turtlebot3_perception` installed):

`ros2 launch turtlebot3_gazebo lab04.launch.py`

The environment includes white columns acting as landmarks (IDs shown in orange).

- Start the camera driver and landmark detector (as shown in Task 3).
- Landmarks will appear on `/camera/landmarks` if detected.