R-VIO2 is a novel square-root information-based robocentric visual-inertial navigation algorithm that uses a monocular camera and a single IMU for consistent 3D motion tracking. It builds on our robocentric VIO model, but unlike our previous work R-VIO, it derives and employs i) a square-root robocentric formulation and ii) a QR-based update combined with back substitution, which improve the numerical stability and computational efficiency of the estimator. Moreover, spatiotemporal calibration is performed online to robustify the estimator in the presence of unknown parameter errors. Notably, this implementation can run in two modes: VIO or SLAM. The former estimates only a sliding window of consecutive relative poses during navigation (our RA-L 2022 paper), while the latter additionally estimates a small set of map points to aid localization and mapping (the frontend developed for our T-RO 2024 paper).
If you find this work relevant to or use it for your research, please consider citing the following papers:
- Zheng Huai and Guoquan Huang, "Square-Root Robocentric Visual-Inertial Odometry with Online Spatiotemporal Calibration," IEEE Robotics and Automation Letters (RA-L), 2022.
@article{huai2022square,
title={Square-root robocentric visual-inertial odometry with online spatiotemporal calibration},
author={Huai, Zheng and Huang, Guoquan},
journal={IEEE Robotics and Automation Letters},
volume={7},
number={4},
pages={9961--9968},
year={2022},
publisher={IEEE}
}
- Zheng Huai and Guoquan Huang, "A Consistent Parallel Estimation Framework for Visual-Inertial SLAM," IEEE Transactions on Robotics (T-RO), 2024.
@article{huai2024consistent,
title={A Consistent Parallel Estimation Framework for Visual-Inertial SLAM},
author={Huai, Zheng and Huang, Guoquan},
journal={IEEE Transactions on Robotics},
volume={40},
pages={3734--3755},
year={2024},
publisher={IEEE}
}
- ROS: download and install instructions can be found at http://wiki.ros.org/kinetic/Installation/Ubuntu.
- Eigen: download and install instructions can be found at http://eigen.tuxfamily.org. Tested with v3.1.0.
- OpenCV: download and install instructions can be found at http://opencv.org. Tested with v3.3.1.
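On Ubuntu 16.04 (the target platform for ROS Kinetic), the dependencies can typically be installed from apt; the package names below are standard Ubuntu/ROS package names and are an assumption, not taken from this project:

```shell
# Install ROS Kinetic; the desktop-full variant bundles OpenCV and rviz
# (see the ROS wiki link above for the full setup, including keys and sources)
sudo apt-get install ros-kinetic-desktop-full
# Install Eigen (header-only linear-algebra library)
sudo apt-get install libeigen3-dev
```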
First git clone the repository and catkin_make it. In particular, rvio2_mono is used to run with a rosbag in real time, while rvio2_mono_eval is used for evaluation purposes; it preloads the rosbag and reads it as a txt file. A config file and a launch file are required to run R-VIO2 (for example, rvio2_euroc.yaml and euroc.launch are for the EuRoC dataset). The default mode is VIO; you can switch to SLAM mode by setting the maximum number of SLAM features to a nonzero value in the config file (see rvio2_euroc.yaml). To visualize the outputs, please use rviz.
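Concretely, the clone-and-build steps might look like the following. The workspace path ~/catkin_ws and the repository URL are assumptions for illustration; verify the URL against the actual project page:

```shell
# Enter the source folder of an existing catkin workspace (path assumed)
cd ~/catkin_ws/src
# Clone the repository (URL assumed; check the project page)
git clone https://github.com/rpng/R-VIO2.git
# Build the workspace with the ROS environment sourced
cd ~/catkin_ws
source /opt/ros/kinetic/setup.bash
catkin_make
```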
To run rvio2_mono:
Terminal 1: roscore
Terminal 2: rviz (AND OPEN rvio2_rviz.rviz IN THE CONFIG FOLDER)
Terminal 3: rosbag play --pause V1_01_easy.bag (AND SKIP SOME DATA IF NEEDED)
Terminal 4: roslaunch rvio2 euroc.launch

To run rvio2_mono_eval:
Terminal 1: roscore
Terminal 2: rviz (AND OPEN rvio2_rviz.rviz IN THE CONFIG FOLDER)
Terminal 3: roslaunch rvio2 euroc_eval.launch (PRESET PATH_TO_ROSBAG IN euroc_eval.launch)
Note that this implementation currently requires the sensor platform to start from stationary. Therefore, when testing the Machine Hall sequences you should skip the wiggling phase at the beginning. In particular, if you would like to run rvio2_mono_eval, the amount of rosbag data to skip can be set in the config file (see rvio2_euroc.yaml).
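As a sketch of the two config settings mentioned above, the YAML file might contain entries like the following. The key names here are illustrative placeholders, not the actual keys; consult rvio2_euroc.yaml for the real names:

```yaml
# Maximum number of SLAM features: 0 = pure VIO mode, >0 enables SLAM mode
# (key name is a placeholder -- see rvio2_euroc.yaml for the real one)
slam_max_num_features: 0

# Rosbag data to skip at the start, used by rvio2_mono_eval
# (key name is a placeholder -- see rvio2_euroc.yaml for the real one)
rosbag_skip_seconds: 0.0
```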
This code is released under GNU General Public License v3 (GPL-3.0).