
MAP-ADAPT: Real-Time Quality-Adaptive Semantic 3D Maps

Jianhao Zheng1 · Dániel Béla Baráth2 · Marc Pollefeys2, 3 · Iro Armeni1

European Conference on Computer Vision (ECCV) 2024

1Stanford University · 2ETH Zurich · 3Microsoft

arXiv · Project Page · License: MIT

Abstract

Creating 3D semantic reconstructions of environments is fundamental to many applications, especially when related to autonomous agent operation (e.g., goal-oriented navigation or object interaction and manipulation). Commonly, 3D semantic reconstruction systems capture the entire scene at the same level of detail. However, certain tasks (e.g., object interaction) require a fine-grained and high-resolution map, particularly if the objects to interact with are small or have intricate geometry. In recent practice, this leads to the entire map being stored at the same high resolution, which results in increased computational and storage costs. To address this challenge, we propose MAP-ADAPT, a real-time method for quality-adaptive semantic 3D reconstruction using RGBD frames. MAP-ADAPT is the first adaptive semantic 3D mapping algorithm that, unlike prior work, directly generates a single map with regions of different quality based on both the semantic information and the geometric complexity of the scene. Leveraging a semantic SLAM pipeline for pose and semantic estimation, we achieve comparable or superior results to state-of-the-art methods on synthetic and real-world data, while significantly reducing storage and computation requirements.

To-do

  • Release code and script to generate HSSD test dataset.
  • Docker file for easier installation.
  • Evaluation code.

Installation

The following are instructions to install MAP-ADAPT on Ubuntu 20.04 + ROS Noetic. They should also work on Ubuntu 18.04 + ROS Melodic with minimal changes. We plan to release a Docker version for users on other operating systems.

Prerequisites

  1. If you have not already done so, install ROS (Desktop-Full is recommended).

  2. If you have not already done so, create a catkin workspace with catkin tools:

sudo apt-get install python3-catkin-tools
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws
catkin init
catkin config --extend /opt/ros/noetic
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release
catkin config --merge-devel
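
If catkin config --extend /opt/ros/noetic complains that the path does not exist, the ROS environment is probably not sourced yet. A quick sanity check (assuming a default Noetic installation under /opt/ros/noetic):

source /opt/ros/noetic/setup.bash   # load the ROS environment into this shell
echo $ROS_DISTRO                    # should print "noetic"
catkin config                       # review the resulting workspace configuration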

Install MAP-ADAPT

  1. Install system dependencies (package names below match Ubuntu 20.04 / ROS Noetic):
sudo apt-get install python3-wstool python3-catkin-tools ros-noetic-cmake-modules protobuf-compiler autoconf
  2. Move to your catkin workspace:
cd ~/catkin_ws/src
  3. Download repo using SSH or HTTPS:
git clone git@github.com:GradientSpaces/MAP-ADAPT.git  # SSH
git clone https://github.com/GradientSpaces/MAP-ADAPT.git  # HTTPS
  4. Download and install package dependencies using wstool:
  • If you created a new workspace:
wstool init . ./MAP-ADAPT/dep_ssh.rosinstall    # SSH
wstool update
wstool init . ./MAP-ADAPT/dep_https.rosinstall  # HTTPS
wstool update
  • If you use an existing workspace (note that some dependencies will be checked out to specific branches):
wstool merge -t . ./MAP-ADAPT/dep_ssh.rosinstall
wstool update
  5. Compile and source:
catkin build map_adapt_ros
source ../devel/setup.bash
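
Optionally, to avoid re-sourcing in every new terminal, you can append the workspace setup to your shell profile (assuming the workspace is at ~/catkin_ws; adjust the path otherwise):

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc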

Getting Started

Demo Data: We provide two demo datasets from the HSSD dataset on Google Drive. Please download and unzip them to your preferred directory. map_adapt_ros/src/pc_semantic_geo_pub_node.cpp contains the code that loads these data. If you want to use MAP-ADAPT with your own dataset in a different data format, please refer to this script and write your own data publisher.
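
For reference, a possible way to place the demo data; the archive name and target directory below are placeholders, not the actual file names on Google Drive:

mkdir -p ~/data/map_adapt_demo                                   # any directory works
unzip ~/Downloads/<downloaded_archive>.zip -d ~/data/map_adapt_demo

data_path in the launch file (next step) should then point to wherever you unzipped the data.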

Run MAP-ADAPT: In map_adapt_ros/launch/run_hssd.launch, change data_path, output_path, and any other parameters you want to play with. Run the following command to start MAP-ADAPT:

roslaunch map_adapt_ros run_hssd.launch adaptive_mapping:=true load_semantic_probability:=true load_geo_complexity:=true\
          voxel_size:=0.08 truncation_distance:=0.2 pc_pub_rate:=20 use_gt_semantic:=false collection_name:=collection1

Note that this command activates both semantic-adaptive and geometry-complexity-adaptive mapping.
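
As an illustration, assuming the two adaptation flags can be toggled independently (not verified here), a run with only semantic-adaptive mapping and geometry-complexity adaptation disabled would look like:

roslaunch map_adapt_ros run_hssd.launch adaptive_mapping:=true load_semantic_probability:=true load_geo_complexity:=false \
          voxel_size:=0.08 truncation_distance:=0.2 pc_pub_rate:=20 use_gt_semantic:=false collection_name:=collection1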

Contact

If you have any questions, please contact Jianhao Zheng (jianhao@stanford.edu).

Acknowledgement

Our implementation is heavily based on Voxblox, and we also refer to code from Panoptic Mapping. We thank the authors for open-sourcing their code. If you use code that builds on their contributions, please cite them as well.

Citation

If you find our code and paper useful, please cite

@inproceedings{zheng2024map,
  title={Map-adapt: real-time quality-adaptive semantic 3D maps},
  author={Zheng, Jianhao and Barath, Daniel and Pollefeys, Marc and Armeni, Iro},
  booktitle={European Conference on Computer Vision},
  pages={220--237},
  year={2024},
  organization={Springer}
}
