
FrontierNet: Learning Visual Cues to Explore

Boyang Sun · Hanzhi Chen · Stefan Leutenegger · Cesar Cadena · Marc Pollefeys · Hermann Blum

RA-L 2025
ArXiv | IEEE | Video | Webpage

[example figure]
FrontierNet learns to detect frontiers (the known–unknown boundary) and predict their information gains from visual appearance, enabling highly efficient autonomous exploration of unknown environments.

Quick Start

  • 🔧 Setup — Install dependencies and prepare the environment.
  • 🚀 Run the Demo — Try FrontierNet on example data (single-image demo and full exploration demo).
  • 🛠️ Pipeline Configurations — Customize your pipeline.

Setup

First, clone the repository, install the dependencies, and download the model weights.

git clone --recursive https://github.com/cvg/FrontierNet && cd FrontierNet
conda create -n frontiernet python=3.11 -y && conda activate frontiernet
export CMAKE_POLICY_VERSION_MINIMUM=3.5 && pip install -r requirements.txt
bash download_weights.sh

Alternatively, download the checkpoint manually.
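To sanity-check the installation, a minimal snippet like the one below can be run. This is only a sketch: the checkpoint path is a placeholder and should be replaced with whatever file download_weights.sh actually fetches.

import pathlib
import torch

# Placeholder path: replace with the file that download_weights.sh actually downloads.
ckpt = pathlib.Path("weights/frontiernet.pth")
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print(("Checkpoint found:" if ckpt.exists() else "Checkpoint missing:"), ckpt)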

Optional:
  • Build and install UniK3D to use it as a depth prior (its dependencies should already be installed):
cd third_party/UniK3D/ && pip install -e .

Execution

Single Image Inference

Image from HM3D:

python demo_single_image.py --input_img examples/hm3d_1.jpg --out_dir output/ --config config/hm3d.yaml

Image from ScanNet++:

python demo_single_image.py --input_img examples/scannetpp_1.jpg --out_dir output/ --config config/scannetpp.yaml

Random Image (unknown camera):

python demo_single_image.py --input_img examples/internet_1.jpg --out_dir output/ --config config/any.yaml

By default, the pipeline uses Metric3Dv2 as the depth prior. You can switch to UniK3D by appending:

... --depth_source UniK3D
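For example, the HM3D command above with UniK3D as the depth source (assuming the flag is simply appended):

python demo_single_image.py --input_img examples/hm3d_1.jpg --out_dir output/ --config config/hm3d.yaml --depth_source UniK3D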

Visualization

Visualize the output using:

python demo_plot.py --result_path output/<file_name>.npz

This first plots the 2D result:

[example: 2D result]

Then press any key to see the 3D frontiers in the RGB-D point cloud:

[example: 3D frontiers in the point cloud]
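The saved .npz can also be inspected programmatically. The short sketch below only lists the arrays stored in the result file; the exact key names are pipeline-specific and not documented here, so none are assumed.

import sys
import numpy as np

# List the arrays stored in a demo result file (key names depend on the pipeline).
result = np.load(sys.argv[1], allow_pickle=True)
for key in result.files:
    arr = result[key]
    print(key, getattr(arr, "shape", None), getattr(arr, "dtype", None))

Save it as, e.g., inspect_result.py and run: python inspect_result.py output/<file_name>.npz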

Full-Scene Exploration

python demo_exploration.py --mesh examples/mv2HUxq3B53.glb --config config/hm3d_exploration.yaml --write_path output/exploration_state.json

[example: exploration demo]

This launches two Open3D interactive visualizers: a smaller window showing the egocentric view and a larger one displaying the exploration progress from a top-down perspective. In the smaller view, you can move the camera manually using keyboard controls (detailed in the terminal output) and trigger exploration by pressing SPACE. The provided example scene is from the HM3D dataset; you can load other scenes by passing a different .glb mesh file.

Note: This demo is a simplified, Open3D-based illustration of the exploration pipeline. It executes each component (sensing, mapping, frontier detection and update, and planning) sequentially in a single-threaded loop, step by step rather than in real time. This makes the system's logic easier to understand and debug, but it also increases time and memory consumption. For full-performance, real-time exploration, a separate implementation (e.g., in ROS) is required.
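As a reading aid, the sequential structure described in the note can be sketched as follows. This is not the demo's actual code; every function and field below is an invented placeholder for the corresponding module (sensing, Wavemap-based mapping, FrontierNet-based frontier update, OMPL-based planning).

from dataclasses import dataclass, field

@dataclass
class ExplorationState:
    pose: tuple = (0.0, 0.0, 0.0)               # current camera pose (placeholder)
    volumetric_map: dict = field(default_factory=dict)
    frontiers: list = field(default_factory=list)

def sense(pose):                                # render an egocentric RGB-D view at the pose
    return {"pose": pose, "rgb": None, "depth": None}

def integrate(vmap, observation):               # fuse the observation into the 3D map (Wavemap in the demo)
    vmap[observation["pose"]] = observation

def update_frontiers(state, observation):       # detect frontiers and predict their gains (FrontierNet)
    return []                                   # placeholder: returning no frontiers ends the loop

def plan_to(state, goal):                       # plan a path to the goal and return the new pose (OMPL in the demo)
    return goal["pose"]

def explore(max_steps=100):
    state = ExplorationState()
    for _ in range(max_steps):                  # one sense-map-detect-plan cycle per iteration
        observation = sense(state.pose)
        integrate(state.volumetric_map, observation)
        state.frontiers = update_frontiers(state, observation)
        if not state.frontiers:                 # exploration finished
            break
        goal = max(state.frontiers, key=lambda f: f["gain"])   # highest predicted information gain
        state.pose = plan_to(state, goal)
    return state

if __name__ == "__main__":
    explore()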

Pipeline Configurations

Pipeline parameters are loaded from a config file. An example, config/hm3d_exploration.yaml, is provided, in which you can adjust the key parameters of each component. The system uses Wavemap for 3D mapping and OMPL for path planning; you can also modify the configuration to integrate alternative mapping or planning modules.
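For quick experiments, the config can also be loaded and edited in Python before launching a run. This is a sketch assuming PyYAML is available in the environment; the actual parameter names depend on the file and are not assumed here.

import yaml

# Load the example exploration config, inspect its top-level sections, and save a copy.
with open("config/hm3d_exploration.yaml") as f:
    cfg = yaml.safe_load(f)

print(list(cfg.keys()))                      # which components are configurable
# cfg["some_component"]["some_param"] = ...  # hypothetical override; use the real keys from the file

with open("my_exploration.yaml", "w") as f:
    yaml.safe_dump(cfg, f)

The edited copy can then be passed to demo_exploration.py via --config.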

✅ TODO

  • Add exploration result replay.
  • Add support for finer-grained update intervals in the exploration demo.
  • Add planning pipeline by August.
  • Add support for UniK3D.
  • Add support for Metric3D.

⚠️ Known Limitations

  • Performance may degrade in outdoor scenes or highly cluttered indoor environments.
  • Predictions are less reliable when objects are very close to the camera.

📖 Citation

If you use any ideas from the paper or code from this repo, please consider citing:

@article{boysun2025frontiernet,
  author={Sun, Boyang and Chen, Hanzhi and Leutenegger, Stefan and Cadena, Cesar and Pollefeys, Marc and Blum, Hermann},
  journal={IEEE Robotics and Automation Letters},
  title={FrontierNet: Learning Visual Cues to Explore},
  year={2025},
  volume={10},
  number={7},
  pages={6576-6583},
  doi={10.1109/LRA.2025.3569122}
}

📬 Contact

For questions, feedback, or collaboration, feel free to reach out to Boyang Sun:
📧 boysun@ethz.ch 🌐 boysun045.github.io
