Jan Philipp Schneider1,2, Pratik Singh Bisht1, Ilya Chugunov2, Andreas Kolb1, Michael Moeller1,3, Felix Heide2,4
1University of Siegen
2Princeton University
3Lamarr Institute
4Torc Robotics
🎉 NeurIPS 2025 (spotlight) 🎉
"Neural Atlas Graphs enable high-quality dynamic scene decomposition and intuitive 2D appearance editing, with use-cases in autonomous driving and videography."
This repository contains the official implementation of Neural Atlas Graphs for Dynamic Scene Decomposition and Editing, a novel hybrid scene representation for learning editable high-resolution dynamic scenes. Neural Atlas Graphs (NAG) integrate the editability of neural atlases with the complex spatial reasoning of scene graphs, where each graph node is a view-dependent neural atlas. This allows for both intuitive 2D appearance editing and consistent 3D ordering and positioning of scene elements.
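To make the core idea concrete, here is a minimal, self-contained sketch of a graph of atlas nodes with a consistent depth ordering. All names here are illustrative, not the repository's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a Neural Atlas Graph holds a set of nodes, each
# pairing an appearance atlas with a 3D placement, and composites them
# in a consistent depth order (background first, then far-to-near).

@dataclass
class AtlasNode:
    name: str
    depth: float            # distance from the camera; larger = farther
    is_background: bool = False

@dataclass
class NeuralAtlasGraph:
    nodes: list = field(default_factory=list)

    def add(self, node):
        self.nodes.append(node)

    def render_order(self):
        # Background nodes come first, then foreground nodes far-to-near.
        return sorted(self.nodes, key=lambda n: (not n.is_background, -n.depth))

g = NeuralAtlasGraph()
g.add(AtlasNode("car", depth=5.0))
g.add(AtlasNode("sky", depth=100.0, is_background=True))
g.add(AtlasNode("pedestrian", depth=2.0))
print([n.name for n in g.render_order()])  # → ['sky', 'car', 'pedestrian']
```

In the actual model each node is a view-dependent neural atlas rather than a plain record, but the ordering principle is the same.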
Please refer to the installation instructions for setting up the repository, its dependencies and data.
Given a proper Python environment setup, our method can be run with:

```shell
python nag/scripts/run_nag.py --config-path [path-to-config]
```

For more details, please refer to the training instructions.
During training and afterwards, the model is evaluated on all frames, producing rendered outputs, quantitative metrics, and per-object scene decompositions.
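As an illustration of the kind of per-frame metric computed during evaluation, here is a plain-Python PSNR sketch (a common reconstruction metric; not necessarily the repository's exact metric code):

```python
import math

def psnr(pred, target, max_val=1.0):
    """PSNR between two images given as flat lists of floats in [0, max_val]."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr([0.5, 0.5], [0.5, 0.6]), 2))  # → 23.01
```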
We are committed to full reproducibility of the results presented in our paper. All configuration files and training procedures are provided in this repository. We provide detailed instructions on how to reproduce our experiments in the reproducibility document. Further, we provide the datasets and an explanation how to set these up in our datasets setup document.
In the future, we plan to provide further scripts to convert additional Waymo segments and Davis sequences into our used formats. Create a GitHub issue if you are interested in this or have any questions.
We provide a Jupyter Notebook showcasing how to load a pre-trained NAG model, decompose scenes into objects, and perform texture editing. This notebook serves as a practical guide for utilizing the capabilities of Neural Atlas Graphs.
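The reason a single 2D edit propagates consistently is that every frame samples the same atlas texture. The toy sketch below illustrates this principle with hypothetical helper names (it is not the notebook's actual API):

```python
# Illustrative sketch: painting a region of the shared atlas texture
# changes every frame that samples that region.

def edit_atlas(texture, region, color):
    """Overwrite a rectangular region (r0, r1, c0, c1) of a 2D texture grid."""
    r0, r1, c0, c1 = region
    for r in range(r0, r1):
        for c in range(c0, c1):
            texture[r][c] = color
    return texture

def render_frame(texture, uv_coords):
    """Sample the atlas at integer (row, col) UV coordinates for one frame."""
    return [texture[r][c] for r, c in uv_coords]

atlas = [[0 for _ in range(4)] for _ in range(4)]
edit_atlas(atlas, (1, 3, 1, 3), 9)                 # one edit on the atlas ...
frame_a = render_frame(atlas, [(1, 1), (0, 0)])    # ... is visible in frame A
frame_b = render_frame(atlas, [(2, 2), (3, 3)])    # ... and in frame B
print(frame_a, frame_b)  # → [9, 0] [9, 0]
```

In NAG the mapping from frames into the atlas is learned and view-dependent, but the editing workflow follows the same pattern.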
To briefly outline the code structure of our repository, we provide a high-level overview of the main components and their locations within the codebase.
Our model training and evaluation are encapsulated in a dedicated runner, which holds instances of the model, the dataset, and all other training-related components. The runner can be created to train a new model, or to load an existing one for further evaluation. Since we rely on PyTorch Lightning for training, we implemented a callback class that controls training progress and handles the evaluation of the model. General tools and utility functions live in a dedicated tools library, which needs to be included via git submodules.
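The runner/callback interplay can be sketched in plain Python as follows (the actual implementation subclasses PyTorch Lightning's `Callback`; the names below are illustrative):

```python
# Minimal sketch of the runner + callback pattern: the runner drives the
# training loop and notifies callbacks, which trigger periodic evaluation.

class EvaluationCallback:
    def __init__(self, eval_every):
        self.eval_every = eval_every
        self.evaluated_epochs = []

    def on_epoch_end(self, epoch, model):
        if (epoch + 1) % self.eval_every == 0:
            # Here the real callback would render frames and compute metrics.
            self.evaluated_epochs.append(epoch)

class Runner:
    """Holds the model, data, and callbacks; drives training and evaluation."""
    def __init__(self, model, callbacks):
        self.model = model
        self.callbacks = callbacks

    def fit(self, epochs):
        for epoch in range(epochs):
            # ... one training epoch over the dataset would run here ...
            for cb in self.callbacks:
                cb.on_epoch_end(epoch, self.model)

cb = EvaluationCallback(eval_every=2)
Runner(model=None, callbacks=[cb]).fit(epochs=6)
print(cb.evaluated_epochs)  # → [1, 3, 5]
```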
Further, we briefly point out the location of the NAG core components within the repository.
- The NAG model is located at nag/model/nag_functional_model.py; it composites all nodes and contains the rendering code.
- The foreground node implementation is located at nag/model/view_dependent_image_plane_scene_node_3d.py; its base classes extend up to nag/model/learned_image_plane_scene_node_3d.py, which includes the network definitions.
- The background node is implemented within nag/model/view_dependent_background_image_plane_scene_node_3d.py and its base class nag/model/background_image_plane_scene_node_3d.py.
- The editing functionality is implemented in the mixin class nag/model/texture_mappable_scene_node_3d.py.
- The camera is implemented within nag/model/learned_camera_scene_node_3d.py and its base class nag/model/timed_camera_scene_node_3d.py.
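To give an intuition for how the model combines these nodes into a frame, here is a single-pixel sketch of back-to-front alpha compositing, the standard "over" operator used in layered plane rendering (a sketch of the idea, not the repository's rendering code):

```python
# Back-to-front "over" compositing for one pixel: each layer blends its
# color over the accumulated result according to its alpha.

def composite(layers):
    """layers: list of (color, alpha) pairs, ordered back (first) to front (last)."""
    out = 0.0
    for color, alpha in layers:
        out = alpha * color + (1.0 - alpha) * out
    return out

# Opaque background, then a half-transparent foreground plane:
print(composite([(0.2, 1.0), (1.0, 0.5)]))  # → 0.6
```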
There is much more to explore and explain, so feel free to open issues or discussions on GitHub if you have any questions about the code structure or implementation details.
If you find our work useful in your research, please consider citing our paper:
@inproceedings{Schneider2025NAG,
author = {Jan Philipp Schneider and
Pratik Singh Bisht and
Ilya Chugunov and
Andreas Kolb and
Michael Moeller and
Felix Heide},
title = {Neural Atlas Graphs for Dynamic Scene Decomposition and Editing},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
volume = {38},
url = {https://neurips.cc/virtual/2025/poster/115926},
}

Thanks for your interest in our work! We hope you find Neural Atlas Graphs as exciting and useful as we do. If you have any questions, suggestions, or feedback, please don't hesitate to reach out via GitHub issues or discussions.