
[CVPR 2025] Empowering Large Language Models with 3D Situation Awareness


Overview

Driven by the great success of Large Language Models (LLMs) in the 2D image domain, their application in 3D scene understanding has emerged as a new trend. A key difference between 3D and 2D is that the situation of an egocentric observer in 3D scenes can change, resulting in different descriptions (e.g., "left" or "right"). However, current LLM-based methods overlook the egocentric perspective and use datasets from a global viewpoint. To address this issue, we propose a novel approach to automatically generate a situation-aware dataset by leveraging the scanning trajectory during data collection and utilizing Vision-Language Models (VLMs) to produce high-quality captions and question-answer pairs. Furthermore, we introduce a situation grounding module to explicitly predict the position and orientation of the observer's viewpoint, thereby enabling LLMs to ground situation descriptions in 3D scenes. We evaluate our approach on several benchmarks, demonstrating that our method effectively enhances the 3D situational awareness of LLMs while significantly expanding existing datasets and reducing manual effort.
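
The situation grounding module is only described at a high level here. As a rough, hypothetical sketch of the idea (not the actual module in this repository), a small head could regress the observer's 3D position and orientation from pooled scene features:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SituationGroundingHead(nn.Module):
        """Hypothetical sketch: regress the observer's viewpoint from scene features.

        This is NOT the module shipped in this repository; it only illustrates the
        idea of predicting a 3D position and an orientation from pooled features.
        """

        def __init__(self, feat_dim: int = 768, hidden_dim: int = 256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 3 + 4),  # xyz position + quaternion orientation
            )

        def forward(self, scene_feats: torch.Tensor):
            # scene_feats: (batch, num_tokens, feat_dim) -> pool over scene tokens.
            pooled = scene_feats.mean(dim=1)
            out = self.mlp(pooled)
            position, quat = out[:, :3], out[:, 3:]
            # Normalize the quaternion so it represents a valid rotation.
            return position, F.normalize(quat, dim=-1)

The sketch only illustrates the prediction targets (viewpoint position and orientation); in the paper the grounding module is coupled with the LLM so that situation descriptions can be grounded in the 3D scene.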

🔨 Preparation

  • Our codebase has been updated to build on Chat-Scene.

  • Prepare the environment:

    conda create -n chat-scene python=3.9.17
    conda activate chat-scene
    conda install pytorch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 pytorch-cuda=11.8 -c pytorch -c nvidia
    pip install -r requirements.txt
  • Download LLM backbone:

    • We use Vicuna-7B v1.5 in our experiments, which can be downloaded from Hugging Face.

    • Change the llama_model_path in run.sh to the path of vicuna-7b-v1.5 (see the download sketch after this list).

  • Download our View2Cap dataset from OneDrive.

  • Annotations and extracted features:

    Please follow the instructions in preprocess.
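
For the Vicuna-7B download, here is a minimal Python sketch using the huggingface_hub API (the lmsys/vicuna-7b-v1.5 repo id and the weights/ target directory are assumptions, not fixed by this repository):

    from huggingface_hub import snapshot_download

    # Fetch Vicuna-7B v1.5 (assumed repo id) into a local directory of your choice.
    local_dir = snapshot_download(
        repo_id="lmsys/vicuna-7b-v1.5",
        local_dir="weights/vicuna-7b-v1.5",  # hypothetical path; pick any location
    )

    # Point llama_model_path in run.sh at this directory.
    print(local_dir)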

🤖 Training and Inference

  • Pre-training with the View2Cap dataset

    • Modify train.sh as needed.
    • Run: bash scripts/train_view2cap.sh
  • Fine-tuning on downstream tasks

  • Inference

    • Modify eval.sh:

      val_tag="scanrefer#scan2cap#scanqa#sqa3d#multi3dref"
      evaluate=True
      pretrained_path="/path/to/pretrained_model.pth"
    • Run: bash scripts/eval.sh
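
Before launching evaluation, it can help to confirm that the checkpoint file loads; below is a minimal sanity-check sketch (assuming a standard PyTorch .pth checkpoint; the path placeholder is the one from eval.sh above):

    import torch

    # Load the checkpoint on CPU only to inspect it (replace the placeholder path).
    ckpt = torch.load("/path/to/pretrained_model.pth", map_location="cpu")

    # Checkpoints are commonly either a raw state_dict or a dict wrapping one under
    # a key such as "model"; print the top-level keys to see which layout this file uses.
    if isinstance(ckpt, dict):
        print(list(ckpt.keys())[:10])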

📄 Citation

If you find this project useful in your research, please consider citing:

@InProceedings{Yuan_2025_CVPR,
    author    = {Yuan, Zhihao and Peng, Yibo and Ren, Jinke and Liao, Yinghong and Han, Yatong and Feng, Chun-Mei and Zhao, Hengshuang and Li, Guanbin and Cui, Shuguang and Li, Zhen},
    title     = {Empowering Large Language Models with 3D Situation Awareness},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {19435-19445}
}

Stay tuned for updates to our project. 🔥

😊 Acknowledgement

Thanks to the following open-source projects:

(Multi-modal) LLMs: LLaMA, Vicuna, VideoChat, LEO

3D Datasets: ScanNet, ScanRefer, ReferIt3D, Scan2Cap, ScanQA, SQA3D, Multi3DRefer

Detectors: PointGroup, Mask3D, DEVA

Representations: ULIP, Uni3D, DINOv2

3D Models: vil3dref, OpenScene
