Generative 3D part amodal segmentation: decomposing a 3D shape into complete, semantically meaningful parts.
- 🚀 Initial Release: Published code, pretrained models, and interactive demo.
- 📌 Coming Soon:
  - Integration of segmentation methods into the HoloPart pipeline.
Clone the repo:

```bash
git clone https://github.com/VAST-AI-Research/HoloPart.git
cd HoloPart
```

Create a conda environment (optional):

```bash
conda create -n holopart python=3.10
conda activate holopart
```

Install dependencies:

```bash
# PyTorch (select the correct CUDA version)
pip install torch torchvision --index-url https://download.pytorch.org/whl/{your-cuda-version}
# Other dependencies
pip install -r requirements.txt
```
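After installing, a quick import check can catch a broken environment before running the pipeline. This is a minimal sketch; the package list is assumed from the steps above and the preprocessing code below (torch/torchvision from the PyTorch install, trimesh/numpy from requirements.txt):

```python
import importlib.util

# Packages assumed to be required by HoloPart (see install steps above)
required = ("torch", "torchvision", "trimesh", "numpy")

# find_spec() returns None when a package is not import-resolvable
status = {pkg: importlib.util.find_spec(pkg) is not None for pkg in required}

for pkg, ok in status.items():
    print(f"{pkg}: {'ok' if ok else 'MISSING'}")
```

Any `MISSING` line means the corresponding install step should be repeated.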
Upload a mesh with part segmentation. We recommend using these segmentation tools:
For a mesh file `mesh.glb` and a corresponding face mask `mask.npy`, prepare your input using this Python code:
```python
import trimesh
import numpy as np

mesh = trimesh.load("mesh.glb", force="mesh")
mask_npy = np.load("mask.npy")

# Extract one submesh per part id in the face mask
mesh_parts = []
for part_id in np.unique(mask_npy):
    mesh_part = mesh.submesh([mask_npy == part_id], append=True)
    mesh_parts.append(mesh_part)

trimesh.Scene(mesh_parts).export("input_mesh.glb")
```
The resulting `input_mesh.glb` is the prepared input for HoloPart.
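To double-check the grouping logic, the mask-to-parts split can be mirrored with plain NumPy. The mask values below are made up for illustration; a real `mask.npy` holds one part id per mesh face:

```python
import numpy as np

# Toy face mask: 8 faces labeled with 3 part ids (illustrative values only)
mask = np.array([0, 0, 1, 1, 1, 2, 2, 0])

# Group face indices by part id, mirroring the submesh() selection above
part_faces = {int(pid): np.flatnonzero(mask == pid) for pid in np.unique(mask)}

for pid, faces in part_faces.items():
    print(f"part {pid}: {faces.size} faces")
```

Each boolean mask `mask == pid` selects the faces of one part, which is exactly what `mesh.submesh([...])` consumes above.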
Run inference:

```bash
python -m scripts.inference_holopart --mesh-input assets/example_data/000.glb
```
The required model weights will be downloaded automatically:
- HoloPart model from VAST-AI/HoloPart → `pretrained_weights/HoloPart`
We would like to thank the following open-source projects and research works that made HoloPart possible:
- 🤗 Diffusers for their excellent diffusion model framework
- HunyuanDiT for DiT
- FlashVDM for their lightning vecset decoder
- 3DShape2VecSet for 3D shape representation
- TripoSG as our base model
We are grateful to the broader research community for their open exploration and contributions to the field of 3D generation.
```bibtex
@article{yang2025holopart,
  title={HoloPart: Generative 3D Part Amodal Segmentation},
  author={Yang, Yunhan and Guo, Yuan-Chen and Huang, Yukun and Zou, Zi-Xin and Yu, Zhipeng and Li, Yangguang and Cao, Yan-Pei and Liu, Xihui},
  journal={arXiv preprint arXiv:2504.07943},
  year={2025}
}
```