If you are familiar with Chinese, you can read the Chinese version of this README.
[March 19, 2025] Released the local Gradio app (app.py)
[March 19, 2025] Gradio Optimization: Faster and More Stable 🔥🔥🔥
[March 15, 2025] Inference Time Optimization: 30% Faster
[March 13, 2025] Initial release with:
✅ Inference codebase
✅ Pretrained LHM-0.5B model
✅ Pretrained LHM-1B model
✅ Real-time rendering pipeline
✅ Hugging Face Online Demo
- Core Inference Pipeline (v0.1) 🔥🔥🔥
- Hugging Face Demo Integration 🤗🤗🤗
- ModelScope Deployment
- Motion Processing Scripts
- Training Code Release
Clone the repository.
```bash
git clone git@github.com:aigc3d/LHM.git
cd LHM
```
Install the dependencies with the provided scripts:
```bash
# CUDA 11.8
sh ./install_cu118.sh

# CUDA 12.1
sh ./install_cu121.sh
```
The installation has been tested with Python 3.10 and CUDA 11.8 or CUDA 12.1.
Alternatively, you can install the dependencies step by step by following INSTALL.md.
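After installation, a quick sanity check (a minimal sketch, not part of the install scripts) can confirm that PyTorch sees your GPU and that its CUDA build matches the script you ran:

```bash
# Optional sanity check (not part of the install scripts):
# prints the PyTorch version, its CUDA build, and whether a GPU is visible.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```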
Download pretrained models from our OSS:
| Model | Training Data | BH-T Layers | Link | Inference Time |
| --- | --- | --- | --- | --- |
| LHM-0.5B | 5K Synthetic Data | 5 | OSS | 2.01 s |
| LHM-0.5B | 300K Videos + 5K Synthetic Data | 5 | OSS | 2.01 s |
| LHM-0.7B | 300K Videos + 5K Synthetic Data | 10 | OSS | 4.13 s |
| LHM-1.0B | 300K Videos + 5K Synthetic Data | 15 | OSS | 6.57 s |
```bash
# Download the LHM model weights
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/LHM/LHM-0.5B.tar
tar -xvf LHM-0.5B.tar

wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/LHM/LHM-1B.tar
tar -xvf LHM-1B.tar
```
```bash
# Download prior model weights
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/LHM/LHM_prior_model.tar
tar -xvf LHM_prior_model.tar
```
We provide test motion examples; the motion processing scripts will be released ASAP :).
```bash
# Download the motion video examples
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/LHM/motion_video.tar
tar -xvf ./motion_video.tar
```
After downloading the weights and data, the project folder structure should look like this:
```
├── configs
│   ├── inference
│   ├── accelerate-train-1gpu.yaml
│   ├── accelerate-train-deepspeed.yaml
│   ├── accelerate-train.yaml
│   └── infer-gradio.yaml
├── engine
│   ├── BiRefNet
│   ├── pose_estimation
│   └── SegmentAPI
├── example_data
│   └── test_data
├── exps
│   └── releases
├── LHM
│   ├── datasets
│   ├── losses
│   ├── models
│   ├── outputs
│   ├── runners
│   ├── utils
│   └── launch.py
├── pretrained_models
│   ├── dense_sample_points
│   ├── gagatracker
│   ├── human_model_files
│   ├── sam2
│   ├── sapiens
│   ├── voxel_grid
│   ├── arcface_resnet18.pth
│   └── BiRefNet-general-epoch_244.pth
├── scripts
│   ├── exp
│   ├── convert_hf.py
│   └── upload_hub.py
├── tools
│   └── metrics
├── train_data
│   ├── example_imgs
│   └── motion_video
├── inference.sh
├── README.md
└── requirements.txt
```
Launch the local Gradio app:

```bash
python ./app.py
```
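By default, Gradio serves the app locally at http://localhost:7860 (the port may differ if app.py configures one explicitly).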
Run inference on a single image or a folder of images with:

```bash
bash inference.sh ${CONFIG} ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER} ${MOTION_SEQ}

# For example:
# bash ./inference.sh ./configs/inference/human-lrm-500M.yaml ./exps/releases/video_human_benchmark/human-lrm-500M/step_060000/ ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# bash ./inference.sh ./configs/inference/human-lrm-1B.yaml ./exps/releases/video_human_benchmark/human-lrm-1B/step_060000/ ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
```
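To animate the same input images with every downloaded motion sequence, a simple shell loop works. This is a minimal sketch: the `*/smplx_params` glob assumes the motion folders follow the `mimo1/smplx_params` layout from the example above.

```bash
# Hypothetical batch run: iterate over all motion sequences under
# ./train_data/motion_video/ (folder layout assumed from the example above).
for motion in ./train_data/motion_video/*/smplx_params; do
    bash ./inference.sh \
        ./configs/inference/human-lrm-1B.yaml \
        ./exps/releases/video_human_benchmark/human-lrm-1B/step_060000/ \
        ./train_data/example_imgs/ \
        "${motion}"
done
```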
We provide simple scripts to compute the evaluation metrics.
```bash
# Download the pretrained ArcFace model into ./pretrained_models/
wget -P ./pretrained_models/ https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/arcface_resnet18.pth

# Face Similarity
python ./tools/metrics/compute_facesimilarity.py -f1 ${gt_folder} -f2 ${results_folder}

# PSNR
python ./tools/metrics/compute_psnr.py -f1 ${gt_folder} -f2 ${results_folder}

# SSIM & LPIPS
python ./tools/metrics/compute_ssim_lpips.py -f1 ${gt_folder} -f2 ${results_folder}
```
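To evaluate all metrics against the same pair of folders in one pass, the scripts can be wrapped in a loop. This is a minimal sketch; `gt_folder` and `results_folder` are placeholders just like in the commands above.

```bash
# Run all three metric scripts on the same ground-truth / result folders.
gt_folder=./path/to/gt_frames        # placeholder: your ground-truth frames
results_folder=./path/to/renders     # placeholder: your rendered results
for script in compute_facesimilarity compute_psnr compute_ssim_lpips; do
    python ./tools/metrics/${script}.py -f1 "${gt_folder}" -f2 "${results_folder}"
done
```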
This work is built on many amazing research works and open-source projects:
Thanks for their excellent work and great contributions to the fields of 3D generation and 3D digital humans.
```bibtex
@inproceedings{qiu2025LHM,
  title={LHM: Large Animatable Human Reconstruction Model for Single Image to 3D in Seconds},
  author={Lingteng Qiu and Xiaodong Gu and Peihao Li and Qi Zuo and Weichao Shen and Junfei Zhang and Kejie Qiu and Weihao Yuan and Guanying Chen and Zilong Dong and Liefeng Bo},
  booktitle={arXiv preprint arXiv:2503.10625},
  year={2025}
}
```