Lingteng Qiu*, Xiaodong Gu*, Peihao Li*, Qi Zuo*, Weichao Shen, Junfei Zhang, Kejie Qiu, Weihao Yuan
Guanying Chen+, Zilong Dong+, Liefeng Bo
If you are familiar with Chinese, you can read the Chinese version of this README.
[March 25, 2025] The ModelScope Space online demo has been released: 500M model only.
[March 24, 2025] Is SAM2 difficult to install 😭😭😭? 👉 LHM is now compatible with rembg!
[March 20, 2025] Released the video motion processing pipeline.
[March 19, 2025] Local Gradio app.py optimization: faster and more stable 🔥🔥🔥
[March 15, 2025] Inference time optimization: 30% faster.
[March 13, 2025] Initial release with:
✅ Inference codebase
✅ Pretrained LHM-0.5B model
✅ Pretrained LHM-1B model
✅ Real-time rendering pipeline
✅ Huggingface Online Demo
- Core Inference Pipeline (v0.1) 🔥🔥🔥
- HuggingFace Demo Integration 🤗🤗🤗
- ModelScope Deployment
- Motion Processing Scripts
- Training Code Release
We provide a step-by-step video tutorial on installing LHM on bilibili, submitted by 站长推荐推荐.
Clone the repository.
git clone git@github.com:aigc3d/LHM.git
cd LHM
Set up a virtual environment. Open Command Prompt (CMD), navigate to the project folder, and run:
python -m venv lhm_env
lhm_env\Scripts\activate
install_cu121.bat
python ./app.py
# if SAM2 is difficult to install, you can use rembg instead
pip install rembg

# cuda 11.8
sh ./install_cu118.sh

# cuda 12.1
sh ./install_cu121.sh
The installation has been tested with Python 3.10 and CUDA 11.8 or CUDA 12.1.
Or you can install dependencies step by step, following INSTALL.md.
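To verify the environment before moving on, a minimal sanity check is to confirm that PyTorch sees your GPU (this sketch assumes the virtual environment from above is activated; the reported CUDA version depends on which install script you ran):

```bash
# minimal sanity check: PyTorch version, CUDA build, and GPU visibility
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```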
Please note that the model weights will be downloaded automatically if you do not download them yourself.
| Model | Training Data | BH-T Layers | Link | Inference Time |
| --- | --- | --- | --- | --- |
| LHM-0.5B | 5K Synthetic Data | 5 | OSS | 2.01 s |
| LHM-0.5B | 300K Videos + 5K Synthetic Data | 5 | OSS | 2.01 s |
| LHM-0.7B | 300K Videos + 5K Synthetic Data | 10 | OSS | 4.13 s |
| LHM-1.0B | 300K Videos + 5K Synthetic Data | 15 | OSS | 6.57 s |
# Download LHM model weights
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/LHM/LHM-0.5B.tar
tar -xvf LHM-0.5B.tar
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/LHM/LHM-1B.tar
tar -xvf LHM-1B.tar
# Download prior model weights
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/LHM_prior_model.tar
tar -xvf LHM_prior_model.tar
We provide test motion examples; we will update the processing scripts as soon as possible :).
# Download the test motion data
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/motion_video.tar
tar -xvf ./motion_video.tar
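To check that the motion data extracted correctly, you can list one of the provided sequences. This is only a sketch: it assumes the archive ends up under ./train_data/ as shown in the project layout below, and uses the mimo1 sequence referenced in the inference examples.

```bash
# list a few SMPL-X parameter files from the mimo1 test motion sequence
ls ./train_data/motion_video/mimo1/smplx_params | head
```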
After downloading the weights and data, the project folder structure should look like this:
├── configs
│ ├── inference
│ ├── accelerate-train-1gpu.yaml
│ ├── accelerate-train-deepspeed.yaml
│ ├── accelerate-train.yaml
│ └── infer-gradio.yaml
├── engine
│ ├── BiRefNet
│ ├── pose_estimation
│ ├── SegmentAPI
├── example_data
│ └── test_data
├── exps
│ ├── releases
├── LHM
│ ├── datasets
│ ├── losses
│ ├── models
│ ├── outputs
│ ├── runners
│ ├── utils
│ ├── launch.py
├── pretrained_models
│ ├── dense_sample_points
│ ├── gagatracker
│ ├── human_model_files
│ ├── sam2
│ ├── sapiens
│ ├── voxel_grid
│ ├── arcface_resnet18.pth
│ ├── BiRefNet-general-epoch_244.pth
├── scripts
│ ├── exp
│ ├── convert_hf.py
│ └── upload_hub.py
├── tools
│ ├── metrics
├── train_data
│ ├── example_imgs
│ ├── motion_video
├── inference.sh
├── README.md
├── requirements.txt
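Before launching the app or running inference, a quick existence check on the key folders above can catch a missing download early (a minimal sketch; the paths are taken directly from the layout):

```bash
# check that the downloaded weights and data landed where the layout above expects them
for p in ./pretrained_models/human_model_files ./pretrained_models/sam2 \
         ./pretrained_models/sapiens ./exps/releases \
         ./train_data/example_imgs ./train_data/motion_video; do
  [ -e "$p" ] && echo "OK      $p" || echo "MISSING $p"
done
```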
python ./app.py
# MODEL_NAME={LHM-500M, LHM-1B}
# bash ./inference.sh ./configs/inference/human-lrm-500M.yaml LHM-500M ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# bash ./inference.sh ./configs/inference/human-lrm-1B.yaml LHM-1B ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# animation
bash inference.sh ${CONFIG} ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER} ${MOTION_SEQ}
# export mesh
bash ./inference_mesh.sh ${CONFIG} ${MODEL_NAME}
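As a concrete example, the following animates the bundled test images with the 500M model and then exports meshes. This is a sketch that substitutes the variables above with the paths from the commented examples; adjust the config, model name, and motion folder to your setup.

```bash
# animation with the 500M model and the provided mimo1 motion
bash inference.sh ./configs/inference/human-lrm-500M.yaml LHM-500M \
    ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params

# mesh export with the same config and model name
bash ./inference_mesh.sh ./configs/inference/human-lrm-500M.yaml LHM-500M
```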
- Download model weights for motion processing.
wget -P ./pretrained_models/human_model_files/pose_estimate https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/yolov8x.pt
wget -P ./pretrained_models/human_model_files/pose_estimate https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/vitpose-h-wholebody.pth
- Install extra dependencies.
cd ./engine/pose_estimation
pip install -v -e third-party/ViTPose
pip install ultralytics
- Run the script.
# example: python ./engine/pose_estimation/video2motion.py --video_path ./train_data/demo.mp4 --output_path ./train_data/custom_motion
python ./engine/pose_estimation/video2motion.py --video_path ${VIDEO_PATH} --output_path ${OUTPUT_PATH}
- Use the motion to drive the avatar.
# if SAM2 is not installed, run: pip install rembg
# bash ./inference.sh ./configs/inference/human-lrm-500M.yaml LHM-500M ./train_data/example_imgs/ ./train_data/custom_motion/demo/smplx_params
# bash ./inference.sh ./configs/inference/human-lrm-1B.yaml LHM-1B ./train_data/example_imgs/ ./train_data/custom_motion/demo/smplx_params
bash inference.sh ${CONFIG} ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER} ${OUTPUT_PATH}/${VIDEO_NAME}/smplx_params
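Putting the four steps together, an end-to-end run on your own video might look like the following sketch. It assumes your video sits at ./train_data/demo.mp4 (the path used in the example comment) and that the output folder name becomes the ${VIDEO_NAME} part of the motion path.

```bash
# 1. extract SMPL-X motion parameters from the video
python ./engine/pose_estimation/video2motion.py \
    --video_path ./train_data/demo.mp4 \
    --output_path ./train_data/custom_motion

# 2. drive the avatar with the extracted motion (500M model)
bash inference.sh ./configs/inference/human-lrm-500M.yaml LHM-500M \
    ./train_data/example_imgs/ ./train_data/custom_motion/demo/smplx_params
```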
We provide some simple scripts to compute the metrics.
# download the pretrained ArcFace model into ./pretrained_models/
wget -P ./pretrained_models/ https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/arcface_resnet18.pth
# Face Similarity
python ./tools/metrics/compute_facesimilarity.py -f1 ${gt_folder} -f2 ${results_folder}
# PSNR
python ./tools/metrics/compute_psnr.py -f1 ${gt_folder} -f2 ${results_folder}
# SSIM LPIPS
python ./tools/metrics/compute_ssim_lpips.py -f1 ${gt_folder} -f2 ${results_folder}
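For example, with ground-truth frames and rendered results in two folders (the folder names here are hypothetical), all three metrics can be computed as follows:

```bash
# hypothetical folders: ./eval/gt (ground truth) and ./eval/pred (rendered results)
python ./tools/metrics/compute_facesimilarity.py -f1 ./eval/gt -f2 ./eval/pred
python ./tools/metrics/compute_psnr.py -f1 ./eval/gt -f2 ./eval/pred
python ./tools/metrics/compute_ssim_lpips.py -f1 ./eval/gt -f2 ./eval/pred
```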
We are looking for a ComfyUI wrapper of our pipeline. If you are familiar with ComfyUI and would like to contribute to LHM, please contact muyuan.zq@alibaba-inc.com.
This work is built on many amazing research works and open-source projects:
Thanks for their excellent work and great contributions to the 3D generation and 3D digital human areas.
We would like to express our sincere gratitude to 站长推荐推荐 for the installation tutorial video on bilibili.
@inproceedings{qiu2025LHM,
title={LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds},
author={Lingteng Qiu and Xiaodong Gu and Peihao Li and Qi Zuo
and Weichao Shen and Junfei Zhang and Kejie Qiu and Weihao Yuan
and Guanying Chen and Zilong Dong and Liefeng Bo
},
booktitle={arXiv preprint arXiv:2503.10625},
year={2025}
}