Description
I got it working on my Windows machine.
It might require Visual Studio Build Tools; I am not sure, because my machine already had them installed. If they turn out to be needed, you can get them from the Visual Studio Build Tools download page.
Env
- Windows 10
- Conda - Python 3.11
- RTX 3090
- CUDA 12.1
Setup Steps
- conda create -n unique3d-py311 python=3.11
- conda activate unique3d-py311
- pip install torch torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121
- (Download the Triton wheel from https://huggingface.co/madbuda/triton-windows-builds; since I use Python 3.11, I downloaded the py311 build and installed it manually) pip install triton-2.1.0-cp311-cp311-win_amd64.whl
- pip install Ninja
- (Because the latest diffusers release is not compatible with the repo's code, we MUST pin this diffusers version manually) pip install diffusers==0.27.2
- pip install grpcio werkzeug tensorboard-data-server
- Remove torch>=2.0.1, diffusers>=0.26.3, xformers, and onnxruntime_gpu from requirements.txt
- pip install -r requirements.txt
- pip uninstall onnxruntime
- pip uninstall onnxruntime-gpu
- (We MUST uninstall onnxruntime and onnxruntime-gpu first, then manually install the CUDA 12 build) pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
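Once the steps above are done, a quick sanity check can confirm that the CUDA builds were actually picked up before launching the app. This is just a helper sketch of my own, not part of the repo; it only imports the packages installed above and prints their versions and providers.

```python
# Sanity-check sketch for the environment set up above (not part of the repo).
import torch
import diffusers
import onnxruntime as ort

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
print("diffusers:", diffusers.__version__)          # should be 0.27.2 for this setup
print("onnxruntime providers:", ort.get_available_providers())
# 'CUDAExecutionProvider' should be listed if the CUDA 12 onnxruntime-gpu
# build from the extra index installed correctly.

for name in ("xformers", "triton"):
    try:
        mod = __import__(name)
        print(name, getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(name, "is not importable")
```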
Runtime
- You need to create the output folder tmp\gradio on the drive you run the app from yourself; for me, it is F:\tmp\gradio.
- You will see an onnxruntime error, but it should not affect mesh generation (maybe?).
- (Update) To fix the TensorRT error, download the TensorRT bundle for Windows from https://github.com/NVIDIA/TensorRT and add TensorRT-10.0.1.6\lib to your PATH environment variable. CUDA and cuDNN also need to be configured; please refer to the GitHub page for more details. A small pre-launch sketch covering both of these notes follows below.
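Both of these runtime notes (the tmp\gradio folder and the TensorRT/cuDNN DLL paths) can also be handled by a small pre-launch snippet. This is only a sketch under my own assumptions: the TensorRT extract location C:\TensorRT-10.0.1.6 and the cuDNN path are placeholders you must adjust to your own install, and F:\ is simply the drive I run from.

```python
# Pre-launch helper sketch (paths below are assumptions; adjust to your machine).
import os
from pathlib import Path

# 1. Create the gradio temp output folder on the drive you run from
#    (F:\ in my case), so the app does not fail when saving results.
Path(r"F:\tmp\gradio").mkdir(parents=True, exist_ok=True)

# 2. Make the TensorRT and cuDNN DLLs visible to this process so onnxruntime
#    can find them. Both directories are hypothetical example locations.
for dll_dir in (r"C:\TensorRT-10.0.1.6\lib",
                r"C:\Program Files\NVIDIA\CUDNN\v9.x\bin"):
    if os.path.isdir(dll_dir):
        os.add_dll_directory(dll_dir)                 # Windows DLL search path
        os.environ["PATH"] = dll_dir + os.pathsep + os.environ["PATH"]
    else:
        print("warning: directory not found:", dll_dir)
```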
Good luck~