Popular repositories
Wan2.2-Animate-14B-Quant-Compression (Public)
Mixed-precision quantization scheme (mixed 16/8/4-bit quantization) for the Wan2.2-Animate-14B model. Compresses the original 35 GB base model to 17 GB, balancing inference performance against model size.
Python · 1 star
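The repository itself is not shown here, but the idea behind mixed 16/8/4-bit quantization can be sketched in a few lines: sensitive layers keep 16-bit (half) precision, while bulkier layers are stored as low-bit integers plus a per-tensor scale factor. The layer names, bit-width policy, and quantizer below are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of mixed-precision (16/8/4-bit) weight quantization.
# Layer names and the bits-per-layer policy are hypothetical examples.
import numpy as np

def quantize(w: np.ndarray, bits: int):
    """Symmetric uniform quantization of a float32 tensor to `bits` bits.

    16-bit tensors are simply cast to float16 (scale is None); lower
    bit-widths are rounded to signed integers with a per-tensor scale.
    """
    if bits == 16:
        return w.astype(np.float16), None
    qmax = 2 ** (bits - 1) - 1          # 127 for 8-bit, 7 for 4-bit
    scale = float(np.abs(w).max()) / qmax
    if scale == 0.0:
        scale = 1.0                     # all-zero tensor: any scale works
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale                     # note: 4-bit values are not packed here

def dequantize(q, scale):
    """Recover an approximate float32 tensor from its quantized form."""
    if scale is None:
        return q.astype(np.float32)
    return q.astype(np.float32) * scale

# Hypothetical policy: keep normalization at 16-bit, attention at 8-bit,
# and the large feed-forward weights at 4-bit.
policy = {"norm": 16, "attn.qkv": 8, "ffn.w1": 4}

rng = np.random.default_rng(0)
weights = {name: rng.standard_normal((64, 64)).astype(np.float32)
           for name in policy}

quantized = {name: quantize(w, policy[name]) for name, w in weights.items()}
```

Because the scale is chosen so the tensor's largest value maps exactly to the integer range, no clipping occurs and the reconstruction error per element is at most half a quantization step (`scale / 2`), larger for 4-bit than for 8-bit. A real compression pipeline would additionally pack two 4-bit values per byte to realize the storage savings.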