A mixed-precision quantization scheme (16/8/4-bit) for the Wan2.2-Animate-14B model. Compresses the original 35 GB base model to 17 GB, balancing inference quality and model size.
quantization 8bit 4bit 16bit model-compression image2video wan2 wan2-2-animate wan2-animate mixed-precision-quantization 14b-model
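As a rough illustration of the idea (not this repository's actual implementation), a mixed-precision scheme keeps quantization-sensitive tensors in 16-bit while pushing the bulkiest weights down to 8-bit or 4-bit. The sketch below shows per-tensor symmetric quantization with a hypothetical bit-width plan; all layer names and thresholds are illustrative assumptions.

```python
import torch

# Hypothetical per-layer bit-width plan; the names and assignments are
# illustrative, not the repository's actual scheme.
BIT_PLAN = {
    "norm": 16,  # norms / embeddings stay in fp16
    "attn": 8,   # attention projections -> int8
    "ffn": 4,    # feed-forward weights  -> int4 (stored in int8 here)
}

def quantize_symmetric(w: torch.Tensor, bits: int):
    """Per-tensor symmetric quantization: w ~= q * scale."""
    if bits >= 16:
        # Keep in half precision, no integer quantization.
        return w.half(), None
    qmax = 2 ** (bits - 1) - 1  # 127 for 8-bit, 7 for 4-bit
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale):
    if scale is None:
        return q
    return q.float() * scale

# Toy usage: compare reconstruction error at 8-bit vs. 4-bit.
w = torch.randn(4, 4)
q8, s8 = quantize_symmetric(w, BIT_PLAN["attn"])
q4, s4 = quantize_symmetric(w, BIT_PLAN["ffn"])
print("8-bit max abs error:", (w - dequantize(q8, s8)).abs().max().item())
print("4-bit max abs error:", (w - dequantize(q4, s4)).abs().max().item())
```

Assigning lower bit widths to the largest weight matrices is what drives the size reduction: halving the average bits per weight roughly halves the checkpoint, consistent with the 35 GB to 17 GB figure above.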