# ComfyUI-ModelQuantizer

A comprehensive custom node pack for ComfyUI that provides advanced tools for quantizing model weights to lower-precision formats such as FP16, BF16, and true FP8 types, with specialized support for ControlNet models.
## Features

This node pack provides powerful quantization tools directly within ComfyUI, including:

- **Model To State Dict**: Extracts the state dictionary from a model object and attempts to normalize keys.
- **Quantize Model to FP8 Format**: Converts model weights directly to `float8_e4m3fn` or `float8_e5m2` format (requires CUDA).
- **Quantize Model Scaled**: Applies simulated FP8 scaling (per-tensor or per-channel) and then casts the model to `float16`, `bfloat16`, or keeps the original format.
- **Save As SafeTensor**: Saves the processed state dictionary to a `.safetensors` file at a specified path.
- **ControlNet FP8 Quantizer**: Advanced FP8 quantization designed specifically for ControlNet models, with precision-aware quantization, tensor calibration, and ComfyUI folder integration.
- **ControlNet Metadata Viewer**: Analyzes and displays ControlNet model metadata, tensor information, and structure for debugging and optimization.
## Installation

1. Clone or download this repository into your ComfyUI `custom_nodes` directory.
   - Example using git:
     ```bash
     cd ComfyUI/custom_nodes
     git clone https://github.com/YourUsername/YourRepoName.git ComfyUI-ModelQuantizer  # Replace with your actual repo URL and desired folder name
     ```
   - Alternatively, download the ZIP and extract it into `ComfyUI/custom_nodes/ComfyUI-ModelQuantizer`.
2. Install the dependencies:
   ```bash
   cd ComfyUI/custom_nodes/ComfyUI-ModelQuantizer
   pip install -r requirements.txt
   ```
3. For ControlNet quantization, ensure your ControlNet models are in the correct folder:
   ```
   ComfyUI/models/controlnet/
   ├── control_v11p_sd15_canny.safetensors
   ├── control_v11p_sd15_openpose.safetensors
   └── ...
   ```
4. Restart ComfyUI.
## Nodes

### Model To State Dict

- Category: `Model Quantization/Utils`
- Function: Extracts the state dict from a `MODEL` object, stripping common prefixes.
- Inputs:
  - `model`: The input `MODEL` object.
- Outputs:
  - `model_state_dict`: The extracted state dictionary.
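How the key normalization works is not spelled out above, but the idea is to strip the wrapper prefixes that loaders add so the saved file uses plain checkpoint keys. A minimal sketch, assuming hypothetical prefixes such as `model.diffusion_model.` (the node's actual prefix list may differ):

```python
# Hypothetical sketch of state-dict key normalization; the prefixes
# the node actually strips may differ.
from typing import Dict
import torch

COMMON_PREFIXES = ["model.diffusion_model.", "model."]  # assumed examples

def normalize_keys(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    normalized = {}
    for key, tensor in state_dict.items():
        for prefix in COMMON_PREFIXES:
            if key.startswith(prefix):
                key = key[len(prefix):]
                break  # strip at most one prefix per key
        normalized[key] = tensor
    return normalized
```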
### Quantize Model to FP8 Format

- Category: `Model Quantization/FP8 Direct`
- Function: Converts model weights directly to a specific FP8 format. Requires CUDA.
- Inputs:
  - `model_state_dict`: The state dictionary to quantize.
  - `fp8_format`: The target FP8 format (`float8_e5m2` or `float8_e4m3fn`).
- Outputs:
  - `quantized_model_state_dict`: The state dictionary with FP8 tensors.
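At its core, a direct FP8 conversion is a dtype cast. A minimal sketch, assuming a PyTorch build that exposes the FP8 dtypes (2.1+); this illustrates the operation rather than the node's exact implementation:

```python
# Sketch of a direct FP8 cast; requires FP8 dtypes in PyTorch and,
# per the node's documentation, a CUDA device.
import torch

def cast_to_fp8(state_dict, fp8_dtype=torch.float8_e4m3fn):
    out = {}
    for key, tensor in state_dict.items():
        if tensor.is_floating_point():
            out[key] = tensor.cuda().to(fp8_dtype)
        else:
            out[key] = tensor  # leave integer/bool tensors untouched
    return out
```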
### Quantize Model Scaled

- Category: `Model Quantization`
- Function: Applies simulated FP8 value scaling and then casts to FP16, BF16, or keeps the original dtype. Useful for size reduction with good compatibility.
- Inputs:
  - `model_state_dict`: The state dictionary to quantize.
  - `scaling_strategy`: How to simulate scaling (`per_tensor` or `per_channel`).
  - `processing_device`: Where to perform calculations (`Auto`, `CPU`, `GPU`).
  - `output_dtype`: Final data type (`Original`, `float16`, `bfloat16`). Defaults to `float16`.
- Outputs:
  - `quantized_model_state_dict`: The processed state dictionary.
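"Simulated" here means the weights are round-tripped through the FP8 grid but stored in a widely supported dtype such as `float16`, so any loader can read the result. A per-tensor sketch of the idea (the exact math the node uses is an assumption):

```python
# Hedged sketch of per-tensor simulated FP8 scaling: scale into the
# e4m3fn range (max ~448), quantize-dequantize, store as float16.
import torch

FP8_E4M3_MAX = 448.0

def simulate_fp8_per_tensor(tensor: torch.Tensor) -> torch.Tensor:
    t = tensor.float()
    amax = t.abs().max().clamp(min=1e-12)  # guard against all-zero tensors
    scale = FP8_E4M3_MAX / amax
    # Round-trip through FP8 so values land on the FP8 grid, then rescale.
    q = (t * scale).to(torch.float8_e4m3fn).float() / scale
    return q.to(torch.float16)
```

A `per_channel` variant would compute `amax` along each output channel rather than over the whole tensor, at some extra cost.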
### Save As SafeTensor

- Category: `Model Quantization/Save`
- Function: Saves the processed state dictionary to a `.safetensors` file.
- Inputs:
  - `quantized_model_state_dict`: The state dictionary to save.
  - `absolute_save_path`: The full path (including filename) where the model will be saved.
- Outputs: None (output node).
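Saving reduces to a single call into the `safetensors` library; a minimal sketch (the node presumably adds validation around this):

```python
# Minimal sketch of writing a state dict to .safetensors; save_file
# requires contiguous tensors, hence the .contiguous() call.
from safetensors.torch import save_file

def save_state_dict(state_dict, absolute_save_path: str):
    tensors = {k: v.contiguous() for k, v in state_dict.items()}
    save_file(tensors, absolute_save_path)
```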
### ControlNet FP8 Quantizer

- Category: `Model Quantization/ControlNet`
- Function: Advanced FP8 quantization designed specifically for ControlNet models, with precision-aware quantization and tensor calibration.
- Inputs:
  - `controlnet_model`: Dropdown selection of ControlNet models from the `models/controlnet/` folder.
  - `fp8_format`: FP8 format (`float8_e4m3fn` recommended, or `float8_e5m2`).
  - `quantization_strategy`: `per_tensor` (faster) or `per_channel` (better quality).
  - `activation_clipping`: Enable percentile-based outlier handling (recommended).
  - `custom_output_name`: Optional custom filename for the output.
  - `calibration_samples`: Number of samples for tensor calibration (10-1000, default: 100).
  - `preserve_metadata`: Preserve original metadata in the output file.
- Outputs:
  - `status`: Operation status and result message.
  - `metadata_info`: JSON-formatted metadata information.
  - `quantization_stats`: Detailed compression statistics and ratios.
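Percentile-based clipping trims the few largest-magnitude values before the quantization scale is computed, so isolated outliers do not squander the narrow FP8 range. A rough sketch of the idea; the percentile and the surrounding logic are assumptions, not the node's exact code:

```python
# Hypothetical percentile clipping for FP8 scale computation.
import torch

def clipped_amax(tensor: torch.Tensor, percentile: float = 99.9) -> torch.Tensor:
    # torch.quantile requires float/double input (see Troubleshooting).
    t = tensor.float().abs().flatten()
    return torch.quantile(t, percentile / 100.0).clamp(min=1e-12)

weights = torch.randn(4096) * 0.02
weights[0] = 10.0                      # a single outlier
scale = 448.0 / clipped_amax(weights)  # far larger than the 448.0 / 10.0 a raw abs-max would give
```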
### ControlNet Metadata Viewer

- Category: `Model Quantization/ControlNet`
- Function: Analyzes and displays ControlNet model metadata, tensor information, and structure.
- Inputs:
  - `controlnet_model`: Dropdown selection of ControlNet models from the `models/controlnet/` folder.
- Outputs:
  - `metadata`: JSON-formatted original metadata.
  - `tensor_info`: Detailed tensor information including shapes, dtypes, and sizes.
  - `model_analysis`: Model structure analysis including layer types and statistics.
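For reference, the `safetensors` library exposes both header metadata and per-tensor layout without loading full weights into memory; a minimal sketch of that kind of inspection (assumed to resemble what the node reports):

```python
# Sketch of inspecting a .safetensors file's metadata and tensors.
from safetensors import safe_open

def inspect_safetensors(path: str):
    with safe_open(path, framework="pt") as f:
        metadata = f.metadata() or {}  # header metadata, may be None
        for name in f.keys():
            t = f.get_tensor(name)
            print(name, tuple(t.shape), t.dtype)
    return metadata
```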
## Example Workflows

### Basic Quantization Workflow

1. Load a model using a standard loader (e.g., `Load Checkpoint`).
2. Connect the `MODEL` output to the `Model To State Dict` node.
3. Connect the `model_state_dict` output from `Model To State Dict` to `Quantize Model Scaled`.
4. In `Quantize Model Scaled`, select your desired `scaling_strategy` and set `output_dtype` to `float16` (for size reduction).
5. Connect the `quantized_model_state_dict` output from `Quantize Model Scaled` to the `Save As SafeTensor` node.
6. Specify the `absolute_save_path` in the `Save As SafeTensor` node.
7. Queue the prompt.
8. Restart ComfyUI or refresh the loaders to find the saved model.
### ControlNet FP8 Quantization Workflow

1. Add a `ControlNet FP8 Quantizer` node to your workflow.
2. Select your ControlNet model from the dropdown (automatically populated from `models/controlnet/`).
3. Configure the settings:
   - FP8 Format: `float8_e4m3fn` (recommended for most cases)
   - Strategy: `per_channel` (better quality) or `per_tensor` (faster)
   - Activation Clipping: `True` (recommended for better quality)
4. Execute the workflow; the quantized model is automatically saved to `models/controlnet/quantized/`.
5. Use `ControlNet Metadata Viewer` to compare the original and quantized models.
### Batch Processing

1. Add multiple `ControlNet FP8 Quantizer` nodes.
2. Select a different ControlNet model in each node.
3. Use consistent settings across all nodes.
4. Execute to process multiple models simultaneously.
## Key Features

- Precision-aware quantization with tensor calibration and percentile-based scaling
- Two FP8 formats: `float8_e4m3fn` (recommended) and `float8_e5m2`
- Quantization strategies: per-tensor (faster) and per-channel (better quality)
- Automatic ComfyUI integration with dropdown model selection
- Smart output management: quantized models are saved to `models/controlnet/quantized/`
- Comprehensive analysis with the metadata viewer and detailed statistics
- Fallback logic for compatibility across different PyTorch versions
- ~50% size reduction with maintained quality
- Advanced tensor calibration using statistical analysis
- Activation clipping with outlier handling
- Metadata preservation with quantization information
- Error handling with graceful fallbacks
- Progress tracking and detailed logging
- Automatic model detection from the `models/controlnet/` folder
- Dropdown selection, with no manual path entry needed
- Auto-generated filenames with format and strategy information
- Organized output in a dedicated `quantized` subfolder
- Seamless workflow integration with existing ControlNet nodes
## Requirements

- PyTorch 2.0+ (for FP8 support; usually included with ComfyUI)
- `safetensors` >= 0.3.1
- `tqdm` >= 4.65.0
- `tensorflow` >= 2.13.0 (optional, for advanced optimization)
- `tensorflow-model-optimization` >= 0.7.0 (optional)
- CUDA-enabled GPU recommended for FP8 operations
- CPU fallback available for compatibility
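Because FP8 dtypes only landed in recent PyTorch releases, version-dependent fallback logic is needed. A sketch of what such a check can look like (not the pack's actual code):

```python
# Sketch of a runtime FP8 availability check with a float16 fallback.
import torch

def pick_quant_dtype(preferred: str = "float8_e4m3fn") -> torch.dtype:
    if hasattr(torch, preferred):
        return getattr(torch, preferred)
    print(f"{preferred} unavailable in this PyTorch build; using float16")
    return torch.float16
```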
## Troubleshooting

### Nodes not appearing

- Ensure all dependencies are installed: `pip install -r requirements.txt`
- Check that ControlNet models are in the `ComfyUI/models/controlnet/` folder
- Restart ComfyUI completely
- Check the console for import errors

### ControlNet models not detected

- Place ControlNet models in the `ComfyUI/models/controlnet/` folder
- Supported formats: `.safetensors`, `.pth`
- Check file permissions
- Use manual path input as a fallback if needed

### Common errors

- "quantile() input tensor must be either float or double dtype": fixed in the latest version
- CUDA out of memory: use CPU processing or reduce the batch size
- FP8 not supported: upgrade PyTorch to 2.0+ or use the CPU fallback
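For context on the quantile error: `torch.quantile` rejects half-precision input, so tensors must be cast up first. A one-line illustration of the failure and the fix:

```python
import torch

t = torch.randn(256, dtype=torch.float16)
# torch.quantile(t, 0.999)  # RuntimeError: quantile() input tensor must
#                           # be either float or double dtype
q = torch.quantile(t.float(), 0.999)  # cast to float32 first
```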
## Performance Tips

- For best quality: use `per_channel` + `activation_clipping` + `float8_e4m3fn`
- For speed: use `per_tensor` and reduce `calibration_samples`
- Memory issues: process models one at a time
## License

MIT