Description
*************************** 1.command is ******************************
PROMPT='A girl holding a paper with words "Hello, world!"'
IMAGE_PATH='none' # Optional, 'none' or a path to a reference image
SEED=1
ASPECT_RATIO=16:9
RESOLUTION=480p
OUTPUT_PATH=./outputs/output_3.mp4
# Configuration
N_INFERENCE_GPU=2 # Parallel inference GPU count
CFG_DISTILLED=false # Inference with CFG distilled model, 2x speedup
SPARSE_ATTN=false # Inference with sparse attention (only 720p models are equipped with sparse attention). Please ensure flex-block-attn is installed
SAGE_ATTN=false # Inference with SageAttention
REWRITE=false # Enable prompt rewriting. Please ensure the rewrite vLLM server is deployed and configured.
OVERLAP_GROUP_OFFLOADING=false # Only valid when group offloading is enabled; significantly increases CPU memory usage but speeds up inference
MODEL_PATH=ckpts # Path to pretrained model
CUDA_VISIBLE_DEVICES=6,7 torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
    --prompt "$PROMPT" \
    --image_path $IMAGE_PATH \
    --resolution $RESOLUTION \
    --aspect_ratio $ASPECT_RATIO \
    --seed $SEED \
    --cfg_distilled $CFG_DISTILLED \
    --sparse_attn $SPARSE_ATTN \
    --use_sageattn $SAGE_ATTN \
    --rewrite $REWRITE \
    --output_path $OUTPUT_PATH \
    --save_pre_sr_video \
    --model_path $MODEL_PATH \
    --sr false
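
Since the log below shows flash attention silently falling back to torch attention, it may be worth verifying the optional attention backends in the environment before launching. The following is a minimal check, assuming the standard flash_attn and sageattention package names (this helper script is not part of the repo):

# check_attn_backends.py -- quick environment sanity check (hypothetical helper, not part of the repo)
import importlib

for pkg in ("flash_attn", "sageattention"):
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg}: available (version {getattr(mod, '__version__', 'unknown')})")
    except ImportError as err:
        print(f"{pkg}: NOT available ({err})")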
************************ 2.log *************************
transformer_version: 480p_t2v,transformer_dtype: torch.bfloat16,enable_sr: False
cache path ckpts/transformer/480p_t2v torch.bfloat16 cpu
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:13<00:00, 2.67s/it]
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:14<00:00, 2.84s/it]
2025-11-24 17:49:24.463 | WARNING | __main__:rank0_log:47 - Warning: Prompt rewriting is disabled. This may affect the quality of generated videos.
============================================================
🎬 HunyuanVideo Generation Task
User Prompt: A girl holding a paper with words "Hello, world!"
Rewritten Prompt:
Aspect Ratio: 16:9
Video Length: 121
Reference Image: None
Guidance Scale: 6.0
Guidance Embedded Scale: None
Shift: 5.0
Seed: 1
Video Resolution: 848 x 480
Attn mode: flash
Transformer dtype: torch.bfloat16
Sampling Steps: 50
Use Meanflow: False
0%| | 0/50 [00:00<?, ?it/s]The module 'HunyuanVideo_1_5_DiffusionTransformer' is group offloaded and moving it using .to() is not supported.
/mnt/raid0/supeng/program/hunyuan/HunyuanVideo-1.5/hyvideo/commons/__init__.py:199: UserWarning: flash is not available. Falling back to torch attention.
warnings.warn("flash is not available. Falling back to torch attention.")
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [13:57<00:00, 16.70s/it]The module 'HunyuanVideo_1_5_DiffusionTransformer' is group offloaded and moving it using .to() is not supported.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [13:57<00:00, 16.75s/it]
[rank1]:[W1124 18:04:05.077111082 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
Saved video to: ./outputs/output_3.mp4
[rank0]:[W1124 18:04:08.565342723 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
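
The ProcessGroupNCCL warnings at exit are harmless, but they can be silenced by tearing down the process group explicitly before generate.py exits. A minimal sketch, assuming the script initializes torch.distributed itself:

import torch.distributed as dist

# At the very end of the script, after the video has been saved:
if dist.is_initialized():
    dist.destroy_process_group()  # clean NCCL shutdown; avoids the destroy_process_group() warning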
***** The result is *********************
