Add max_context_length to TextEncode node for LLM max tokens - experimental use #289

Open · wants to merge 5 commits into base: main

Changes from 1 commit
Removed token counting for now due to added complexity
fblissjr committed Jan 19, 2025
commit f38bc59714ea091a54288ffe03ea4b1a34b34997
nodes.py: 21 changes (1 addition, 20 deletions)
@@ -1281,26 +1281,8 @@ def process(
                     "`prompt_template['template']` must contain a placeholder `{}` for the input text, "
                     f"got {prompt_template_dict['template']}"
                 )
-            # --- Apply and debug print the template ---
-            prompt_with_template = text_encoder_1.apply_text_to_template(
-                prompt, prompt_template_dict["template"]
-            )
-            log.debug(
-                f"HyVideoTextEncode: Prompt with template: {prompt_with_template}"
-            )
-
-            # --- Debug: Check for truncation and log token count ---
-            prompt_tokens = text_encoder_1.tokenizer(
-                prompt_with_template, return_length=True, return_tensors="pt"
-            )
-            token_count = prompt_tokens["length"][0].item()
-            if token_count > text_encoder_1.max_length:
-                log.info(
-                    f"HyVideoTextEncode: Prompt with template is {token_count} tokens long, which is longer than max_context_length ({text_encoder_1.max_length}). It will be truncated."
-                )
         else:
             prompt_template_dict = None
             prompt_with_template = prompt  # No template applied

         def encode_prompt(
             self,
@@ -1441,8 +1423,7 @@ def encode_prompt(
                 negative_attention_mask_2,
             ) = encode_prompt(
                 self,
-                prompt_with_template, # Use the prompt_with_template here
-                # prompt,
+                prompt,
                 negative_prompt,
                 text_encoder_2,
                 clip_text_override=clip_text_override,
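For reference, the deleted block combined two steps: applying the prompt template (which must contain a `{}` placeholder for the input text) and counting tokens to warn when the templated prompt exceeds the encoder's context window. Below is a minimal standalone sketch of that pattern, assuming a Hugging Face-style tokenizer; the function name, the model name, and the `max_context_length` value are illustrative placeholders, not taken from this repository.

```python
# Hedged sketch of the removed truncation check: template the prompt,
# count tokens, and warn if the result exceeds the context window.
# "gpt2" and max_context_length are placeholders for illustration only.
from transformers import AutoTokenizer

def apply_template_and_check(prompt: str, template: str, max_context_length: int) -> str:
    if "{}" not in template:
        raise ValueError("template must contain a placeholder {} for the input text")
    prompt_with_template = template.format(prompt)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
    token_count = len(tokenizer(prompt_with_template)["input_ids"])
    if token_count > max_context_length:
        print(
            f"Prompt with template is {token_count} tokens, longer than "
            f"max_context_length ({max_context_length}); it will be truncated."
        )
    return prompt_with_template
```

Note the warning is informational only: the encoder truncates the over-length prompt downstream either way, which is why, per the commit message, the in-node counting could be dropped without changing behavior.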