KV cache implementation for using llama models for text generation.#12195

Merged
comfyanonymous merged 2 commits into master from temp_pr
Feb 1, 2026
Conversation

@comfyanonymous
Member

No description provided.
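Since the PR itself carries no description, here is a minimal sketch of the general KV-cache idea behind autoregressive text generation: cache each token's key/value projections so every decode step only computes K/V for the newest token instead of reprocessing the whole sequence. All names below are illustrative and hypothetical, not ComfyUI's actual implementation or API.

```python
# Minimal KV-cache sketch for autoregressive decoding.
# Names (KVCache, decode_step, project_k/project_v) are hypothetical,
# not taken from the ComfyUI codebase.

class KVCache:
    """Stores past key/value projections so each decode step
    only computes K/V for the newest token."""

    def __init__(self):
        self.keys = []    # one entry per generated token
        self.values = []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)


def decode_step(token_embedding, cache, project_k, project_v):
    """Project K/V only for the new token, append to the cache,
    and return the full history attention would run over."""
    cache.append(project_k(token_embedding), project_v(token_embedding))
    # Without the cache, K/V for every past token would be
    # recomputed from scratch at every generation step.
    return list(cache.keys), list(cache.values)


# Toy usage: scalar "embeddings" with simple stand-in projections.
cache = KVCache()
for tok in [1.0, 2.0, 3.0]:
    ks, vs = decode_step(
        tok, cache,
        project_k=lambda x: x * 0.5,
        project_v=lambda x: x + 1.0,
    )

print(ks)  # keys accumulated across all three decode steps
print(vs)
```

In a real transformer the cached keys/values are per-layer, per-head tensors, and the payoff is that generation cost per token stays roughly constant instead of growing with sequence length.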

@comfyanonymous comfyanonymous merged commit 873de5f into master Feb 1, 2026
14 checks passed
@comfyanonymous comfyanonymous deleted the temp_pr branch February 1, 2026 02:11
simonri pushed a commit to simonri/ComfyUI-flash-attention-3 that referenced this pull request Feb 2, 2026
