
feat(LoRA): allow LoRA layer patcher to continue past unknown layers #8059


Merged: 1 commit into invoke-ai:main on May 30, 2025

Conversation

keturn (Contributor) commented May 28, 2025

Summary

Chroma users will want to apply FLUX LoRA to Chroma models, despite the differences.

This PR changes the default behavior of LayerPatcher so that, when asked to patch a layer the target model does not have, it logs a warning and continues instead of failing.

In the case of Chroma, we have a good idea of which layers those will be, so the PR also provides a way to suppress the warning for those expected layers.
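As a rough illustration only (not the actual LayerPatcher API): the sketch below uses hypothetical names (`apply_lora_patches`, `suppress_warnings_for`) and assumes patches are pre-scaled weight deltas keyed by module name, but it shows the intended behavior of warning and continuing rather than raising.

```python
# Hypothetical sketch of the behavior; names and signature are illustrative,
# not InvokeAI's actual LayerPatcher API.
import logging
from typing import Iterable, Mapping

import torch

logger = logging.getLogger(__name__)


def apply_lora_patches(
    model: torch.nn.Module,
    patches: Mapping[str, torch.Tensor],
    suppress_warnings_for: Iterable[str] = (),
) -> None:
    """Apply pre-scaled LoRA weight deltas by module name.

    Unknown layers are skipped with a warning instead of raising; layer-name
    prefixes listed in `suppress_warnings_for` (layers we already expect the
    model to lack, e.g. FLUX layers that Chroma removed) are skipped silently.
    """
    modules = dict(model.named_modules())
    for layer_name, delta in patches.items():
        module = modules.get(layer_name)
        if module is None:
            if not any(layer_name.startswith(p) for p in suppress_warnings_for):
                logger.warning("LoRA layer %r not found in model; skipping.", layer_name)
            continue
        # Assumes the patched module has a .weight parameter (e.g. Linear);
        # add the delta to that weight in place.
        module.weight.data += delta.to(dtype=module.weight.dtype, device=module.weight.device)
```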

Related Issues / Discussions

QA Instructions

No code in core uses this yet, but do test that existing LoRAs still work.

With this PR, the Chroma node should be able to run with FLUX LoRA.

Merge Plan

N/A

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)

github-actions bot added the python (PRs that change python files) and backend (PRs that change backend files) labels on May 28, 2025
keturn force-pushed the feat/resilient-layerpatch branch from 841c8c2 to 10c8910 on May 28, 2025 18:37
keturn (Contributor, Author) commented May 28, 2025

Potential alternative: The Chroma node could attempt to trim those layers out of the LoRA before it's passed to the LayerPatcher, maybe?
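If it helps to picture that alternative, here is one way the trimming could look; this is a hypothetical helper (`trim_lora_to_model`) and it assumes LoRA keys of the form `<module_name>.lora_A.weight` / `<module_name>.lora_B.weight`, which may not match InvokeAI's real key layout.

```python
# Hypothetical "trim first" helper; the key format is an assumption for
# illustration, not InvokeAI's actual LoRA representation.
import torch


def trim_lora_to_model(
    lora_state_dict: dict[str, torch.Tensor],
    model: torch.nn.Module,
) -> dict[str, torch.Tensor]:
    """Keep only the LoRA entries whose target module exists in `model`.

    Assumes keys look like "<module_name>.lora_A.weight"; the target module
    name is everything before the ".lora_" marker.
    """
    module_names = {name for name, _ in model.named_modules()}
    return {
        key: tensor
        for key, tensor in lora_state_dict.items()
        if key.split(".lora_")[0] in module_names
    }
```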

hipsterusername (Member) commented

Would it make sense to just more intentionally try to integrate Chroma support? I'm not tracking that model, but we are supportive of it if you think it's worthwhile.

keturn (Contributor, Author) commented May 28, 2025

I think the Chroma model is going to have a big role in the Flux-derived-local-models space, so full Chroma support would be great. But I expect the outlook on this particular PR is much the same either way—I'm not sure that we'd do this "load a partially-compatible LoRA" problem any differently if it were in core.

psychedelicious (Collaborator) left a comment

I agree that this more tolerant handling is how it should work anyway.

psychedelicious enabled auto-merge (rebase) on May 30, 2025 03:24
psychedelicious force-pushed the feat/resilient-layerpatch branch from 10c8910 to 910f687 on May 30, 2025 03:24
psychedelicious merged commit 6810843 into invoke-ai:main on May 30, 2025
12 checks passed