
Add ChromaTransformer2DModel and ChromaPipeline #2

Open
CardSorting wants to merge 1 commit into hameerabbasi:chroma from CardSorting:chroma

Conversation

@CardSorting

This PR implements ChromaTransformer2DModel and ChromaPipeline as separate classes, addressing the feedback from DN6 about creating dedicated implementations rather than using variants within FluxTransformer2DModel.

Changes Made:

Core Implementation

  • ChromaTransformer2DModel: New transformer class specifically for Chroma, separate from FluxTransformer2DModel
  • ChromaPipeline: Dedicated pipeline without CLIP encoder (T5-only) for Chroma's requirements
  • Single file converter: Added support for loading Chroma checkpoints from safetensors files
  • Attention masking: Implemented proper attention masking for Chroma's specific requirements
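The attention-masking bullet above can be sketched in pure PyTorch. This is a minimal illustration of the general idea (padded T5 text tokens must not be attended to in the joint text/image sequence); the function name, shapes, and broadcasting convention are my assumptions, not the PR's actual implementation.

```python
import torch


def build_chroma_attention_mask(text_mask: torch.Tensor, image_seq_len: int) -> torch.Tensor:
    """Build a joint attention mask over concatenated [text, image] tokens.

    text_mask: (batch, text_seq_len) mask from the T5 tokenizer,
    where 1 marks real tokens and 0 marks padding.
    Returns a boolean mask of shape (batch, 1, L, L) with L = text + image
    length, where True means "key position j may be attended to".
    """
    batch, _ = text_mask.shape
    # Image tokens are always valid; only text padding is masked out.
    image_mask = torch.ones(batch, image_seq_len, dtype=text_mask.dtype, device=text_mask.device)
    joint = torch.cat([text_mask, image_mask], dim=1)  # (batch, L)
    seq_len = joint.shape[1]
    # Broadcast over the query dimension: every query sees the same key validity.
    return joint[:, None, None, :].bool().expand(batch, 1, seq_len, seq_len)
```

A mask built this way can be passed to `torch.nn.functional.scaled_dot_product_attention` as `attn_mask`; the real model may instead fold it into attention biases.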

Package Integration

  • Updated all __init__.py files to export new classes
  • Added proper imports in src/diffusers/__init__.py
  • Integrated with existing single file loading infrastructure

Example Usage

  • examples/chroma_generation.py: Comprehensive example script showing how to use the Chroma model
  • Demonstrates both HuggingFace Hub and single file loading methods
  • Includes optimization techniques (CPU offloading, VAE slicing/tiling)
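The usage described above might look roughly like the following. This is a hedged sketch, not the PR's actual example script: the Hub repo id is a placeholder, and the prompt and sampling parameters are illustrative. The memory optimizations shown (`enable_model_cpu_offload`, VAE slicing/tiling) are existing diffusers APIs referenced in the bullet list.

```python
import torch
from diffusers import ChromaPipeline  # class added by this PR

# HuggingFace Hub loading (repo id is a placeholder, not a real checkpoint path)
pipe = ChromaPipeline.from_pretrained("<org>/<chroma-checkpoint>", torch_dtype=torch.bfloat16)

# Optimization techniques mentioned above
pipe.enable_model_cpu_offload()  # keep only the active module on GPU
pipe.vae.enable_slicing()        # decode the batch one image at a time
pipe.vae.enable_tiling()         # decode large images in tiles

image = pipe("a watercolor fox in a misty forest", num_inference_steps=28).images[0]
image.save("chroma.png")
```

Single-file loading would presumably go through the converter added here (e.g. `from_single_file` on the pipeline or transformer), mirroring the existing FLUX single-file infrastructure.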

Technical Details:

  • Uses the T5-XXL text encoder instead of CLIP
  • Compatible with FLUX VAE
  • Supports FlowMatchEulerDiscreteScheduler
  • Implements proper device and dtype handling
  • Memory-efficient optimizations included

This implementation provides a clean, separate architecture for Chroma while maintaining compatibility with the existing diffusers ecosystem.

- Implement ChromaTransformer2DModel as a separate class from FluxTransformer2DModel
- Create dedicated ChromaPipeline without CLIP encoder (T5-only)
- Add single file converter for Chroma checkpoints
- Implement proper attention masking for Chroma requirements
- Export new classes in package structure

This addresses the feedback from DN6 about creating separate classes
rather than using variants within FluxTransformer2DModel.
hameerabbasi pushed a commit that referenced this pull request Jan 24, 2026
* flux2-klein

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Klein tests (#2)

* tests

* up

* tests

* up

* support step-distilled

* Apply suggestions from code review

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* doc string etc

* style

* more

* copies

* klein lora training scripts (huggingface#3)

* initial commit

* initial commit

* remove remote text encoder

* initial commit

* initial commit

* initial commit

* revert

* img2img fix

* text encoder + tokenizer

* text encoder + tokenizer

* update readme

* guidance

* guidance

* guidance

* test

* test

* revert changes not needed for the non klein model

* Update examples/dreambooth/train_dreambooth_lora_flux2_klein.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* fix guidance

* fix validation

* fix validation

* fix validation

* fix path

* space

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* style

* Update src/diffusers/pipelines/flux2/pipeline_flux2_klein.py

* Apply style fixes

* auto pipeline

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
