Official implementation of DreamO: A Unified Framework for Image Customization
- 2025.05.12: 🔥🔥 Support for consumer-grade GPUs (16GB or 24GB) is now available; see below for instructions.
- 2025.05.11: 🔥🔥 We have updated the model to mitigate over-saturation and plastic-face issues. The new version shows consistent improvements over the previous release. Please check it out!
- 2025.05.08: released code and models.
- 2025.04.24: released the DreamO tech report.
```bash
# clone DreamO repo
git clone https://github.com/bytedance/DreamO.git
cd DreamO

# create conda env
conda create --name dreamo python=3.10
# activate env
conda activate dreamo

# install dependent packages
pip install -r requirements.txt
```

Then launch the demo:

```bash
python app.py
```

We observe strong compatibility between DreamO and the accelerated FLUX LoRA variant
(FLUX-turbo), and thus enable Turbo LoRA by default, reducing inference to 12 steps (vs. 25+ without it). Turbo can be disabled via `--no_turbo`, though our evaluation shows mixed results in that setting; we therefore recommend keeping Turbo enabled.
Tip: if you observe limb distortion or poor text generation, try increasing the guidance scale; if the image appears overly glossy or over-saturated, lower it.
We have added support for 8-bit quantization and CPU offload to enable execution on consumer-grade GPUs. This requires the `optimum-quanto` library, and thus the PyTorch version in `requirements.txt` has been upgraded to 2.6.0. If you are using an older version of PyTorch, you may need to reconfigure your environment.
- For users with 24GB GPUs, run `python app.py --int8` to enable the int8-quantized model.
- For users with 16GB GPUs, run `python app.py --int8 --offload` to enable CPU offloading alongside int8 quantization. Note that CPU offload significantly reduces inference speed and should only be enabled when necessary.
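To give an intuition for what int8 quantization does to the model weights, here is a minimal, self-contained sketch of symmetric per-tensor int8 quantization. This is an illustration only, not the `optimum-quanto` implementation; the function names are hypothetical.

```python
import numpy as np

# Toy sketch of symmetric int8 weight quantization (illustrative only,
# not optimum-quanto's implementation): float32 weights are mapped to
# [-127, 127] with a per-tensor scale, shrinking storage 4x, and are
# dequantized back to float at compute time.
def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# rounding error is bounded by half a quantization step
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

The reconstruction error per weight is at most half the quantization step, which is why quality loss from int8 is usually small relative to the 4x memory saving.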
This task is similar to IP-Adapter and supports a wide range of inputs including characters, objects, and animals. By leveraging VAE-based feature encoding, DreamO achieves higher fidelity than previous adapter methods, with a distinct advantage in preserving character identity.
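The VAE-based conditioning described above can be pictured with a toy sketch: the reference image is encoded into a compact latent grid, flattened into tokens, and appended to the image tokens that the generator attends over. This is an illustration of the general idea only, not DreamO's code; `encode_to_latents` is a hypothetical stand-in for a learned VAE encoder.

```python
import numpy as np

# Toy sketch of VAE-style reference conditioning (illustrative only):
# a stand-in "encoder" maps the image to an 8x-downsampled latent grid.
def encode_to_latents(img: np.ndarray, factor: int = 8, channels: int = 4):
    h, w, _ = img.shape
    rng = np.random.default_rng(0)  # placeholder for a learned VAE encoder
    return rng.standard_normal((h // factor, w // factor, channels))

ref = np.zeros((64, 64, 3), dtype=np.float32)   # reference image
lat = encode_to_latents(ref)                    # (8, 8, 4) latent grid
ref_tokens = lat.reshape(-1, lat.shape[-1])     # 64 reference tokens
img_tokens = np.zeros((64, 4))                  # tokens being generated
seq = np.concatenate([img_tokens, ref_tokens])  # joint attention sequence
assert seq.shape == (128, 4)
```

Because the reference enters through the same VAE latent space as the generated image, fine-grained detail survives better than with the CLIP-style features used by earlier adapter methods.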
Here, ID specifically refers to facial identity. Unlike the IP task, which considers both face and clothing, the ID task focuses solely on facial features. This task is similar to InstantID and PuLID. Compared to previous methods, DreamO achieves higher facial fidelity, but introduces more model contamination than the SOTA approach PuLID.
Tip: if the face appears overly glossy, try lowering the guidance scale.
This task supports inputs such as tops, bottoms, glasses, and hats, and enables virtual try-on with multiple garments. Notably, our training set does not include multi-garment or ID+garment data, yet the model generalizes well to these unseen combinations.
This task is similar to Style-Adapter and InstantStyle. Please note that style consistency is currently less stable than in other tasks, and in the current version style cannot be combined with other conditions. We are working on improvements for future releases; stay tuned.
You can use multiple conditions (ID, IP, Try-On) to generate more creative images. Thanks to the feature routing constraint proposed in the paper, DreamO effectively mitigates conflicts and entanglement among multiple entities.
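The routing idea can be pictured with a toy attention mask: image tokens may attend everywhere, while each reference's tokens attend only to the image and to themselves, so two references cannot exchange features. This is an illustrative sketch of the general mechanism, not the paper's implementation.

```python
import numpy as np

# Toy routing-style attention mask (illustrative only, not DreamO's code).
# Layout: 4 image tokens, then 2 tokens for reference 1 (e.g. an ID face),
# then 2 tokens for reference 2 (e.g. a garment).
n_img, n_ref1, n_ref2 = 4, 2, 2
n = n_img + n_ref1 + n_ref2
mask = np.zeros((n, n), dtype=bool)

mask[:n_img, :] = True                        # image tokens see everything
r1 = slice(n_img, n_img + n_ref1)
r2 = slice(n_img + n_ref1, n)
mask[r1, :n_img] = True; mask[r1, r1] = True  # ref1 sees image + itself
mask[r2, :n_img] = True; mask[r2, r2] = True  # ref2 sees image + itself

# cross-reference attention is blocked in both directions, which is what
# keeps the two entities from entangling
assert not mask[r1, r2].any() and not mask[r2, r1].any()
```

Constraining where each reference's features can flow is one simple way to keep multiple conditions (ID, IP, Try-On) from bleeding into each other during generation.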
You can try DreamO demo on HuggingFace.
This project strives to impact the domain of AI-driven image generation positively. Users are granted the freedom to create images using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.
If you find DreamO helpful, please ⭐ the repo.
If you find this project useful for your research, please consider citing our paper.
If you have any comments or questions, please open a new issue or contact Yanze Wu and Chong Mou.