
UniCon: A Simple Approach to Unifying Diffusion-based Conditional Generation (ICLR 2025)

Xirui Li, Charles Herrmann, Kelvin C.K. Chan, Yinxiao Li, Deqing Sun, Chao Ma, Ming-Hsuan Yang

Paper PDF Project Page

TL;DR: UniCon enables diverse generation behaviors for a target image-condition pair within a single model.

teaser.mp4
Abstract

Recent progress in image generation has sparked research into controlling these models through condition signals, with various methods addressing specific challenges in conditional generation. Instead of proposing another specialized technique, we introduce a simple, unified framework to handle diverse conditional generation tasks involving a specific image-condition correlation. By learning a joint distribution over a correlated image pair (e.g. image and depth) with a diffusion model, our approach enables versatile capabilities via different inference-time sampling schemes, including controllable image generation (e.g. depth to image), estimation (e.g. image to depth), signal guidance, joint generation (image & depth), and coarse control. Previous attempts at unification often introduce significant complexity through multi-stage training, architectural modification, or increased parameter counts. In contrast, our simple formulation requires a single, computationally efficient training stage, maintains the standard model input, and adds minimal learned parameters (15% of the base model). Moreover, our model supports additional capabilities like non-spatially aligned and coarse conditioning. Extensive results show that our single model can produce comparable results with specialized methods and better results than prior unified methods. We also demonstrate that multiple models can be effectively combined for multi-signal conditional generation.
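The tasks above differ only in how a single jointly trained denoiser is sampled at inference time: fix the condition latent and denoise the image (controllable generation), fix the image and denoise the condition (estimation), or denoise both (joint generation). The sketch below is a minimal conceptual illustration of this idea using DDIM-style replacement sampling over an image/condition latent pair; denoiser, alphas_cumprod, and the latent shapes are placeholders and do not reflect the actual UniCon code or its sampling schedules.

import torch

@torch.no_grad()
def joint_sample(denoiser, alphas_cumprod, steps, known=None, shape=(1, 4, 64, 64)):
    # Reverse-diffusion sampling (DDIM-style, eta=0) over an (image, condition) latent pair.
    # known: e.g. {"cond": clean_cond_latent} for condition-to-image generation,
    # {"image": clean_image_latent} for estimation, or None for joint generation.
    device = alphas_cumprod.device
    x = {"image": torch.randn(shape, device=device),
         "cond": torch.randn(shape, device=device)}
    for t in reversed(range(steps)):
        a_t = alphas_cumprod[t]
        # Re-noise any known branch to the current noise level so the pair
        # stays consistent with the joint model's training distribution.
        for k, clean in (known or {}).items():
            x[k] = a_t.sqrt() * clean + (1 - a_t).sqrt() * torch.randn_like(clean)
        eps_img, eps_cond = denoiser(x["image"], x["cond"], t)  # joint noise prediction
        for k, eps in (("image", eps_img), ("cond", eps_cond)):
            if known and k in known:
                continue  # this branch is given, not sampled
            a_prev = alphas_cumprod[t - 1] if t > 0 else torch.ones_like(a_t)
            x0 = (x[k] - (1 - a_t).sqrt() * eps) / a_t.sqrt()
            x[k] = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # deterministic DDIM update
    return x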

Setup

  1. Clone the repository and install requirements.
git clone https://github.com/lixirui142/UniCon
cd UniCon
pip install -r requirements.txt
  2. Download the pretrained UniCon model weights from here to the "weights" directory by running:
python download_pretrained_weights.py

This provides four UniCon models (depth, edge, pose, ID) based on SD v1.5.
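As a quick sanity check, you can verify that the expected model folders were downloaded. The directory layout assumed below (one subfolder per model under "weights") is a guess; adjust the names to whatever download_pretrained_weights.py actually creates.

from pathlib import Path

# Assumed layout: one subfolder per released UniCon model under weights/.
weights_dir = Path("weights")
for name in ("depth", "edge", "pose", "id"):
    path = weights_dir / name
    print(f"{name:5s}: {'ok' if path.exists() else 'missing'} ({path})")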

Usage

Gradio Demo

We provide a Gradio demo to showcase the usage of the UniCon models. It includes examples to get you familiar with the inference options for different tasks. To run the demo:

python gradio_unicon.py
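
If you prefer to drive the running demo from Python instead of the browser, the gradio_client package can connect to the local app. The endpoint names and argument order depend on how gradio_unicon.py builds its interface, so inspect them first; nothing below is specific to this repository.

from gradio_client import Client

# Connect to the locally running demo (Gradio's default address) and list
# its endpoints; then call client.predict(...) with the parameters shown there.
client = Client("http://127.0.0.1:7860")
client.view_api()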

TODO

  • Provide notebooks and Python scripts for more inference cases.
  • Clean and release training code.

Citation

If you find this work useful for your research, please consider citing our paper:

@article{li2024unicon,
    title={A Simple Approach to Unifying Diffusion-based Conditional Generation},
    author={Li, Xirui and Herrmann, Charles and Chan, Kelvin C.K. and Li, Yinxiao and Sun, Deqing and Ma, Chao and Yang, Ming-Hsuan},
    journal={arXiv preprint arXiv:2410.11439},
    year={2024}
}
