
A SOTA open-source image editing model that aims to deliver performance comparable to closed-source models such as GPT-4o and Gemini 2 Flash.

stepfun-ai/Step1X-Edit


🔥🔥🔥 News!!

  • Jun 17, 2025: 👋 Support for Teacache and parallel inference has been added.
  • May 22, 2025: 👋 Step1X-Edit now supports LoRA fine-tuning on a single 24GB GPU! A hand-fixing LoRA for anime characters has also been released. Download Lora
  • Apr 30, 2025: 🎉 Step1X-Edit ComfyUI Plugin is available now, thanks for the community contribution! quank123wip/ComfyUI-Step1X-Edit & raykindle/ComfyUI_Step1X-Edit.
  • Apr 27, 2025: 🎉 With community support, we updated the inference code and released the model weights of Step1X-Edit-FP8. meimeilook/Step1X-Edit-FP8 & rkfg/Step1X-Edit-FP8.
  • Apr 26, 2025: 🎉 Step1X-Edit is now live — you can try editing images directly in the online demo! Online Demo
  • Apr 25, 2025: 👋 We release the evaluation code and benchmark data of Step1X-Edit. Download GEdit-Bench
  • Apr 25, 2025: 👋 We release the inference code and model weights of Step1X-Edit. ModelScope & HuggingFace models.
  • Apr 25, 2025: 👋 We have made our technical report available as open source. Read
demo

Step1X-Edit: a unified image editing model that performs impressively on a wide range of genuine user instructions.

🧩 Community Contributions

If you develop or use Step1X-Edit in your projects, we'd love to hear about it 🎉.

📑 Open-source Plan

  • Inference & Checkpoints
  • Online demo (Gradio)
  • Fine-tuning scripts
  • Multi-GPU sequence-parallel inference
  • FP8 quantized weights
  • ComfyUI

1. Introduction

We introduce a state-of-the-art image editing model, Step1X-Edit, which aims to deliver performance comparable to closed-source models such as GPT-4o and Gemini 2 Flash. More specifically, we adopt a multimodal LLM to process the reference image and the user's editing instruction. A latent embedding is extracted and integrated with a diffusion image decoder to obtain the target image. To train the model, we build a data generation pipeline that produces a high-quality dataset. For evaluation, we develop GEdit-Bench, a novel benchmark rooted in real-world user instructions. Experimental results on GEdit-Bench demonstrate that Step1X-Edit outperforms existing open-source baselines by a substantial margin and approaches the performance of leading proprietary models, thereby making a significant contribution to the field of image editing. For more details, please refer to our technical report.

2. Model Usage

2.1 Requirements

The following table shows the requirements for running the Step1X-Edit model (batch size = 1, with CFG) to edit images:

| Model | Peak GPU Memory (512 / 786 / 1024) | 28 steps w/ flash-attn (512 / 786 / 1024) |
| --- | --- | --- |
| Step1X-Edit | 42.5GB / 46.5GB / 49.8GB | 5s / 11s / 22s |
| Step1X-Edit-FP8 | 31GB / 31.5GB / 34GB | 6.8s / 13.5s / 25s |
| Step1X-Edit + offload | 25.9GB / 27.3GB / 29.1GB | 49.6s / 54.1s / 63.2s |
| Step1X-Edit-FP8 + offload | 18GB / 18GB / 18GB | 35s / 40s / 51s |
  • The model is tested on one H800 GPU.
  • We recommend using GPUs with 80GB of memory for better generation quality and efficiency.

The table below presents the speedup of several efficient methods on the Step1X-Edit model.

| Model | Peak GPU Memory | 28 steps |
| --- | --- | --- |
| Step1X-Edit + TeaCache | 49.6GB | 16.78s |
| Step1X-Edit + xDiT (GPU=2) | 50.2GB | 12.81s |
| Step1X-Edit + xDiT (GPU=4) | 52.9GB | 8.17s |
| Step1X-Edit + TeaCache + xDiT (GPU=2) | 50.7GB | 8.94s |
| Step1X-Edit + TeaCache + xDiT (GPU=4) | 54.2GB | 5.82s |
  • The model was tested on H800 series GPUs with a resolution of 1024.
  • TeaCache's default threshold of 0.2 provides a good balance between efficiency and performance.
  • xDiT employs both CFG Parallelism and Ring Attention when using 4 GPUs, but only utilizes CFG Parallelism when operating with 2 GPUs.

2.2 Dependencies and Installation

Install Python >= 3.10.0 and torch >= 2.2 with the CUDA toolkit and the corresponding torchvision. We tested our model with torch==2.3.1 and torch==2.5.1 under CUDA 12.1.
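As one possible way to install a tested combination (the exact versions and index URL below are illustrative; adjust them to your CUDA setup using the official PyTorch install selector):

```shell
# Install torch 2.5.1 with its matching torchvision build for CUDA 12.1.
pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu121
```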

Install requirements:

pip install -r requirements.txt

Install flash-attn; we provide a script that helps find a pre-built wheel suitable for your system.

python scripts/get_flash_attn.py

The script will generate a wheel name such as flash_attn-2.7.2.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl, which can be found on the release page of flash-attn.

Then you can download the corresponding pre-built wheel and install it following the instructions in flash-attn.
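For example, the download-and-install step might look like the following sketch (the URL pattern is illustrative; copy the exact link for your wheel from the flash-attn releases page):

```shell
# Wheel name as reported by scripts/get_flash_attn.py for this system.
WHEEL=flash_attn-2.7.2.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

# Download the matching pre-built wheel from the flash-attn GitHub releases,
# then install it locally instead of compiling from source.
wget "https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.2.post1/${WHEEL}"
pip install "./${WHEEL}"
```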

2.3 Inference Scripts

After downloading the model weights, you can use the following scripts to edit images:

bash scripts/run_examples.sh

The default script runs the inference code with non-quantized weights. To reduce GPU memory usage, you can 1) set the --quantized flag in the script, which quantizes the weights to FP8, or 2) set the --offload flag in the script to offload some modules to the CPU.
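As an illustrative sketch (paths are placeholders; check the flags actually accepted by inference.py), a memory-saving invocation combining the flags named above could look like:

```shell
# Run inference with FP8-quantized weights; add --offload as well (or instead)
# to move some modules to CPU for an even lower memory footprint.
python inference.py --input_dir ./examples \
    --model_path /path/to/step1x-edit \
    --json_path ./examples/prompt_cn.json \
    --output_dir ./output \
    --seed 1234 --size_level 1024 \
    --quantized
```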

This default script runs the inference code on example inputs. The results will look like:

results

For multi-GPU inference, you can use the following script:

bash scripts/run_examples_parallel.sh

In the script you can change the number of GPUs (GPU), the xDiT configuration (--ulysses_degree, --ring_degree, or --cfg_degree), and whether TeaCache acceleration is enabled (--teacache).
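As a hypothetical sketch of what a 2-GPU launch with CFG parallelism plus TeaCache might look like (the entry-point name and launcher here are assumptions; consult scripts/run_examples_parallel.sh for the actual command):

```shell
# Assumed entry point "inference_parallel.py" and torchrun launcher --
# both illustrative, not verified against the repository.
GPU=2
torchrun --nproc_per_node=$GPU inference_parallel.py \
    --input_dir ./examples \
    --model_path /path/to/step1x-edit \
    --json_path ./examples/prompt_cn.json \
    --output_dir ./output_parallel \
    --cfg_degree 2 --teacache
```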

This default script runs the inference code on example inputs. The results will look like:

results

2.4 Gradio Scripts

Change the model_path in gradio_app.py to the local path of Step1X-Edit. Then run

python gradio_app.py

The Gradio demo will then be available at localhost:32800.

3. Finetuning

3.1 Training scripts

The script ./scripts/finetuning.sh shows how to fine-tune the Step1X-Edit model. With our default strategy, it is possible to fine-tune Step1X-Edit at 1024 resolution on a single 24GB GPU. Our fine-tuning script is adapted from kohya-ss/sd-scripts.

bash ./scripts/finetuning.sh

The custom dataset is organized by ./library/data_configs/step1x_edit.toml. Here metadata_file lists all the training samples, including the absolute paths of the source images, the absolute paths of the target images, and the instructions.

The metadata_file should be a json file containing a dict as follows:

{
  <target image path, str>: {
    "ref_image_path": <source image path, str>,
    "caption": <the editing instruction, str>
  },
  ...
}
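For illustration, here is a minimal Python sketch (the helper name and example paths are hypothetical) that builds a metadata dict in this format and writes it to a JSON file:

```python
import json

def build_metadata(samples):
    """Map each target image path to its source image path and editing
    instruction, matching the metadata_file format above."""
    meta = {}
    for src_path, tgt_path, instruction in samples:
        meta[tgt_path] = {"ref_image_path": src_path, "caption": instruction}
    return meta

# Hypothetical example entries; use absolute paths in real training data.
samples = [
    ("/data/edit/src/0001.png", "/data/edit/tgt/0001.png", "remove the watermark"),
    ("/data/edit/src/0002.png", "/data/edit/tgt/0002.png", "make the sky sunset orange"),
]
with open("metadata.json", "w") as f:
    json.dump(build_metadata(samples), f, indent=2)
```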

3.2 Inference with Lora

Simply add --lora <path to your lora weights> when using inference.py. For example:

python inference.py --input_dir ./examples \
    --model_path /data/work_dir/step1x-edit/ \
    --json_path ./examples/prompt_cn.json \
    --output_dir ./output_cn \
    --seed 1234 --size_level 1024 \
    --lora 20250521_001-lora256-alpha128-fix-hand-per-epoch/step1x-edit_test.safetensors

To reproduce the cases below, run:

bash scripts/run_examples_fix_hand.sh

3.3 Performances

Here is the GPU memory cost during training with LoRA rank 64 and batch size 1:

| Precision of DiT | bf16 (512 / 786 / 1024) | fp8 (512 / 786 / 1024) |
| --- | --- | --- |
| GPU Memory | 29.7GB / 31.6GB / 33.8GB | 19.8GB / 21.3GB / 23.6GB |

Here is an example of our pretrained LoRA weights, which are designed to fix corrupted hands of anime characters.

results

4. Benchmark

We release GEdit-Bench, a new benchmark grounded in real-world usage and developed to support more authentic and comprehensive evaluation. Carefully curated to reflect actual user editing needs and a wide range of editing scenarios, it enables more authentic and comprehensive evaluation of image editing models. The evaluation process and related code can be found in GEdit-Bench/EVAL.md. Partial results of the benchmark are shown below:

results

5. Citation

@article{liu2025step1x-edit,
      title={Step1X-Edit: A Practical Framework for General Image Editing}, 
      author={Shiyu Liu and Yucheng Han and Peng Xing and Fukun Yin and Rui Wang and Wei Cheng and Jiaqi Liao and Yingming Wang and Honghao Fu and Chunrui Han and Guopeng Li and Yuang Peng and Quan Sun and Jingwei Wu and Yan Cai and Zheng Ge and Ranchen Ming and Lei Xia and Xianfang Zeng and Yibo Zhu and Binxing Jiao and Xiangyu Zhang and Gang Yu and Daxin Jiang},
      journal={arXiv preprint arXiv:2504.17761},
      year={2025}
}

6. Acknowledgement

We would like to express our sincere thanks to the contributors of Kohya, SD3, FLUX, Qwen, xDiT, TeaCache, diffusers, and the HuggingFace team for their open research and exploration.

7. Disclaimer

The results produced by this image editing model are entirely determined by user input and actions. The development team and this open-source project are not responsible for any outcomes or consequences arising from its use.

8. LICENSE

Step1X-Edit is licensed under the Apache License 2.0. You can find the license files in the respective GitHub and HuggingFace repositories.
