DiffSynth Studio is a Diffusion engine. We have restructured the architectures, including the Text Encoder, UNet, and VAE, among others, maintaining compatibility with models from the open-source community while enhancing computational performance. We provide many interesting features. Enjoy the magic of Diffusion models!
- Aug 29, 2023. We propose DiffSynth, a video synthesis framework.
    - Project Page.
    - The source code is released in EasyNLP.
    - The technical report (ECML PKDD 2024) is released on arXiv.
- Oct 1, 2023. We release an early version of this project, namely FastSDXL, an attempt at building a diffusion engine.
    - The source code is released on GitHub.
    - FastSDXL includes a trainable OLSS scheduler, which improves sampling efficiency.
- Nov 15, 2023. We propose FastBlend, a powerful video deflickering algorithm.
- Dec 8, 2023. We decide to develop a new project, aiming to unleash the potential of diffusion models, especially in video synthesis. The development of this project has started.
- Jan 29, 2024. We propose Diffutoon, a fantastic solution for toon shading.
    - Project Page.
    - The source code is released in this project.
    - The technical report (IJCAI 2024) is released on arXiv.
- June 13, 2024. DiffSynth Studio is transferred to ModelScope. The developers have transitioned from "I" to "we". Of course, I will still participate in development and maintenance.
- June 21, 2024. We propose ExVideo, a post-tuning technique aimed at enhancing the capability of video generation models. We have extended Stable Video Diffusion to generate long videos of up to 128 frames.
    - Project Page.
    - The source code is released in this repo. See `examples/ExVideo`.
    - Models are released on HuggingFace and ModelScope.
    - The technical report is released on arXiv.
- To date, DiffSynth Studio has supported the following models:
Create the Python environment:

```
conda env create -f environment.yml
```
We find that sometimes `conda` cannot install `cupy` correctly; in that case, please install it manually, as shown below. See this document for more details.
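As a sketch of the manual installation, assuming a CUDA 12.x environment (pick the `cupy-cuda11x` wheel instead for CUDA 11.x; consult the CuPy installation guide for your exact setup):

```
pip install cupy-cuda12x
```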
Enter the Python environment:

```
conda activate DiffSynthStudio
```
The Python examples are in `examples`. We provide an overview here.
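As a quick orientation before diving into the example scripts, here is a minimal text-to-image sketch in the style of those scripts. It assumes the `ModelManager` and `SDXLImagePipeline` classes used across the examples; the checkpoint path and sampling parameters are placeholders, so treat the scripts in `examples` as authoritative.

```python
import torch
from diffsynth import ModelManager, SDXLImagePipeline

# Load the model weights (the path is a placeholder; download a checkpoint first)
model_manager = ModelManager(torch_dtype=torch.float16, device="cuda")
model_manager.load_models(["models/stable_diffusion_xl/sd_xl_base_1.0.safetensors"])

# Build a pipeline from the loaded models and generate an image
pipe = SDXLImagePipeline.from_model_manager(model_manager)
image = pipe(
    prompt="a beautiful orange cat, highly detailed",
    negative_prompt="blurry, low quality",
    num_inference_steps=30,
)
image.save("image.png")
```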
We trained an extended video synthesis model, which can generate up to 128 frames. See `examples/ExVideo` and the sketch after the demo video below.
(Video demo: `github_title.mp4`)
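A minimal sketch of long-video generation with the extended model, assuming the `SVDVideoPipeline` class and `save_video` helper that the example scripts build on; the weight paths and call parameters here are illustrative, so see `examples/ExVideo` for the real entry point.

```python
import torch
from PIL import Image
from diffsynth import ModelManager, SVDVideoPipeline, save_video

# Load Stable Video Diffusion together with the ExVideo extension weights
# (both paths are placeholders)
model_manager = ModelManager(torch_dtype=torch.float16, device="cuda")
model_manager.load_models([
    "models/stable_video_diffusion/svd_xt.safetensors",
    "models/stable_video_diffusion/exvideo.safetensors",
])

# Animate a single input image into a long video of up to 128 frames
pipe = SVDVideoPipeline.from_model_manager(model_manager)
video = pipe(
    input_image=Image.open("input.png").resize((512, 512)),
    num_frames=128,
    num_inference_steps=50,
)
save_video(video, "video.mp4", fps=30)
```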
Generate high-resolution images by breaking the resolution limitation of diffusion models! See `examples/image_synthesis` and the highres-fix sketch after the image grids below.
(Image grids at 512*512, 1024*1024, 2048*2048, and 4096*4096; images omitted.)

(Image grids at 1024*1024 and 2048*2048; images omitted.)
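The higher resolutions come from a two-stage, highres-fix style workflow: generate at the model's native resolution, then re-denoise an upscaled copy to add detail. A minimal sketch, reusing the `pipe` from the earlier snippet and assuming it accepts `input_image` and `denoising_strength` parameters (check `examples/image_synthesis` for the exact interface):

```python
# Stage 1: generate at the native resolution
image = pipe(prompt="a beautiful landscape", num_inference_steps=30,
             height=1024, width=1024)

# Stage 2: upscale, then partially re-denoise to refine texture while
# keeping the composition
image = pipe(
    prompt="a beautiful landscape",
    input_image=image.resize((2048, 2048)),
    denoising_strength=0.5,
    num_inference_steps=30,
    height=2048,
    width=2048,
)
image.save("image_2048.png")
```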
Render realistic videos in a flat, anime-like style and enable video editing features. See `examples/Diffutoon` and the configuration sketch after the demo videos below.
(Video demos: `Diffutoon.mp4`, `Diffutoon_edit.mp4`)
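Diffutoon is driven by a configuration dictionary handed to a pipeline runner. A trimmed sketch, assuming the `SDVideoPipelineRunner` entry point used by the example scripts; the config keys shown here are a small illustrative subset of the full template in `examples/Diffutoon`:

```python
from diffsynth import SDVideoPipelineRunner

# A heavily trimmed config; the real template also specifies ControlNet
# units, the deflickering smoother, and model download details.
config = {
    "models": {
        "model_list": ["models/stable_diffusion/aingdiffusion_v12.safetensors"],
        "device": "cuda",
    },
    "data": {
        "input_frames": {"video_file": "input_video.mp4",
                         "height": 1024, "width": 1024},
        "output_folder": "output",
        "fps": 30,
    },
    "pipeline": {
        "pipeline_inputs": {
            "prompt": "best quality, perfect anime illustration",
            "num_inference_steps": 10,
        },
    },
}

runner = SDVideoPipelineRunner()
runner.run(config)
```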
Video stylization without video models. See `examples/diffsynth`.
(Video demo: `winter_stone.mp4`)
Use Hunyuan-DiT to generate images with Chinese prompts. We also support LoRA fine-tuning of this model. See `examples/hunyuan_dit` and the sketch after the prompt demos below.
Prompt: 少女手捧鲜花,坐在公园的长椅上,夕阳的余晖洒在少女的脸庞,整个画面充满诗意的美感 (a girl holding flowers, sitting on a park bench, the glow of the sunset falling on her face, the whole scene filled with poetic beauty)
(Image demos at 1024x1024 and 2048x2048 with highres-fix; images omitted.)
Prompt: 一只小狗蹦蹦跳跳,周围是姹紫嫣红的鲜花,远处是山脉 (a puppy bouncing around, surrounded by brilliantly colorful flowers, with mountains in the distance)
(Image demos without LoRA and with LoRA; images omitted.)
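A minimal Hunyuan-DiT sketch, assuming a `HunyuanDiTImagePipeline` class analogous to the other pipelines; the weight paths below are placeholders, and the LoRA loading step is omitted, so see `examples/hunyuan_dit` for the full scripts:

```python
import torch
from diffsynth import ModelManager, HunyuanDiTImagePipeline

# Load the Hunyuan-DiT components (paths are placeholders; follow the
# download instructions in the example scripts)
model_manager = ModelManager(torch_dtype=torch.float16, device="cuda")
model_manager.load_models([
    "models/HunyuanDiT/t2i/clip_text_encoder/pytorch_model.bin",
    "models/HunyuanDiT/t2i/mt5/pytorch_model.bin",
    "models/HunyuanDiT/t2i/model/pytorch_model_ema.pt",
    "models/HunyuanDiT/t2i/sdxl-vae-fp16-fix/diffusion_pytorch_model.bin",
])

# Chinese prompts are supported natively
pipe = HunyuanDiTImagePipeline.from_model_manager(model_manager)
image = pipe(
    prompt="一只小狗蹦蹦跳跳,周围是姹紫嫣红的鲜花,远处是山脉",
    negative_prompt="模糊, 低质量",
    num_inference_steps=50,
)
image.save("image.png")
```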
Launch the web UI with Streamlit:

```
python -m streamlit run DiffSynth_Studio.py
```