This folder contains several MMagic demos. We provide Python command-line usage to run these demos; more guidance can also be found in the documentation.
Table of contents:
1. Download sample images or videos
2. MMagic inference demo
2.1. Check supported tasks and models
2.2. Perform inference with command line
2.2.1. Text-to-Image example
2.2.2. Conditional GANs example
2.2.3. Unconditional GANs example
2.2.4. Image Translation (Image2Image) example
2.2.5. Inpainting example
2.2.6. Image Matting example
2.2.7. Image Restoration example
2.2.8. Image Super-Resolution example
2.2.9. Video Super-Resolution example
2.2.10. Video Interpolation example
2.2.11. Image Colorization example
2.2.12. 3D-aware Generation example
We have prepared some images and videos for you to run the demos with. Once MMagic is installed, you can use the demos in this folder to run inference on these data. Download them with the Python script download_inference_resources.py.
# see all resources
python demo/download_inference_resources.py --print-all
# see all task types
python demo/download_inference_resources.py --print-task-type
# see resources of one specific task
python demo/download_inference_resources.py --print-task 'Inpainting'
# download all resources to default dir './resources'
python demo/download_inference_resources.py
# download resources of one task
python demo/download_inference_resources.py --task 'Inpainting'
# download to the directory you want
python demo/download_inference_resources.py --root-dir './resources'
Print all supported models for inference.
python demo/mmagic_inference_demo.py --print-supported-models
Print all supported tasks for inference.
python demo/mmagic_inference_demo.py --print-supported-tasks
Print all supported models for one task, taking 'Text2Image' as an example.
python demo/mmagic_inference_demo.py --print-task-supported-models 'Text2Image'
You can use the following commands to perform inference with an MMagic model. Usage of the Python API can also be found in this tutorial; a minimal sketch is shown after the argument list below.
python demo/mmagic_inference_demo.py \
[--img] \
[--video] \
[--label] \
[--trimap] \
[--mask] \
[--result-out-dir] \
[--model-name] \
[--model-setting] \
[--model-config] \
[--model-ckpt] \
[--device] \
[--extra-parameters]
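As mentioned above, the same arguments can also be passed through the Python API. A minimal sketch, assuming the MMagicInferencer class from mmagic.apis; its keyword arguments are assumed to mirror the command-line flags, and the prompt and output path are placeholders taken from the Stable Diffusion example below.
# Python-API sketch: keyword arguments are assumed to mirror the CLI flags above.
from mmagic.apis import MMagicInferencer

sd_inferencer = MMagicInferencer(model_name='stable_diffusion')
sd_inferencer.infer(
    text='A panda is having dinner at KFC',
    result_out_dir='demo_text2image_stable_diffusion_res.png')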
Examples for each kind of task:
Text-to-Image:
stable diffusion
python demo/mmagic_inference_demo.py \
--model-name stable_diffusion \
--text "A panda is having dinner at KFC" \
--result-out-dir demo_text2image_stable_diffusion_res.png
controlnet-canny
python demo/mmagic_inference_demo.py \
--model-name controlnet \
--model-setting 1 \
--text "Room with blue walls and a yellow ceiling." \
--control 'https://user-images.githubusercontent.com/28132635/230297033-4f5c32df-365c-4cf4-8e4f-1b76a4cbb0b7.png' \
--result-out-dir demo_text2image_controlnet_canny_res.png
controlnet-pose
python demo/mmagic_inference_demo.py \
--model-name controlnet \
--model-setting 2 \
--text "masterpiece, best quality, sky, black hair, skirt, sailor collar, looking at viewer, short hair, building, bangs, neckerchief, long sleeves, cloudy sky, power lines, shirt, cityscape, pleated skirt, scenery, blunt bangs, city, night, black sailor collar, closed mouth" \
--control 'https://user-images.githubusercontent.com/28132635/230380893-2eae68af-d610-4f7f-aa68-c2f22c2abf7e.png' \
--result-out-dir demo_text2image_controlnet_pose_res.png
controlnet-seg
python demo/mmagic_inference_demo.py \
--model-name controlnet \
--model-setting 3 \
--text "black house, blue sky" \
--control 'https://github-production-user-asset-6210df.s3.amazonaws.com/49083766/243599897-553a4c46-c61d-46df-b820-59a49aaf6678.png' \
--result-out-dir demo_text2image_controlnet_seg_res.png
Conditional GANs (BigGAN):
python demo/mmagic_inference_demo.py \
--model-name biggan \
--model-setting 3 \
--label 1 \
--result-out-dir demo_conditional_biggan_res.jpg
Unconditional GANs (StyleGANv1):
python demo/mmagic_inference_demo.py \
--model-name styleganv1 \
--result-out-dir demo_unconditional_styleganv1_res.jpg
Image Translation (Pix2Pix):
python demo/mmagic_inference_demo.py \
--model-name pix2pix \
--img ./resources/input/translation/gt_mask_0.png \
--result-out-dir ./resources/output/translation/demo_translation_pix2pix_res.png
Inpainting (DeepFillv2):
python demo/mmagic_inference_demo.py \
--model-name deepfillv2 \
--img ./resources/input/inpainting/celeba_test.png \
--mask ./resources/input/inpainting/bbox_mask.png \
--result-out-dir ./resources/output/inpainting/demo_inpainting_deepfillv2_res.jpg
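The same inpainting call can be issued through the Python API. A sketch under the assumption that infer() accepts the same img, mask and result_out_dir keywords as the CLI flags above:
# Sketch only: img/mask/result_out_dir are assumed to match the CLI flags.
from mmagic.apis import MMagicInferencer

inpainter = MMagicInferencer(model_name='deepfillv2')
inpainter.infer(
    img='./resources/input/inpainting/celeba_test.png',
    mask='./resources/input/inpainting/bbox_mask.png',
    result_out_dir='./resources/output/inpainting/demo_inpainting_deepfillv2_res.jpg')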
Image Matting (AOT-GAN):
python demo/mmagic_inference_demo.py \
--model-name aot_gan \
--img ./resources/input/matting/GT05.jpg \
--trimap ./resources/input/matting/GT05_trimap.jpg \
--result-out-dir ./resources/output/matting/demo_matting_gca_res.png
Image Restoration (NAFNet):
python demo/mmagic_inference_demo.py \
--model-name nafnet \
--img ./resources/input/restoration/0901x2.png \
--result-out-dir ./resources/output/restoration/demo_restoration_nafnet_res.png
Image Super-Resolution (ESRGAN):
python demo/mmagic_inference_demo.py \
--model-name esrgan \
--img ./resources/input/restoration/0901x2.png \
--result-out-dir ./resources/output/restoration/demo_restoration_esrgan_res.png
Reference-based Image Super-Resolution (TTSR):
python demo/mmagic_inference_demo.py \
--model-name ttsr \
--img ./resources/input/restoration/000001.png \
--ref ./resources/input/restoration/000001.png \
--result-out-dir ./resources/output/restoration/demo_restoration_ttsr_res.png
Video Super-Resolution: BasicVSR / BasicVSR++ / IconVSR / RealBasicVSR
python demo/mmagic_inference_demo.py \
--model-name basicvsr \
--video ./resources/input/video_restoration/QUuC4vJs_000084_000094_400x320.mp4 \
--result-out-dir ./resources/output/video_restoration/demo_video_restoration_basicvsr_res.mp4
EDVR
python demo/mmagic_inference_demo.py \
--model-name edvr \
--extra-parameters window_size=5 \
--video ./resources/input/video_restoration/QUuC4vJs_000084_000094_400x320.mp4 \
--result-out-dir ./resources/output/video_restoration/demo_video_restoration_edvr_res.mp4
TDAN
python demo/mmagic_inference_demo.py \
--model-name tdan \
--model-setting 2 \
--extra-parameters window_size=5 \
--video ./resources/input/video_restoration/QUuC4vJs_000084_000094_400x320.mp4 \
--result-out-dir ./resources/output/video_restoration/demo_video_restoration_tdan_res.mp4
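When using the Python API for these video models, the window_size setting above would go through the extra_parameters argument. A sketch, assuming MMagicInferencer forwards extra_parameters the same way the --extra-parameters flag does:
# Sketch only: extra_parameters is assumed to mirror --extra-parameters.
from mmagic.apis import MMagicInferencer

editor = MMagicInferencer(model_name='edvr', extra_parameters={'window_size': 5})
editor.infer(
    video='./resources/input/video_restoration/QUuC4vJs_000084_000094_400x320.mp4',
    result_out_dir='./resources/output/video_restoration/demo_video_restoration_edvr_res.mp4')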
Video Interpolation (FLAVR):
python demo/mmagic_inference_demo.py \
--model-name flavr \
--video ./resources/input/video_interpolation/b-3LLDhc4EU_000000_000010.mp4 \
--result-out-dir ./resources/output/video_interpolation/demo_video_interpolation_flavr_res.mp4
Image Colorization (inst_colorization):
python demo/mmagic_inference_demo.py \
--model-name inst_colorization \
--img https://github-production-user-asset-6210df.s3.amazonaws.com/49083766/245713512-de973677-2be8-4915-911f-fab90bb17c40.jpg \
--result-out-dir demo_colorization_res.png
3D-aware Generation (EG3D):
python demo/mmagic_inference_demo.py \
--model-name eg3d \
--result-out-dir ./resources/output/eg3d-output
DragGAN (Gradio demo)
First, put your checkpoint under ./checkpoints, e.g. ./checkpoints/stylegan2_lions_512_pytorch_mmagic.pth. For example:
mkdir checkpoints
cd checkpoints
wget -O stylegan2_lions_512_pytorch_mmagic.pth https://download.openxlab.org.cn/models/qsun1/DragGAN-StyleGAN2-checkpoint/weight//StyleGAN2-Lions-internet
Then, run the script:
python demo/gradio_draggan.py
ViCo (Gradio demo)
Launch the UI:
python demo/gradio_vico.py
Training
- Submit your concept sample images to the interface and fill in the init_token and placeholder.
- Click the Start Training button.
- Your training results will be under the folder ./work_dirs/vico_gradio.
Inference
Follow the instructions to download the pretrained weights (or use your own weights) and put them under the folder ./ckpts:
mkdir ckpts
Your folder structure should look like this:
ckpts
├── barn.pth
├── batman.pth
├── clock.pth
└── ...
Then launch the UI and you can use the pretrained weights to generate images.
- Upload a reference image.
- (Optional) Customize advanced settings.
- Click the Inference button.
FastComposer (Gradio demo)
First, run the script:
python demo/gradio_fastcomposer.py
Second, upload reference subject images.
Then, add a prompt like "A man img and a man img sitting together" and press the Run button.
Finally, you get the generated images.
AnimateDiff (Gradio demo)
- Download the ToonYou and Motion Module checkpoints:
#!/bin/bash
mkdir -p models/Motion_Module models/DreamBooth_LoRA
gdown 1RqkQuGPaCO5sGZ6V6KZ-jUWmsRu48Kdq -O models/Motion_Module/
gdown 1ql0g_Ys4UCz2RnokYlBjyOYPbttbIpbu -O models/Motion_Module/
wget https://civitai.com/api/download/models/78775 -P models/DreamBooth_LoRA/ --content-disposition --no-check-certificate
- Modify the config file configs/animatediff/animatediff_ToonYou.py so that models_path points to your models directory, e.g.:
models_path = '/home/AnimateDiff/models/'
- Then, run the script:
# may need to install imageio[ffmpeg]:
# pip install imageio-ffmpeg
python demo/gradio_animatediff.py
- Select the SD, Motion Module and DreamBooth checkpoints, adjust the inference parameters, then input a prompt and its corresponding negative_prompt:
prompts = [
"best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress",
"masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes,",
"best quality, masterpiece, 1boy, formal, abstract, looking at viewer, masculine, marble pattern",
"best quality, masterpiece, 1girl, cloudy sky, dandelion, contrapposto, alternate hairstyle,"
]
negative_prompts = [
"",
"badhandv4,easynegative,ng_deepnegative_v1_75t,verybadimagenegative_v1.3, bad-artist, bad_prompt_version2-neg, teeth",
"",
"",
]
# More test samples could be generated with other config files. Please check 'configs/animatediff/README.md'
- Click the 'Generate' button.