Commit 7e5e6f4

docs(img2vid, txt2vid): examples

1 parent: d817c0a

2 files changed (+48, −9 lines)

backend/python/diffusers/backend_diffusers.py (2 additions, 1 deletion)

```diff
@@ -34,6 +34,7 @@
 CHUNK_SIZE=os.environ.get("CHUNK_SIZE", "8")
 FPS=os.environ.get("FPS", "7")
 DISABLE_CPU_OFFLOAD=os.environ.get("DISABLE_CPU_OFFLOAD", "0") == "1"
+FRAMES=os.environ.get("FRAMES", "64")

 # If MAX_WORKERS are specified in the environment use it, otherwise default to 1
 MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
@@ -388,7 +389,7 @@ def GenerateImage(self, request, context):
             return backend_pb2.Result(message="Media generated successfully", success=True)

         if self.txt2vid:
-            video_frames = self.pipe(prompt, num_inference_steps=steps).frames
+            video_frames = self.pipe(prompt, num_inference_steps=steps, num_frames=int(FRAMES)).frames
             export_to_video(video_frames, request.dst)
             return backend_pb2.Result(message="Media generated successfully", success=True)
```
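The new `FRAMES` variable follows the same pattern as the existing `CHUNK_SIZE` and `FPS` settings: the environment variable is read as a string with a default, and converted to `int` only at the call site. A minimal sketch of that pattern (the `get_frames` helper name is illustrative, not part of the commit):

```python
import os

def get_frames(env=None):
    """Read FRAMES from an environment mapping, defaulting to "64",
    and convert it to an int, mirroring the commit's pattern."""
    if env is None:
        env = os.environ
    return int(env.get("FRAMES", "64"))

# The default applies when FRAMES is unset; an explicit value overrides it.
print(get_frames({}))                # 64
print(get_frames({"FRAMES": "25"}))  # 25
```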

docs/content/model-compatibility/diffusers.md (46 additions, 8 deletions)

````diff
@@ -27,11 +27,9 @@ name: animagine-xl
 parameters:
   model: Linaqruf/animagine-xl
 backend: diffusers
-
-# Force CPU usage - set to true for GPU
-f16: false
+cuda: true
+f16: true
 diffusers:
-  cuda: false # Enable for GPU usage (CUDA)
   scheduler_type: euler_a
 ```
@@ -46,9 +44,9 @@ parameters:
 backend: diffusers
 step: 30
 f16: true
+cuda: true
 diffusers:
   pipeline_type: StableDiffusionPipeline
-  cuda: true
   enable_parameters: "negative_prompt,num_inference_steps,clip_skip"
   scheduler_type: "k_dpmpp_sde"
   cfg_scale: 8
@@ -130,11 +128,10 @@ parameters:
   model: nitrosocke/Ghibli-Diffusion
 backend: diffusers
 step: 25
-
+cuda: true
 f16: true
 diffusers:
   pipeline_type: StableDiffusionImg2ImgPipeline
-  cuda: true
   enable_parameters: "negative_prompt,num_inference_steps,image"
 ```
@@ -156,9 +153,9 @@ backend: diffusers
 step: 50
 # Force CPU usage
 f16: true
+cuda: true
 diffusers:
   pipeline_type: StableDiffusionDepth2ImgPipeline
-  cuda: true
   enable_parameters: "negative_prompt,num_inference_steps,image"
   cfg_scale: 6
 ```
@@ -167,3 +164,44 @@ diffusers:
 (echo -n '{"file": "'; base64 ~/path/to/image.jpeg; echo '", "prompt": "a sky background","size": "512x512","model":"stablediffusion-depth"}') |
 curl -H "Content-Type: application/json" -d @- http://localhost:8080/v1/images/generations
 ```
+
+## img2vid
+
+```yaml
+name: img2vid
+parameters:
+  model: stabilityai/stable-video-diffusion-img2vid
+backend: diffusers
+step: 25
+# Force CPU usage
+f16: true
+cuda: true
+diffusers:
+  pipeline_type: StableVideoDiffusionPipeline
+```
+
+```bash
+(echo -n '{"file": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png?download=true","size": "512x512","model":"img2vid"}') |
+curl -H "Content-Type: application/json" -X POST -d @- http://localhost:8080/v1/images/generations
+```
+
+## txt2vid
+
+```yaml
+name: txt2vid
+parameters:
+  model: damo-vilab/text-to-video-ms-1.7b
+backend: diffusers
+step: 25
+# Force CPU usage
+f16: true
+cuda: true
+diffusers:
+  pipeline_type: VideoDiffusionPipeline
+  cuda: true
+```
+
+```bash
+(echo -n '{"prompt": "spiderman surfing","size": "512x512","model":"txt2vid"}') |
+curl -H "Content-Type: application/json" -X POST -d @- http://localhost:8080/v1/images/generations
+```
````
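The curl examples added by this commit all POST a small JSON body to `/v1/images/generations`. A hedged sketch of building the txt2vid request body in Python (the `txt2vid_payload` helper is illustrative, not part of LocalAI; field names come from the docs' curl examples):

```python
import json

def txt2vid_payload(prompt, size="512x512", model="txt2vid"):
    """Serialize the JSON body used by the txt2vid curl example:
    {"prompt": ..., "size": ..., "model": ...}."""
    return json.dumps({"prompt": prompt, "size": size, "model": model})

# The same body the shell pipeline above feeds to curl via `-d @-`.
print(txt2vid_payload("spiderman surfing"))
```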

0 commit comments