Conditional *image* generation (img2img) #70
Comments
I'd be interested in seeing an answer to this as well, e.g. for the simple case of MNIST: how might we implement (or activate) class-conditional generation? I see

class_cond = torch.randint(0, num_classes, [accelerator.num_processes, n_per_proc], generator=demo_gen).to(device)

Could this be changed to something more "intentional", such as

class_cond = torch.remainder(torch.arange(0, accelerator.num_processes * n_per_proc), num_classes).reshape([accelerator.num_processes, n_per_proc]).int().to(device)

(Note that the arange has to cover all accelerator.num_processes * n_per_proc positions for the reshape to work.)

Update: yep! That worked! :-)
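For reference, a minimal self-contained sketch of that labelling scheme, with num_processes, n_per_proc, and num_classes as stand-in values (the real ones come from the Accelerator and the training config):

```python
import torch

# Stand-in values; in the training script these come from the Accelerator
# and the demo-grid settings.
num_processes = 2    # accelerator.num_processes
n_per_proc = 8       # demo images generated per process
num_classes = 10     # MNIST

# Default demo behaviour: random class labels.
demo_gen = torch.Generator().manual_seed(0)
random_cond = torch.randint(0, num_classes, [num_processes, n_per_proc], generator=demo_gen)

# "Intentional" labels: cycle through 0 .. num_classes - 1 so every class
# appears in the demo grid. The arange covers all num_processes * n_per_proc
# positions so the reshape is valid.
fixed_cond = torch.remainder(
    torch.arange(0, num_processes * n_per_proc), num_classes
).reshape([num_processes, n_per_proc]).int()

print(random_cond.shape, fixed_cond.shape)  # torch.Size([2, 8]) torch.Size([2, 8])
print(fixed_cond)
# tensor([[0, 1, 2, 3, 4, 5, 6, 7],
#         [8, 9, 0, 1, 2, 3, 4, 5]], dtype=torch.int32)
```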
Solution: I've made it work; here are the main steps:
Hi,

In order to add support for conditional image generation, in addition to embedding the initial image into unet_cond (extra_args['unet_cond'] = img_cond), what should I put in extra_args['cross_cond'] and extra_args['cross_cond_padding']? (This is before the loss calculation in the line losses = model.loss(reals, noise, sigma, aug_cond=aug_cond, **extra_args).)
@crowsonkb
@nekoshadow1
@brycedrennan
Thanks!
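For context, a hedged sketch of how the extra_args dict in question could be assembled before that loss call. The shapes, and the reading of cross_cond as a sequence of cross-attention conditioning embeddings with cross_cond_padding as its boolean padding mask, are assumptions rather than anything confirmed in this thread; img_cond and cond_embed are placeholder tensors.

```python
import torch

# Placeholder shapes; in the training loop `reals`, `noise`, `sigma`,
# `aug_cond`, and `model` come from the k-diffusion training script
# referenced above.
device = 'cpu'
batch_size, channels, size = 4, 3, 64    # hypothetical image resolution
seq_len, cross_dim = 77, 768             # hypothetical cross-attention width

reals = torch.randn(batch_size, channels, size, size, device=device)
img_cond = torch.randn(batch_size, channels, size, size, device=device)  # conditioning image

extra_args = {}

# Image conditioning concatenated onto the UNet input, as in the question.
extra_args['unet_cond'] = img_cond

# Assumed meaning of the other two keys: a sequence of embeddings consumed
# via cross-attention, plus a boolean mask marking padded positions
# (True = padded). If the model was configured without cross-attention
# conditioning, these keys would presumably be omitted rather than filled
# with placeholders.
cond_embed = torch.randn(batch_size, seq_len, cross_dim, device=device)
extra_args['cross_cond'] = cond_embed
extra_args['cross_cond_padding'] = torch.zeros(
    batch_size, seq_len, dtype=torch.bool, device=device
)

# The loss call from the question would then receive these via **extra_args:
# losses = model.loss(reals, noise, sigma, aug_cond=aug_cond, **extra_args)
```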