
CUDA out of memory #42

Open
abdelrahmanabdelghany opened this issue May 24, 2023 · 4 comments

Comments

@abdelrahmanabdelghany

Running finetune-unet.py on Colab, but I get a CUDA out-of-memory error. Is it possible to run this on Colab?

@chenbolin-master

When running demo.py, I also run out of memory.

@yuchen1984

yuchen1984 commented Jun 22, 2023

Same problem. What's the minimal GPU memory requirement for running inference? I have 11 GB of GPU RAM. I also tried hacking imSize to e.g. 384x480 or 480x600 to work around the memory problem, but it doesn't work: the tensor dimensions in the demo model seem to be bound to 512x640.
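For what it's worth, in Stable-Diffusion-style models the VAE downsamples pixels by 8x, and the UNet downsamples the latents several more times, so pixel resolutions generally need to be a multiple of 64 to avoid shape mismatches in the UNet's skip connections. A quick sketch of the check (the factor of 64 is an assumption about this model's architecture, not confirmed from the repo):

```python
def sd_compatible(height: int, width: int, factor: int = 64) -> bool:
    """Check whether a pixel resolution divides cleanly through the
    VAE (8x) and the UNet's remaining downsampling stages."""
    return height % factor == 0 and width % factor == 0

# 512x640 (the repo default) divides cleanly...
print(sd_compatible(512, 640))  # True
# ...but 480 does not (480 / 64 = 7.5), which may explain why
# hacking imSize to 384x480 or 480x600 fails:
print(sd_compatible(384, 480))  # False
print(sd_compatible(480, 600))  # False
```

So a resolution like 384x448 or 448x576 might be worth trying instead, if the model's weights aren't otherwise tied to the default size.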

@abdelrahmanabdelghany
Author

Running on Colab Pro with an A100, I think the minimum is 18 GB.

@kobybibas

When adding --gradient_checkpointing --use_8bit_adam, training consumes 15 GB, although I'm not sure how it affects the results:

```shell
accelerate launch finetune-unet.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --instance_data_dir=demo/sample/train \
  --output_dir=demo/custom-chkpts \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-5 \
  --num_train_epochs=500 \
  --dropout_rate=0.0 \
  --custom_chkpt=checkpoints/unet_epoch_20.pth \
  --revision "ebb811dd71cdc38a204ecbdd6ac5d580f529fd8c" \
  --gradient_checkpointing \
  --use_8bit_adam
```
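Regarding the effect on results: gradient checkpointing recomputes activations during backward instead of storing them, so it produces the same gradients as a normal pass (just slower), while 8-bit Adam (typically `bitsandbytes.optim.AdamW8bit` in diffusers-style scripts) quantizes the optimizer state and can slightly change training dynamics. A minimal torch-only sketch of the checkpointing idea, using a toy block rather than this repo's UNet (all names here are illustrative assumptions):

```python
import torch
from torch.utils.checkpoint import checkpoint

# Toy stand-in for a UNet block. With checkpointing, its intermediate
# activations are discarded in the forward pass and recomputed during
# backward, trading extra compute for lower peak memory.
block = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64)
)

x = torch.randn(8, 64, requires_grad=True)
# use_reentrant=False is the recommended modern code path.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
print(x.grad.shape)  # torch.Size([8, 64])
```

In diffusers-based training scripts the `--gradient_checkpointing` flag usually just calls `unet.enable_gradient_checkpointing()`, which applies this technique to every UNet block.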


4 participants