Training memory requirement #121
We use A100 80G GPUs to train the Swin-L and ViT-L models. You can reduce the image size to 1333x800 or freeze the backbone during training. Besides, techniques such as FSDP and FP16 can help you reduce training memory consumption. Please refer to the latest mmdetection v3 for more details.
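For reference, here is a minimal sketch of what the first two suggestions look like in an MMDetection-style config. The exact keys (`frozen_stages`, `with_cp`, the `Resize` step) depend on your MMDetection version and base config, so treat the values below as assumptions rather than the repo's exact settings:

```python
# Sketch: memory-saving tweaks in an MMDetection 3.x-style config.
# Assumes a Swin backbone; key names may differ in your base config.

model = dict(
    backbone=dict(
        frozen_stages=4,  # freeze all Swin stages so no backbone gradients are stored
        with_cp=True,     # activation checkpointing trades extra compute for less memory
    ))

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    # cap the training resolution at 1333x800 instead of larger multi-scale sizes
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
    dict(type='PackDetInputs'),
]
```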
Reducing the image size helps for a few batches, but after a while it fails again. I will try freezing the backbone. About your last suggestion: does this repo work with MMDetection v3?
Sure, please refer to https://github.com/open-mmlab/mmdetection/tree/main/projects/CO-DETR
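In MMDetection v3 (built on MMEngine), FP16 training is typically enabled through the optimizer wrapper. A hedged sketch, assuming an AdamW setup; the hyperparameters below are placeholders, not the Co-DETR configs' exact values:

```python
# Sketch: enable mixed-precision (FP16) training in an MMDetection 3.x config
# by swapping the default OptimWrapper for MMEngine's AmpOptimWrapper.
optim_wrapper = dict(
    type='AmpOptimWrapper',   # runs forward/backward under torch.cuda.amp autocast
    loss_scale='dynamic',     # dynamic loss scaling avoids FP16 gradient underflow
    optimizer=dict(type='AdamW', lr=2e-4, weight_decay=1e-4),  # placeholder values
    clip_grad=dict(max_norm=0.1, norm_type=2),
)
```

Alternatively, v3's `tools/train.py` accepts an `--amp` flag, which should switch to this wrapper automatically.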
Edited the comment.
This config is used to finetune the Objects365-pretrained Swin-L on the COCO dataset.
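For anyone reproducing that setup, finetuning from a pretrained checkpoint in MMDetection usually just means pointing `load_from` at the weights. A sketch with placeholder paths; substitute the repo's actual base config and the checkpoint from its model zoo:

```python
# Sketch: finetune from Objects365-pretrained weights in an MMDetection config.
# Both paths below are hypothetical placeholders.
_base_ = ['path/to/co_dino_swin_l_base_config.py']        # hypothetical base config
load_from = 'checkpoints/co_dino_swin_l_objects365.pth'   # hypothetical checkpoint path
```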
For newcomers:
Thanks for the help @TempleX98. Feel free to close the issue at your convenience.
Hi,
I'm trying to train your model on Kaggle with a P100 (16GB VRAM), but I'm running out of memory. Can you share the memory requirements and, if possible, tips to reduce the memory required?
Attached below is the model I'm trying to train. Instead of train.sh, I'm using train.py.