```
(ssd) root@logic:~/SA-SSD/tools# python train.py ../configs/car_cfg.py
2021-02-26 09:22:00,718 - INFO - Distributed training: False
2021-02-26 09:22:00,718 - INFO - Set random seed to 0
[40, 1600, 1408]
load 14357 Car database infos
After filter database:
load 10520 Car database infos
2021-02-26 09:22:05,502 - INFO - Start training
Traceback (most recent call last):
  File "train.py", line 127, in <module>
    main()
  File "train.py", line 117, in main
    log_interval = cfg.log_config.interval
  File "/root/SA-SSD/tools/train_utils/__init__.py", line 99, in train_model
    log_interval = log_interval
  File "/root/SA-SSD/tools/train_utils/__init__.py", line 57, in train_one_epoch
    outputs = batch_processor(model, data_batch)
  File "/root/SA-SSD/tools/train_utils/__init__.py", line 29, in batch_processor
    losses = model(**data)
  File "/root/anaconda3/envs/ssd/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/anaconda3/envs/ssd/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/root/anaconda3/envs/ssd/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/SA-SSD/mmdet/models/detectors/base.py", line 79, in forward
    return self.forward_train(img, img_meta, **kwargs)
  File "/root/SA-SSD/mmdet/models/detectors/single_stage.py", line 103, in forward_train
    bbox_score = self.extra_head(conv6, guided_anchors)
  File "/root/anaconda3/envs/ssd/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/SA-SSD/mmdet/models/single_stage_heads/ssd_rotate_head.py", line 435, in forward
    x = self.convs(x)
  File "/root/anaconda3/envs/ssd/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/anaconda3/envs/ssd/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/root/anaconda3/envs/ssd/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/anaconda3/envs/ssd/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 310.00 MiB (GPU 0; 7.80 GiB total capacity; 4.05 GiB already allocated; 102.44 MiB free; 404.92 MiB cached)
```
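The numbers in the error are worth reading closely: PyTorch itself holds only 4.05 GiB allocated plus about 0.4 GiB cached on a 7.80 GiB card, yet just 102 MiB is reported free, so it may be worth running `nvidia-smi` to check whether another process (or the desktop) is occupying the rest of the GPU. If the GPU is otherwise free, the usual mitigation is to lower the per-GPU batch size in the training config. Below is a minimal sketch, assuming the mmdet 1.x-style config layout SA-SSD is built on; the exact field names in `car_cfg.py` may differ:

```python
# Hypothetical excerpt of configs/car_cfg.py -- assumes the mmdet 1.x
# convention where `imgs_per_gpu` sets the per-GPU batch size.
data = dict(
    imgs_per_gpu=1,     # lower this first: activation memory scales roughly linearly with it
    workers_per_gpu=2,  # dataloader workers; CPU-side only, does not affect GPU memory
)
```

Note that calling `torch.cuda.empty_cache()` is unlikely to help here, since the allocator is only caching ~405 MiB; the shortfall comes from live activations (or from memory held outside this process), not from cached blocks.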