Description
Initial GPU usage:

| ID | GPU | MEM |
|----|-----|-----|
| 0  | 4%  | 6%  |
GPU usage after emptying the cache:

| ID | GPU | MEM |
|----|-----|-----|
| 0  | 6%  | 6%  |
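For context, the cache was cleared roughly along these lines (a minimal sketch; `torch.cuda.empty_cache()` and device index 0 are assumptions, not an exact copy of my script):

```python
import torch

# Release cached blocks held by PyTorch's caching allocator so the memory
# shows up as free again in nvidia-smi.
torch.cuda.empty_cache()

# What PyTorch itself is still holding on GPU 0 (device index assumed).
print(f"allocated: {torch.cuda.memory_allocated(0) / 2**20:.1f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved(0) / 2**20:.1f} MiB")
```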
Console output from running test.py:

```text
Lightning automatically upgraded your loaded checkpoint from v1.9.5 to v2.0.5. To apply the upgrade to your files permanently, run python -m pytorch_lightning.utilities.upgrade_checkpoint --file geodock/weights/dips.ckpt
Completed embedding in 0.77 seconds.
Traceback (most recent call last):
  File "/home/adsb/GeoDock/test.py", line 59, in <module>
    pred = geodock.dock(
  File "/home/adsb/GeoDock/geodock/GeoDockRunner.py", line 72, in dock
    dock(
  File "/home/adsb/GeoDock/geodock/utils/docking.py", line 22, in dock
    model_out = model(model_in)
  File "/home/adsb/miniconda3/envs/gk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/GeoDock/geodock/model/GeoDock.py", line 66, in forward
    lddt_logits, dist_logits, coords, rotat, trans = self.net(
  File "/home/adsb/miniconda3/envs/gk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/GeoDock/geodock/model/modules/iterative_transformer.py", line 71, in forward
    node, edge = self.graph_module(node_0 + recycled_node, edge_0 + recycled_edge)
  File "/home/adsb/miniconda3/envs/gk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/GeoDock/geodock/model/modules/graph_module.py", line 284, in forward
    node, edge = block(node, edge)
  File "/home/adsb/miniconda3/envs/gk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/GeoDock/geodock/model/modules/graph_module.py", line 264, in forward
    edge = edge + self.node_to_edge(node)
  File "/home/adsb/miniconda3/envs/gk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/GeoDock/geodock/model/modules/graph_module.py", line 236, in forward
    x = torch.cat([prod, diff], dim=-1)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 772.00 MiB (GPU 0; 11.99 GiB total capacity; 10.60 GiB already allocated; 0 bytes free; 10.77 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
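Regarding the allocator hint in the last line: `max_split_size_mb` can be passed through the `PYTORCH_CUDA_ALLOC_CONF` environment variable before the CUDA caching allocator is initialised. A minimal sketch (the value 128 is an arbitrary example, not something suggested by the error message):

```python
import os

# Must be set before the caching allocator initialises, so set it before
# importing torch (or at least before the first CUDA call).
# 128 MiB is only an example split size, not a tuned value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

print(torch.cuda.get_device_properties(0).total_memory / 2**30, "GiB total")
```

The same setting can equivalently be exported in the shell before launching test.py.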