✨ Short description of the bug [tl;dr]
Today I tried to run JSMA on an ImageNet sample with shape (1, 3, 224, 224), roughly as in the sketch below. The JSMA code got stuck for a bit during the computation, and then an error popped up saying that JSMA needs to allocate about 84 GiB of GPU memory, while my NVIDIA card only has 6 GB.
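For reference, this is roughly what I ran (the model and the random input are just placeholders for my actual data, and the JSMA arguments are the library defaults):

```python
import torch
import torchvision.models as models
import torchattacks

device = torch.device("cuda")
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval().to(device)

# One ImageNet-sized sample, shape (1, 3, 224, 224); random data stands in for the real image.
image = torch.rand(1, 3, 224, 224, device=device)
label = torch.tensor([0], device=device)

attack = torchattacks.JSMA(model)   # default parameters
adv_image = attack(image, label)    # crashes inside saliency_map with the OOM below
```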
When looking into the code I could see a lot of clones, re-initializations, device transfers, etc., which cost a lot of memory and compute. I think someone smarter than me could optimize the code to run with lower memory consumption, e.g. something along the lines of the chunked sketch below.
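To illustrate what I mean (this is only a rough idea, not the library's actual saliency_map code, and it ignores the masking/sign conditions the real saliency map applies): instead of materializing the full nb_features x nb_features pairwise tensor from the traceback in one shot, the pairwise sum could be scanned in row blocks, keeping only the running best pair:

```python
import torch

def best_pair_chunked(target_tmp: torch.Tensor, chunk_size: int = 1024):
    """Scan the pairwise sum target_tmp[i] + target_tmp[j] in row blocks,
    keeping only the best (i, j) pair instead of the full matrix."""
    flat = target_tmp.reshape(-1)
    nb_features = flat.numel()
    best_val, best_i, best_j = None, 0, 0
    for start in range(0, nb_features, chunk_size):
        end = min(start + chunk_size, nb_features)
        # (end - start, nb_features) block instead of (nb_features, nb_features)
        block = flat[start:end].unsqueeze(1) + flat.unsqueeze(0)
        val, idx = block.reshape(-1).max(dim=0)
        if best_val is None or val > best_val:
            best_val = val
            row, col = divmod(idx.item(), nb_features)
            best_i, best_j = start + row, col
    return best_i, best_j, best_val
```

With chunk_size=1024 and 150,528 features, the temporary block is about 0.6 GiB of float32 instead of ~84 GiB, at the cost of looping over the rows.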
💬 Detailed code and results
```
Traceback (most recent call last):
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torchattacks\attacks\jsma.py", line 116, in saliency_map
    alpha = target_tmp.view(-1, 1, nb_features) + target_tmp.view(
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torch\utils\_device.py", line 78, in __torch_function__
    return func(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 84.41 GiB. GPU 0 has a total capacity of 6.00 GiB of which 4.21 GiB is free. Of the allocated memory 704.63 MiB is allocated by PyTorch, and 29.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
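For what it's worth, the 84.41 GiB matches exactly what a full float32 pairwise feature tensor for a (1, 3, 224, 224) input would need, which is why I suspect the `alpha` line above is the culprit:

```python
# Back-of-the-envelope check: nb_features x nb_features float32 matrix for one ImageNet sample
nb_features = 3 * 224 * 224             # 150,528 input features
bytes_needed = nb_features ** 2 * 4     # 4 bytes per float32 element
print(bytes_needed / 2**30)             # ~84.41 GiB, same number as in the traceback
```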