🐛 Describe the bug

Consecutive forward passes of GraphUNet cause a steady increase in memory usage, which eventually leads to an out-of-memory error. Changing the call to the function to_torch_csr_tensor inside the method GraphUNet.augment_adj to the function to_torch_coo_tensor appears to resolve the issue.

Note: I was not able to reproduce this steady memory increase by calling to_torch_csr_tensor alone.
```python
import torch

from torch_geometric.nn import GraphUNet

if __name__ == "__main__":
    # Initialize GraphUNet model
    model = GraphUNet(3, 4, 1, depth=4)

    # Define sufficiently large input tensors to make the impact of the
    # memory leak visible
    N = 9000
    feature_matrix = torch.rand([N, 3])
    indices = torch.arange(N, dtype=torch.long)
    edge_index = torch.stack((indices, indices))

    cycles = 500
    prompt = ""
    while not prompt:
        for _ in range(cycles):
            model(feature_matrix, edge_index)
        print(f"Performed {cycles} forward-calls")
        prompt = input("Provide empty input to continue: ")
```
Versions
PyG version: 2.7.0
PyTorch version: 2.4.1+cpu
Python version: 3.10.12