
Memory leak in GraphUNet due to function to_torch_csr_tensor #9826

acotino-ignitioncomputing opened this issue Dec 6, 2024
🐛 Describe the bug

Repeated calls to the forward pass of GraphUNet cause a steady increase in memory usage, which eventually leads to an out-of-memory error. Replacing the call to to_torch_csr_tensor inside the method GraphUNet.augment_adj with to_torch_coo_tensor seems to resolve the issue.
Note: I was not able to reproduce this steady memory growth by calling to_torch_csr_tensor on its own.

import torch
from torch_geometric.nn import GraphUNet

if __name__ == "__main__":
    # Initialize a GraphUNet model
    model = GraphUNet(3, 4, 1, depth=4)

    # Define sufficiently large input tensors to make the memory leak visible
    N = 9000
    feature_matrix = torch.rand([N, 3])
    indices = torch.arange(N, dtype=torch.long)  # torch.range is deprecated and returns floats
    edge_index = torch.stack((indices, indices))

    cycles = 500
    prompt = ""
    while not prompt:
        for _ in range(cycles):
            model(feature_matrix, edge_index)
        print(f"Performed {cycles} forward-calls")
        prompt = input("Provide empty input to continue: ")

Versions

PyG version: 2.7.0
PyTorch version: 2.4.1+cpu
Python version: 3.10.12
