PyTorch GC #592

Closed

@x66ccff

Description

I'm using torch through PythonCall. When I create tensors repeatedly, I don't observe any decrease in GPU memory usage, even after reassigning the same variable or setting it to nothing. This persists even after running the GC.

using PythonCall
torch = pyimport("torch")
torch.cuda.is_available()
n = 20000

a = torch.randn((1, n*n); device=torch.device("cuda"))  # VRAM increases here
a = torch.randn((1, n*n); device=torch.device("cuda"))  # VRAM increases again
a = torch.randn((1, n*n); device=torch.device("cuda"))  # VRAM increases again
a = nothing  # no effect

PythonCall.GC.gc()         # no effect
torch.cuda.empty_cache()   # no effect

Can anyone help?
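
For reference, a minimal sketch of the release sequence that would be expected to work, assuming the CUDA tensor is only freed once Julia's own GC collects the Py wrapper (so Julia's GC.gc() has to run before PythonCall.GC.gc() has anything to flush). This is a hedged hypothesis, not a confirmed fix; pydel! and GC.gc(true) are standard PythonCall/Julia calls, and the tensor size here is arbitrary:

using PythonCall
torch = pyimport("torch")

a = torch.randn((1, 10_000); device=torch.device("cuda"))

# Option 1: release the Python reference explicitly and immediately.
pydel!(a)              # decrefs the tensor right away; `a` must not be used afterwards

# Option 2: let the garbage collectors do it, in the right order.
# a = nothing
# GC.gc(true)          # Julia's GC finalizes the Py wrapper, dropping the Python refcount
# PythonCall.GC.gc()   # flushes any reference-count decrements PythonCall had to defer

# PyTorch's caching allocator keeps freed blocks; return them to the driver
# so tools like nvidia-smi actually show the memory drop.
torch.cuda.empty_cache()

If VRAM still doesn't drop after this, the wrapper is presumably still reachable from somewhere on the Julia side.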

Julia version: 1.11.3

julia> torch.__version__
Python: '2.6.0+cu126'
