
Add empty_cache for releasing GPU memory #892

Open
@sao2c

Description


When I run the code sample below in a Jupyter notebook, my GPU monitoring shows that the memory allocated during execution is still held after it finishes, even though I've done everything I know to do to dispose of the intermediate tensors:

#r "nuget:TorchSharp-cuda-linux"
#r "nuget:TorchSharp"

open TorchSharp

let test () =
     use d = torch.NewDisposeScope()
     use tt = torch.randn(50000,50000,device=torch.device("cuda:7"))
     tt.MoveToOuterDisposeScope()

let test2() =
     use d2 = torch.NewDisposeScope()
     use ttt = test()
     ()

let empty_result = test2()

I get a similar result when I run the same experiment in Python with PyTorch:

import torch

def test():
    tt = torch.randn(50000, 50000, device=torch.device('cuda:7'))
    return tt

def test2():
    ttt = test()
    return 0  # ttt is dropped here, freeing the tensor back to the caching allocator

empty_result = test2()
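For completeness, here is how I'd confirm what is actually held (a sketch, assuming the snippet above has just run and cuda:7 is the device in question): torch.cuda.memory_allocated() counts bytes held by live tensors, while torch.cuda.memory_reserved() counts bytes the caching allocator has reserved from the driver, which is the number nvidia-smi and similar monitors report.

# Continuing from the snippet above, after test2() has returned:
dev = torch.device('cuda:7')

# Live tensor memory: (near) zero, since the tensor was freed on return.
print(torch.cuda.memory_allocated(dev))

# Memory the caching allocator still holds on to; this is what
# external GPU monitors such as nvidia-smi show as "in use".
print(torch.cuda.memory_reserved(dev))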

However, I can free the memory by calling torch.cuda.empty_cache(). @NiklasGustafsson says:

The underlying library does keep a high-water mark of allocated GPU memory, so even when you dispose of tensors, the overall allocation won't necessarily go down. I'll see how I can get empty_cache() implemented.
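A minimal sketch of the release step itself, continuing from the measurement snippet above: empty_cache() hands the cached-but-unused blocks back to the CUDA driver without touching live tensors. (In libtorch this is, as far as I can tell, c10::cuda::CUDACachingAllocator::emptyCache(), which is presumably what a TorchSharp binding would wrap.)

# Nothing is referenced any more, so every cached block is unused
# and can be returned to the driver.
torch.cuda.empty_cache()

# Reserved memory drops back toward zero, and nvidia-smi agrees.
print(torch.cuda.memory_reserved(dev))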
