
Memory leak when running GPU broadcast in a loop #327

Open
@tiemvanderdeure

Description

I need to run GPU operations inside a loop, where the output of one iteration is used in the next one.

However, even very simple GPU broadcasts leak memory, and eventually I get an OutOfMemoryError().

I am using oneAPI.jl v1.2.2 on WSL2 (Ubuntu) under Windows 10.

A very simple example that reproduces this is:

using oneAPI

# allocate a 10-million-element Float32 array on the device
gpu_array = oneAPI.zeros(Float32, 10_000_000)

# repeatedly update the array in place
for j in 1:5_000
    gpu_array .+= 1
end
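
If I understand the lowering correctly, the update should be fully in-place: gpu_array .+= 1 lowers to an in-place broadcast!, so no new device array should be allocated per iteration. That is, the loop body is equivalent to:

# what gpu_array .+= 1 lowers to: an in-place broadcast that
# writes the result back into gpu_array itself
broadcast!(+, gpu_array, gpu_array, 1)

So I would expect device memory usage to stay flat across iterations.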

Is there something I am missing here?

I can see the GPU memory fill up in the Task Manager:
[screenshot: Task Manager showing GPU memory usage steadily climbing]
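
One workaround I can think of, assuming the leaked buffers are temporaries whose finalizers never run because Julia's GC sees no host memory pressure (and knows nothing about device memory pressure), is to force a collection periodically:

using oneAPI

gpu_array = oneAPI.zeros(Float32, 10_000_000)

for j in 1:5_000
    gpu_array .+= 1
    # assumption: the leak is uncollected temporaries; forcing a GC pass
    # lets their finalizers run and release the device buffers
    j % 100 == 0 && GC.gc()
end

But that seems like it shouldn't be necessary for an in-place broadcast.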

Labels: bug (Something isn't working), libraries (Things about libraries and how we use them), performance (Gotta go fast)
