
Add manually mem gc python api #8482

Merged 43 commits into master on Jul 1, 2022

Conversation

lixiang007666
Contributor

@lixiang007666 lixiang007666 commented Jun 24, 2022

This PR is a follow-up to #8452. The effectiveness of VM Sync + GC at the C++ level has already been verified (in graph mode).


Test:

Script:

# import torch
import oneflow as torch
device = torch.device('cuda')

def get_gpu_mem_info(gpu_id=0):
    import pynvml
    pynvml.nvmlInit()
    if gpu_id < 0 or gpu_id >= pynvml.nvmlDeviceGetCount():
        print('gpu_id {} does not exist'.format(gpu_id))
        return 0, 0, 0

    handler = pynvml.nvmlDeviceGetHandleByIndex(gpu_id)
    meminfo = pynvml.nvmlDeviceGetMemoryInfo(handler)
    # Convert bytes to MiB.
    total = round(meminfo.total / 1024 / 1024, 2)
    used = round(meminfo.used / 1024 / 1024, 2)
    free = round(meminfo.free / 1024 / 1024, 2)
    print("total:", total, " used:", used, " free:", free)
    return total, used, free

x = torch.randn(512, 3, 512, 512).to(device)
get_gpu_mem_info(1)

x = torch.randn(1, 3, 512, 512).to(device)
get_gpu_mem_info(1)

torch.cuda.empty_cache()
get_gpu_mem_info(1)

The GPU memory readings at the three checkpoints are:

Torch:

total    used    free
7982.31  2520.0  5462.31
7982.31  2520.0  5462.31
7982.31  1004.0  6978.31

Oneflow:

total    used    free
7982.31  1976.0  6006.31
7982.31  1976.0  6006.31
7982.31  796.0   7186.31

When the script finishes, checking nvidia-smi again shows that the remaining reserved memory is finally released.
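This behavior (driver-level `used` staying flat after a tensor is freed, and dropping only after `empty_cache()`) is what a caching allocator produces. Below is a toy model of that idea, purely for illustration; it is NOT OneFlow's actual allocator, and the numbers mirror the scenario above only loosely:

```python
class CachingAllocator:
    """Toy model of a caching GPU allocator (illustration only)."""

    def __init__(self):
        self.used_mib = 0    # blocks backing live tensors
        self.cached_mib = 0  # freed blocks kept for reuse

    def device_used_mib(self):
        # What a tool like pynvml / nvidia-smi would report:
        # live memory plus the framework's cache.
        return self.used_mib + self.cached_mib

    def alloc(self, mib):
        # Reuse cached blocks first, then grow the device footprint.
        reused = min(mib, self.cached_mib)
        self.cached_mib -= reused
        self.used_mib += mib
        return mib

    def free(self, mib):
        # Freed blocks go to the cache, not back to the device,
        # so the driver-level "used" number does not move.
        self.used_mib -= mib
        self.cached_mib += mib

    def empty_cache(self):
        # Analogue of oneflow.cuda.empty_cache(): hand cached
        # blocks back to the device.
        self.cached_mib = 0


alloc = CachingAllocator()
big = alloc.alloc(1536)         # x = randn(512, 3, 512, 512)
alloc.free(big)                 # x is rebound, old tensor freed
alloc.alloc(2)                  # x = randn(1, 3, 512, 512)
print(alloc.device_used_mib())  # 1536: the cache still holds the big block
alloc.empty_cache()
print(alloc.device_used_mib())  # 2: only the live tensor remains
```

This is why the first two `get_gpu_mem_info` calls report the same `used` value, and only the call after `empty_cache()` shows a drop.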

Below is a complete example that repeatedly allocates and releases memory:

import oneflow
import oneflow.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Conv2d(3, 28, (3, 3))
        self.maxpool1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.maxpool2 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        x = self.conv(x)
        x = self.maxpool1(x)
        x = self.maxpool2(x)
        return x

# x = oneflow.randn(120, 3, 512, 512).to('cuda')
# x = x.cpu()

def train():
    net = Net().to('cuda')
    for i in range(100):
        with oneflow.no_grad():
            frames_batches = oneflow.randn(120, 3, 224, 224).to('cuda')
            pred = net(frames_batches)
            # oneflow.cuda.empty_cache()

if __name__ == '__main__':
    train()

Without oneflow.cuda.empty_cache(), GPU memory usage is 3011 MiB.
With oneflow.cuda.empty_cache(), GPU memory usage is 1675 MiB.
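For reference, the amount of memory handed back to the device by `empty_cache()` in each run can be computed directly from the reported `used` values (plain arithmetic on the numbers above, not an API):

```python
def freed_mib(used_before, used_after):
    """MiB returned to the device after empty_cache()."""
    return used_before - used_after

print(freed_mib(2520.0, 1004.0))  # PyTorch script: 1516.0 MiB freed
print(freed_mib(1976.0, 796.0))   # OneFlow script: 1180.0 MiB freed
print(freed_mib(3011, 1675))      # training loop: 1336 MiB freed
```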


@lixiang007666 lixiang007666 requested review from oneflow-ci-bot and removed request for oneflow-ci-bot July 1, 2022 04:21
@github-actions
Contributor

github-actions bot commented Jul 1, 2022

Static analysis with clang failed. PR label automerge has been removed

@github-actions github-actions bot removed the automerge label Jul 1, 2022
@github-actions
Contributor

github-actions bot commented Jul 1, 2022

CI failed when running job: Build cpu. PR label automerge has been removed

@lixiang007666 lixiang007666 requested review from oneflow-ci-bot and removed request for oneflow-ci-bot July 1, 2022 07:23

@github-actions
Contributor

github-actions bot commented Jul 1, 2022

View latest API docs preview at: https://staging.oneflow.info/docs/Oneflow-Inc/oneflow/pr/8482/

@github-actions
Contributor

github-actions bot commented Jul 1, 2022

Speed stats:
GPU Name: NVIDIA GeForce GTX 1080 

❌ OneFlow resnet50 time: 129.2ms (= 12922.3ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 144.3ms (= 14431.3ms / 100, input_shape=[16, 3, 224, 224])
✔️ Relative speed: 1.12 (= 144.3ms / 129.2ms)

OneFlow resnet50 time: 75.8ms (= 7582.3ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 84.7ms (= 8472.7ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.12 (= 84.7ms / 75.8ms)

OneFlow resnet50 time: 49.6ms (= 9922.4ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 57.2ms (= 11442.3ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.15 (= 57.2ms / 49.6ms)

OneFlow resnet50 time: 40.0ms (= 8007.4ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 45.6ms (= 9110.8ms / 200, input_shape=[2, 3, 224, 224])
✔️ Relative speed: 1.14 (= 45.6ms / 40.0ms)

OneFlow resnet50 time: 35.1ms (= 7015.9ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 40.3ms (= 8052.0ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 1.15 (= 40.3ms / 35.1ms)

OneFlow swin dataloader time: 0.260s (= 51.957s / 200, num_workers=1)
PyTorch swin dataloader time: 0.152s (= 30.476s / 200, num_workers=1)
Relative speed: 0.587 (= 0.152s / 0.260s)

OneFlow swin dataloader time: 0.076s (= 15.176s / 200, num_workers=4)
PyTorch swin dataloader time: 0.042s (= 8.385s / 200, num_workers=4)
Relative speed: 0.553 (= 0.042s / 0.076s)

OneFlow swin dataloader time: 0.041s (= 8.183s / 200, num_workers=8)
PyTorch swin dataloader time: 0.023s (= 4.585s / 200, num_workers=8)
Relative speed: 0.560 (= 0.023s / 0.041s)

❌ OneFlow resnet50 time: 144.1ms (= 14405.5ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 168.9ms (= 16889.5ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.17 (= 168.9ms / 144.1ms)

OneFlow resnet50 time: 93.4ms (= 9343.0ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 111.8ms (= 11178.5ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.20 (= 111.8ms / 93.4ms)

OneFlow resnet50 time: 72.0ms (= 14398.8ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 87.9ms (= 17582.0ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.22 (= 87.9ms / 72.0ms)

OneFlow resnet50 time: 58.1ms (= 11623.3ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 76.8ms (= 15367.4ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.32 (= 76.8ms / 58.1ms)

OneFlow resnet50 time: 49.2ms (= 9848.0ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 74.3ms (= 14865.6ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.51 (= 74.3ms / 49.2ms)

@lixiang007666 lixiang007666 merged commit e5df7ff into master Jul 1, 2022
@lixiang007666 lixiang007666 deleted the Add_manually_mem_gc_python_api branch July 1, 2022 22:21