IO consumption in flow warp #878

Closed

Description

@NK-CS-ZZL

In lines 32-33 of `mmedit/models/common/flow_warp.py`, the function `flow_warp` allocates memory for a 2-D grid tensor. However, this tensor is built in host (CPU) memory and then immediately transferred to the GPU, which is time-consuming. I suggest declaring the tensor directly on the GPU as follows:

`grid_y, grid_x = torch.meshgrid(torch.arange(0, h, device=x.device, dtype=x.dtype), torch.arange(0, w, device=x.device, dtype=x.dtype))`

With this change, the running time is reduced from 145.55 ms to 0.88 ms (input feature size 1x13x128x108x60).
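To make the comparison concrete, here is a minimal sketch of the two ways to build the grid. The helper names (`make_grid_cpu`, `make_grid_gpu`) are illustrative, not the actual function names in `flow_warp.py`; the point is only where the tensor is allocated.

```python
import torch


def make_grid_cpu(x):
    """Original approach: build the grid in host memory, then copy it to x's device."""
    _, _, h, w = x.shape
    grid_y, grid_x = torch.meshgrid(torch.arange(0, h), torch.arange(0, w))
    grid = torch.stack((grid_x, grid_y), dim=2).float()  # (h, w, 2) on CPU
    # The host-to-device transfer happens here, on every call.
    return grid.to(x.device).type_as(x)


def make_grid_gpu(x):
    """Suggested approach: allocate the grid directly on x's device."""
    _, _, h, w = x.shape
    grid_y, grid_x = torch.meshgrid(
        torch.arange(0, h, device=x.device, dtype=x.dtype),
        torch.arange(0, w, device=x.device, dtype=x.dtype))
    # No separate transfer step: the tensor is born on the right device.
    return torch.stack((grid_x, grid_y), dim=2)  # (h, w, 2)
```

Both versions produce the same grid values; the second simply avoids the per-call host-to-device copy, which is where the reported speedup comes from.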

And here is an example of warping:

[image: warping example]
