
ParallelDo with MKLDNN #8806

Closed
@pzelazko-intel

Description

  1. In the MKLDNN convolution (PR MKLDNN conv2d kernel added #8451) I need to pass some data from the forward Compute function to the backward one, and I'm currently saving it into the DeviceContext. This works well as long as I'm not running in ParallelDo mode. In ParallelDo, the Compute method is called in different threads simultaneously with the same DeviceContext, which obviously fails with memory access errors. How can I transport data from forward to backward safely? (A sketch of one possible workaround follows after this list.)

  2. I think that running MKLDNN Compute methods in parallel doesn't make sense, because MKLDNN already uses OpenMP internally for parallel computation.
    For CPU places Paddle currently takes as many cores as possible: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/get_places_op.cc#L51.
    We could change this so that when MKLDNN is used, only one device is returned. On the other hand, that won't be efficient when we have both MKLDNN and GPU/plain-CPU operators in the same program.
    It would be best if we could define device_count for each OP separately, but I don't know whether that makes sense in the context of the whole platform. Any thoughts? (A sketch of the single-device idea also follows below.)
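
Regarding point 1, here is a minimal sketch (not Paddle's actual API) of the kind of thread-safe, explicitly keyed cache that could replace a single shared slot in the DeviceContext. The class name `MKLDNNBlobCache` and the key scheme are purely hypothetical:

```cpp
// Hypothetical helper, not an existing Paddle class: a cache whose entries
// are keyed explicitly (e.g. by thread id or by an output variable's
// address), so concurrent ParallelDo replicas don't overwrite each other.
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>
#include <utility>

class MKLDNNBlobCache {
 public:
  void Set(const std::string& key, std::shared_ptr<void> blob) {
    std::lock_guard<std::mutex> lock(mutex_);
    blobs_[key] = std::move(blob);
  }

  std::shared_ptr<void> Get(const std::string& key) const {
    std::lock_guard<std::mutex> lock(mutex_);
    auto it = blobs_.find(key);
    return it == blobs_.end() ? nullptr : it->second;
  }

 private:
  mutable std::mutex mutex_;  // guards blobs_ against concurrent Compute calls
  std::unordered_map<std::string, std::shared_ptr<void>> blobs_;
};
```

The forward kernel would store its data under a key unique to its replica (for example, derived from the output variable it writes), and the backward kernel would look it up with the same key.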
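
Regarding point 2, the single-device idea could look roughly like the sketch below. `use_mkldnn` is shown as a plain parameter and the function is only a stand-in for what get_places_op does, not its real signature:

```cpp
// Sketch only: return a single CPU place when MKLDNN is in use, so that
// MKLDNN's internal OpenMP pool owns all cores, instead of one place per
// core as get_places_op currently produces.
#include <thread>
#include <vector>

struct CPUPlace { int device_id; };  // stand-in for platform::CPUPlace

std::vector<CPUPlace> GetCPUPlaces(bool use_mkldnn, int requested_count) {
  std::vector<CPUPlace> places;
  if (use_mkldnn) {
    places.push_back(CPUPlace{0});  // parallelism happens inside MKLDNN
    return places;
  }
  int count = requested_count > 0
                  ? requested_count
                  : static_cast<int>(std::thread::hardware_concurrency());
  for (int i = 0; i < count; ++i) {
    places.push_back(CPUPlace{i});
  }
  return places;
}
```

Otherwise, N ParallelDo replicas each spawning their own OpenMP team would oversubscribe the cores.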
