Unreasonable output of dropout_op with GPU kernel #8654

Closed
@guoshengCS

Description

There may be a bug in the GPU kernel of dropout_op.

I just found that dropout_op often outputs a tensor of all 0s, even for a large input. I tested dropout_op on GPU with a simple script (dropout_prob is 0.1 and the input is all 1s with shape [64, 32, 512]) and printed the max and min of the output tensor over 10 runs:

1.0 1.0
0.0 0.0
1.0 1.0
1.0 1.0
1.0 1.0
1.0 1.0
1.0 1.0
0.0 0.0
0.0 0.0
1.0 1.0
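
For comparison, below is a minimal NumPy sketch of what I would expect dropout to do here (my own reference, not the Paddle kernel; it assumes each element is dropped independently with probability dropout_prob and kept values are left unscaled, which matches the max of 1.0 above). With dropout_prob=0.1 over 64*32*512 ≈ 1M elements, the chance of every element being dropped in a single run is about 0.1^(10^6), so the 0.0 0.0 rows above cannot come from an independent per-element mask.

import numpy as np

def ref_dropout(x, dropout_prob, rng=np.random):
    # Independent Bernoulli keep-mask per element; kept values are left
    # unscaled, dropped values become 0 (reference sketch only).
    mask = (rng.uniform(size=x.shape) >= dropout_prob).astype(x.dtype)
    return x * mask

x = np.ones([64, 32, 512], dtype="float32")
for _ in range(10):
    out = ref_dropout(x, 0.1)
    # Expect max 1.0, min 0.0, and a zero fraction near 0.1 on every run.
    print "%s %s zero fraction: %.4f" % (np.max(out), np.min(out), 1.0 - np.mean(out))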

The test code is as follows:

import numpy as np

import paddle.v2 as paddle
import paddle.fluid as fluid


def program():
    # append_batch_size=False keeps the declared shape [64, 32, 512] as-is.
    x = fluid.layers.data(
        name='x', shape=[64, 32, 512], dtype='float32', append_batch_size=False)
    out = fluid.layers.dropout(x, dropout_prob=0.1, is_test=False)
    return out


def main():
    # place = fluid.CPUPlace()
    place = fluid.CUDAPlace(0)
    exe = fluid.Executor(place)
    out = program()

    # Feed an all-ones input, so the output should be a mix of 1s (kept)
    # and 0s (dropped).
    data_input = {}
    in_tensor = fluid.LoDTensor()
    in_tensor.set(np.ones([64, 32, 512], dtype="float32"), place)
    data_input['x'] = in_tensor

    for i in range(10):
        out_ = exe.run(fluid.framework.default_main_program(),
                       feed=data_input, fetch_list=[out])[0]
        print np.max(out_), np.min(out_)


if __name__ == "__main__":
    main()

Running the same code on CPU gives the expected outputs:

1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
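
If it helps triage, one extra line inside the fetch loop above makes the comparison quantitative: with an all-ones input, 1 - mean(out_) is exactly the fraction of dropped elements, which should stay near 0.1 on CPU but jumps to 1.0 on the failing GPU runs (a suggested addition, not part of the original script):

# Inside the for-loop of main(), after fetching out_:
zero_frac = 1.0 - np.mean(out_)  # input is all 1s, so mean(out_) is the keep rate
print "run %d: zero fraction %.4f" % (i, zero_frac)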
