I think so. I would also like to know why the backward propagation for max pooling does not use the cuDNN API.
And the forward pass doesn't use it either? And why is the condition in the `#ifdef` `CUDNN_DISABLED`? It's very strange.
extern "C" void forward_local_avgpool_layer_gpu(maxpool_layer layer, network_state state) { #ifdef CUDNN_DISABLED if (!state.train && layer.stride == layer.size) { // cudnnPoolingBackward cudnnStatus_t maxpool_status; float alpha = 1, beta = 0; maxpool_status = cudnnPoolingForward( cudnn_handle(), layer.poolingDesc, &alpha, layer.srcTensorDesc, state.input, &beta, layer.dstTensorDesc, layer.output_gpu); //maxpool_status = cudnnDestroyPoolingDescriptor(poolingDesc); //cudnnDestroyTensorDescriptor(layer.srcTensorDesc); //cudnnDestroyTensorDescriptor(layer.dstTensorDesc); } else #endif { int h = layer.out_h; int w = layer.out_w; int c = layer.out_c; size_t n = h*w*c*layer.batch; forward_local_avgpool_layer_kernel <<<cuda_gridsize(n), BLOCK, 0, get_cuda_stream() >>> (n, layer.h, layer.w, layer.c, layer.stride_x, layer.stride_y, layer.size, layer.pad, state.input, layer.output_gpu); CHECK_CUDA(cudaPeekAtLastError()); } }
Originally posted by @zzk2021 in #8302 (comment)
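For context on the question above: `#ifdef CUDNN_DISABLED` only compiles the cuDNN branch when the macro `CUDNN_DISABLED` is defined, and since the build never defines that macro, the `cudnnPoolingForward` path is dead code and the custom kernel always runs. If the backward pass were to use cuDNN, it would go through `cudnnPoolingBackward`. Below is a minimal sketch of what that could look like; the function name `backward_local_avgpool_layer_gpu`, the fields `layer.delta_gpu` / `state.delta`, and the `CHECK_CUDNN` macro are assumptions following darknet's conventions, not code from the repository. `cudnnPoolingBackward` itself is the real cuDNN API.

```c
// Hypothetical sketch of a cuDNN-backed backward pass for this pooling layer.
// Field names follow the conventions of the forward code above; this is NOT
// actual repository code.
extern "C" void backward_local_avgpool_layer_gpu(maxpool_layer layer, network_state state)
{
#ifdef CUDNN
    float alpha = 1, beta = 0;
    // cudnnPoolingBackward takes the forward output y, its gradient dy, and
    // the forward input x, and writes the gradient w.r.t. the input into dx.
    cudnnStatus_t status = cudnnPoolingBackward(
        cudnn_handle(),
        layer.poolingDesc,
        &alpha,
        layer.dstTensorDesc, layer.output_gpu,   // y:  forward output
        layer.dstTensorDesc, layer.delta_gpu,    // dy: gradient w.r.t. output
        layer.srcTensorDesc, state.input,        // x:  forward input
        &beta,
        layer.srcTensorDesc, state.delta);       // dx: gradient w.r.t. input
    CHECK_CUDNN(status);
#endif
}
```

Note that the same tensor descriptors from the forward pass can be reused here, since y/dy and x/dx share shapes.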
This looks unfinished.