
Reflect mode parameter from NNlib #53

Closed
wants to merge 4 commits into from

Conversation

ayush1999
Contributor

Added a mode parameter to the external functions in src/conv.jl (an option to choose between cross-correlation and convolution); these previously used mode = 0 by default.
I will create a corresponding PR in Flux too, to adapt to the changes made in this PR.

@ayush1999 ayush1999 reopened this Jul 6, 2018
@ayush1999
Contributor Author

@MikeInnes Won't it make more sense to get this merged to the julia-0.6 branch? (And similar for the corresponding PR in Flux)

@MikeInnes
Member

I think we need to also expose this parameter from the conv! family of functions.

Can you also create a crosscor! function, similar to conv!, that just calls conv! with flipkernel? I think we should document that rather than the flipkernel argument.

We'll need to implement this over both branches. If you target master, I can take care of merging into julia-0.6.

Member

@MikeInnes MikeInnes left a comment


Looks good, but this will need a couple of tests for crosscor, analogous to the ones for conv.

pad = 0, stride = 1, dilation = 1, flipkernel=true) where T =
conv2d!(y, x, w, padding = pad, stride = stride, dilation = dilation, flipkernel=flipkernel)

crosscor!(y::AbstractArray{T,4}, x::AbstractArray{T,4}, w::AbstractArray{T,4};
Member


Cross correlation should have flipkernel=false, or at least be different from conv!, no? I also think it'd be better to keep this generic to AbstractArrays and just forward whatever the arguments are to conv!.
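A minimal sketch of what this reviewer is asking for, with hypothetical signatures (the actual keyword handling lives in src/conv.jl, and the flipkernel value that selects cross-correlation follows the reviewer's suggestion here, not a confirmed API):

```julia
# Hypothetical sketch of the suggestion: stay generic over AbstractArray
# and forward all keyword arguments to conv!, overriding only flipkernel
# so that crosscor! computes cross-correlation rather than convolution.
# (In a Julia keyword call, the rightmost duplicate keyword wins, so
# flipkernel is forced to false even if the caller passes it.)
crosscor!(y::AbstractArray, x::AbstractArray, w::AbstractArray; kw...) =
    conv!(y, x, w; kw..., flipkernel = false)
```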

x, w, pad = pad_, stride = stride_, dilation = dilation)
x, w, pad = pad_, stride = stride_, dilation = dilation, flipkernel=flipkernel)
else
crosscor!(similar(x, cdims(size(x), dilation_dims(w, dilation), pad_, stride_)),
Member


It doesn't make sense to do an if/else here; you pass flipkernel through so removing it would behave the same way.

It would however be good to have an out-of-place crosscor function that calls crosscor!.
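An out-of-place crosscor along those lines might look like this sketch. The cdims and dilation_dims helpers are taken from the surrounding src/conv.jl; the pad/stride normalization that conv performs (the pad_/stride_ values in the diff above) is assumed and omitted here:

```julia
# Hypothetical sketch: allocate the output with the same shape logic
# conv uses, then delegate to the in-place crosscor!.
function crosscor(x::AbstractArray, w::AbstractArray;
                  pad = 0, stride = 1, dilation = 1)
    # cdims/dilation_dims are the shape helpers already defined in
    # src/conv.jl (assumed signatures); pad and stride would need the
    # same tuple normalization conv applies before this call.
    y = similar(x, cdims(size(x), dilation_dims(w, dilation), pad, stride))
    return crosscor!(y, x, w; pad = pad, stride = stride, dilation = dilation)
end
```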

src/conv.jl Outdated
@@ -91,7 +101,7 @@ function dcdims(x::NTuple{4,Int}, w::NTuple{4,Int}, pad, stride)
((x[1] + 2 * pad[1] - w[1])÷stride[1] + 1,(x[2] + 2 * pad[2] - w[2])÷stride[2] + 1,w[3]*w[4],x[4])
end

function depthwiseconv(x::A, w::A; pad = 0, stride = 1, flipkernel = true) where A<:AbstractArray
function depthwiseconv(x::A, w::A; pad = 0, stride = 1, flipkernel = false) where A<:AbstractArray
Member


Why is flipkernel false by default for depthwise conv, but true for conv?

@avik-pal avik-pal mentioned this pull request Sep 20, 2018
@MikeInnes
Member

Bump -- it would be good to have this in for #67. Perhaps @avik-pal can take a look at finishing it up.

@ayush1999
Contributor Author

@avik-pal Ping me if you need help finishing this. (I'll probably open a fresh PR.)

@staticfloat
Contributor

Superseded by #94

ToucheSir added a commit that referenced this pull request Feb 13, 2023
save allocs during algorithm search
3 participants