Layer normalisation does not work for images #406
I think the Julia way is not to represent images as a 4-D array. For example:

```julia
julia> Flux.Data.MNIST.images() |> summary
"60000-element Array{Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2},1}"
```
And how do you handle channels in general? Sometimes you have channels that don't represent images. Say I want to feed a stack of 4 RGB images for Atari reinforcement learning, so I would need 12 channels. How do you do it? It could be an `Array{Float32, 4}` of size `(84, 84, 12, batchsize)`, or...?
We do actually end up using 4D arrays for this, since that's what the convolutions take (and the format is documented more there). I suggest we just make …
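To illustrate the 4D convention the comment above refers to, here is a minimal sketch of stacking four RGB Atari frames into the width × height × channels × batch (WHCN) layout that Flux's convolutions expect. The `84×84` frame size and the 12-channel stacking follow the example in the question; they are illustrative assumptions, not part of any Flux API.

```julia
# Four hypothetical 84×84 RGB frames, as in the Atari example above.
frames = [rand(Float32, 84, 84, 3) for _ in 1:4]

# Concatenate along the channel dimension: 84×84×12.
stacked = cat(frames...; dims=3)

# Add a trailing batch dimension to get the WHCN layout Flux convolutions take.
x = reshape(stacked, 84, 84, 12, 1)

size(x)  # (84, 84, 12, 1)
```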
@skariel Yes, you're right; let me withdraw what I said... We do use … So yes, in a general deep learning framework, I think it might be more intuitive to convert the input images …
@MikeInnes sounds good; also the type would have to change from …
The layer uses the `normalise` (stateless) function as defined here. This function calculates mean and std on `dims=1`, but for images we need `dims=(1,2,3)`, leaving out only the batch dimension. The following function should work. Also, the type of `x` has to change in the function signature to allow for images: currently `x::AbstractVecOrMat` fails for e.g. `rand(Float32, 84, 84, 1, 1)`, since it allows only 1-D or 2-D arrays.
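The function referred to above was not captured in this transcript. A minimal sketch of what the comment describes, assuming a plain standalone `normalise` (not Flux's actual implementation), which accepts any array rank and reduces over all but the last (batch) dimension by default:

```julia
using Statistics

# Sketch: normalise over the given dims; defaults to everything but the
# batch (last) dimension, so a 84×84×C×N image batch reduces over dims=(1,2,3).
function normalise(x::AbstractArray; dims=1:ndims(x)-1, ϵ=1f-5)
    μ = mean(x; dims=dims)
    σ = std(x; dims=dims, mean=μ, corrected=false)
    (x .- μ) ./ (σ .+ ϵ)  # ϵ guards against division by zero
end
```

With this signature, `normalise(rand(Float32, 84, 84, 1, 1))` runs without the `AbstractVecOrMat` restriction, and each sample in the batch is normalised independently.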