Allow more options for Gradient calculation #215

Merged 4 commits into master on May 20, 2017

Conversation

oxinabox (Collaborator)

Solves #212

Possible issue with the implementation: node_name is now applied element-wise to AbstractVectors. It makes the implementation short and simple, but I'm not sure how I feel about it.
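
A minimal sketch of the element-wise behaviour described above, assuming node_name(t::Tensor) already exists for a single tensor (the tensors below are placeholders, not from the PR):

```julia
# The new method broadcasts the existing single-tensor node_name
# over each element of the vector:
node_name(xs::AbstractVector) = node_name.(xs)

# For placeholder tensors t1 and t2:
# node_name([t1, t2]) == [node_name(t1), node_name(t2)]
```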

codecov-io commented May 10, 2017

Codecov Report

Merging #215 into master will decrease coverage by 0.02%.
The diff coverage is 38.88%.

@@            Coverage Diff            @@
##           master    #215      +/-   ##
=========================================
- Coverage   62.92%   62.9%   -0.03%     
=========================================
  Files          48      48              
  Lines        3361    3367       +6     
=========================================
+ Hits         2115    2118       +3     
- Misses       1246    1249       +3
Impacted Files   Coverage Δ
src/py.jl        0% <0%> (ø) ⬆️
src/core.jl      84.13% <100%> (+0.06%) ⬆️

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 0988ad7...3ab5a9f.

oxinabox (Collaborator, Author):

Not sure why coverage is decreasing.
Is py.jl getting skipped because it is another process?

@@ -1370,15 +1370,37 @@ Base.haskey(graph::Graph, name) = isnull(get_node_by_name(graph, name))



node_name(::Void) = nothing
node_name(xs::AbstractVector)=node_name.(xs)

Collaborator:

Couldn't you just call node_name.(x) wherever you expect an array? On 0.6 at least that should behave the right way for a single tensor.
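
For reference, a minimal sketch of the 0.6 behaviour being referred to, using a hypothetical stand-in type rather than TensorFlow's Tensor:

```julia
# Stand-in for a single tensor-like object (hypothetical, for illustration).
struct Wrapped
    name::String
end

give_name(w::Wrapped) = w.name

# On Julia 0.6, broadcast treats a non-array argument as a scalar,
# so the same dotted call covers both the single and the vector case:
give_name.(Wrapped("a"))                  # == "a"
give_name.([Wrapped("a"), Wrapped("b")])  # == ["a", "b"]
```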

oxinabox (Collaborator, Author):

Can you? I'll test this.

oxinabox (Collaborator, Author):

Tested it; it doesn't work out, because we don't always know when to expect an array.
gradients(y, x, grad_y)
can have any of its three parameters as an array of Tensors or as a single Tensor (though if y is an array, grad_y must be as well).
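
For clarity, a sketch of the combinations this describes (tensor names are placeholders, not from the PR):

```julia
gradients(y, x, grad_y)                  # every argument a single Tensor
gradients(y, [x1, x2], grad_y)           # x as a vector of Tensors
gradients([y1, y2], x, [gy1, gy2])       # if y is a vector, grad_y must be too
```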

And on 0.5, at least, node_name.(x) for x::Tensor comes back with:

  MethodError: no method matching start(::TensorFlow.Tensor{Int32})

So with that trialed, I will merge this now, as is.

Collaborator:

Yes, this would only work on 0.6.

It may be a good idea to drop 0.5 fairly quickly, but that's up to you / Jon.

MikeInnes (Collaborator) left a comment:

Yesterday I struggled to get this working and thought I'd commented here – but now I've tried again and it worked first time ¯\_(ツ)_/¯

@malmaud seems like you may be busy, so I hope you don't mind if I approve this. If you don't object I may also take a look at some of Lyndon's other PRs.

malmaud (Owner) commented May 17, 2017 via email
