ReductionLayer #2089
Conversation
Force-pushed from 536cbc6 to 8995235 (Compare)
ReductionLayer

* jeffdonahue/reduction-layer:
  Add ReductionLayer to reduce any number of "tail" axes to a scalar value

Conflicts:
  src/caffe/proto/caffe.proto
```cpp
case ReductionParameter_ReductionOp_ASUM:
  *top_data = caffe_cpu_asum(dim_, bottom_data);
  break;
case ReductionParameter_ReductionOp_SUM_OF_SQUARES:
```
`SUM_OF_SQUARES` is a little unlike the other op names. `SUMSQ` fits in with `ASUM`. Your pick.
fair enough -- I'll change it to `SUMSQ`
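For context, the branch cut off in the diff above plausibly mirrors the `ASUM` case. A minimal sketch, assuming the `caffe_cpu_dot(n, x, y)` helper from `caffe/util/math_functions.hpp` and the renamed `SUMSQ` enum value (this is not the PR's verbatim code):

```cpp
// Sketch only: sum of squares computed as the dot product of the
// bottom blob with itself; caffe_cpu_dot(n, x, y) returns
// sum_i x[i] * y[i].
case ReductionParameter_ReductionOp_SUMSQ:
  *top_data = caffe_cpu_dot(dim_, bottom_data, bottom_data);
  break;
```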
Once the comments are addressed this looks fine, although there could be a test for reducing over a tail that isn't all dimensions. p.s. N-D blobs are nice.
Force-pushed from 8995235 to 96ac452 (Compare)
Thanks for all the reviews @shelhamer! Tests for non-zero axis added; will merge after Travis.
Force-pushed from 5228452 to 823d055 (Compare)
Currently implements operations `SUM`, `MEAN`, `ASUM` (sum of absolute values), and `SUMSQ` (sum of squares).
Wow... especially thanks for the suggestion to test other axes. I had a bug in Backward for
Tests are more trustworthy than me. Good catch!
Performs a "reduction" operation (currently `SUM`, `MEAN`, `ASUM` for sum of absolute values, and `SUMSQ` for sum of squares) to turn a number of "tail" axes into a single scalar value. The `MEAN` operation, in combination with a `loss_weight`, is useful for creating custom losses that don't have an obnoxious amount of output. For example, this `EuclideanLoss`:

...is equivalent to this `Reduction`:

(would be more efficient to do as a single `Reduction` with `SUM_OF_SQUARES` and a certain `coeff` setting, but then you have to compute the batch size and everything is less pretty...)

Eventually, this should support reduction along inner axes (e.g. support an `end_axis`), but that makes the implementation substantially more involved than this...
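The prototxt examples from the original description were not captured in this excerpt. As a hypothetical sketch of the pattern it contrasts (all layer and blob names here, such as `pred`, `label`, and `sq_diff`, are invented; the PR's actual examples may differ):

```
# Built-in loss form: EuclideanLoss computes 1/(2N) * sum((pred - label)^2),
# where N is the batch size.
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "pred"
  bottom: "label"
  top: "loss"
}
```

...versus a Reduction-based form that collapses a precomputed elementwise error blob into a scalar loss:

```
# Hypothetical: "sq_diff" is assumed to hold elementwise squared
# differences computed by earlier layers. Reduction with MEAN collapses
# the blob to a scalar; loss_weight makes that scalar part of the objective.
layer {
  name: "loss"
  type: "Reduction"
  bottom: "sq_diff"
  top: "loss"
  reduction_param { operation: MEAN }
  loss_weight: 1
}
```

Note the scalings differ: `MEAN` divides by the total element count, while `EuclideanLoss` divides by twice the batch size, which is exactly the `coeff`/batch-size bookkeeping the description calls less pretty.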