Keras-like API Advanced Activations, dropout and noise layers #2222

Merged: 36 commits, Feb 8, 2018.
Commits (36), all by Quincy2014:
24bc07b  Keras API for ELU (Jan 24, 2018)
cb68227  fix style check error (Jan 24, 2018)
8866a38  fix weightConverter (Jan 24, 2018)
0c92054  add one more unit test for ELU (Jan 24, 2018)
ac28817  remove blank line (Jan 24, 2018)
ecf21de  Keras API for LeakyReLU (Jan 24, 2018)
03e1ed5  remove useless empty lines in LeakyReLU (Jan 25, 2018)
866735d  add GaussianDropout (Jan 25, 2018)
c64eff5  add GaussianNoise (Jan 25, 2018)
50296a2  remove UID and unnecessary import (Jan 25, 2018)
3054fec  fix two Gaussian unit test (Jan 25, 2018)
c90ab99  add layer Masking (Jan 26, 2018)
d263320  add layer SpatialDropout1D (Jan 26, 2018)
9538c76  change 3D to 4D (Jan 26, 2018)
3ebaa40  Revert "change 3D to 4D" (Jan 26, 2018)
8e83192  change unit test from 4D to 3D (Jan 26, 2018)
ed6f307  add layer SpatialDropout2D (Jan 26, 2018)
218cd41  add layer PReLU. Unit test success without weight (Jan 26, 2018)
07fce1d  add 3D unit test for PReLU (Jan 26, 2018)
daac88b  add layer ParametricSoftPlus. Unit test success without weight (Jan 26, 2018)
437f478  add layer SpatialDropout3D (Jan 26, 2018)
8d2ecb0  add layer ThresholdedReLU (Jan 26, 2018)
081649f  fix the above problems (Jan 29, 2018)
411465f  fix problems (Jan 29, 2018)
b133247  add format lowercase to support both uppercase and lowercase (Jan 30, 2018)
f9f3b81  fix format problem (Jan 30, 2018)
082a310  SReLU (Feb 2, 2018)
6ce745c  add documentation and serializer (Feb 6, 2018)
8bfc875  remove a blank in documentation and change inputshape from var to val (Feb 7, 2018)
4392d45  delete four files (Feb 8, 2018)
5a75157  update (Feb 8, 2018)
32ff46e  modify (Feb 8, 2018)
9c596f2  modify problem (Feb 8, 2018)
f8beee3  modify (Feb 8, 2018)
6fabd80  update (Feb 8, 2018)
1ada8b3  modify style (Feb 8, 2018)
Changes from 1 commit: 50296a285b25d6576f3897197c6380588724cc53, "remove UID and unnecessary import" (Quincy2014, committed Feb 8, 2018)
ELU.scala
@@ -16,8 +16,6 @@

package com.intel.analytics.bigdl.nn.keras

import com.intel.analytics.bigdl._
import com.intel.analytics.bigdl.nn._
import com.intel.analytics.bigdl.nn.abstractnn._
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.tensor.TensorNumericMath.TensorNumeric
@@ -26,14 +24,13 @@ import com.intel.analytics.bigdl.utils.Shape
import scala.reflect.ClassTag


@SerialVersionUID( - 6274543584907751212L)
class ELU[T: ClassTag](val alpha: Double = 1.0,
var inputShape: Shape = null
)(implicit ev: TensorNumeric[T])
extends KerasLayer[Tensor[T], Tensor[T], T](KerasLayer.addBatch(inputShape)) {

override def doBuild(inputShape: Shape): AbstractModule[Tensor[T], Tensor[T], T] = {
val layer = nn.ELU(
val layer = com.intel.analytics.bigdl.nn.ELU(
alpha = alpha,
inplace = false
)
@@ -52,4 +49,3 @@ object ELU {
inputShape)
}
}
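For context on how these Keras-style wrappers are meant to be used, here is a minimal usage sketch. Only the `ELU(alpha, inputShape)` signature comes from the diff above; the `Sequential` container name and the `Shape` values are assumptions for illustration.

```scala
import com.intel.analytics.bigdl.nn.keras.{ELU, Sequential} // Sequential name assumed
import com.intel.analytics.bigdl.utils.Shape

// Keras-style model building: the first layer declares its input shape
// (excluding the batch dimension); later layers infer their shapes from it.
val model = Sequential[Float]()
model.add(ELU[Float](alpha = 1.0, inputShape = Shape(4, 5)))
```

`doBuild` then maps the wrapper onto the underlying `com.intel.analytics.bigdl.nn.ELU` with `inplace = false`, as shown in the diff.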

GaussianDropout.scala
@@ -16,24 +16,20 @@

package com.intel.analytics.bigdl.nn.keras

import com.intel.analytics.bigdl._
import com.intel.analytics.bigdl.nn._
import com.intel.analytics.bigdl.nn.abstractnn._
import com.intel.analytics.bigdl.optim.Regularizer
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.tensor.TensorNumericMath.TensorNumeric
import com.intel.analytics.bigdl.utils.Shape

import scala.reflect.ClassTag

@SerialVersionUID( 5198738230229027831L)
class GaussianDropout[T: ClassTag](val p: Double,
[Review comment from a Contributor] Add Scala docs for the newly added layers as well.
var inputShape: Shape = null
)(implicit ev: TensorNumeric[T])
extends KerasLayer[Tensor[T], Tensor[T], T](KerasLayer.addBatch(inputShape)) {

override def doBuild(inputShape: Shape): AbstractModule[Tensor[T], Tensor[T], T] = {
val layer = nn.GaussianDropout(
val layer = com.intel.analytics.bigdl.nn.GaussianDropout(
rate = p
)
layer.asInstanceOf[AbstractModule[Tensor[T], Tensor[T], T]]
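Responding to the Scala-doc request above, a sketch of what a doc header for `GaussianDropout` could look like. The wording is illustrative rather than taken from the PR; the described behaviour (multiplicative 1-centred Gaussian noise with standard deviation `sqrt(p / (1 - p))`, identity at inference time) follows the Keras layer of the same name.

```scala
/**
 * Keras-style GaussianDropout wrapper (doc text is a sketch, not from the PR).
 * During training, multiplies the input by 1-centred Gaussian noise with
 * standard deviation sqrt(p / (1 - p)); at inference time it is the identity.
 *
 * @param p          drop probability, as in regular Dropout (0 < p < 1)
 * @param inputShape input shape, excluding the batch dimension
 */
```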
GaussianNoise.scala
@@ -16,24 +16,21 @@

package com.intel.analytics.bigdl.nn.keras

import com.intel.analytics.bigdl._
import com.intel.analytics.bigdl.nn._

import com.intel.analytics.bigdl.nn.abstractnn._
import com.intel.analytics.bigdl.optim.Regularizer
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.tensor.TensorNumericMath.TensorNumeric
import com.intel.analytics.bigdl.utils.Shape

import scala.reflect.ClassTag

@SerialVersionUID( - 2224693793797534699L)
class GaussianNoise[T: ClassTag](val sigma: Double,
var inputShape: Shape = null
)(implicit ev: TensorNumeric[T])
extends KerasLayer[Tensor[T], Tensor[T], T](KerasLayer.addBatch(inputShape)) {

override def doBuild(inputShape: Shape): AbstractModule[Tensor[T], Tensor[T], T] = {
val layer = nn.GaussianNoise(
val layer = com.intel.analytics.bigdl.nn.GaussianNoise(
stddev = sigma
)
layer.asInstanceOf[AbstractModule[Tensor[T], Tensor[T], T]]
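`GaussianNoise` is the additive counterpart of `GaussianDropout`: the wrapped `nn.GaussianNoise` adds zero-mean Gaussian noise with standard deviation `sigma` during training only. A hypothetical usage sketch, assuming the companion `apply` mirrors the constructor as it does for `ELU`; the `Shape` values are made up.

```scala
import com.intel.analytics.bigdl.nn.keras.GaussianNoise
import com.intel.analytics.bigdl.utils.Shape

// Additive zero-mean noise with stddev 0.1 on a (batch, 4, 5) input;
// acts as a regularizer and is only applied during training.
val noise = GaussianNoise[Float](sigma = 0.1, inputShape = Shape(4, 5))
```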
LeakyReLU.scala
@@ -16,8 +16,6 @@

package com.intel.analytics.bigdl.nn.keras

import com.intel.analytics.bigdl._
import com.intel.analytics.bigdl.nn._
import com.intel.analytics.bigdl.nn.abstractnn._
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.tensor.TensorNumericMath.TensorNumeric
@@ -26,21 +24,21 @@ import com.intel.analytics.bigdl.utils.Shape
import scala.reflect.ClassTag


@SerialVersionUID( - 1470253389268877486L)
class LeakyReLU[T: ClassTag](private val alpha: Double = 0.01,
[Review thread on private val alpha]
Contributor: Why is this alpha private? cc @zhichao-li
Contributor: Please confirm this with the original author; we can open it up if there are no objections.
Author: In the original nn/LeakyReLU, negval is private, so I made alpha private here as well.
Contributor: cc @psyyz10 @qiuxin2012, any comments on this?
Contributor: It seems private can be deleted. cc @psyyz10
var inputShape: Shape = null
)(implicit ev: TensorNumeric[T])
extends KerasLayer[Tensor[T], Tensor[T], T](KerasLayer.addBatch(inputShape)) {

override def doBuild(inputShape: Shape): AbstractModule[Tensor[T], Tensor[T], T] = {
val layer = nn.LeakyReLU(
val layer = com.intel.analytics.bigdl.nn.LeakyReLU(
negval = alpha,
inplace = false
)
layer.asInstanceOf[AbstractModule[Tensor[T], Tensor[T], T]]
}
}


object LeakyReLU {

def apply[@specialized(Float, Double) T: ClassTag](
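Following the thread above, a sketch of the class with `private` removed so that `alpha` is exposed the same way as `ELU.alpha`. This reflects the reviewers' suggestion, not the commit under review; apart from that one change the body mirrors the diff.

```scala
package com.intel.analytics.bigdl.nn.keras

import com.intel.analytics.bigdl.nn.abstractnn._
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.tensor.TensorNumericMath.TensorNumeric
import com.intel.analytics.bigdl.utils.Shape

import scala.reflect.ClassTag

// Same as the diff above, except `alpha` is a public val per the review suggestion.
class LeakyReLU[T: ClassTag](val alpha: Double = 0.01,
                             var inputShape: Shape = null)(implicit ev: TensorNumeric[T])
  extends KerasLayer[Tensor[T], Tensor[T], T](KerasLayer.addBatch(inputShape)) {

  override def doBuild(inputShape: Shape): AbstractModule[Tensor[T], Tensor[T], T] = {
    val layer = com.intel.analytics.bigdl.nn.LeakyReLU(negval = alpha, inplace = false)
    layer.asInstanceOf[AbstractModule[Tensor[T], Tensor[T], T]]
  }
}
```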
ELUSpec.scala
@@ -38,7 +38,6 @@ class ELUSpec extends KerasBaseSpec{
seq.add(elu)
checkOutputAndGrad(seq.asInstanceOf[AbstractModule[Tensor[Float], Tensor[Float], Float]],
kerasCode)

}

"ELU 3D" should "be the same as Keras" in {
@@ -54,7 +53,5 @@
seq.add(elu)
checkOutputAndGrad(seq.asInstanceOf[AbstractModule[Tensor[Float], Tensor[Float], Float]],
kerasCode)

}
}
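These specs drive `checkOutputAndGrad` with a `kerasCode` string that builds the equivalent Keras model in Python; the strings themselves are collapsed out of this diff. Below is a sketch of the usual shape of such a snippet for the ELU case; the variable names follow the convention seen in other `KerasBaseSpec` tests and are assumptions here.

```scala
val kerasCode =
  """
    |input_tensor = Input(shape=[3])
    |input = np.random.uniform(0, 1, [1, 3])
    |output_tensor = ELU(alpha=1.0)(input_tensor)
    |model = Model(input=input_tensor, output=output_tensor)
  """.stripMargin
```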

LeakyReLUSpec.scala
@@ -38,7 +38,6 @@ class LeakyReLUSpec extends KerasBaseSpec{
seq.add(leakyrelu)
checkOutputAndGrad(seq.asInstanceOf[AbstractModule[Tensor[Float], Tensor[Float], Float]],
kerasCode)

}

"LeakyReLU 3D" should "be the same as Keras" in {