4 changes: 4 additions & 0 deletions doc/fluid/api/nn/activation.rst
@@ -5,4 +5,8 @@ activation
.. toctree::
:maxdepth: 1

activation/ELU.rst
activation/GELU.rst
activation/Hardshrink.rst
activation/ReLU.rst
activation/LogSigmoid.rst
39 changes: 0 additions & 39 deletions doc/fluid/api_cn/nn_cn/ReLU_cn.rst

This file was deleted.

4 changes: 4 additions & 0 deletions doc/fluid/api_cn/nn_cn/activation_cn.rst
@@ -8,7 +8,11 @@ activation
.. toctree::
:maxdepth: 1

activation_cn/ELU_cn.rst
activation_cn/GELU_cn.rst
activation_cn/Hardshrink_cn.rst
activation_cn/ReLU_cn.rst
activation_cn/LeakyReLU_cn.rst
activation_cn/LogSoftmax_cn.rst
activation_cn/Sigmoid_cn.rst
activation_cn/LogSigmoid_cn.rst
41 changes: 41 additions & 0 deletions doc/fluid/api_cn/nn_cn/activation_cn/ELU_cn.rst
@@ -0,0 +1,41 @@
.. _cn_api_nn_ELU:

ELU
-------------------------------
.. py:class:: paddle.nn.ELU(alpha=1.0, name=None)
Collaborator: The parameter list of ``ELU`` does not include ``x``.

Contributor Author: Done

ELU activation layer (ELU Activation Operator)

Applies the following computation to each element of the input Tensor, as described in `Exponential Linear Units <https://arxiv.org/abs/1511.07289>`_ .
Collaborator: A bare link here is not ideal; better to use the same format as GELU, for reference:
"Applies the following computation to each element of the input Tensor, as described in Exponential Linear Units <https://arxiv.org/abs/1511.07289>."

Contributor Author: Done

.. math::

    ELU(x) = \max(0, x) + \min(0, \alpha * (e^{x} - 1))

where :math:`x` is the input Tensor.

Parameters
::::::::::
- alpha (float, optional) - The alpha value of ELU. Default: 1.0.
- name (str, optional) - Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Shape
::::::::::
- input: Tensor of arbitrary shape.
- output: Tensor with the same shape as input.

Code Example
:::::::::

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.array([[-1, 6], [1, 15.6]]))
Contributor: Add a space after the commas?

Contributor Author: Done

    m = paddle.nn.ELU(0.2)
    out = m(x)
    # [[-0.12642411  6.  ]
    #  [ 1.         15.6]]
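
The documented values follow directly from the formula above. A minimal numpy cross-check (a sketch assuming only numpy; the array mirrors the example input):

.. code-block:: python

    import numpy as np

    def elu(x, alpha=1.0):
        # reference ELU: max(0, x) + min(0, alpha * (exp(x) - 1))
        return np.maximum(0, x) + np.minimum(0, alpha * (np.exp(x) - 1))

    x = np.array([[-1.0, 6.0], [1.0, 15.6]])
    print(elu(x, alpha=0.2))
    # [[-0.12642411  6.  ]
    #  [ 1.         15.6]]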
54 changes: 54 additions & 0 deletions doc/fluid/api_cn/nn_cn/activation_cn/GELU_cn.rst
@@ -0,0 +1,54 @@
.. _cn_api_nn_GELU:
Collaborator: This anchor should be [_cn_api_nn_GELU].

Contributor Author: Done

GELU
-------------------------------
.. py:class:: paddle.nn.GELU(approximate=False, name=None)
GELU activation layer (GELU Activation Operator)

For more details, please refer to `Gaussian Error Linear Units <https://arxiv.org/abs/1606.08415>`_ .

If approximate computation is used:

.. math::

    GELU(x) = 0.5 * x * (1 + tanh(\sqrt{\frac{2}{\pi}} * (x + 0.044715x^{3})))

If approximate computation is not used:

.. math::

    GELU(x) = 0.5 * x * (1 + erf(\frac{x}{\sqrt{2}}))

where :math:`x` is the input Tensor.

Parameters
::::::::::
- approximate (bool, optional) - Whether to use the approximate formula. Default: False, i.e. the exact erf-based formula is used.
Collaborator: It would be better to describe the behavior of False here, for reference:
"- approximate (bool, optional) - Whether to use approximate computation. Default: False, i.e. approximate computation is not used."

Contributor Author: Done

- name (str, optional) - Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Shape
::::::::::
- input: Tensor of arbitrary shape.
- output: Tensor with the same shape as input.

Code Example
:::::::::

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    data = np.random.randn(2, 3).astype("float32")
    x = paddle.to_tensor(data)
Collaborator: Could this use a non-random input? With a random input the printed output has no reference value.

Contributor Author: Done

    m = paddle.nn.GELU()
    out = m(x)
    # sample values (the input is random, so results differ from run to run):
    # data: [[ 0.87165993, -1.0541513 , -0.37214822],
    #        [ 0.15647964,  0.32496083,  0.33045998]]
    # out:  [[ 0.70456535, -0.15380788, -0.13207214],
    #        [ 0.08796856,  0.20387867,  0.2080159 ]]
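
The exact and approximate formulas above can be compared numerically. A small sketch using only numpy and the standard library (math.erf is applied element-wise):

.. code-block:: python

    import math
    import numpy as np

    def gelu_exact(x):
        # 0.5 * x * (1 + erf(x / sqrt(2)))
        return 0.5 * x * (1 + np.vectorize(math.erf)(x / math.sqrt(2)))

    def gelu_tanh(x):
        # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
        return 0.5 * x * (1 + np.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * x ** 3)))

    x = np.linspace(-3, 3, 13)
    print(np.abs(gelu_exact(x) - gelu_tanh(x)).max())  # tiny, on the order of 1e-3 or less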
36 changes: 36 additions & 0 deletions doc/fluid/api_cn/nn_cn/activation_cn/LogSigmoid_cn.rst
@@ -0,0 +1,36 @@
.. _cn_api_nn_LogSigmoid:

LogSigmoid
-------------------------------
.. py:class:: paddle.nn.LogSigmoid(name=None)
Collaborator: No ``x`` here either.

Contributor Author: Done

LogSigmoid activation layer. The formula is as follows:

.. math::

    LogSigmoid(x) = \log \frac{1}{1 + e^{-x}}

where :math:`x` is the input Tensor.

Parameters
::::::::::
- name (str, optional) - Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Shape
::::::::::
- input: Tensor of arbitrary shape.
- output: Tensor with the same shape as input.

Code Example
:::::::::

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.array([1.0, 2.0, 3.0, 4.0]))
    m = paddle.nn.LogSigmoid()
    out = m(x)  # [-0.313262, -0.126928, -0.048587, -0.018150]
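
The documented values can be checked with numpy, using the usual -log1p(exp(-x)) rewriting of the formula (a sketch assuming only numpy):

.. code-block:: python

    import numpy as np

    def logsigmoid(x):
        # log(1 / (1 + exp(-x))) == -log1p(exp(-x)), which is better behaved numerically
        return -np.log1p(np.exp(-x))

    print(logsigmoid(np.array([1.0, 2.0, 3.0, 4.0])))
    # [-0.31326169 -0.12692801 -0.04858735 -0.01814993]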
36 changes: 36 additions & 0 deletions doc/fluid/api_cn/nn_cn/activation_cn/ReLU_cn.rst
@@ -0,0 +1,36 @@
.. _cn_api_nn_ReLU:

ReLU
Collaborator: This should be [ReLU].

Contributor Author: Done

-------------------------------
.. py:class:: paddle.nn.ReLU(name=None)
Collaborator: There is no ``x`` here either.

Contributor Author: Done

ReLU activation layer (Rectified Linear Unit). The formula is as follows:

.. math::

    ReLU(x) = \max(0, x)

where :math:`x` is the input Tensor.

Parameters
::::::::::
- name (str, optional) - Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Shape
::::::::::
- input: Tensor of arbitrary shape.
- output: Tensor with the same shape as input.

Code Example
:::::::::

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.array([-2, 0, 1]).astype('float32'))
    m = paddle.nn.ReLU()
    out = m(x)  # [0., 0., 1.]
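
Since ReLU is just an element-wise maximum with zero, the documented output can be verified with a single numpy call (a trivial sketch, numpy only):

.. code-block:: python

    import numpy as np

    x = np.array([-2.0, 0.0, 1.0], dtype=np.float32)
    print(np.maximum(0, x))  # [0. 0. 1.]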
39 changes: 38 additions & 1 deletion doc/fluid/api_cn/nn_cn/elu_cn.rst
@@ -2,6 +2,43 @@

elu
-------------------------------
:doc_source: paddle.fluid.layers.elu

.. py:function:: paddle.nn.functional.elu(x, alpha=1.0, name=None)
elu activation layer (ELU Activation Operator)

Applies the following computation to each element of the input Tensor, as described in `Exponential Linear Units <https://arxiv.org/abs/1511.07289>`_ .

Collaborator: "Applies the following computation to each element of the input Tensor, as described in Exponential Linear Units <https://arxiv.org/abs/1511.07289>."

Contributor Author: Done

.. math::

    elu(x) = \max(0, x) + \min(0, \alpha * (e^{x} - 1))

where :math:`x` is the input Tensor.

Parameters
::::::::::
- x (Tensor) - The input ``Tensor``, with data type float32 or float64.
- alpha (float, optional) - The alpha value of ELU. Default: 1.0.
- name (str, optional) - Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Returns
::::::::::
A ``Tensor`` with the same data type and shape as ``x``.

Code Example
::::::::::

.. code-block:: python

    import paddle
    import paddle.nn.functional as F
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.array([[-1, 6], [1, 15.6]]))
    out = F.elu(x, alpha=0.2)
    # [[-0.12642411  6.  ]
    #  [ 1.         15.6]]
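
The functional form computes the same thing as the ``paddle.nn.ELU`` layer documented above; a quick equivalence check (a sketch, assuming the Paddle 2.x imperative API used throughout these examples):

.. code-block:: python

    import numpy as np
    import paddle
    import paddle.nn.functional as F

    paddle.disable_static()

    x = paddle.to_tensor(np.array([[-1.0, 6.0], [1.0, 15.6]]))
    layer_out = paddle.nn.ELU(0.2)(x)    # layer object
    func_out = F.elu(x, alpha=0.2)       # functional form
    print(np.allclose(layer_out.numpy(), func_out.numpy()))  # True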
50 changes: 49 additions & 1 deletion doc/fluid/api_cn/nn_cn/gelu_cn.rst
@@ -2,6 +2,54 @@

gelu
-------------------------------
:doc_source: paddle.fluid.layers.gelu

.. py:function:: paddle.nn.functional.gelu(x, approximate=False, name=None)
gelu activation layer (GELU Activation Operator)

Computes the GELU activation function element-wise. For more details, please refer to `Gaussian Error Linear Units <https://arxiv.org/abs/1606.08415>`_ .

If approximate computation is used:

.. math::

    gelu(x) = 0.5 * x * (1 + tanh(\sqrt{\frac{2}{\pi}} * (x + 0.044715x^{3})))

If approximate computation is not used:

.. math::

    gelu(x) = 0.5 * x * (1 + erf(\frac{x}{\sqrt{2}}))

where :math:`x` is the input Tensor.

Parameters
::::::::::
- x (Tensor) - The input ``Tensor``, with data type float32 or float64.
- approximate (bool, optional) - Whether to use the approximate formula. Default: False, i.e. approximate computation is not used.
Collaborator: Best to make the behavior of False explicit here, e.g.:
"- approximate (bool, optional) - Whether to use approximate computation. Default: False, meaning approximate computation is not used."

Contributor Author: Done

- name (str, optional) - Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Returns
::::::::::
A ``Tensor`` with the same data type and shape as ``x``.

Code Example
::::::::::

.. code-block:: python

    import paddle
    import paddle.nn.functional as F
    import numpy as np

    paddle.disable_static()

    data = np.random.randn(2, 3).astype("float32")
    x = paddle.to_tensor(data)
    out = F.gelu(x)
    # sample values (the input is random, so results differ from run to run):
    # data: [[ 0.87165993, -1.0541513 , -0.37214822],
    #        [ 0.15647964,  0.32496083,  0.33045998]]
    # out:  [[ 0.70456535, -0.15380788, -0.13207214],
    #        [ 0.08796856,  0.20387867,  0.2080159 ]]
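
To see the effect of ``approximate``, both variants can be run on the same input; the results should agree to roughly three decimal places (a sketch, assuming the ``F.gelu`` signature shown above):

.. code-block:: python

    import numpy as np
    import paddle
    import paddle.nn.functional as F

    paddle.disable_static()

    x = paddle.to_tensor(np.array([-2.0, -0.5, 0.0, 0.5, 2.0], dtype="float32"))
    exact = F.gelu(x)                     # erf-based formula
    approx = F.gelu(x, approximate=True)  # tanh-based approximation
    print(np.abs(exact.numpy() - approx.numpy()).max())  # small, ~1e-3 or less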
32 changes: 31 additions & 1 deletion doc/fluid/api_cn/nn_cn/logsigmoid_cn.rst
@@ -2,6 +2,36 @@

logsigmoid
-------------------------------
:doc_source: paddle.fluid.layers.logsigmoid

.. py:function:: paddle.nn.functional.logsigmoid(x, name=None)

logsigmoid activation layer. The formula is as follows:

.. math::

    logsigmoid(x) = \log \frac{1}{1 + e^{-x}}

where :math:`x` is the input Tensor.

Parameters
::::::::::
- x (Tensor) - The input ``Tensor``, with data type float32 or float64.
- name (str, optional) - Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Returns
::::::::::
A ``Tensor`` with the same data type and shape as ``x``.

Code Example
::::::::::

.. code-block:: python

    import paddle
    import paddle.nn.functional as F
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.array([1.0, 2.0, 3.0, 4.0]))
    out = F.logsigmoid(x)  # [-0.313262, -0.126928, -0.048587, -0.018150]