
[xdoctest] reformat example code with google style No.241-245 #56359

Merged · 6 commits · Aug 25, 2023

Changes from 1 commit
refactor: refine detail
PommesPeter committed Aug 17, 2023
commit 403205c8e3b2557a228c7c474fa2e6b1413c6228
35 changes: 18 additions & 17 deletions python/paddle/incubate/autograd/functional.py
@@ -220,7 +220,7 @@ class Jacobian:

>>> def func(x, y):
...     return paddle.matmul(x, y)
...
>>> x = paddle.to_tensor([[1., 2.], [3., 4.]])
>>> J = paddle.incubate.autograd.Jacobian(func, [x, x])
>>> print(J[:, :])
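
The hunk above is truncated; as added context, a minimal self-contained sketch of the Jacobian API it touches (not part of the diff; the [4, 8] shape is an assumption, from flattening the (2, 2) output against the two (2, 2) inputs):

.. code-block:: python

>>> import paddle

>>> def func(x, y):
...     return paddle.matmul(x, y)
...
>>> x = paddle.to_tensor([[1., 2.], [3., 4.]])
>>> J = paddle.incubate.autograd.Jacobian(func, [x, x])
>>> # Indexing evaluates the Jacobian lazily; func maps two (2, 2)
>>> # inputs to a (2, 2) output, so the flattened Jacobian is 4 x 8.
>>> print(J[:, :].shape)
[4, 8]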
@@ -284,21 +284,22 @@ class Hessian:

Examples:

.. code-block:: python

>>> import paddle

>>> def reducer(x):
...     return paddle.sum(x * x)
...
>>> x = paddle.rand([2, 2])
>>> h = paddle.incubate.autograd.Hessian(reducer, x)
>>> print(h[:])
Tensor(shape=[4, 4], dtype=float32, place=Place(gpu:0), stop_gradient=False,
       [[2., 0., 0., 0.],
        [0., 2., 0., 0.],
        [0., 0., 2., 0.],
        [0., 0., 0., 2.]])

"""

def __init__(self, func, xs, is_batched=False):
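
A hedged sanity check on the example above (a sketch, not part of the change): the Hessian of sum(x * x) is 2 * I for any x, so it can be compared against paddle.eye.

.. code-block:: python

>>> import paddle

>>> def reducer(x):
...     return paddle.sum(x * x)
...
>>> x = paddle.rand([2, 2])
>>> h = paddle.incubate.autograd.Hessian(reducer, x)
>>> # d^2/dx^2 of sum(x * x) is 2 * I for any x, so the 4 x 4 matrix
>>> # should be 2 on the diagonal and 0 elsewhere.
>>> print(paddle.allclose(h[:], 2.0 * paddle.eye(4)).item())
True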
@@ -615,7 +616,7 @@ def _separate(xs):
.. code-block:: python

>>> import paddle
- >>> from paddle.autograd.functional import _separate
+ >>> from paddle.incubate.autograd.functional import _separate

>>> def func(x, y):
...     return x * y
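
The _separate hunk is truncated here; as a rough illustration of the idea it serves (keeping gradients computed inside Jacobian/Hessian off the caller's graph; a sketch of the motivation, not the actual implementation):

.. code-block:: python

>>> import paddle

>>> x = paddle.to_tensor([1., 2.], stop_gradient=False)
>>> # work on a detached clone so backward() below cannot
>>> # accumulate gradients onto the caller's tensor
>>> x_sep = x.detach().clone()
>>> x_sep.stop_gradient = False
>>> (x_sep * x_sep).sum().backward()
>>> print(x_sep.grad is not None, x.grad is None)
True True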
7 changes: 4 additions & 3 deletions python/paddle/incubate/autograd/primreg.py
@@ -77,6 +77,7 @@ def op_position_inputs(op):

Examples:
.. code-block:: python

>>> from paddle.incubate.autograd.primops import _simple_binop
>>> from paddle.fluid.layer_helper import LayerHelper
>>> from paddle.incubate.autograd.primreg import REGISTER_FN
@@ -131,7 +132,7 @@ def op_position_output(op):
>>> @REGISTER_FN('div_p', 'X', 'Y', 'Z')
>>> def div(x, y, out=None):
...     return _simple_binop(LayerHelper('div_p', **locals()))
...

The registered output for div_p is ['Z'], so this function
returns the op's Z output.
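
A minimal sketch of the name-registry mechanism that op_position_inputs and op_position_output rely on, as described above (an illustrative re-implementation, not Paddle's actual code):

.. code-block:: python

>>> # Illustrative registry: REGISTER_FN('div_p', 'X', 'Y', 'Z') records
>>> # ['X', 'Y'] as input names and ['Z'] as the output name.
>>> _positions = {}
>>> def register(op_type, *args):
...     # last name is the output, the rest are inputs (illustrative rule)
...     _positions[op_type] = {'inputs': list(args[:-1]), 'outputs': [args[-1]]}
...
>>> register('div_p', 'X', 'Y', 'Z')
>>> print(_positions['div_p']['outputs'])
['Z']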

@@ -328,7 +329,7 @@ def REGISTER_JVP(op_type):
>>> @REGISTER_JVP('add_p')
>>> def add_jvp(op, x_dot, y_dot):
...     return primops.add(x_dot, y_dot)
...

"""
if not isinstance(op_type, str):
raise TypeError(f'op_type must be str, but got {type(op_type)}.')
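
The rule registered in this hunk encodes forward-mode differentiation of add: for z = x + y, the tangent is z_dot = x_dot + y_dot. A plain-Python finite-difference check of that rule (a Paddle-free sketch):

.. code-block:: python

>>> def add(x, y):
...     return x + y
...
>>> def add_jvp(x_dot, y_dot):
...     # the tangent of x + y is the sum of the input tangents
...     return x_dot + y_dot
...
>>> eps = 1e-6
>>> fd = (add(1.0 + eps * 1.0, 2.0 + eps * 3.0) - add(1.0, 2.0)) / eps
>>> print(abs(fd - add_jvp(1.0, 3.0)) < 1e-4)
True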
@@ -366,7 +367,7 @@ def REGISTER_TRANSPOSE(op_type):
>>> @REGISTER_TRANSPOSE('add_p')
>>> def add_transpose(op, z_bar):
...     return z_bar, z_bar
...

"""
if not isinstance(op_type, str):
raise TypeError(f'op_type must be str, but got {type(op_type)}.')
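
The transpose rule is the reverse-mode counterpart of the JVP above: add is linear, so its transpose routes the output cotangent z_bar to both inputs unchanged. A Paddle-free sketch of that bookkeeping:

.. code-block:: python

>>> def add_transpose(z_bar):
...     # z = x + y is linear, so both inputs receive z_bar unchanged
...     return z_bar, z_bar
...
>>> x_bar, y_bar = add_transpose(5.0)
>>> print(x_bar, y_bar)
5.0 5.0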