
【Hackathon 5th No.33】Add atleast_1d / atleast_2d / atleast_3d APIs to Paddle #679

Merged
merged 3 commits into from
Oct 12, 2023

Conversation

megemini
Contributor

@megemini megemini commented Oct 2, 2023

PR types

Others

PR changes

Docs

Description

【Hackathon 5th No.33】Add atleast_1d / atleast_2d / atleast_3d APIs to Paddle

Please review!

@paddle-bot

paddle-bot bot commented Oct 2, 2023

Your PR has been submitted. Thanks for your contribution!
Please check its format and content. For this, you can refer to Template and Demo.


Parameters:

- inputs: (Tensor|list(Tensor)) - One or more input Tensors. Supported data types: float32, float64, int32, int64.
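For reference, NumPy ships `atleast_1d` / `atleast_2d` / `atleast_3d` with the same shape-promotion semantics this PR proposes for Paddle (assumption: the Paddle APIs mirror NumPy's behavior, which the transcripts below are consistent with). A minimal sketch of that behavior:

```python
import numpy as np

# NumPy's atleast_* family illustrates the intended semantics:
# inputs are promoted to *at least* N dimensions, never reduced.
a = np.atleast_1d(5.0)                  # 0-D scalar -> shape (1,)
b = np.atleast_2d(np.array([1, 2, 3]))  # shape (3,) -> (1, 3)
c = np.atleast_3d(np.ones((2, 3)))      # shape (2, 3) -> (2, 3, 1)

# Passing several inputs returns a list, one promoted array each.
outs = np.atleast_1d(np.float32(3.0), np.int64(2))
```

Note the per-function placement of the new axes: `atleast_2d` prepends a leading axis to 1-D input, while `atleast_3d` appends a trailing axis to 2-D input.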
Collaborator

Supported data types: float32, float64, int32, int64.

Regarding dtypes: float16, uint16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128, bfloat16 — are all of these supported?

Contributor Author

Let me double-check this. Many methods in manipulation.py list only float32, float64, int32, and int64, so this entry follows that convention ~

Once confirmed, the supported dtypes will also be added to the unit tests ~

@megemini
Contributor Author

Update 20231012

A quick test of the supported data types:

In [32]: atleast_1d(float16, uint16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128, bfloat16)
    ...: 
Out[32]: 
[Tensor(shape=[1], dtype=float16, place=Place(cpu), stop_gradient=True,
        [0.30004883]),
 Tensor(shape=[1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
        [23.]),
 Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True,
        [3.]),
 Tensor(shape=[1], dtype=float64, place=Place(cpu), stop_gradient=True,
        [23.]),
 Tensor(shape=[1], dtype=int8, place=Place(cpu), stop_gradient=True,
        [2]),
 Tensor(shape=[1], dtype=int16, place=Place(cpu), stop_gradient=True,
        [2]),
 Tensor(shape=[1], dtype=int32, place=Place(cpu), stop_gradient=True,
        [2]),
 Tensor(shape=[1], dtype=int64, place=Place(cpu), stop_gradient=True,
        [2]),
 Tensor(shape=[1], dtype=uint8, place=Place(cpu), stop_gradient=True,
        [2]),
 Tensor(shape=[1], dtype=complex64, place=Place(cpu), stop_gradient=True,
        [(1+1j)]),
 Tensor(shape=[1], dtype=complex128, place=Place(cpu), stop_gradient=True,
        [(1+1j)]),
 Tensor(shape=[1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
        [0.29882812])]

In [33]: atleast_2d(float16, uint16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128, bfloat16)
    ...: 
Out[33]: 
[Tensor(shape=[1, 1], dtype=float16, place=Place(cpu), stop_gradient=True,
        [[0.30004883]]),
 Tensor(shape=[1, 1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
        [[23.]]),
 Tensor(shape=[1, 1], dtype=float32, place=Place(cpu), stop_gradient=True,
        [[3.]]),
 Tensor(shape=[1, 1], dtype=float64, place=Place(cpu), stop_gradient=True,
        [[23.]]),
 Tensor(shape=[1, 1], dtype=int8, place=Place(cpu), stop_gradient=True,
        [[2]]),
 Tensor(shape=[1, 1], dtype=int16, place=Place(cpu), stop_gradient=True,
        [[2]]),
 Tensor(shape=[1, 1], dtype=int32, place=Place(cpu), stop_gradient=True,
        [[2]]),
 Tensor(shape=[1, 1], dtype=int64, place=Place(cpu), stop_gradient=True,
        [[2]]),
 Tensor(shape=[1, 1], dtype=uint8, place=Place(cpu), stop_gradient=True,
        [[2]]),
 Tensor(shape=[1, 1], dtype=complex64, place=Place(cpu), stop_gradient=True,
        [[(1+1j)]]),
 Tensor(shape=[1, 1], dtype=complex128, place=Place(cpu), stop_gradient=True,
        [[(1+1j)]]),
 Tensor(shape=[1, 1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
        [[0.29882812]])]

In [34]: atleast_3d(float16, uint16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128, bfloat16)
    ...: 
Out[34]: 
[Tensor(shape=[1, 1, 1], dtype=float16, place=Place(cpu), stop_gradient=True,
        [[[0.30004883]]]),
 Tensor(shape=[1, 1, 1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
        [[[23.]]]),
 Tensor(shape=[1, 1, 1], dtype=float32, place=Place(cpu), stop_gradient=True,
        [[[3.]]]),
 Tensor(shape=[1, 1, 1], dtype=float64, place=Place(cpu), stop_gradient=True,
        [[[23.]]]),
 Tensor(shape=[1, 1, 1], dtype=int8, place=Place(cpu), stop_gradient=True,
        [[[2]]]),
 Tensor(shape=[1, 1, 1], dtype=int16, place=Place(cpu), stop_gradient=True,
        [[[2]]]),
 Tensor(shape=[1, 1, 1], dtype=int32, place=Place(cpu), stop_gradient=True,
        [[[2]]]),
 Tensor(shape=[1, 1, 1], dtype=int64, place=Place(cpu), stop_gradient=True,
        [[[2]]]),
 Tensor(shape=[1, 1, 1], dtype=uint8, place=Place(cpu), stop_gradient=True,
        [[[2]]]),
 Tensor(shape=[1, 1, 1], dtype=complex64, place=Place(cpu), stop_gradient=True,
        [[[(1+1j)]]]),
 Tensor(shape=[1, 1, 1], dtype=complex128, place=Place(cpu), stop_gradient=True,
        [[[(1+1j)]]]),
 Tensor(shape=[1, 1, 1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
        [[[0.29882812]]])]

Apart from uint16 being converted to bfloat16 by default, no issues have been found so far. The docs have been updated accordingly, and the data types covered by the unit tests have been extended.
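The dtype-coverage check above can be sketched as a small parametrized loop. This uses NumPy as a stand-in (an assumption for illustration; the actual Paddle unit test would construct `paddle.Tensor` inputs and call `paddle.atleast_1d` instead), verifying that a 0-D input is promoted to 1-D while its dtype is preserved:

```python
import numpy as np

# Dtypes to cover, per the review discussion (uint16/bfloat16 excluded
# here: NumPy has no bfloat16, and in Paddle uint16 maps to bfloat16).
dtypes = [np.float16, np.float32, np.float64,
          np.int8, np.int16, np.int32, np.int64,
          np.uint8, np.complex64, np.complex128]

results = []
for dt in dtypes:
    x = np.array(2, dtype=dt)     # 0-D input of the given dtype
    y = np.atleast_1d(x)
    results.append((y.shape, y.dtype))
```

Each result should be `((1,), dt)`: promotion adds an axis but never touches the element type.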

@luotao1 Please review~
