【Hackathon 5th No.31】Add column_stack / row_stack / dstack / hstack / vstack APIs to Paddle -part #59127
Merged
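For context, the five APIs added by this PR follow NumPy's stacking semantics (this is an assumption based on the API names; the sketch below uses NumPy itself to illustrate the expected shapes, since `row_stack` is an alias of `vstack` there as well):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

col = np.column_stack((x, y))  # 1-D inputs become columns -> shape (3, 2)
v = np.vstack((x, y))          # stack as rows -> shape (2, 3)
h = np.hstack((x, y))          # 1-D inputs concatenate end to end -> shape (6,)
d = np.dstack((x, y))          # stack along a third axis -> shape (1, 3, 2)

print(col.shape, v.shape, h.shape, d.shape)
```

`row_stack` behaves identically to `vstack`, which is why it is not shown separately.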
Commits (20)
9e82e3c [Init] add stack extension api
0306098 [Add] unittest for stack extension
032e3bb [Update] docstring
5837495 [Add] column and row stack docstring
f74e989 [Add] add set_device
02c883d [Add] expose api with __init__
af155df Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
3728a62 [Change] use concat for column_stack
5136f77 [Change] test_with_pir_api
b8f6da9 [Change] ir_backward
3fe3ee5 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
63cddd7 [Change] test for pir
60459fc [Change] pir grad from y to x
d5cfe51 [Change] not check grad for old ir
b5209ec Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
4cf0da6 [Change] only check pir
c11976a [Change] remove redundance test case
48748ca Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
cc34a7a [Update] test for win32
28fa67f [Update] remove print debuug
(all commits authored by megemini)
Reviewer: Does this produce the same computation result as torch?
megemini: Borrowing torch's official example: they should be the same ~
stack does not have the kind of input problem that atleast_xd has; the stack family takes only a single sequence input 🤗 ~
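The signature difference mentioned here can be shown with NumPy (an illustrative stand-in; the Paddle APIs in this PR are assumed to take the same single-sequence argument): `atleast_2d` accepts a variable number of arrays, while the stack family takes one sequence.

```python
import numpy as np

a, b = np.array([1, 2, 3]), np.array([4, 5, 6])

# atleast_2d takes varargs and returns one array per input
x, y = np.atleast_2d(a, b)

# the stack family takes a single sequence of arrays
out = np.vstack((a, b))

print(x.shape, y.shape, out.shape)
```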
Reviewer: I ask because your computation logic looks slightly different from torch's: if the computation logic differs, the rationale for it needs to be explained.
megemini: To clarify: torch's second line is
return cat(aligned_tensors, 1)
whereas here I used
return paddle.hstack(arrays, name=name)
hstack does have special handling for ndim = 0, but the inputs at this point always have ndim > 0, so it behaves the same as return cat(aligned_tensors, 1) ~ I'll change it anyway, though ... ... 😅
p.s. Now I remember: at the time I used hstack rather than concat because column_stack and row_stack are the counterpart implementations of hstack and vstack; row_stack uses vstack, so column_stack used hstack ~ That said, hstack may indeed incur a performance cost, so it has been changed ~ 👍
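The equivalence being argued above can be sketched in NumPy (an illustration, not the Paddle implementation; the column-alignment step is modeled on the torch snippet quoted in the discussion): once every input has been aligned to 2-D columns, hstack on the aligned tensors and concatenation along axis 1 coincide.

```python
import numpy as np

a = np.array([1, 2, 3])                 # 1-D: becomes a (3, 1) column
b = np.array([[4, 7], [5, 8], [6, 9]])  # already 2-D: kept as-is

# column_stack-style alignment: 1-D inputs are reshaped into columns
aligned = [t.reshape(-1, 1) if t.ndim < 2 else t for t in (a, b)]

# for 2-D inputs, hstack and concatenate(axis=1) are identical
out_concat = np.concatenate(aligned, axis=1)
out_hstack = np.hstack(aligned)

print(out_concat.shape)
```

Calling concatenate directly simply skips hstack's extra ndim checks, which is where the performance argument for the change comes from.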