This repository was archived by the owner on Jan 24, 2024. It is now read-only.

Add the Optimize API. #750

Merged
merged 3 commits into PaddlePaddle:develop on Apr 20, 2022

Conversation

wzzju
Collaborator

@wzzju wzzju commented Apr 20, 2022

Add the Optimize API, by which we can move pass optimizations from Paddle to CINN.

@paddle-bot-old

Thanks for your contribution!

CtfGo
CtfGo previously approved these changes Apr 20, 2022
Contributor

@CtfGo CtfGo left a comment


LGTM for Optimize interface

Xreki
Xreki previously approved these changes Apr 20, 2022
Collaborator

@Xreki Xreki left a comment


LGTM

@wzzju wzzju dismissed stale reviews from Xreki and CtfGo via 74f465e April 20, 2022 13:33
Contributor

@CtfGo CtfGo left a comment


LGTM

Collaborator

@Xreki Xreki left a comment


LGTM

@wzzju wzzju merged commit eedb801 into PaddlePaddle:develop Apr 20, 2022
haozech added a commit that referenced this pull request Apr 27, 2022
* fix fusion pass (#737)

* Fix Fusion Pass

* fix auto-tune header export (#747)

* Add the op mapper for fill_any_like. (#745)

* Add the Optimize API. (#750)

* Add the Optimize API.

* Fix some pass applying errors in gemm_rewriter_test.

* Use DefaultTrainingOptimizeOptions instead of DefaultOptimizeOptions.

* Use cublas gemm instead of matmul. (#735)

* Rewrite the left single matmul, use cublas call instead.

* Simplify the code logic.

* Add FLAGS_cinn_use_new_fusion_pass to switch to the new op fusion and fusion merge pass. (#751)

* Add scatter_add op (#738)

* add scatter_add base op

* add some annotation

* rename index_assign to scatter_assign (#739)

* rename index_assign to scatter_assign

* opmapper keep index_assign

* add gather/scatter_add opmapper and change attr dtype to int64 (#743)

* add gather/scatter_add opmapper and change attr dtype to int64

* Fix Fusion Pass And Lowering For BN And Laplace Model (#746)

* Fix Fusion Pass For BN And Laplace Model

* change arg arr of cuda kernel to vector<void*> (#757)

* WIP switch ir schedule (#740)

* update temp buffer and enable vectorize/unroll after ir schedule (#748)

* Add new schedule and fix bugs (#752)

* add new schedule

* fix bugs

add conv2d schedule (#760)

Co-authored-by: sunli <sunli_hit@outlook.com>
Co-authored-by: TeFeng Chen <ctfeng66@163.com>
Co-authored-by: Zhen Wang <wangzhen31@baidu.com>
Co-authored-by: Yiqun Liu <liuyiqun01@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: wangone <2279939962@qq.com>