This repository was archived by the owner on Jan 24, 2024. It is now read-only.

Conversation


@Xreki Xreki commented Apr 21, 2022

Add two FLAGS to make debugging easier:

  1. FLAGS_cinn_use_new_fusion_pass: switches to the new OpFusionPass + FusionMerge; the default value is false.

  2. FLAGS_cinn_sync_run: inserts a stream synchronization after each Instruction->Run call, for debugging; the default value is false.

@paddle-bot-old

Thanks for your contribution!

@Xreki Xreki requested review from SunNy820828449 and wzzju April 21, 2022 06:38
@Xreki Xreki force-pushed the switch_new_fusion_pass branch from 6001ac2 to 3fbd0aa on April 25, 2022 07:42
@Xreki Xreki force-pushed the switch_new_fusion_pass branch from 3edc77a to 7ead49b on April 25, 2022 08:53
wzzju
wzzju previously approved these changes Apr 25, 2022
@wzzju wzzju dismissed their stale review April 25, 2022 12:36

Not Approve.


@wzzju wzzju left a comment


LGTM

@Xreki Xreki merged commit 2cfff93 into PaddlePaddle:develop Apr 26, 2022
@Xreki Xreki changed the title from "Switch to the new op fusion and fusion merge pass." to "Add FLAGS_cinn_use_new_fusion_pass to switch to the new op fusion and fusion merge pass." Apr 26, 2022
@Xreki Xreki deleted the switch_new_fusion_pass branch April 26, 2022 01:52
haozech added a commit that referenced this pull request Apr 27, 2022
* fix fusion pass (#737)

* Fix Fusion Pass

* fix auto-tune header export (#747)

* Add the op mapper for fill_any_like. (#745)

* Add the Optimize API. (#750)

* Add the Optimize API.

* Fix some pass applying errors in gemm_rewriter_test.

* Use DefaultTrainingOptimizeOptions instead of DefaultOptimizeOptions.

* Use cublas gemm instead of matmul. (#735)

* Rewrite the left single matmul, use cublas call instead.

* Simplify the code logic.

* Add FLAGS_cinn_use_new_fusion_pass to switch to the new op fusion and fusion merge pass. (#751)

* Add scatter_add op (#738)

* add scatter_add base op

* add some annotation

* rename index_assign to scatter_assign (#739)

* rename index_assign to scatter_assign

* opmapper keep index_assign

* add gather/scatter_add opmapper and change attr dtype to int64 (#743)

* add gather/scatter_add opmapper and change attr dtype to int64

* Fix Fusion Pass And Lowering For BN And Laplace Model (#746)

* Fix Fusion Pass For BN And Laplace Model

* change arg arr of cuda kernel to vector<void*> (#757)

* WIP switch ir schedule (#740)

* update temp buffer and enable vectorize/unroll after ir schedule (#748)

* Add new schedule and fix bugs (#752)

* add new schedule

* fix bugs

* add conv2d schedule (#760)

Co-authored-by: sunli <sunli_hit@outlook.com>
Co-authored-by: TeFeng Chen <ctfeng66@163.com>
Co-authored-by: Zhen Wang <wangzhen31@baidu.com>
Co-authored-by: Yiqun Liu <liuyiqun01@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: wangone <2279939962@qq.com>
zhhsplendid pushed a commit that referenced this pull request Jun 9, 2022