[AutoScheduler] New layout rewrite option: Weight pre-transpose #6750

Merged
merged 15 commits into apache:main from jcf94:weight_pre_pack on Nov 2, 2020

Conversation

@jcf94 (Contributor) commented Oct 24, 2020

In Ansor, we have an optimization called "layout rewrite", which modifies the input weight of a specific op according to its schedule to get better performance; a previous PR about this is #6297.

This PR adds another option for this feature: besides directly modifying the input placeholder, we now support inserting a transpose stage between the placeholder and the compute op.
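
For illustration, here is a minimal TE sketch of what an inserted transform stage looks like; the shapes and the (N // 16, K, 16) packing are made up for the example, not taken from the PR. The weight placeholder keeps its original shape, and an extra compute stage produces the rewritten layout that the main op consumes:

import tvm
from tvm import te

M, K, N = 128, 128, 128
A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")  # the weight keeps its original (K, N) shape

# Inserted transform stage: pack B into a hypothetical (N // 16, K, 16) layout.
B_packed = te.compute(
    (N // 16, K, 16), lambda no, k, ni: B[k, no * 16 + ni], name="B_packed"
)

# The main compute op reads the packed layout instead of B directly.
k = te.reduce_axis((0, K), name="k")
C = te.compute(
    (M, N),
    lambda i, j: te.sum(A[i, k] * B_packed[j // 16, k, j % 16], axis=k),
    name="C",
)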

Others:

  • Add a CopyOnWrite method for transform_steps
  • Some small fixes

cc @merrymercy @comaniac @minminsun

@junrushao (Member)

Hey, thank you for the contribution! May I know the difference between layout rewrite and this weight pre-transpose? Looks like weight pre-transpose can be done in compile time, so why do we insert a stage instead? Thanks a lot!

An inline review comment on this code:

* \brief Several options for applying layout rewrite.
* This is a optimization to rewrite the shape of input tensor according to the schedule we get.
*/
enum class LayoutRewriteOption : int {
enum class LayoutRewriteOption : uint8 should be enough.

@FrozenGene (Member)
> Hey, thank you for the contribution! May I know the difference between layout rewrite and this weight pre-transpose? Looks like weight pre-transpose can be done in compile time, so why do we insert a stage instead? Thanks a lot!

This keeps the weight shape the same as before. However, I am also curious what benefit this will bring.

(Resolved review threads on include/tvm/auto_scheduler/transform_step.h, python/tvm/auto_scheduler/compute_dag.py, and src/auto_scheduler/compute_dag.cc.)
merrymercy self-assigned this on Oct 27, 2020
@merrymercy (Member) commented Oct 27, 2020

The code looks good to me, but the naming is not intuitive. For example, the "RewriteWithPlaceholder" option in your code actually accepts "PreTransposed" tensors as inputs.
The key difference between the two options is whether new stages are added. We can make the names much clearer as follows.

/*!
 * \brief Options for applying layout rewrite.
 * This is an optimization to rewrite the layout of input tensors according to the schedule we get.
 */
enum class LayoutRewriteOption : int {
  /*! \brief Do not process layout rewrite. */
  NoRewrite = 0,
  /*! \brief Insert layout transformation stages for input placeholders in the compute DAG */
  InsertTransformStage = 1,
  /*!
   * \brief Do not insert layout transformation stages and assume the input placeholders
   * are pre-transformed.
   * \note The lowered function with this option does not accept the original input shapes,
   * so this option must be used along with a layout conversion pass in Relay.
   */
  RewriteForPreTransformed = 2,
};

In addition, I prefer "transform" over "transpose" because we can support other kinds of rewriting besides the current simple "transpose".
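
For reference, a sketch of how the Python side (python/tvm/auto_scheduler/compute_dag.py) could mirror these values; this is an assumption for illustration, not a quote of the merged code:

class LayoutRewriteOption:
    """Options for applying layout rewrite (mirrors the C++ enum)."""

    NO_REWRITE = 0
    INSERT_TRANSFORM_STAGE = 1
    REWRITE_FOR_PRE_TRANSFORMED = 2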

@jcf94 (Contributor, author) commented Oct 27, 2020

@junrushao1994 @FrozenGene Thanks!
Our former implementation of layout rewrite is used for end-to-end Relay integration, where we modify the input tensor directly, since all of this can be treated as part of the pre-processing.
Now we have a simpler requirement: we want to take a well-scheduled kernel out of TVM and apply it somewhere else (outside the TVM runtime). We want to keep the input/output shapes unchanged so it can be used as an out-of-the-box kernel, while still benefiting from the layout rewrite optimization.
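
As a pure-NumPy analogy of this use case (no TVM API; the 16-wide packing and all names here are made up for illustration), the kernel accepts the original weight layout and repacks it internally, so callers never see the rewritten layout:

import numpy as np

def kernel_with_internal_transform(a, b):
    # Inserted "transform stage": pack b from (K, N) to (N // 16, K, 16).
    K, N = b.shape
    b_packed = b.T.reshape(N // 16, 16, K).transpose(0, 2, 1)
    # The main compute consumes the packed layout, one 16-column block at a time.
    out = np.empty((a.shape[0], N), dtype=a.dtype)
    for no in range(N // 16):
        out[:, no * 16:(no + 1) * 16] = a @ b_packed[no]
    return out

a = np.random.rand(8, 32).astype("float32")
b = np.random.rand(32, 64).astype("float32")
np.testing.assert_allclose(kernel_with_internal_transform(a, b), a @ b, rtol=1e-5)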

@comaniac (Contributor) left a review:

I don't have other comments. Thanks!

@merrymercy (Member)

@jcf94 Please fix the test cases.

@merrymercy (Member)

The problem is not about random seed. It is because the condition of np.testing.assert_allclose is too strict.

@jcf94 (Contributor, author) commented Nov 1, 2020

> The problem is not about random seed. It is because the condition of np.testing.assert_allclose is too strict.

Hmm... that may also be a problem. I would expect these float operations to be, in theory, exactly the same, even under a very strict tolerance.

At least the schedules generated with the fixed random seed 0 pass these checks.
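
For context, a small self-contained NumPy example of why a bit-exact expectation can be fragile: two mathematically equivalent float32 reductions can differ in the last bits depending on accumulation order, which is what a relaxed rtol in np.testing.assert_allclose absorbs:

import numpy as np

x = np.random.RandomState(0).rand(1 << 20).astype("float32")
s1 = x.sum()                                  # one accumulation order
s2 = x.reshape(1024, 1024).sum(axis=1).sum()  # another accumulation order

np.testing.assert_allclose(s1, s2, rtol=1e-5)  # passes with a small tolerance
# np.testing.assert_allclose(s1, s2, rtol=0)   # a bit-exact check may fail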

merrymercy merged commit 9c2d68d into apache:main on Nov 2, 2020
jcf94 deleted the weight_pre_pack branch on Nov 3, 2020
@comaniac (Contributor) commented Nov 3, 2020

@merrymercy @jcf94 the tests added by this PR seem flaky. Please see https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/124/pipeline

@comaniac (Contributor) commented Nov 3, 2020

It turns out that #6828 didn't really disable the flaky tests. It simply commented out the function calls under __main__, but pytest collects all functions with the test_ prefix regardless. I'll file another PR to disable them.
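
For illustration (the test name here is hypothetical), commenting out the call under __main__ does not stop pytest from collecting the function, while a skip marker does:

import pytest

@pytest.mark.skip(reason="flaky; disabled until fixed")
def test_layout_rewrite_example():
    assert True

if __name__ == "__main__":
    # test_layout_rewrite_example()  # commenting this out is not enough:
    pass                             # pytest still collects test_* functions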

@comaniac (Contributor) commented Nov 3, 2020

#6841 filed.

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 2, 2020
…he#6750)

* Add pre transpose support for layout rewrite

* Update

* Bug fix

* Bug fix

* Update

* Bug fix

* CI Fix

* Update

* Update

* Re-trigger CI

* Update

* Update test_auto_scheduler_layout_rewrite.py

* Update test_auto_scheduler_layout_rewrite.py

* Update task_scheduler ut, re-trigger CI

Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 4, 2020 (same commit message as above)
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Dec 4, 2020 (same commit message as above)