
build: Run Spark SQL tests for 3.4 #166

Merged: 18 commits merged into apache:main on Mar 12, 2024

Conversation

@sunchao (Member) commented Mar 5, 2024

Which issue does this PR close?

Closes #8.

Rationale for this change

We want to leverage SQL tests in Spark itself to increase our test coverage. We should run them with Comet enabled and make sure the tests pass all the existing checks.

What changes are included in this PR?

This PR enables us to run Spark's own SQL tests with Comet. To enable Comet for Spark, a diff file is introduced to patch Spark (version 3.4.2) so that we can (a usage sketch follows the list):

  • Enable Comet in Spark (controlled via the environment variable ENABLE_COMET)
  • Modify tests that check for specific Spark operators; for those, we change the test code to also handle the equivalent Comet operators.
  • Skip certain tests for unsupported features, etc.
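
For illustration, here is roughly how a patched Spark checkout is exercised with Comet enabled. This is a sketch assembled from the CI workflow snippets quoted later in this conversation; the diff path and the catalyst module come from this PR, not from any new interface:

    # apply the Comet diff to a Spark 3.4.2 checkout and run one test module
    cd apache-spark                               # checkout of Spark v3.4.2
    git apply ../dev/diffs/3.4.2.diff             # patch Spark with the Comet-specific test changes
    ENABLE_COMET=true build/sbt catalyst/test     # run the sql/catalyst tests with Comet enabled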

How are these changes tested?

      shell: bash
      run: |
        cd apache-spark
        git apply ../dev/diffs/${{inputs.spark-version}}.diff
Contributor

I can help review this once it's ready.

On the surface, I am concerned about maintaining a 1k+ line patch to make this work; that seems problematic to keep up to date. Is there a successful example of a similar setup?

Is it possible to add the spark-sql test jar to the project and run the tests directly against that jar, with Comet enabled and incompatible tests excluded? That setup would simulate how end users use Comet.

Member Author

It seems difficult to use the test jar approach. Even if we were able to enable Comet for the Spark tests, we'd still need to modify many of them, as shown in the diff.

On the other hand, the diff is tied to a particular Spark version like 3.4.x and rarely needs to be updated (in our experience). We only need to create a new diff for each new Spark release, which typically happens every 6 months to a year.

Contributor

I see.

On the other hand, the diff is tied to a particular Spark version like 3.4.x and rarely needs to be updated (in our experience)

But once a modification is needed, it would be problematic to update the patch directly. If we are going to go with this approach, I'd like to propose some improvements to refine the maintenance process:

  1. Ideally, we would host all these patches in a dedicated branch (or branches) per Spark release in a forked Spark repo. The repo should be public and ideally hosted under one organization.
  2. It might not be appropriate to reference the forked branch directly from an Apache project, and I can't think of an existing repo that could serve this purpose. Therefore, maybe we could host the dedicated branch in your personal Spark repo, sunchao/spark, to start?
  3. Once modifications are needed, they should go to the forked branches first. When a modification is merged, it is straightforward to generate the patch with a git command.
  4. Submit a PR with the resulting patch in this repo.

By hosting patches in a dedicated branch, I think we can track all the modifications in history.

Of course, we should include a README in dev/diffs about how the diffs are generated.

Member Author

Yes, I think it'd be useful to have a forked repo tracking the Comet changes to Spark. Maybe we can just use branches in this repo? We could also run tests through GitHub CI to validate the changes.

Alternatively, we could use my personal Spark fork, but it doesn't seem like the ideal place (for instance, who should be able to update the repo?).

cc @viirya @kazuyukitanimura @alamb for more inputs.

Contributor

For updating the diff, I think we can just create a doc explaining how, or a small script to automate it; see the sketch after this list. Basically what we need to cover is:

  1. Apply the diff to Spark.
  2. Resolve any conflicts.
  3. Generate the updated diff.
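
A minimal shell sketch of those three steps, assuming the diff lives at dev/diffs/3.4.2.diff as in this PR and the patched Spark checkout sits in apache-spark/:

    cd apache-spark                               # checkout of the target Spark release tag
    git apply ../dev/diffs/3.4.2.diff             # 1. apply the existing diff; fix any rejects by hand
    # 2. resolve conflicts / make the additional Comet-related changes
    git add -A                                    # stage everything, including newly added test files
    git diff --cached > ../dev/diffs/3.4.2.diff   # 3. regenerate the updated diff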

Member Author

You mean having a branch in the Comet repo that holds the forked Spark code with this diff applied? I think that is better than a personal repo, since more people can maintain the branch.

Yes, something like:

comet
  - main
  - spark-3.4.2
  - spark-3.5.1
  ...

where spark-3.4.2 and spark-3.5.1 are the Spark fork with the diff applied. We will need to keep these branches updated, since Comet will sometimes introduce breaking changes that require Spark-side changes. Compared to a personal repo, it is easier to maintain.

Member

If we only test against a released Spark version, the diff basically never needs to change (as the Spark code does not change), except when something in Comet requires updating the diff. It makes me wonder whether we need the whole Spark codebase just for the diff. 🤔

Member Author

the diff basically never needs to change

It may need to be updated when Comet introduces changes (for instance, an extra parameter for CometBatchScanExec) that require a Spark-side change. One advantage of having the branches is that we can track the history of all of these changes. IMO it is good to have but not essential.

Contributor

where spark-3.4.2 and spark-3.5.1 are the Spark fork with the diff applied

If it's allowed, then it would be ideal.

Contributor

cc @viirya @kazuyukitanimura @alamb for more inputs.

I believe having a fork of the Spark code (rather than a diff that is applied to a local checkout) would be easier to understand and maintain over the long run.

I think the key would be to make sure that what is going on with the branches is well documented (especially the rationale).

@codecov-commenter commented Mar 5, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

❗ No coverage uploaded for pull request base (main@72398a6).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #166   +/-   ##
=======================================
  Coverage        ?   33.30%           
  Complexity      ?      767           
=======================================
  Files           ?      107           
  Lines           ?    35372           
  Branches        ?     7657           
=======================================
  Hits            ?    11782           
  Misses          ?    21144           
  Partials        ?     2446           


@advancedxy (Contributor)

❗ No coverage uploaded for pull request base (main@72398a6).

@sunchao you may need to rebase onto the latest main to compare the coverage difference.

@sunchao (Member Author) commented Mar 5, 2024

@sunchao you may need to rebase onto the latest main to compare the coverage difference.

Oh thanks for letting me know. I'll address the test failures first, and then do the rebasing.

+ // ConstantPropagation etc.
+ .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)
+
+ val v = System.getenv("ENABLE_COMET")
Contributor

Is it possible to reuse SQLTestUtils.scala?

Member Author

It would need some changes, given that TestHive doesn't inherit from SQLTestUtils. I'll leave it for now, since the amount of duplicated code is relatively small.

@sunchao marked this pull request as ready for review on March 7, 2024 at 19:50
@sunchao (Member Author) commented Mar 7, 2024

cc @viirya @advancedxy @kazuyukitanimura this is ready for review now.

I'll create a separate GitHub issue to track the work of creating a branch for the diff changes.

@sunchao (Member Author) commented Mar 7, 2024

I applied the diff change to my own fork of Spark 3.4.2 here to make it easier to review.

      - name: Run Spark sql/catalyst tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt catalyst/test
Contributor

Looks like all the jobs are similar; I think we can define a new dimension in the matrix, such as:

      matrix:
        os: [ubuntu-latest]
        java-version: [11]
        spark-version: [{short: '3.4', full: '3.4.2'}]
        spark-test-modules:
          - {name: "catalyst", sbt-options: "catalyst/test"}
          - ...

For the name part, I think we can remove the spark-sql prefix so the job name can be short.
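
(For illustration, a single templated step consuming that matrix could look something like the following; the field names are taken from the snippet above and the step shape from the existing workflow, so treat it as a sketch rather than the final config.)

      - name: Run Spark ${{ matrix.spark-test-modules.name }} tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt ${{ matrix.spark-test-modules.sbt-options }}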

Member Author

Let me give it a try. With build/sbt, in some cases we need to pass arguments surrounded by quotes, while in others we don't. I'm not sure whether that can be handled cleanly in a single matrix field.
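
(For reference, the two invocation styles already used in this PR's workflow; the wildcarded testOnly form has to be quoted so the shell hands it to sbt as a single command, while a plain module target does not.)

    ENABLE_COMET=true build/sbt catalyst/test
    ENABLE_COMET=true build/sbt "hive/testOnly *.HiveQuerySuite *.SQLQuerySuite"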

      - name: Run Spark sql/hive-1 tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt hive/test -Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest
Contributor

The test times of hive-1 and hive-2 are unbalanced. I think we should also exclude tests tagged org.apache.spark.tags.SlowHiveTest from hive-1 and run them in hive-2 instead; see the sketch below.

I'm not sure how to balance the tests in sql-core-{1,2,3}, though.
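
(Concretely, the hive-1 step could exclude both tags along these lines; this is a sketch based on the existing hive-1 step and the tag named above, not a verified workflow change.)

    ENABLE_COMET=true build/sbt hive/test -Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest,org.apache.spark.tags.SlowHiveTest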

Member Author

Ah, I wasn't aware of SlowHiveTest. Let me split it out. For sql-core-1, it currently takes ~1h, which is probably the longest of all the test jobs. We can check the time distribution later and potentially remove some unrelated tests (e.g., streaming).

@advancedxy (Contributor)

I left some comments on this and sunchao/spark@f7c15aa.
LGTM generally.

@kazuyukitanimura (Contributor) left a comment

LGTM

}

+ private def loadCometExtension(sparkContext: SparkContext): Seq[String] = {
+ if (sparkContext.getConf.getBoolean(CometConf.COMET_ENABLED.key, false)) {
Contributor

Is it intentional not to use isCometEnabled as the default value? i.e.
sparkContext.getConf.getBoolean(CometConf.COMET_ENABLED.key, isCometEnabled)

Member Author

I think it shouldn't matter. When isCometEnabled is true, CometConf.COMET_ENABLED.key will be set to true before loadCometExtension is called. Also, we cannot refer to isCometEnabled here.

      - name: Run Spark sql/hive-2 tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt "hive/testOnly *.HiveSparkSubmitSuite *.VersionsSuite *.HiveDDLSuite *.HiveCatalogedDDLSuite *.HiveSerDeSuite *.HiveQuerySuite *.SQLQuerySuite"
Member

Don't we need to add the hive profile?

Member Author

It seems unnecessary. I verified that the pipeline does execute the Hive tests.

Member

Yea, I also looked at the pipeline and verified it locally.

+
+import org.apache.spark.sql.test.SQLTestUtils
+
+case class DisableComet(reason: String) extends Tag("DisableComet")
Member

Do we use reason? I don't find it.

Member Author

I'm following DisableAdaptiveExecution here. I guess the reason field is only there for documentation.

@viirya (Member) left a comment

Looks good to me. Only two minor comments.

@sunchao (Member Author) commented Mar 12, 2024

@advancedxy there are still some issues when enabling shuffle in the Spark SQL tests. I'll address them separately in a follow-up. Let me know what you think of the latest change.

@sunchao (Member Author) commented Mar 12, 2024

(the CI failure is unrelated)

+ op.isInstanceOf[SortExec] ||
+ (op.isInstanceOf[CometExec] &&
+ op.asInstanceOf[CometExec].originalPlan.find(_.isInstanceOf[SortExec]).isDefined)
+ }.isDefined == sortLeft,
@advancedxy (Contributor) commented Mar 12, 2024

Hmm, I don't think you fixed this part:

ShuffleExchangeExec on the left, SortExec on the right

Member Author

Oops, I reverted it along with the shuffle-related changes.

@advancedxy (Contributor)

I'll address them separately in a follow-up. Let me know what you think of the latest change.

That sounds good to me. I took another look; only one minor comment, LGTM otherwise.

@advancedxy (Contributor) left a comment

LGTM, pending CI passes.

@sunchao merged commit 6bedce4 into apache:main on Mar 12, 2024 (49 checks passed)
@sunchao
Copy link
Member Author

sunchao commented Mar 12, 2024

Thanks, merged

@sunchao deleted the spark-sql-tests branch on March 12, 2024 at 17:29
Successfully merging this pull request may close these issues: Run Spark SQL tests in CI (#8).