[KYUUBI apache#4749] Fix flaky test issues in SchedulerPoolSuite
### _Why are the changes needed?_

To fix issue apache#4713, PR apache#4714 was submitted, but it introduced a flaky test: in 50 local runs the test passed 38 times and failed 12 times.
This PR fixes that flakiness.
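
The root cause is that Spark delivers `SparkListenerJobStart`/`SparkListenerJobEnd` events on an asynchronous listener bus, so the suite's assertions could run before the listener had recorded the job times. Below is a minimal, self-contained sketch of that race; it is illustrative only (the session setup, object name, and counter are not taken from the suite):

```scala
import java.util.concurrent.atomic.AtomicBoolean

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd}
import org.apache.spark.sql.SparkSession

// Illustrative sketch of the race this PR closes: listener callbacks run on
// Spark's asynchronous listener bus, so state they update can lag behind the
// action that triggered the job.
object ListenerRaceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("listener-race-sketch")
      .getOrCreate()
    val jobEndSeen = new AtomicBoolean(false)

    spark.sparkContext.addSparkListener(new SparkListener {
      override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = jobEndSeen.set(true)
    })

    spark.range(0, 1000).count() // runs a Spark job

    // Flaky: the JobEnd event may still be queued on the listener bus here,
    // so an assertion on jobEndSeen would fail intermittently.
    println(s"immediately after the action: jobEndSeen = ${jobEndSeen.get}")

    spark.stop()
  }
}
```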

### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible

- [ ] Add screenshots for manual tests if appropriate

- [x] [Run test](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making a pull request

Closes apache#4749 from huangzhir/fixtest-schedulerpool.

Closes apache#4749

2d2e140 [huangzhir] call KyuubiSparkContextHelper.waitListenerBus() to make sure there are no more events in the spark event queue
52a34d2 [fwang12] [KYUUBI apache#4746] Do not recreate async request executor if has been shutdown
d4558ea [huangzhir] Merge branch 'master' into fixtest-schedulerpool
44c4cef [huangzhir] make sure the SparkListener has received the finished events for job1 and job2.
8a753e9 [huangzhir] make sure job1 started before job2
e66ede2 [huangzhir] fix bug: SchedulerPoolSuite test gave a false positive result

Lead-authored-by: huangzhir <306824224@qq.com>
Co-authored-by: fwang12 <fwang12@ebay.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2 people authored and pan3793 committed Apr 21, 2023
1 parent 8d424ef commit 2c55a1f
Showing 1 changed file with 3 additions and 0 deletions.
@@ -21,6 +21,7 @@ import java.util.concurrent.Executors

import scala.concurrent.duration.SECONDS

+import org.apache.spark.KyuubiSparkContextHelper
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart}
import org.scalatest.concurrent.PatienceConfiguration.Timeout
import org.scalatest.time.SpanSugar.convertIntToGrainOfTime
@@ -101,6 +102,8 @@ class SchedulerPoolSuite extends WithSparkSQLEngine with HiveJDBCTestHelper {
})
threads.shutdown()
threads.awaitTermination(20, SECONDS)
+    // make sure the SparkListener has received the finished events for job1 and job2.
+    KyuubiSparkContextHelper.waitListenerBus(spark)
// job1 should be started before job2
assert(job1StartTime < job2StartTime)
// job2 minShare is 2(total resource) so that job1 should be allocated tasks after
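
The `KyuubiSparkContextHelper.waitListenerBus` helper called in the diff is not shown here. A plausible sketch of what such a helper does, assuming it simply delegates to Spark's `private[spark]` `LiveListenerBus.waitUntilEmpty()` (which is why the helper would need to live under the `org.apache.spark` package):

```scala
// Sketch only; the actual helper in the Kyuubi code base may differ.
package org.apache.spark

import org.apache.spark.sql.SparkSession

object KyuubiSparkContextHelper {

  // Blocks until every event queued on the listener bus has been delivered,
  // so SparkListener callbacks have run before the test makes its assertions.
  def waitListenerBus(spark: SparkSession): Unit = {
    spark.sparkContext.listenerBus.waitUntilEmpty()
  }
}
```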
