[KYUUBI apache#4713][TEST] Fix false positive result in SchedulerPoolSuite

### _Why are the changes needed?_

Fix issue apache#4713.

### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible

- [ ] Add screenshots for manual tests if appropriate

- [X] [Run test](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making a pull request

Closes apache#4714 from huangzhir/fixtest-schedulerpool.

Closes apache#4713

e66ede2 [huangzhir] fixbug TEST SchedulerPoolSuite  a false positive result

Authored-by: huangzhir <306824224@qq.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
huangzhir authored and pan3793 committed Apr 17, 2023
1 parent 3ac8df8 commit 57b0611
Showing 1 changed file with 11 additions and 7 deletions.
@@ -19,6 +19,8 @@ package org.apache.kyuubi.engine.spark
 
 import java.util.concurrent.Executors
 
+import scala.concurrent.duration.SECONDS
+
 import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart}
 import org.scalatest.concurrent.PatienceConfiguration.Timeout
 import org.scalatest.time.SpanSugar.convertIntToGrainOfTime
@@ -80,6 +82,7 @@ class SchedulerPoolSuite extends WithSparkSQLEngine with HiveJDBCTestHelper {
       threads.execute(() => {
         priority match {
           case 0 =>
+            // job name job2
             withJdbcStatement() { statement =>
               statement.execute("SET kyuubi.operation.scheduler.pool=p0")
               statement.execute("SELECT java_method('java.lang.Thread', 'sleep', 1500l)" +
@@ -92,17 +95,18 @@ class SchedulerPoolSuite extends WithSparkSQLEngine with HiveJDBCTestHelper {
             statement.execute("SELECT java_method('java.lang.Thread', 'sleep', 1500l)" +
               " FROM range(1, 3, 1, 2)")
           }
+            // make sure this job name job1
+            Thread.sleep(1000)
         }
       })
     }
     threads.shutdown()
-    eventually(Timeout(20.seconds)) {
-      // We can not ensure that job1 is started before job2 so here using abs.
-      assert(Math.abs(job1StartTime - job2StartTime) < 1000)
-      // Job1 minShare is 2(total resource) so that job2 should be allocated tasks after
-      // job1 finished.
-      assert(job2FinishTime - job1FinishTime >= 1000)
-    }
+    threads.awaitTermination(20, SECONDS)
+    // because after job1 submitted, sleep 1s, so job1 should be started before job2
+    assert(job1StartTime < job2StartTime)
+    // job2 minShare is 2(total resource) so that job1 should be allocated tasks after
+    // job2 finished.
+    assert(job2FinishTime < job1FinishTime)
     } finally {
       spark.sparkContext.removeSparkListener(listener)
     }
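The core of the fix is the move from polling assertions with `eventually` (which retries until the assertions happen to pass, and so can succeed on a transient, not-yet-final state while worker threads are still running) to `awaitTermination`, which blocks until every submitted task has finished and then asserts exactly once on the final values. A minimal, hypothetical sketch of that pattern, outside the suite (the field and object names here are illustrative, not taken from the Kyuubi code):

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Hypothetical sketch of the await-then-assert pattern used in the fix.
// Worker threads write timestamps; the main thread waits for pool
// termination before asserting, so it never observes a half-finished state.
object AwaitThenAssert {
  @volatile var job1StartTime: Long = 0L
  @volatile var job2StartTime: Long = 0L

  def main(args: Array[String]): Unit = {
    val threads = Executors.newFixedThreadPool(2)
    threads.execute(() => { job1StartTime = System.nanoTime() })
    // Sleep between submissions to force a deterministic start order,
    // mirroring the Thread.sleep(1000) added in the patch.
    Thread.sleep(100)
    threads.execute(() => { job2StartTime = System.nanoTime() })
    threads.shutdown()
    // Assertions run only after all tasks have completed.
    assert(threads.awaitTermination(20, TimeUnit.SECONDS))
    assert(job1StartTime != 0L && job2StartTime != 0L)
    assert(job1StartTime < job2StartTime)
  }
}
```

With `eventually`, the equivalent assertions could be evaluated while the pool was still executing, which is how the original test could report a pass that did not reflect the completed jobs.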
