
[SPARK-8728] Add configuration for limiting the maximum number of active stages in a fair scheduling queue #7119


Closed
sirpkt wants to merge 4 commits

Conversation

sirpkt
Contributor

@sirpkt sirpkt commented Jun 30, 2015

This change makes getSortedTaskSetQueue return only a predefined number of Schedulables.

A new property, 'maxRunning', is added to the fair scheduler configuration to limit the maximum number of concurrently running stages in a pool.
If 'maxRunning' is not set, no limit is applied.
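A minimal sketch of the proposed behavior, using simplified stand-in types (the names Schedulable, Pool, getSortedTaskSetQueue, and maxRunning come from the patch description; everything else here is hypothetical and much simpler than the real Spark scheduler):

```scala
// Hypothetical, simplified sketch: a pool's sorted schedulable queue is
// truncated to at most `maxRunning` entries when the property is set.
case class Schedulable(name: String)

class Pool(val maxRunning: Option[Int]) {
  private var schedulables = Vector.empty[Schedulable]

  def add(s: Schedulable): Unit = schedulables :+= s

  // Return the queue in scheduling order, limited to maxRunning entries
  // when the property is set; unlimited otherwise.
  def getSortedTaskSetQueue: Seq[Schedulable] = {
    val sorted = schedulables.sortBy(_.name)
    maxRunning.map(n => sorted.take(n)).getOrElse(sorted)
  }
}

val pool = new Pool(maxRunning = Some(2))
Seq("a", "b", "c").foreach(n => pool.add(Schedulable(n)))
println(pool.getSortedTaskSetQueue.size) // 2: only two stages may run
```

In the real patch the sort order comes from the pool's scheduling algorithm rather than a name sort; the point is only the final truncation step.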

@AmplabJenkins

Can one of the admins verify this patch?

scheduleTaskAndVerifyId(4, rootPool, 1)
scheduleTaskAndVerifyId(5, rootPool, 2)

verifyNoRemainedTask(rootPool)
Contributor


Please rename this to noTasksRemain, refactor that method slightly to produce the Boolean nextTaskSetToSchedule.isEmpty, then change this line to assert(noTasksRemain).

Contributor


Better yet, make it noTasksRemainIn(pool: Pool).
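The suggested refactor could look roughly like this (a hypothetical sketch; the Pool and nextTaskSetToSchedule stand-ins below are simplified placeholders for the real Spark scheduler types):

```scala
// Hypothetical, simplified stand-ins for the Spark scheduler types.
case class TaskSet(id: Int)

class Pool {
  var queue = Vector.empty[TaskSet]
  // Simplified version of the scheduler's "next task set" lookup.
  def nextTaskSetToSchedule: Option[TaskSet] = queue.headOption
}

// The reviewer's suggestion: a Boolean-returning helper, so the test
// can assert on it directly instead of calling a verify* method.
def noTasksRemainIn(pool: Pool): Boolean =
  pool.nextTaskSetToSchedule.isEmpty

val rootPool = new Pool
assert(noTasksRemainIn(rootPool))
```

The design point is that a predicate named for what it checks reads naturally inside assert(...), while a verify* method hides the assertion in its body.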

@andrewor14
Contributor

@kayousterhout @markhamstra is this something we want?

@andrewor14
Contributor

Given that there is not sufficient interest in this feature, @sirpkt can you close this PR?

@asfgit asfgit closed this in ce5fd40 Dec 17, 2015
@devoncrouse

@andrewor14 I'm interested; I was actually thinking of a "maxShare" to complement the existing min share. Lower-weighted pools will still consume the entire cluster when other pools are idle, and must finish processing their partitions before shares can be returned to higher-weight pools. It makes a lot of sense to be able to cap a pool and leave capacity immediately available to the rest of the cluster.
