
[SPARK-25262][K8S] Better support configurability of Spark scratch space when using Kubernetes #22256

Closed
wants to merge 5 commits

Conversation

@rvesse rvesse (Member) commented Aug 28, 2018

What changes were proposed in this pull request?

This change improves how Spark on Kubernetes creates the local directories used for Spark scratch space, i.e. SPARK_LOCAL_DIRS/spark.local.dir.

Currently Spark on Kubernetes creates each defined local directory (or a single default directory if none is defined) as a Kubernetes emptyDir volume mounted into the containers. The problem is that emptyDir volumes are backed by node storage, so in some compute environments, e.g. diskless nodes, any "local" storage is actually provided by a remote file system that may harm performance when jobs use it heavily.

Kubernetes provides the option to have emptyDir volumes backed by tmpfs, i.e. RAM on the nodes, so we introduce a boolean spark.kubernetes.local.dirs.tmpfs option that, when true, causes the created emptyDir volumes to use memory.
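
For illustration, a minimal sketch of how this proposed option might be set on the submission side (property name as proposed in this PR; the memory figure is only an example, since tmpfs usage counts towards the container's memory limit):

```scala
import org.apache.spark.SparkConf

// Sketch only: enable tmpfs-backed scratch space and leave headroom for it,
// because memory-backed emptyDir usage counts towards the container memory limit.
val conf = new SparkConf()
  .set("spark.kubernetes.local.dirs.tmpfs", "true")
  .set("spark.executor.memory", "8g") // example value, size to your workload
```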

A second, related problem is that because Spark on Kubernetes always generates emptyDir volumes, users have no way to use alternative volume types that may be available in their cluster.

No new options specific to this problem are introduced; instead, the code is modified to detect when the pod spec already defines an appropriately named volume and to avoid creating an emptyDir volume in that case. This uses the existing code's convention that volumes for scratch space are named spark-local-dirs-N, numbered from 1 to N based on the number of entries defined in the SPARK_LOCAL_DIRS/spark.local.dir setting. This is done in anticipation of the pod template feature from SPARK-24434 (PR #22146) being merged, since that will allow users to define custom volumes more easily.
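
As a rough illustration of that convention (not the exact code in this PR), a feature step could skip any spark-local-dirs-N volume that the incoming pod spec already defines; `localDirs` and `useTmpfs` below are stand-ins for the resolved configuration:

```scala
import io.fabric8.kubernetes.api.model.{Pod, PodBuilder, VolumeBuilder}
import scala.collection.JavaConverters._

// Illustrative sketch: add spark-local-dirs-N emptyDir volumes only when the
// incoming pod spec does not already define a volume with that name.
def addLocalDirVolumes(pod: Pod, localDirs: Seq[String], useTmpfs: Boolean): Pod = {
  val existing = Option(pod.getSpec)
    .flatMap(spec => Option(spec.getVolumes))
    .map(_.asScala.map(_.getName).toSet)
    .getOrElse(Set.empty[String])
  val newVolumes = localDirs.indices
    .map(i => s"spark-local-dirs-${i + 1}")
    .filterNot(existing.contains)
    .map { name =>
      new VolumeBuilder()
        .withName(name)
        .withNewEmptyDir()
          .withMedium(if (useTmpfs) "Memory" else "") // "Memory" selects tmpfs backing
        .endEmptyDir()
        .build()
    }
  new PodBuilder(pod)
    .editOrNewSpec()
      .addToVolumes(newVolumes: _*)
    .endSpec()
    .build()
}
```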

Tasks:

  • Support using tmpfs volumes
  • Support using pre-existing volumes
  • Unit tests
  • Documentation

How was this patch tested?

Unit tests added to the relevant feature step to exercise the new configuration option and to check that pre-existing volumes are used. Plan to add further unit tests to check some other corner cases.

rvesse added 2 commits August 28, 2018 14:52
- Skip creating an emptyDir volume if the pod spec already defines an
  appropriate volume.  This change is in preparation for SPARK-24434
  changes
- Provide the ability to specify that local dirs on K8S should be backed
  by tmpfs
Adds unit tests to the LocalDirsFeatureStepSuite to account for changes
made to make local dirs configurable.  Also adjusts parts of the logic
in LocalDirsFeatureStep to make sure that we appropriately mount
pre-defined volumes.
@liyinan926 (Contributor) commented

@mccheah.

rvesse added 2 commits August 29, 2018 12:13
- Avoid creating volume mounts if pod template already defines them
- Refuse to create pod if template has conflicting volume mount
  definitions present
- Unit tests for the above
- Scalastyle corrections
Adds documentation of how K8S uses local storage and how to configure it
for different environments.
@rvesse rvesse changed the title [SPARK-25262][K8S][WIP] Better support configurability of Spark scratch space when using Kubernetes [SPARK-25262][K8S] Better support configurability of Spark scratch space when using Kubernetes Aug 29, 2018
@rvesse rvesse (Member, Author) commented Aug 29, 2018

@skonto Here is the PR for the SPARK_LOCAL_DIRS behaviour customisation we were discussing in the context of SPARK-24434.

I have minimised the configuration to a single new setting for the simple case of just wanting tmpfs-backed local storage, and relied upon the forthcoming pod templates functionality to allow deeper customisation.

@skonto skonto (Contributor) commented Aug 29, 2018

Cool @rvesse! This should support the size limit, right? I just noticed that: kubernetes/kubernetes#63126

.addToVolumes(localDirVolumes: _*)
// Don't want to re-add volumes that already existed in the incoming spec
// as duplicate definitions will lead to K8S API errors
.addToVolumes(localDirVolumes.filter(v => !hasVolume(pod, v.getName)): _*)
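
The `hasVolume` helper referenced in the added line above is not shown in this excerpt; a minimal, illustrative guess at such a check against the fabric8 pod model (the actual helper operates on Spark's internal pod wrapper) might look like:

```scala
import io.fabric8.kubernetes.api.model.Pod
import scala.collection.JavaConverters._

// Illustrative guess only: true if the pod spec already declares a volume
// with the given name.
def hasVolume(pod: Pod, name: String): Boolean =
  Option(pod.getSpec)
    .flatMap(spec => Option(spec.getVolumes))
    .exists(_.asScala.exists(_.getName == name))
```
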
Contributor

Checking current volumes in a feature step isn't consistent with the additive design of the feature builder pattern. @mccheah to comment

Contributor

All of this conflicting volume mount and conflicting volumes seems out of place here. If we're anticipating using the pod template file, keep in mind that the pod template feature is specifically not designed to do any validation. What kinds of errors are we hoping to avoid by doing the deduplication here?

Member Author

The implementation is still additive in that it will add to existing elements in the pod spec as needed while respecting what is already present.

If your pod spec contains duplicate volumes/volume mounts then K8S will reject it as invalid e.g.

The Pod "rvesse-test" is invalid: spec.volumes[1].name: Duplicate value: "spark-local-dirs-1"

Therefore it is necessary to explicitly avoid duplicating things that are already present in the template.

If the aim is to rely on the pod template feature rather than adding further config options, then the existing builders do need to be more intelligent about what they do to avoid generating invalid pod specs. This holds regardless of whether the template feature is opinionated about validation: even if the template feature doesn't do validation, Spark itself should ensure that it generates valid specs as far as it is able. Obviously it can't detect every possible invalid spec that might result if templates aren't validated, but it can avoid introducing easily avoidable invalid specs.

Contributor

This holds regardless of whether the template feature is opinionated about validation: even if the template feature doesn't do validation, Spark itself should ensure that it generates valid specs as far as it is able.

This is a stance that, as far as I'm aware, we specifically chose not to take in the pod template feature. If one is using the pod template feature then Spark won't provide any guarantees that the pod it makes will be well-formed. When spark-submit deploys the pod to the cluster, the API will return a clear enough error informing the user to make the appropriate corrections to their pod template.

@onursatici I just checked the pod template files PR and didn't see this specifically called out - should this be documented?

Contributor

@mccheah yeap we should document that, will add

Member Author

This is a stance that, as far as I'm aware, we specifically chose not to take in the pod template feature. If one is using the pod template feature then Spark won't provide any guarantees that the pod it makes will be well-formed. When spark-submit deploys the pod to the cluster, the API will return a clear enough error informing the user to make the appropriate corrections to their pod template.

Sure, but we still need to be realistic about how the template feature will be used. It is supposed to enable power users to customise the pods for their environments. If there is an area like this where there is a clear use case for customisation, we should be enabling that rather than saying, sorry, we're going to generate invalid pods regardless. Obviously the power user is assuming the risk of creating a pod template that meaningfully combines with Spark's generated pod to yield a valid runtime environment.

Clearly my stance here is controversial and likely needs a broader discussion on the dev list.

I can reduce this PR to just the config to enable tmpfs-backed emptyDir volumes if that is preferred?

Contributor

Yeah, that might be better @rvesse

Member Author

Ok, will do that Monday

FYI I notice @onursatici has now made some similar tweaks in his latest commit - a4fde0c - several feature steps there now use editOrNewX() or addToX() so that they combine with, rather than override, the template.
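
For context, a minimal sketch of the combine-rather-than-override pattern being referred to, using the fabric8 builders (the label key/value below are purely examples, not taken from the PR):

```scala
import io.fabric8.kubernetes.api.model.{Pod, PodBuilder}

// Illustrative only: editOrNew.../addTo... builders append to whatever the
// incoming (possibly template-derived) pod already defines rather than
// replacing it, so at worst the ordering of some fields changes.
def tagPod(pod: Pod): Pod =
  new PodBuilder(pod)
    .editOrNewMetadata()
      .addToLabels("example-key", "example-value") // hypothetical label
    .endMetadata()
    .build()
```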

Contributor

This is different in that we're looking for specific volumes that have been set up by previous feature steps or outside logic. Preferably every step is self-contained in that it doesn't have to look up specific values set by previous steps.

For example this logic would break if we applied the templating after this step, or if a different step after this one added the volumes that are being looked up here.

By contrast, editOrNew and addTo... at worst only change the ordering of some of the fields, depending on when the step is invoked in the sequence.

Member Author

@mccheah @ifilonenko OK, I have opened PR #22323 with just the tmpfs enabling changes

@rvesse rvesse (Member, Author) commented Aug 30, 2018

@skonto I haven't done anything specific for the size limit ATM. From the K8S docs, tmpfs-backed emptyDir usage counts towards your container's memory limits, so you can just set spark.executor.memory to a larger amount as needed.

From the discussion you linked, you can explicitly set a size limit for the volume, but I wanted to avoid adding multiple configuration options if possible. Since this PR allows template-defined volumes to be used, you could define a volume in your template with the sizeLimit applied to it once the pod templates PR is available.
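
For illustration, the kind of volume a template could declare to cap tmpfs usage, expressed here with the fabric8 model (the 1Gi figure is only an example):

```scala
import io.fabric8.kubernetes.api.model.{Quantity, VolumeBuilder}

// Illustrative only: a memory-backed emptyDir with an explicit size limit.
// Usage still counts towards the pod's memory limit.
val cappedScratch = new VolumeBuilder()
  .withName("spark-local-dirs-1")
  .withNewEmptyDir()
    .withMedium("Memory")               // tmpfs backing
    .withSizeLimit(new Quantity("1Gi")) // example cap
  .endEmptyDir()
  .build()
```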

@SparkQA commented Aug 30, 2018

Test build #4308 has finished for PR 22256 at commit 8762ac1.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@AmplabJenkins commented

Can one of the admins verify this patch?

asfgit pushed a commit that referenced this pull request Sep 6, 2018
## What changes were proposed in this pull request?

The default behaviour of Spark on K8S currently is to create `emptyDir` volumes to back `SPARK_LOCAL_DIRS`. In some environments, e.g. diskless compute nodes, this may actually hurt performance because these volumes are backed by the Kubelet's node storage, which on a diskless node will typically be some remote network storage.

Even if this is enterprise-grade storage connected via a high-speed interconnect, the way Spark uses these directories as scratch space (lots of relatively small, short-lived files) has been observed to cause serious performance degradation. Therefore we would like to provide the option to use K8S's ability to back these `emptyDir` volumes with `tmpfs` instead. This PR adds a configuration option that enables `SPARK_LOCAL_DIRS` to be backed by memory-backed `emptyDir` volumes rather than the default.

Documentation is added to describe both the default behaviour and this new option and its implications. One implication is that scratch space then counts towards your pod's memory limits, so users will need to adjust their memory requests accordingly.

*NB* - This is an alternative version of PR #22256 reduced to just the `tmpfs` piece

## How was this patch tested?

Ran with this option in our diskless compute environments to verify functionality

Author: Rob Vesse <rvesse@dotnetrdf.org>

Closes #22323 from rvesse/SPARK-25262-tmpfs.
@rvesse rvesse (Member, Author) commented Sep 7, 2018

Closed in favour of #22323, which has been merged.

@rvesse rvesse closed this Sep 7, 2018
Jeffwan pushed a commit to Jeffwan/spark that referenced this pull request Oct 1, 2019
Jeffwan pushed a commit to Jeffwan/spark that referenced this pull request Feb 28, 2020
8 participants