
Added some coverage for executors and test cases for secret mounting and init-containers #15


Merged: 7 commits, Jan 5, 2018

Conversation


@liyinan926 force-pushed the master branch 4 times, most recently from 8ec3662 to e92ef5b on January 3, 2018 03:08.
@foxish (Member) commented Jan 4, 2018:

@liyinan926 given the recently uncovered bug, I'm curious - did these tests detect that issue?

@@ -64,6 +65,7 @@ private[spark] class SparkDockerImageBuilder
buildImage("spark-base", BASE_DOCKER_FILE)
buildImage("spark-driver", DRIVER_DOCKER_FILE)
buildImage("spark-executor", EXECUTOR_DOCKER_FILE)
buildImage("spark-init-container", INIT_CONTAINER_DOCKER_FILE)
@foxish (Member) commented Jan 4, 2018:

Please also add an option to allow the test to directly use an init container supplied externally from outside the test code, similar to the options for the driver and executor.
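A property-based override, mirroring the existing driver/executor options, might look like the following sketch (the property name and default tag here are assumptions for illustration):

```scala
// Sketch: let an externally supplied init-container image override the
// locally built default, the same way the driver/executor images can be
// overridden. Property name assumed for illustration.
val initContainerImage: String = System.getProperty(
  "spark.docker.test.initContainerImage", "spark-init-container:latest")
```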

@foxish (Member) left a comment:

Mostly LGTM - just a couple of clarifications.

val TEST_SECRET_NAME = "test-secret"
val TEST_SECRET_KEY = "test-key"
val TEST_SECRET_VALUE = "test-data"
val TEST_SECRET_MOUNT_PATH = "/wtc/secrets"
A reviewer (Member) commented:

Typo: meant to be etc?

@liyinan926 (Member Author) replied:

Yeah, fixed.


checkTestSecretInContainer(pod.getSpec.getContainers.get(0))

if (withInitContainer) {
A reviewer (Member) commented:

Is this option being exercised currently?

@liyinan926 (Member Author) replied Jan 4, 2018:

No, none of the tests uses an init-container yet.

@liyinan926 (Member Author) commented:

@foxish the current set of tests won't uncover the bug because none of them really trigger the use of an init-container. We need tests that use a remote dependency so an init-container is added and used.
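A minimal sketch of such a test, using the remote example jar mentioned later in this thread; the `appResource` parameter on `runSparkPiAndVerifyCompletion` is hypothetical here:

```scala
// Hypothetical sketch: pointing the test at a remote app resource forces
// spark-submit to add an init-container that downloads the jar before
// the driver starts, exercising the init-container code path.
private val REMOTE_EXAMPLES_JAR =
  "https://storage.googleapis.com/spark-k8s-integration-tests/jars/" +
    "spark-examples_2.11-2.3.0.jar"

test("Run SparkPi using the remote example jar.") {
  runSparkPiAndVerifyCompletion(appResource = REMOTE_EXAMPLES_JAR)
}
```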

@kimoonkim (Member) left a comment:

This looks good to me overall, pending @foxish's comments.

Did you test this by running it locally? I wonder if we want to have a Jenkins job trigger on PRs to this repo.

@kimoonkim (Member) commented:

rerun integration tests please

(2 similar comments from @kimoonkim followed.)

@kimoonkim (Member) commented:

A Jenkins run has passed with this PR: http://spark-k8s-jenkins.pepperdata.org:8080/job/PR-dev-spark-integration/2/console

But it failed to post a status here. Maybe @foxish can give the Jenkins account write access to this repo?

Setting status of 6a1353f to SUCCESS with url http://spark-k8s-jenkins.pepperdata.org:8080/job/PR-dev-spark-integration/2/ and message: 'Build completed successfully.'
Using context: Integration Tests
FileNotFoundException means that the credentials Jenkins is using are probably wrong, or the user account does not have write access to the repo.
java.io.FileNotFoundException: {"message":"Not Found","documentation_url":"https://developer.github.com/v3/repos/statuses/#create-a-status"}

@@ -74,6 +75,10 @@ private[spark] class KubernetesSuite extends FunSuite with BeforeAndAfterAll wit
runSparkPiAndVerifyCompletion()
}

test("Run SparkPi with an argument.") {
runSparkPiAndVerifyCompletion(appArgs = Array("5"))
@liyinan926 (Member Author) replied to an inline review comment:

Good catch! Fixed.

@kimoonkim (Member) commented:

rerun integration tests please

@liyinan926 (Member Author) commented:

@kimoonkim @foxish I just added a couple of test cases that use a remote example jar hosted at https://storage.googleapis.com/spark-k8s-integration-tests/jars/spark-examples_2.11-2.3.0.jar. The tests will trigger the use of the init-container. See e2d20c9.

@liyinan926 changed the title from "Added some coverage for executors and test cases for secret mounting" to "Added some coverage for executors and test cases for secret mounting and init-containers" on Jan 5, 2018.
@kimoonkim (Member) commented:

Jenkins job is now running on this PR and commenting on it!

@foxish (Member) commented Jan 5, 2018:

Awesome, thanks Kimoon & Yinan! Will take another look shortly.


test("Run SparkPi using the remote example jar.") {
sparkAppConf.set("spark.kubernetes.initContainer.image",
System.getProperty("spark.docker.test.initContainerImage", "spark-init-container:latest"))
@foxish (Member) commented Jan 5, 2018:

Don't we call this spark-init elsewhere?

@liyinan926 (Member Author) replied:

Yes, renamed to spark-init.

})
}

test("Run SparkPi using the remote example jar with a test secret mounted into the driver and " +
A reviewer (Member) commented:

Is this test using the remote example jar? I can't find it being referenced in the test itself.

@liyinan926 (Member Author) replied:

Fixed.

@liyinan926 (Member Author) commented:

rerun integration tests please

@kimoonkim (Member) commented:

The latest Jenkins run failed with "spark-base/entrypoint.sh: no such file or directory":

*** RUN ABORTED ***
com.spotify.docker.client.exceptions.DockerException: ProgressMessage{id=null, status=null, stream=null, error=lstat spark-base/entrypoint.sh: no such file or directory, progress=null, progressDetail=null}
at com.spotify.docker.client.LoggingBuildHandler.progress(LoggingBuildHandler.java:33)
at com.spotify.docker.client.DefaultDockerClient.build(DefaultDockerClient.java:1157)
at org.apache.spark.deploy.k8s.integrationtest.docker.SparkDockerImageBuilder.buildImage(SparkDockerImageBuilder.scala:70)
at org.apache.spark.deploy.k8s.integrationtest.docker.SparkDockerImageBuilder.buildSparkDockerImages(SparkDockerImageBuilder.scala:64)
at org.apache.spark.deploy.k8s.integrationtest.backend.minikube.MinikubeTestBackend.initialize(MinikubeTestBackend.scala:31)
at org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.beforeAll(KubernetesSuite.scala:44)
at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
at org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.beforeAll(KubernetesSuite.scala:35)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
at org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.org$scalatest$BeforeAndAfter$$super$run(KubernetesSuite.scala:35)
...

@foxish (Member) commented Jan 5, 2018:

Ah! This might be because the Dockerfiles changed in apache/spark#20154.

@liyinan926 (Member Author) commented:

Oh, yes, it now requires ARG img_path.

@foxish (Member) commented Jan 5, 2018:

I'm not sure how the builder plugin does this. Since we'll also be using the build-push-docker-images script to create release images in the future, I'm starting to think we should just invoke that script to build images even for minikube. The script could be an e2e-minikube.sh, similar to e2e-cloud.sh in #16.

@liyinan926 (Member Author) commented:

I'm investigating if I can add build arguments with the docker client.
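One plausible approach with the Spotify docker-client used by SparkDockerImageBuilder: the Docker Remote API accepts build args as a URL-encoded JSON object under the `buildargs` parameter, which can be passed via `DockerClient.BuildParam`. A sketch under that assumption (helper name and example values are hypothetical):

```scala
import java.net.URLEncoder
import com.spotify.docker.client.DockerClient.BuildParam

// Sketch: forward ARG values (e.g. img_path) to the Docker daemon.
// The Remote API expects "buildargs" as a URL-encoded JSON object.
def buildArgs(args: Map[String, String]): BuildParam = {
  val json = args
    .map { case (k, v) => s""""$k":"$v"""" }
    .mkString("{", ",", "}")
  BuildParam.create("buildargs", URLEncoder.encode(json, "UTF-8"))
}

// Hypothetical usage with DefaultDockerClient.build(...):
// dockerClient.build(dockerDir, "spark-base", handler,
//   buildArgs(Map("img_path" -> "kubernetes/dockerfiles")))
```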

@liyinan926 (Member Author) commented Jan 5, 2018:

@foxish @kimoonkim The build doesn't seem to be picking up my docker build-argument changes in 5decbce; it still appears to be using the code already in the repo.

@foxish (Member) left a comment:

LGTM, pending a manual/automated run of the tests themselves.

@foxish (Member) commented Jan 5, 2018:

@liyinan926, can you also add the init-container image option to e2e/runner.py's invocation of the test, so that we supply the init-container image to the test code, similar to https://github.com/apache-spark-on-k8s/spark-integration/blob/master/e2e/runner.sh#L115?

@liyinan926 (Member Author) commented:

@foxish done.

@foxish (Member) commented Jan 5, 2018:

Merging. Thanks!

@foxish foxish merged commit 3ec521f into apache-spark-on-k8s:master Jan 5, 2018
@liyinan926 (Member Author) commented:

Manually ran the tests locally and all passed.
