Added some coverage for executors and test cases for secret mounting and init-containers #15
8ec3662 to e92ef5b
@liyinan926 given the recently uncovered bug, I'm curious: did these tests detect that issue?
```
@@ -64,6 +65,7 @@ private[spark] class SparkDockerImageBuilder
    buildImage("spark-base", BASE_DOCKER_FILE)
    buildImage("spark-driver", DRIVER_DOCKER_FILE)
    buildImage("spark-executor", EXECUTOR_DOCKER_FILE)
    buildImage("spark-init-container", INIT_CONTAINER_DOCKER_FILE)
```
Please also add an option to allow the test to use an init-container image supplied from outside the test code, similar to the options for the driver and executor images.
Mostly LGTM - just a couple of clarifications.
```
val TEST_SECRET_NAME = "test-secret"
val TEST_SECRET_KEY = "test-key"
val TEST_SECRET_VALUE = "test-data"
val TEST_SECRET_MOUNT_PATH = "/wtc/secrets"
```
Typo: meant to be "etc"?
Yeah, fixed.
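For context, the secret-mount check in these tests could look roughly like this. This is a hedged sketch, not the PR's exact code: the helper name comes from the diff below in this conversation, but the fabric8 model usage and the `"-volume"` naming convention are assumptions.

```scala
import io.fabric8.kubernetes.api.model.Container
import scala.collection.JavaConverters._

// Hypothetical helper: verify that the test secret is mounted into the given
// container at the expected path, using the TEST_SECRET_* constants above.
private def checkTestSecretInContainer(container: Container): Unit = {
  val mounts = container.getVolumeMounts.asScala
  assert(mounts.exists { m =>
    // Volume name convention is an assumption for illustration.
    m.getName == s"$TEST_SECRET_NAME-volume" &&
      m.getMountPath == TEST_SECRET_MOUNT_PATH
  })
}
```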
```
checkTestSecretInContainer(pod.getSpec.getContainers.get(0))

if (withInitContainer) {
```
Is this option being exercised currently?
No, none of the tests uses an init-container yet.
@foxish the current set of tests won't uncover the bug because none of them really trigger the use of an init-container. We need tests that use a remote dependency so an init-container is added and used.
This looks good to me overall, pending @foxish's comments.
Did you test this by running it locally? I wonder if we want to have a Jenkins job trigger on PRs to this repo.
rerun integration tests please

rerun integration tests please

rerun integration tests please
A Jenkins run has passed with this PR, http://spark-k8s-jenkins.pepperdata.org:8080/job/PR-dev-spark-integration/2/console. But it failed to post a status here. Maybe @foxish can give the Jenkins account write access to this repo?
```
@@ -74,6 +75,10 @@ private[spark] class KubernetesSuite extends FunSuite with BeforeAndAfterAll wit
    runSparkPiAndVerifyCompletion()
  }

  test("Run SparkPi with an argument.") {
    runSparkPiAndVerifyCompletion(appArgs = Array("5"))
```
Hmm. It seems appArgs are ignored in the spark-submit command line. Please see https://github.com/apache-spark-on-k8s/spark-integration/blob/master/integration-test/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesTestComponents.scala#L107
Good catch! Fixed.
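The likely shape of the fix is to append the application arguments after the main application resource when building the spark-submit command line; spark-submit treats everything after the primary resource as app arguments. A hedged sketch (names like `commandLine`, `mainAppResource`, and `sparkConfArgs` are illustrative, not the repo's actual identifiers):

```scala
// Illustrative: app arguments must come last, after the main resource.
// If appArgs is never appended, the driver silently receives no arguments.
val commandLine: Seq[String] =
  Seq(
    "spark-submit",
    "--deploy-mode", "cluster",
    "--class", mainClass,
    "--master", s"k8s://$kubernetesHost") ++
  sparkConfArgs ++          // the --conf key=value pairs
  Seq(mainAppResource) ++   // e.g. the SparkPi example jar
  appArgs                   // previously omitted, which caused this bug
```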
rerun integration tests please

@kimoonkim @foxish I just added a couple of test cases that use a remote example jar hosted at https://storage.googleapis.com/spark-k8s-integration-tests/jars/spark-examples_2.11-2.3.0.jar. The tests will trigger the use of the init-container. See e2d20c9.

Jenkins job is now running on this PR and commenting on it!

Awesome, thanks Kimoon & Yinan! Will take another look shortly.
```
test("Run SparkPi using the remote example jar.") {
  sparkAppConf.set("spark.kubernetes.initContainer.image",
    System.getProperty("spark.docker.test.initContainerImage", "spark-init-container:latest"))
```
Don't we call this `spark-init` elsewhere?
Yes, renamed to `spark-init`.
```
  })
}

test("Run SparkPi using the remote example jar with a test secret mounted into the driver and " +
```
Is this test using the remote example jar? I can't find it being referenced in the test itself.
Fixed.
rerun integration tests please

The latest Jenkins run failed with "spark-base/entrypoint.sh: no such file or directory":
Ah! This might be because the Dockerfiles have changed in apache/spark#20154

Oh, yes, it now requires

I'm not sure how the builder plugin does this. I'm starting to think that since we'll also be using the build-push-docker-images script to create release images in the future, we should just invoke that script to build images even for minikube. The script could be an e2e-minikube.sh, similar to e2e-cloud.sh in #16

I'm investigating whether I can add build arguments with the docker client.

@foxish @kimoonkim It doesn't seem to be picking up my docker build argument changes at 5decbce. The build still seemed to be using the code already in the repo.
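With the Spotify docker-client (which `SparkDockerImageBuilder` appears to use), the Docker API expects build arguments as a `buildargs` query parameter carrying a URL-encoded JSON map, which can be supplied as a raw `BuildParam`. A hedged sketch of that approach (the `base_img` argument name and the surrounding variables are assumptions for illustration):

```scala
import java.net.URLEncoder
import java.nio.file.Paths
import com.spotify.docker.client.DockerClient

// Sketch: pass a Docker build argument (e.g. base_img) through the client.
// buildargs must be a JSON object, URL-encoded for the query string.
val buildArgsJson = """{"base_img":"spark-base:latest"}"""
val buildArgsParam = DockerClient.BuildParam.create(
  "buildargs", URLEncoder.encode(buildArgsJson, "UTF-8"))

// dockerClient and the Dockerfile directory are assumed to exist in scope.
dockerClient.build(Paths.get("dockerfiles/driver"), "spark-driver", buildArgsParam)
```

Note that a cached image layer can mask a build-arg change, which might also explain the build appearing to use stale code.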
LGTM, pending a manual/automated run of the tests themselves.
@liyinan926, can you also add the init-container image option to the e2e/runner.py's invocation of the test, so that we supply the init-container to the test code, similar to https://github.com/apache-spark-on-k8s/spark-integration/blob/master/e2e/runner.sh#L115

@foxish done.

Merging. Thanks!

Manually ran the tests locally and all passed.
@kimoonkim @foxish