Description
What happened:
In an effort to use a ConfigMap for metrics.properties, the following was specified in the driver spec:
```yaml
driver:
  configMaps:
    - name: metricsproperties
      path: /home
```
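For context, a minimal sketch of how this snippet sits inside a full SparkApplication manifest (the application name, namespace, and API version here are illustrative and may differ by operator release):

```yaml
# Sketch only: shows where driver.configMaps lives in a SparkApplication.
# With the webhook working, the operator is expected to mount the named
# ConfigMap into the driver pod at the given path.
apiVersion: sparkoperator.k8s.io/v1beta1   # assumed; check your operator version
kind: SparkApplication
metadata:
  name: spark-metrics-example              # illustrative name
  namespace: default                       # illustrative namespace
spec:
  driver:
    configMaps:
      - name: metricsproperties            # ConfigMap holding metrics.properties
        path: /home                        # mount path inside the driver pod
```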
However, the pod was launched without the ConfigMap mounted. The documentation states the following:
> The Kubernetes Operator for Apache Spark comes with an optional mutating admission webhook for customizing Spark driver and executor pods based on the specification in SparkApplication objects, e.g., mounting user-specified ConfigMaps and volumes, setting pod affinity/anti-affinity, and adding tolerations.
By default, the Helm chart automatically sets up the webhook. The old Helm chart hardcoded the name of the webhook service:
helm/charts@6b0bbba#diff-2ecb319330580dabceaf98668ee5b316L125
This hardcoded value also appears in the webhook-init job, which calls /usr/bin/gencerts.sh; the relevant step looks like this:
```shell
# Note the CN is the DNS name of the service of the webhook.
openssl req -new -key ${TMP_DIR}/server-key.pem -out ${TMP_DIR}/server.csr -subj "/CN=spark-webhook.${NAMESPACE}.svc" -config ${TMP_DIR}/server.conf
```
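To see why the hardcoded name matters, here is a self-contained sketch that reproduces the CSR step with a throwaway key and confirms the CN it embeds. The namespace is an assumption for illustration, and the `-config server.conf` argument from gencerts.sh is omitted since that file is not shown here. The apiserver validates the webhook's TLS certificate against exactly this `<service>.<namespace>.svc` DNS name, so any mismatch breaks the handshake.

```shell
# Sketch, assuming openssl is available on PATH.
TMP_DIR=$(mktemp -d)
NAMESPACE=spark-operator                     # assumed namespace for illustration
EXPECTED_CN="spark-webhook.${NAMESPACE}.svc" # <service-name>.<namespace>.svc

# Generate a throwaway key and a CSR with the service DNS name as CN,
# mirroring the gencerts.sh step quoted above.
openssl genrsa -out "${TMP_DIR}/server-key.pem" 2048 2>/dev/null
openssl req -new -key "${TMP_DIR}/server-key.pem" \
  -out "${TMP_DIR}/server.csr" -subj "/CN=${EXPECTED_CN}"

# Read the CN back out of the CSR; handles both "CN = x" and "/CN=x"
# subject formats emitted by different OpenSSL versions.
ACTUAL_CN=$(openssl req -in "${TMP_DIR}/server.csr" -noout -subject \
  | sed -n 's|.*CN *= *||p')
echo "CSR CN: ${ACTUAL_CN}"

rm -rf "${TMP_DIR}"
```

If the chart renames the Service but the certs are still generated with this CN, the name baked into the cert and the name the apiserver dials no longer agree.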
Updates to the Helm chart break apiserver calls to the webhook service:
helm/charts@6b0bbba#diff-63733843ff7febc22d286c29b04fe34eR5
because the templated service name was made dynamic:
- If the MutatingWebhookConfiguration does not reference the actual webhook Service, the apiserver fails to make calls to the webhook.
- If the generated certs do not match the DNS name of the webhook Service, the TLS handshake fails.
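The coupling the two points above describe is visible in the shape of the webhook registration itself; a sketch of the relevant part (webhook name, namespace, and API version are illustrative, not taken from the chart):

```yaml
# Sketch only: where the service name appears in the webhook registration.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: spark-webhook-config               # illustrative name
webhooks:
  - name: webhook.sparkoperator.k8s.io     # illustrative webhook name
    clientConfig:
      service:
        name: spark-webhook                # must match the Service the chart creates
        namespace: spark-operator          # assumed namespace
        path: /webhook
      caBundle: <base64-encoded CA that signed the serving cert>
```

The apiserver dials `<name>.<namespace>.svc` from `clientConfig.service` and verifies the serving cert against that DNS name, so the Service name, this registration, and the cert CN must all agree.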
Quick Fix
Revert to the hardcoded `spark-webhook` webhook service name and the corresponding generated TLS cert CN `spark-webhook.${NAMESPACE}.svc`.