EC2 configurable workers #612
Conversation
Can one of the admins verify this patch?
Do you mind creating a JIRA for this on the Spark issue tracker? It would be good to call it something like: "Allow multiple instances per node with SPARK-EC2"
Thanks - this is a nice feature. I played with this locally and it worked.
I merged this. I also opened a new JIRA to allow launching multiple executors for a given app on the same node: https://issues.apache.org/jira/browse/SPARK-1706 That will provide a better general solution for users who want to do this.
Added option to configure number of worker instances and to set SPARK_MASTER_OPTS

Depends on: mesos/spark-ec2#46

Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>

Closes #612 from douglaz/ec2_configurable_workers and squashes the following commits:

d6c5d65 [Allan Douglas R. de Oliveira] Added master opts parameter
6c34671 [Allan Douglas R. de Oliveira] Use number of worker instances as string on template
ba528b9 [Allan Douglas R. de Oliveira] Added SPARK_WORKER_INSTANCES parameter

(cherry picked from commit 4669a84)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>
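As a rough sketch of what this enables on the cluster side: in standalone mode the worker count and master JVM options are read from spark-env.sh, so the spark-ec2 templates end up rendering something like the following (values here are illustrative, not the PR's defaults).

```sh
# conf/spark-env.sh (illustrative values, not the PR's defaults)

# Launch two worker processes per node instead of one.
export SPARK_WORKER_INSTANCES=2

# Extra JVM options passed to the standalone master process.
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4"
```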
Thanks.
The names are currently used when HadoopKerberosKeytabResolverStep tries to save the Kerberos delegation token into a Kubernetes secret. However, the current camelCase values cause an io.fabric8.kubernetes.client.KubernetesClientException stating the following: a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
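For illustration, the DNS-1123 check that fabric8 reports can be reproduced with a plain grep against the same regex; the two secret names below are made up.

```sh
# Kubernetes secret names must be DNS-1123 subdomains: lower case letters, digits, '-' and '.'.
DNS1123='^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'

echo "hadoopKerberosKeytab"   | grep -Eq "$DNS1123" && echo valid || echo invalid   # invalid (upper case)
echo "hadoop-kerberos-keytab" | grep -Eq "$DNS1123" && echo valid || echo invalid   # valid
```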
When updating the /etc/hosts file, root permission is required.
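A minimal sketch of doing that with elevated rights; the hostname and address are placeholders. Piping through sudo tee keeps the redirection itself privileged, which a plain `sudo echo ... >> /etc/hosts` would not.

```sh
# Append an entry to /etc/hosts; the write must run as root, so pipe through sudo tee.
echo "10.0.0.5  spark-master" | sudo tee -a /etc/hosts > /dev/null
```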
Ke 4.6.8.0 newten cherrypick
* KE-41667 CDP 7.1 support hive client 3.1 (apache#612)
* KE-41667 [FOLLOW UP] fix error (apache#614)
* AL-8127 fix snyk, upgrade tomcat-embed-core from 9.0.68 to 9.0.72 (apache#609)

Co-authored-by: jlf <longfei.jiang@kyligence.io>
Added option to configure number of worker instances and to set SPARK_MASTER_OPTS
Depends on: mesos/spark-ec2#46
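For context, a hedged example of how the new knobs would be passed at launch time, assuming the flags this PR adds to spark_ec2.py are named --worker-instances and --master-opts; the key pair, key file, and cluster name are placeholders.

```sh
# Launch an EC2 cluster with 4 slave nodes, 2 worker processes per node,
# and extra JVM options for the standalone master.
./spark-ec2 -k my-keypair -i ~/my-keypair.pem -s 4 \
  --worker-instances=2 \
  --master-opts="-Dspark.worker.timeout=180" \
  launch my-spark-cluster
```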