
[SPARK-6636] Use public DNS hostname everywhere in spark_ec2.py #5302


Closed
aasted wants to merge 1 commit into master from aasted/master

Conversation

@aasted aasted (Contributor) commented Mar 31, 2015

No description provided.

@AmplabJenkins

Can one of the admins verify this patch?

@JoshRosen (Contributor)

Good catch. This change should be safe since we already SSH to instances using the public_dns_name.

I'll merge this into master (1.4.0) and branch-1.3 (1.3.2) and will use the description from the JIRA issue as the commit message. Thanks!

asfgit pushed a commit that referenced this pull request Apr 7, 2015
The spark_ec2.py script uses public_dns_name everywhere in the script except for testing SSH availability, which is done using the public IP address of the instances. This breaks the script for users who are deploying the cluster with a private-network-only security group. The fix is to use public_dns_name in the remaining place.

Author: Matt Aasted <aasted@twitch.tv>

Closes #5302 from aasted/master and squashes the following commits:

60cf6ee [Matt Aasted] [SPARK-6636] Use public DNS hostname everywhere in spark_ec2.py

(cherry picked from commit 6f0d55d)
Signed-off-by: Josh Rosen <joshrosen@databricks.com>
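
For context, the change described in the commit message above amounts to the following. This is a minimal, self-contained sketch, not a verbatim excerpt from spark_ec2.py: the helper names is_ssh_available and is_cluster_ssh_available, the SSH options, and the default user are illustrative assumptions. What it shows is the SSH-availability probe switching from the instance's public IP address to its public DNS hostname.

```python
# Minimal sketch (not a verbatim excerpt from spark_ec2.py): helper names,
# SSH options, and the default user are illustrative assumptions.
import subprocess


def is_ssh_available(host, identity_file, user="root"):
    """Return True if a non-interactive SSH connection to host succeeds."""
    return subprocess.call(
        ["ssh",
         "-i", identity_file,
         "-o", "StrictHostKeyChecking=no",
         "-o", "BatchMode=yes",
         "-o", "ConnectTimeout=5",
         "{0}@{1}".format(user, host),
         "true"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ) == 0


def is_cluster_ssh_available(cluster_instances, identity_file):
    """Return True only when every instance in the cluster accepts SSH."""
    for instance in cluster_instances:
        # Before the fix the probe used the instance's public IP
        # (boto's instance.ip_address); using public_dns_name keeps it
        # consistent with the rest of the script.
        if not is_ssh_available(instance.public_dns_name, identity_file):
            return False
    return True
```

The reason this matters for private-network-only security groups is that, from inside EC2, AWS resolves an instance's public DNS hostname to its private IP address, so the hostname stays reachable even when the public IP is blocked.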
@asfgit asfgit closed this in 6f0d55d Apr 7, 2015