
[SPARK-1979] Added Error Handling if user passes application params with... #930


Closed
wants to merge 1 commit into from

Conversation

ncounterspecialist
Contributor

Added error message to user when used --arg for passing application parameters.

@AmplabJenkins

Can one of the admins verify this patch?

@pwendell
Contributor

pwendell commented Jun 4, 2014

What if the user application legitimately has a flag called --arg? Won't this cause it to break?

Also (just wondering) - why do you think users would be setting --arg... just from previous submission scripts in Spark?

@pwendell
Contributor

pwendell commented Jun 4, 2014

I think right now if the user uses --arg before specifying the app jar, it will say "unrecognized option --arg". This might be the best behavior we can achieve.

@ncounterspecialist
Contributor Author

  1. What if the user application legitimately has a flag called --arg? Won't this cause it to break?
     There are already reserved keywords like --name, --master, etc. that user apps cannot use, because Spark treats them as special words. So asking the user to pick a different name rather than --arg would be a good idea. Also, if Spark starts using --arg for some special meaning, user applications passing --arg as an application parameter will break.
  2. Also (just wondering) - why do you think users would be setting --arg... just from previous submission scripts in Spark?
  2. Also (just wondering) - why do you think users would be setting --arg... just from previous submissions scripts in Spark?

I have a scheduling framework written for firing jobs to Spark. After upgrading to Spark 1.0.0 my application broke, and it took me quite a lot of time to find out why it was not taking parameters. Anyone upgrading from a previous version to 1.0.0 will face the same problem and will have to debug to find out what's failing. So an error message would be very useful.

@pwendell
Contributor

pwendell commented Jun 4, 2014

@pankajarora12 no, those aren't reserved keywords. This was specifically designed not to have reserved keywords. For instance, this is valid:

./bin/spark-submit --name "My app" --master "local[4]" myJar.jar --name "This goes to the app"

So just to be clear you were upgrading from master --> 1.0?
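The rule @pwendell describes - spark-submit consumes its own options only up to the first positional argument (the app jar), and everything after that is forwarded verbatim to the application - can be sketched roughly as follows. This is a hypothetical Python illustration, not Spark's actual Scala parser; `split_submit_args` is an invented name, and it assumes every option takes exactly one value.

```python
def split_submit_args(argv):
    """Split a spark-submit-style argv into (app_jar, submit_opts, app_args).

    Hypothetical sketch: options are consumed only until the first
    positional token (the app jar); the rest goes to the application.
    Assumes every "--option" is followed by one value.
    """
    submit_opts = []
    i = 0
    while i < len(argv):
        tok = argv[i]
        if tok.startswith("--"):
            # spark-submit's own option, e.g. --name "My app"
            submit_opts.append((tok, argv[i + 1]))
            i += 2
        else:
            # First non-option token is the app jar; everything after it
            # is passed through to the user application untouched.
            return tok, submit_opts, argv[i + 1:]
    return None, submit_opts, []
```

Under this rule, the second `--name` in pwendell's example command lands in the application's arguments rather than overriding the first one.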

@vanzin
Contributor

vanzin commented Jun 4, 2014

What if the user application legitimately has a flag called --arg?

Perhaps the command line parser should add a case for "--", meaning "stop parsing arguments", at the same time making it more compatible with what people are used to. :-)
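The getopt-style `--` convention vanzin suggests could look like this. Again a hypothetical Python sketch with an invented name (`parse_with_separator`), not Spark's implementation; it assumes each option takes one value.

```python
def parse_with_separator(argv):
    """Parse "--option value" pairs until a bare "--" or a positional token.

    A bare "--" means "stop parsing options": everything after it goes to
    the application, even if it looks like an option (hypothetical sketch).
    """
    opts = {}
    i = 0
    while i < len(argv):
        tok = argv[i]
        if tok == "--":
            return opts, argv[i + 1:]   # explicit end-of-options marker
        if tok.startswith("--"):
            opts[tok] = argv[i + 1]     # option with a value
            i += 2
        else:
            return opts, argv[i:]       # first positional token and the rest
    return opts, []
```

This matches what users of most Unix tools already expect, which is the compatibility point being made here.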

@ncounterspecialist
Contributor Author

@pwendell
/bin/spark-submit --name "My app" --master "local[4]" myJar.jar --name "This goes to the app"

If I pass --master twice, the earlier --master argument gets overridden by the one following it. That means you cannot use --master as an application argument.

The command I fired was

/opt/spark-1.0.0/bin/spark-submit --master yarn /opt/tms/java/app-1.0-SNAPSHOT-jar-with-dependencies.jar --master yarn1 --deploy-mode cluster --verbose --class com.guavus.scala.MainClass  yarn-cluster  /data/output  /data/collector/1/output/http/2014/05/11/22/50/72/  VS-NN --driver-memory 9099M --executor-memory 9099M --num-executors 1 --executor-cores 1

and the master field took the value yarn1; also, --master yarn1 did not get passed to our app.

BTW I upgraded from tag v1.0.0-rc2 to v1.0.0.
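The behavior reported here is what you get from a parser that keeps consuming "--option value" pairs even after the app jar: a repeated option silently overwrites the earlier value, which is why the master field ended up as yarn1. A minimal last-value-wins illustration (hypothetical Python sketch, not Spark's actual parser; assumes each option takes one value):

```python
def collect_opts(argv):
    """Collect "--option value" pairs from anywhere on the command line.

    Because options are stored in a dict, a later occurrence of the same
    option silently overwrites an earlier one (last value wins), and
    positional tokens like the app jar do not stop parsing.
    """
    opts = {}
    i = 0
    while i < len(argv) - 1:
        if argv[i].startswith("--"):
            opts[argv[i]] = argv[i + 1]  # later occurrence overwrites earlier
            i += 2
        else:
            i += 1  # skip positional tokens (e.g. the app jar), keep parsing
    return opts
```

With this behavior, `--master yarn ... app.jar ... --master yarn1` resolves to yarn1, and the second `--master` never reaches the application, matching the report above.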

@andrewor14
Contributor

Hi @pankajarora12, this issue is outdated since #1801 went in. Would you mind closing this?

@SparkQA

SparkQA commented Sep 5, 2014

Can one of the admins verify this patch?

@asfgit asfgit closed this in eae81b0 Sep 12, 2014
chuckchen pushed a commit to chuckchen/spark that referenced this pull request Jun 25, 2015
Add mapPartitionsWithIndex
(cherry picked from commit c514cd1)

Signed-off-by: Reynold Xin <rxin@apache.org>
wangyum pushed a commit that referenced this pull request May 26, 2023
* [CARMEL-5955] Drop NOT NULL constraint when alter column type

* refresh
6 participants