[SPARK-12345][Mesos] Properly filter out SPARK_HOME in the Mesos REST server #10359

Closed

Conversation

@dragos (Contributor) commented Dec 17, 2015

Fixes the problem with #10332; this one should fix cluster mode on Mesos.

@dragos changed the title from "[SPARK-12345][Mesos] Properly filter out SPARK_HOME, see SPARK-12345" to "[SPARK-12345][Mesos] Properly filter out SPARK_HOME in the Mesos REST server" on Dec 17, 2015
@dragos (Contributor, Author) commented Dec 17, 2015

ok to test

@@ -99,7 +99,7 @@ private[mesos] class MesosSubmitRequestServlet(
     // cause spark-submit script to look for files in SPARK_HOME instead.
     // We only need the ability to specify where to find spark-submit script
     // which user can user spark.executor.home or spark.home configurations.
-    val environmentVariables = request.environmentVariables.filter(!_.equals("SPARK_HOME"))
+    val environmentVariables = request.environmentVariables.filterKeys(!_.equals("SPARK_HOME"))
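For readers unfamiliar with the distinction, here is a minimal standalone sketch (editor's illustration with hypothetical values, not code from this PR) of why the original filter call never removed anything: on a Map, filter passes (key, value) pairs to the predicate, so no element ever equals the string "SPARK_HOME", whereas filterKeys tests only the keys.

    // Editor's sketch with hypothetical values, not code from this PR.
    // On a Map, `filter` hands (key, value) tuples to the predicate, so a tuple is
    // never equal to the string "SPARK_HOME" and nothing is dropped; `filterKeys`
    // tests the keys themselves and removes the entry as intended.
    object FilterKeysSketch {
      def main(args: Array[String]): Unit = {
        val env = Map("SPARK_HOME" -> "/opt/spark", "SPARK_LOCAL_IP" -> "10.0.0.1")

        val broken = env.filter(!_.equals("SPARK_HOME"))     // keeps both entries
        val fixed  = env.filterKeys(!_.equals("SPARK_HOME")) // drops SPARK_HOME

        println(broken) // Map(SPARK_HOME -> /opt/spark, SPARK_LOCAL_IP -> 10.0.0.1)
        println(fixed)  // Map(SPARK_LOCAL_IP -> 10.0.0.1)
      }
    }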
Member:

How about comparing with == instead of equals?

Contributor Author (dragos):

Yeah, that'd be OK too. I chose to leave the code as it was and make the minimal change (given that we are in the RC cycle).
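As a side note (editor's illustration, not part of the PR discussion): in Scala, == on references delegates to equals after a null check, so for String keys the two spellings behave identically; == is simply the more idiomatic, null-safe form.

    // Editor's sketch: in Scala, `a == b` calls `a.equals(b)` after checking `a`
    // for null, so for String keys both comparisons give the same result.
    object EqualsSketch {
      def main(args: Array[String]): Unit = {
        val key: String = "SPARK_HOME"
        println(key == "SPARK_HOME")      // true
        println(key.equals("SPARK_HOME")) // true

        val missing: String = null
        println(missing == "SPARK_HOME")  // false, and no NullPointerException
        // missing.equals("SPARK_HOME")   // would throw a NullPointerException
      }
    }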

Contributor:

This breaks ZK persistence due to https://issues.scala-lang.org/browse/SI-6654

This line throws a NotSerializableException: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala#L166

Offer id: 72f4d1ce-67f7-41b0-95a3-aa6fb208df32-O189, cpu: 3.0, mem: 12995.0
15/12/17 21:52:44 DEBUG ClientCnxn: Got ping response for sessionid: 0x151b1d1567e0002 after 0ms
15/12/17 21:52:44 DEBUG nio: created SCEP@2e746d70{l(/10.0.6.166:41456)<->r(/10.0.0.240:17386),s=0,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=0}-{AsyncHttpConnection@5dbcebe3,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-14,l=0,c=0},r=0}
15/12/17 21:52:44 DEBUG HttpParser: filled 1591/1591
15/12/17 21:52:44 DEBUG Server: REQUEST /v1/submissions/create on AsyncHttpConnection@5dbcebe3,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=2,l=2,c=1174},r=1
15/12/17 21:52:44 DEBUG ContextHandler: scope null||/v1/submissions/create @ o.s.j.s.ServletContextHandler{/,null}
15/12/17 21:52:44 DEBUG ContextHandler: context=||/v1/submissions/create @ o.s.j.s.ServletContextHandler{/,null}
15/12/17 21:52:44 DEBUG ServletHandler: servlet |/v1/submissions/create|null -> org.apache.spark.deploy.rest.mesos.MesosSubmitRequestServlet-368e091
15/12/17 21:52:44 DEBUG ServletHandler: chain=null
15/12/17 21:52:44 WARN ServletHandler: /v1/submissions/create
java.io.NotSerializableException: scala.collection.immutable.MapLike$$anon$1
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    at org.apache.spark.util.Utils$.serialize(Utils.scala:83)
    at org.apache.spark.scheduler.cluster.mesos.ZookeeperMesosClusterPersistenceEngine.persist(MesosClusterPersistenceEngine.scala:110)
    at org.apache.spark.scheduler.cluster.mesos.MesosClusterScheduler.submitDriver(MesosClusterScheduler.scala:166)
    at org.apache.spark.deploy.rest.mesos.MesosSubmitRequestServlet.handleSubmit(MesosRestServer.scala:132)
    at org.apache.spark.deploy.rest.SubmitRequestServlet.doPost(RestSubmissionServer.scala:258)
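For context, here is a minimal standalone sketch (editor's illustration of the failure mode, not the fix adopted in #10366): filterKeys returns a lazy, non-serializable view of the original map (SI-6654), so Java serialization of any object holding it, as the ZooKeeper persistence engine does, fails; materializing a concrete Map, for example by filtering on the key/value pairs or calling toMap, restores serializability.

    // Editor's sketch of the failure mode, not the fix adopted in #10366.
    // `filterKeys` returns a lazy, non-serializable view (SI-6654); forcing a
    // concrete Map makes the result serializable again.
    import java.io.{ByteArrayOutputStream, ObjectOutputStream}

    object SerializationSketch {
      def javaSerialize(o: AnyRef): Array[Byte] = {
        val bos = new ByteArrayOutputStream()
        val out = new ObjectOutputStream(bos)
        try { out.writeObject(o); bos.toByteArray } finally out.close()
      }

      def main(args: Array[String]): Unit = {
        val env = Map("SPARK_HOME" -> "/opt/spark", "FOO" -> "bar")

        val view = env.filterKeys(_ != "SPARK_HOME")
        // javaSerialize(view) // throws NotSerializableException: MapLike$$anon$1

        val concrete = env.filter { case (k, _) => k != "SPARK_HOME" }
        javaSerialize(concrete) // succeeds: a concrete immutable Map is Serializable
      }
    }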

Contributor Author (dragos):

Good catch, @mgummelt. This is a minefield... :(

Contributor:

FYI for those who are following, this is fixed in #10366

@dragos (Contributor, Author) commented Dec 17, 2015

@skyluc tested this on a Mesos cluster and cluster mode works.

@SparkQA commented Dec 17, 2015

Test build #47930 has finished for PR 10359 at commit c3cd332.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@sarutak (Member) commented Dec 17, 2015

Merging into master and branch-1.6. Thanks @dragos!

@asfgit closed this in 8184568 on Dec 17, 2015
@SparkQA commented Dec 17, 2015

Test build #47931 has finished for PR 10359 at commit c3cd332.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dragos (Contributor, Author) commented Dec 17, 2015

Thanks for the quick review!

asfgit pushed a commit that referenced this pull request Dec 17, 2015
… server

Fix problem with #10332, this one should fix Cluster mode on Mesos

Author: Iulian Dragos <jaguarul@gmail.com>

Closes #10359 from dragos/issue/fix-spark-12345-one-more-time.

(cherry picked from commit 8184568)
Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
@andrewor14 (Contributor) commented:

Note: I'm reverting this patch in master only, since #10329, the better alternative, is merged there.
This patch continues to exist in branch-1.6.
