@@ -54,6 +54,23 @@ To submit a Spark job inside the [DC/OS Overlay Network][16]:
Note that DC/OS Overlay support requires the [UCR][17], rather than
the default Docker Containerizer, so you must set `--conf spark.mesos.containerizer=mesos`.
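
For example, a sketch of a submission with the UCR enabled (`MySampleClass`
and the jar URL are placeholders, reused from the failover example below):

    dcos spark run --submit-args="--conf spark.mesos.containerizer=mesos --class MySampleClass http://external.website/mysparkapp.jar"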
+# Driver Failover Timeout
+
+The `--conf spark.mesos.driver.failoverTimeout` option specifies the amount of time
+(in seconds) that the master will wait for the driver to reconnect, after being
+temporarily disconnected, before it tears down the driver framework by killing
+all its executors. The default value is zero, meaning no timeout: if the
+driver disconnects, the master immediately tears down the framework.
+
+To submit a job with a nonzero failover timeout:
+
+    dcos spark run --submit-args="--conf spark.mesos.driver.failoverTimeout=60 --class MySampleClass http://external.website/mysparkapp.jar"
+
+**Note:** If you kill a job before it finishes, the framework will persist
+as an `inactive` framework in Mesos for a period equal to the failover timeout.
+You can manually tear down the framework before that period is over by hitting
+the [Mesos teardown endpoint][18].
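+
+For example, a sketch of a manual teardown (assuming the master is reachable
+at `leader.mesos:5050`; replace `<framework-id>` with the driver framework's
+ID as shown in the Mesos UI):
+
+    curl -X POST http://leader.mesos:5050/master/teardown -d 'frameworkId=<framework-id>'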
+
# Versioning
The DC/OS Apache Spark Docker image contains OpenJDK 8 and Python 2.7.6.
@@ -68,3 +85,4 @@ The default DC/OS Apache Spark distribution is compiled against Hadoop 2.6 libra
[15]: http://spark.apache.org/docs/latest/configuration.html#overriding-configuration-directory
[16]: https://dcos.io/docs/overview/design/overlay/
[17]: https://dcos.io/docs/1.9/deploying-services/containerizers/ucr/
+[18]: http://mesos.apache.org/documentation/latest/endpoints/master/teardown/