executors, for example with `spark.executor.memory`.

### HDFS

To configure Spark for a specific HDFS cluster, set
`hdfs.config-url` to a URL that serves your `hdfs-site.xml` and
`core-site.xml`. For example:

    {
      "hdfs": {
        "config-url": "http://mydomain.com/hdfs-config"
      }
    }

where `http://mydomain.com/hdfs-config/hdfs-site.xml` and
`http://mydomain.com/hdfs-config/core-site.xml` are valid
URLs. [Learn more][8].

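Before pointing `hdfs.config-url` at a server, it can help to confirm that both files actually resolve. A minimal sketch using only the Python standard library (the helper name is hypothetical, not part of DC/OS):

```python
import urllib.request


def serves_hdfs_config(base_url: str) -> bool:
    """Return True only if base_url serves both Hadoop config files.

    Hypothetical helper: checks that <base_url>/hdfs-site.xml and
    <base_url>/core-site.xml both return HTTP 200.
    """
    for name in ("hdfs-site.xml", "core-site.xml"):
        try:
            with urllib.request.urlopen(f"{base_url}/{name}") as resp:
                if resp.status != 200:
                    return False
        except (OSError, ValueError):
            # URLError (a subclass of OSError) covers unreachable hosts
            # and missing files; ValueError covers malformed URLs.
            return False
    return True
```

For example, `serves_hdfs_config("http://mydomain.com/hdfs-config")` should return `True` before you submit that URL in your Spark options.
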
For DC/OS HDFS, these configuration files are served at
`http://<hdfs.framework-name>.marathon.mesos:<port>/v1/connect`, where
`<hdfs.framework-name>` is a configuration variable set in the HDFS
package, and `<port>` is the port of its marathon app.
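For DC/OS HDFS, then, the config URL can be assembled from those two values. A small illustrative helper (the function name is hypothetical):

```python
def dcos_hdfs_config_url(framework_name: str, port: int) -> str:
    # Hypothetical helper: assemble the DC/OS HDFS config endpoint
    # described above from the framework name and its marathon app port.
    return f"http://{framework_name}.marathon.mesos:{port}/v1/connect"
```

The resulting string is what you would place in the `hdfs.config-url` option shown earlier.
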