
Commit a42c0e3

Merge pull request apache#66 from mesosphere/spark-300-docs
[SPARK-300] fix docs for HDFS
2 parents: 5d086a4 + 8887197

File tree: 1 file changed (+15 −5)


docs/user-docs.md

Lines changed: 15 additions & 5 deletions
@@ -139,13 +139,23 @@ executors, for example with `spark.executor.memory`.
 
 ### HDFS
 
-By default, DC/OS Spark jobs are configured to read from DC/OS HDFS. To
-submit Spark jobs that read from a different HDFS cluster, customize
-`hdfs.config-url` to be a URL that serves `hdfs-site.xml` and
-`core-site.xml`. [Learn more][8].
+To configure Spark for a specific HDFS cluster, configure
+`hdfs.config-url` to be a URL that serves your `hdfs-site.xml` and
+`core-site.xml`. For example:
+
+    {
+      "hdfs": {
+        "config-url": "http://mydomain.com/hdfs-config"
+      }
+    }
+
+
+where `http://mydomain.com/hdfs-config/hdfs-site.xml` and
+`http://mydomain.com/hdfs-config/core-site.xml` are valid
+URLs. [Learn more][8].
 
 For DC/OS HDFS, these configuration files are served at
-`http://<hdfs.framework-name>.marathon.mesos:<port>/config/`, where
+`http://<hdfs.framework-name>.marathon.mesos:<port>/v1/connect`, where
 `<hdfs.framework-name>` is a configuration variable set in the HDFS
 package, and `<port>` is the port of its marathon app.
 
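For context on the change above, here is a minimal sketch of an options file that points `hdfs.config-url` at the configuration endpoint served by DC/OS HDFS; the file name `options.json` is an illustrative assumption, and `<hdfs.framework-name>` and `<port>` are the placeholders described in the docs, to be replaced with your cluster's actual values:

    {
      "hdfs": {
        "config-url": "http://<hdfs.framework-name>.marathon.mesos:<port>/v1/connect"
      }
    }

Assuming the standard DC/OS CLI and its `--options` flag, such a file would typically be supplied at install time, for example with `dcos package install spark --options=options.json`.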