You need to create an SSL certificate for the HttpFS server. As the `httpfs` Unix user, use the Java `keytool` command to create the SSL certificate:
The answer to "What is your first and last name?" (i.e. "CN") must be the hostname of the machine where the HttpFS Server will be running.
Start HttpFS. It should work over HTTPS.
Using the Hadoop `FileSystem` API or the Hadoop FS shell, use the `swebhdfs://` scheme. Make sure the JVM is picking up the truststore containing the public key of the SSL certificate if using a self-signed certificate.
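The truststore hand-off can be sketched like this; the truststore path, password, hostname, and port are all illustrative assumptions:

```shell
# Hedged sketch: point the client JVM at a truststore that contains the
# self-signed certificate's public key, then use the swebhdfs:// scheme.
# Truststore path, password, hostname, and port are illustrative.
export HADOOP_CLIENT_OPTS="-Djavax.net.ssl.trustStore=/etc/hadoop/conf/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
# On a live cluster, the FS shell would then be run as:
#   hadoop fs -ls swebhdfs://httpfs.example.com:14000/
```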
For more information about the client-side settings, see [SSL Configurations for SWebHDFS](../hadoop-project-dist/hadoop-hdfs/WebHDFS.html#SSL_Configurations_for_SWebHDFS).
NOTE: Some old SSL clients may use weak ciphers that are not supported by the HttpFS server. It is recommended to upgrade the SSL client.
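On the server side, weak ciphers can also be excluded explicitly via `ssl.server.exclude.cipher.list` in `ssl-server.xml`. A sketch, where the cipher names below are illustrative examples rather than a recommended list:

```xml
<!-- ssl-server.xml: exclude weak ciphers; the list below is illustrative -->
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA</value>
</property>
```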
`hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md`
In the REST API, the prefix "`/webhdfs/v1`" is inserted in the path and a query is appended at the end.
    swebhdfs://<HOST>:<HTTP_PORT>/<PATH>
See also: [SSL Configurations for SWebHDFS](#SSL_Configurations_for_SWebHDFS)
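Under the hood, an `swebhdfs://` URI maps to HTTPS requests against the WebHDFS REST API. A sketch, where the host, port, and path are illustrative:

```shell
# Build the HTTPS REST URL that swebhdfs://httpfs.example.com:14000/tmp
# resolves to; GETFILESTATUS is a standard WebHDFS operation.
HOST=httpfs.example.com
HTTP_PORT=14000
URL="https://${HOST}:${HTTP_PORT}/webhdfs/v1/tmp?op=GETFILESTATUS"
# On a live cluster: curl --cacert /path/to/ca.pem "$URL"
echo "$URL"
```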
### HDFS Configuration Options
The following properties control OAuth2 authentication.
|`dfs.webhdfs.oauth2.refresh.token.expires.ms.since.epoch`| (required if using ConfRefreshTokenBasedAccessTokenProvider) Access token expiration measured in milliseconds since Jan 1, 1970. *Note this is a different value than the one provided by OAuth providers; it has been munged, as described in the interface, to be suitable for a client application* |
|`dfs.webhdfs.oauth2.credential`| (required if using ConfCredentialBasedAccessTokenProvider). Credential used to obtain initial and subsequent access tokens. |
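Taken together, a credential-based client configuration might look like this sketch. All values are illustrative placeholders, and the property names beyond the two rows above (`dfs.webhdfs.oauth2.enabled`, `...access.token.provider`, `...client.id`, `...refresh.url`) are drawn from the same OAuth2 property family; verify them against the full table:

```xml
<!-- hdfs-site.xml sketch: OAuth2 for WebHDFS with the credential-based
     provider. All values are illustrative placeholders. -->
<property>
  <name>dfs.webhdfs.oauth2.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.webhdfs.oauth2.access.token.provider</name>
  <value>org.apache.hadoop.hdfs.web.oauth2.ConfCredentialBasedAccessTokenProvider</value>
</property>
<property>
  <name>dfs.webhdfs.oauth2.client.id</name>
  <value>example-client-id</value>
</property>
<property>
  <name>dfs.webhdfs.oauth2.refresh.url</name>
  <value>https://auth.example.com/oauth2/token</value>
</property>
<property>
  <name>dfs.webhdfs.oauth2.credential</name>
  <value>example-credential</value>
</property>
```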
`hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm`
$H3 Copying Between Versions of HDFS
HftpFileSystem, as webhdfs is available for both read and write operations, DistCp can be run on both source and destination cluster.
Remote cluster is specified as `webhdfs://<namenode_hostname>:<http_port>`.
When copying between the same major versions of a Hadoop cluster (e.g. between 2.X and 2.X), use the hdfs protocol for better performance.
$H3 Secure Copy over the wire with distcp
Use the "`swebhdfs://`" scheme when webhdfs is secured with SSL. For more information, see [SSL Configurations for SWebHDFS](../hadoop-project-dist/hadoop-hdfs/WebHDFS.html#SSL_Configurations_for_SWebHDFS).
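A secure copy can be sketched like this; the hostnames, ports, and paths are illustrative assumptions:

```shell
# Hedged sketch of DistCp over SSL-secured webhdfs.
# Hostnames, the NameNode HTTPS port, and paths are illustrative.
SRC="swebhdfs://nn1.example.com:50470/source/dir"
DST="swebhdfs://nn2.example.com:50470/target/dir"
# On a live cluster: hadoop distcp "$SRC" "$DST"
echo "hadoop distcp $SRC $DST"
```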
$H3 MapReduce and other side-effects
As has been mentioned in the preceding, should a map fail to copy one of its