HADOOP-17760. Delete hadoop.ssl.enabled and dfs.https.enable from docs and core-default.xml #3099

Merged 2 commits on Jun 17, 2021
core-default.xml

@@ -3096,14 +3096,6 @@
 </description>
 </property>

-<property>
-  <name>hadoop.ssl.enabled</name>
-  <value>false</value>
-  <description>
-    Deprecated. Use dfs.http.policy and yarn.http.policy instead.
-  </description>
-</property>
-
 <property>
 <name>hadoop.ssl.enabled.protocols</name>
 <value>TLSv1.2</value>
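The deleted property's own description names its replacements. A minimal sketch of the replacement configuration (the `HTTPS_ONLY` value is illustrative, not a default):

```xml
<!-- hdfs-site.xml: replaces the removed hadoop.ssl.enabled / dfs.https.enable -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

<!-- yarn-site.xml: the YARN-side equivalent -->
<property>
  <name>yarn.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
```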
@@ -269,9 +269,8 @@ The following settings allow configuring SSL access to the NameNode web UI (optional):

 | Parameter | Value | Notes |
 |:---|:---|:---|
-| `dfs.http.policy` | `HTTP_ONLY` or `HTTPS_ONLY` or `HTTP_AND_HTTPS` | `HTTPS_ONLY` turns off http access. This option takes precedence over the deprecated configuration dfs.https.enable and hadoop.ssl.enabled. If using SASL to authenticate data transfer protocol instead of running DataNode as root and using privileged ports, then this property must be set to `HTTPS_ONLY` to guarantee authentication of HTTP servers. (See `dfs.data.transfer.protection`.) |
+| `dfs.http.policy` | `HTTP_ONLY` or `HTTPS_ONLY` or `HTTP_AND_HTTPS` | `HTTPS_ONLY` turns off http access. If using SASL to authenticate data transfer protocol instead of running DataNode as root and using privileged ports, then this property must be set to `HTTPS_ONLY` to guarantee authentication of HTTP servers. (See `dfs.data.transfer.protection`.) |
 | `dfs.namenode.https-address` | `0.0.0.0:9871` | This parameter is used in non-HA mode and without federation. See [HDFS High Availability](../hadoop-hdfs/HDFSHighAvailabilityWithNFS.html#Deployment) and [HDFS Federation](../hadoop-hdfs/Federation.html#Federation_Configuration) for details. |
-| `dfs.https.enable` | `true` | This value is deprecated. `Use dfs.http.policy` |

 ### Secondary NameNode
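The table's note ties SASL-protected data transfer to `HTTPS_ONLY`. A hedged sketch of the combination it describes (values illustrative; `authentication` could also be `integrity` or `privacy`):

```xml
<!-- hdfs-site.xml: SASL protects the data transfer protocol, so the
     DataNode need not run as root on privileged ports -->
<property>
  <name>dfs.data.transfer.protection</name>
  <value>authentication</value>
</property>
<!-- required alongside SASL to guarantee authentication of HTTP servers -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>0.0.0.0:9871</value>
</property>
```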
@@ -238,8 +238,6 @@ public void initializeMemberVariables() {
 // - org.apache.hadoop.net.NetUtils
 xmlPropsToSkipCompare
     .add("hadoop.rpc.socket.factory.class.ClientProtocol");
-// - Where is this used?
-xmlPropsToSkipCompare.add("hadoop.ssl.enabled");

 // Keys with no corresponding variable
 // - org.apache.hadoop.io.compress.bzip2.Bzip2Factory
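The removed lines drop `hadoop.ssl.enabled` from the test's skip list, since the property no longer appears in the XML. The comparison pattern behind `xmlPropsToSkipCompare` can be sketched as a set difference; class and sample property names here are hypothetical, not Hadoop's actual test code:

```java
import java.util.HashSet;
import java.util.Set;

public class PropsCompareSketch {

    // XML entries with no matching declared constant and no skip entry
    // are the ones the test would flag as failures.
    static Set<String> unmatched(Set<String> xmlProps,
                                 Set<String> declaredProps,
                                 Set<String> xmlPropsToSkipCompare) {
        Set<String> left = new HashSet<>(xmlProps);
        left.removeAll(declaredProps);
        left.removeAll(xmlPropsToSkipCompare);
        return left;
    }

    public static void main(String[] args) {
        // Properties found in the default XML resource (sample data).
        Set<String> xmlProps = new HashSet<>();
        xmlProps.add("hadoop.ssl.enabled.protocols");
        xmlProps.add("hadoop.rpc.socket.factory.class.ClientProtocol");

        // Property names backed by constants in the configuration classes.
        Set<String> declared = new HashSet<>();
        declared.add("hadoop.ssl.enabled.protocols");

        // Known mismatches the test deliberately skips.
        Set<String> skip = new HashSet<>();
        skip.add("hadoop.rpc.socket.factory.class.ClientProtocol");

        System.out.println(unmatched(xmlProps, declared, skip)); // prints []
    }
}
```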
hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json (2 changes: 0 additions & 2 deletions)

@@ -4559,7 +4559,6 @@
 "hadoop.hdfs.configuration.version" : "1",
 "dfs.datanode.balance.bandwidthPerSec" : "1048576",
 "mapreduce.reduce.shuffle.connect.timeout" : "180000",
-"hadoop.ssl.enabled" : "false",
 "dfs.journalnode.rpc-address" : "0.0.0.0:8485",
 "yarn.nodemanager.aux-services" : "mapreduce.shuffle",
 "mapreduce.job.counters.max" : "120",
@@ -9626,7 +9625,6 @@
 "hadoop.hdfs.configuration.version" : "1",
 "dfs.datanode.balance.bandwidthPerSec" : "1048576",
 "mapreduce.reduce.shuffle.connect.timeout" : "180000",
-"hadoop.ssl.enabled" : "false",
 "dfs.journalnode.rpc-address" : "0.0.0.0:8485",
 "yarn.nodemanager.aux-services" : "mapreduce.shuffle",
 "mapreduce.job.counters.max" : "120",