Fix so dfs.client.failover.max.attempts is respected correctly #699
base: trunk
Conversation
Is there a jira filed for this?
Please follow this doc to file a jira and contribute: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
Author: bharathkk <codin.martial@gmail.com>
Reviewers: Prateek Maheshwari <pmaheshwari@apache.org>, Shanthoosh Venkatraman <svenkatr@linkedin.com>
Closes apache#699 from bharathkk/bug-fix
Without this change, the behavior is off by one: you always have to set dfs.client.failover.max.attempts one higher than intended to get the desired behavior. For example, if you want the client to attempt failover exactly once, you have to set dfs.client.failover.max.attempts=2.
Without this change, if you set dfs.client.failover.max.attempts=1, intending the client to attempt failover exactly once, it instead does not attempt failover at all, and you see this log message:
Note that the non-failover retry handling just below this change is already correct.