
HDFS-16748. DFSClient should uniquely identify writing files by namespace id and iNodeId via RBF #4813

Merged: 2 commits merged into apache:trunk on Sep 5, 2022

Conversation

@ZanderXu (Contributor) commented on Aug 27, 2022

Description of PR

DFSClient should uniquely identify the files it is writing by namespaceId and iNodeId, because one DFSClient may be writing files that belong to different namespaces at the same time via RBF. If DFSClient identifies the files being written only by fileId, it can lose some of them, because files from different namespaces may have the same iNodeId.

The related code is as follows:

public void putFileBeingWritten(final long inodeId, final DFSOutputStream out) {
  synchronized (filesBeingWritten) {
    filesBeingWritten.put(inodeId, out);
    // Update the last lease renewal time only when there were no
    // writes. Once there is one write stream open, the lease renewer
    // thread keeps it updated well within anyone's expiration time.
    if (lastLeaseRenewal == 0) {
      updateLastLeaseRenewal();
    }
  }
}
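
For illustration, here is a minimal sketch of the idea behind the fix: key the map of files being written by namespace plus inode id rather than by inode id alone, so equal inode ids coming from different namespaces no longer collide. The class names below, and the String placeholder standing in for DFSOutputStream, are hypothetical and not the actual DFSClient types:

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical composite key combining namespace id and inode id.
final class NamespaceInodeKey {
  private final String namespace;
  private final long inodeId;

  NamespaceInodeKey(String namespace, long inodeId) {
    this.namespace = namespace;
    this.inodeId = inodeId;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof NamespaceInodeKey)) {
      return false;
    }
    NamespaceInodeKey other = (NamespaceInodeKey) o;
    return inodeId == other.inodeId
        && Objects.equals(namespace, other.namespace);
  }

  @Override
  public int hashCode() {
    return Objects.hash(namespace, inodeId);
  }
}

class FilesBeingWrittenSketch {
  // String stands in for DFSOutputStream in this sketch.
  private final Map<NamespaceInodeKey, String> filesBeingWritten = new HashMap<>();

  void putFileBeingWritten(String namespace, long inodeId, String out) {
    synchronized (filesBeingWritten) {
      // ns0:1000 and ns1:1000 now map to different entries.
      filesBeingWritten.put(new NamespaceInodeKey(namespace, inodeId), out);
    }
  }
}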

@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 15s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 4 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 39s Maven dependency ordering for branch
+1 💚 mvninstall 28m 29s trunk passed
+1 💚 compile 7m 1s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 6m 29s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 33s trunk passed
+1 💚 mvnsite 3m 37s trunk passed
+1 💚 javadoc 2m 54s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 29s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 54s trunk passed
+1 💚 shadedclient 23m 57s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 25s Maven dependency ordering for patch
-1 ❌ mvninstall 0m 47s /patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt hadoop-hdfs-client in the patch failed.
-1 ❌ mvninstall 1m 16s /patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
-1 ❌ mvninstall 0m 34s /patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt hadoop-hdfs-rbf in the patch failed.
-1 ❌ compile 0m 55s /patch-compile-hadoop-hdfs-project-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt hadoop-hdfs-project in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.
-1 ❌ javac 0m 55s /patch-compile-hadoop-hdfs-project-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt hadoop-hdfs-project in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.
-1 ❌ compile 0m 47s /patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt hadoop-hdfs-project in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.
-1 ❌ javac 0m 47s /patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt hadoop-hdfs-project in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 1m 14s /results-checkstyle-hadoop-hdfs-project.txt hadoop-hdfs-project: The patch generated 1 new + 79 unchanged - 0 fixed = 80 total (was 79)
-1 ❌ mvnsite 0m 50s /patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt hadoop-hdfs-client in the patch failed.
-1 ❌ mvnsite 1m 19s /patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
-1 ❌ mvnsite 0m 35s /patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt hadoop-hdfs-rbf in the patch failed.
+1 💚 javadoc 2m 13s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 2m 56s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
-1 ❌ spotbugs 0m 47s /patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt hadoop-hdfs-client in the patch failed.
-1 ❌ spotbugs 1m 19s /patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
-1 ❌ spotbugs 0m 37s /patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt hadoop-hdfs-rbf in the patch failed.
-1 ❌ shadedclient 19m 27s patch has errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 0m 49s /patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt hadoop-hdfs-client in the patch failed.
-1 ❌ unit 1m 19s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
-1 ❌ unit 0m 39s /patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt hadoop-hdfs-rbf in the patch failed.
+1 💚 asflicense 0m 35s The patch does not generate ASF License warnings.
129m 40s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/1/artifact/out/Dockerfile
GITHUB PR #4813
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux befc672a2fba 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / ee74d74c327168acb3426f8074e31f2e381c6039
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/1/testReport/
Max. process+thread count 606 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/1/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus commented:

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 6s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 4 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 43s Maven dependency ordering for branch
+1 💚 mvninstall 28m 20s trunk passed
+1 💚 compile 6m 53s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 6m 28s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 30s trunk passed
+1 💚 mvnsite 3m 32s trunk passed
+1 💚 javadoc 2m 55s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 36s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 57s trunk passed
+1 💚 shadedclient 24m 18s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 27s Maven dependency ordering for patch
+1 💚 mvninstall 2m 52s the patch passed
+1 💚 compile 6m 42s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javac 6m 42s the patch passed
+1 💚 compile 6m 16s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 6m 16s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 1m 18s the patch passed
+1 💚 mvnsite 3m 6s the patch passed
+1 💚 javadoc 2m 16s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 7s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 50s the patch passed
+1 💚 shadedclient 24m 15s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 26s hadoop-hdfs-client in the patch passed.
+1 💚 unit 382m 33s hadoop-hdfs in the patch passed.
+1 💚 unit 34m 51s hadoop-hdfs-rbf in the patch passed.
+1 💚 asflicense 1m 0s The patch does not generate ASF License warnings.
582m 51s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/2/artifact/out/Dockerfile
GITHUB PR #4813
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux abfe71c2786b 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 83f306a
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/2/testReport/
Max. process+thread count 2414 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/2/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

On Aug 29, 2022, ZanderXu changed the title from "HDFS-16748. DFSClient should diff the writing files with namespace id and iNodeId" to "HDFS-16748. DFSClient should uniquely identify writing files by namespace id and iNodeId".
On Aug 30, 2022, ZanderXu changed the title from "HDFS-16748. DFSClient should uniquely identify writing files by namespace id and iNodeId" to "HDFS-16748. DFSClient should uniquely identify writing files by namespace id and iNodeId via RBF".
@ZanderXu (Contributor, Author) commented:

@tomscut @ayushtkn @Hexiaoqiao Hi, masters, can you help me review this patch?

@ZanderXu (Contributor, Author) commented:

@goiri Master, could you help review this patch? When using RBF, this logic has a bug and needs to be fixed.

@Hexiaoqiao (Contributor) commented:

@ZanderXu Thanks for involving me here. IIUC, this improvement will help the Router forward renewLease to only one (or certain) NameNode, right? If so, it makes sense to me. Only one nit: the title 'by namespace id and iNodeId via RBF' does not seem to fully match the changes. I would like to hear some other folks' comments.

@ZanderXu (Contributor, Author) commented on Sep 1, 2022

@Hexiaoqiao Thanks for your review. This improvement is not meant to help the Router forward renewLease to only one or certain NameNodes.

One DFSClient may be writing files from different namespaces that have the same file id via RBF. The current filesBeingWritten map uses only the iNodeId to identify a file being written, so it cannot handle the case where two files being written in different namespaces share the same iNodeId.

For example:

  • RBF contains two mount points, /ns0 -> ns0 and /ns1 -> ns1.
  • One DFSClient creates a file /ns0/file0 via RBF; RBF forwards this create RPC to ns0, and suppose the file gets iNodeId 1000.
  • The DFSClient puts this output stream into filesBeingWritten with key iNodeId 1000.
  • Then the same DFSClient creates a new file /ns1/file1 via RBF; RBF forwards this create RPC to ns1, and the response of this RPC may also carry iNodeId 1000, because the two files are from different namespaces.
  • The DFSClient then puts the new output stream into filesBeingWritten with iNodeId 1000 again, overwriting the previous output stream for /ns0/file0 (iNodeId 1000 from ns0).

As a result, the DFSClient may fail to renew the lease of /ns0/file0; the sketch below shows the collision.
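
A tiny self-contained demo of the collision described in this example; strings stand in for the real output streams, and this is not DFSClient code:

import java.util.HashMap;
import java.util.Map;

public class InodeIdCollisionDemo {
  public static void main(String[] args) {
    // Keyed only by iNodeId, like the current filesBeingWritten map.
    Map<Long, String> filesBeingWritten = new HashMap<>();

    // Create /ns0/file0 via RBF; forwarded to ns0, iNodeId 1000.
    filesBeingWritten.put(1000L, "stream for /ns0/file0");

    // Create /ns1/file1 via RBF; forwarded to ns1, also iNodeId 1000.
    filesBeingWritten.put(1000L, "stream for /ns1/file1");

    // Only one entry remains: the /ns0/file0 stream has been dropped,
    // so its lease can no longer be renewed and it cannot be closed
    // through filesBeingWritten.
    System.out.println(filesBeingWritten);  // {1000=stream for /ns1/file1}
  }
}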

@Hexiaoqiao (Contributor) commented:

OK, got it.
So is this related to HDFS-16283, which added the namespaces parameter for renewLease? IIUC, before HDFS-16283, even if one inode was overwritten by another one with the same id, renewLease was still sent to all NameNodes, so the client could not lose the lease, right? Please correct me if something is wrong.

@ayushtkn (Member) commented on Sep 1, 2022

even if one inode was overwritten by another one with the same id, renewLease was still sent to all NameNodes, so the client could not lose the lease, right?

Just thinking: if the second file, the one on top that overwrote the previous entry, gets closed, then there would be no entry at all, and in that case renewLease would not be triggered for the entry that got overwritten, even without the previous patch? Am I missing something...

Comment on lines 201 to 205
if (this.namespace == null) {
  this.renewLeaseKey = "DEFAULT" + "_" + this.fileId;
} else {
  this.renewLeaseKey = this.namespace + "_" + this.fileId;
}
Reviewer (Member) commented:

The "DEFAULT" needs to be configurable, someone can have a namespace with name DEFAULT as well

@ZanderXu (Contributor, Author) replied:

Thanks, Sir. I have updated it.
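
As a rough sketch of the reviewer's point, the fallback prefix could be read from configuration instead of being hard-coded to "DEFAULT". The property name below is only an assumption for illustration; the actual patch defines its own key:

import org.apache.hadoop.conf.Configuration;

// Hypothetical helper, not the actual patch: builds the renew-lease key
// with a configurable fallback for streams that carry no namespace.
final class RenewLeaseKeys {
  // Assumed property name, for illustration only.
  static final String UNIQ_DEFAULT_KEY_PROP = "dfs.client.output.stream.uniq.default.key";
  static final String UNIQ_DEFAULT_KEY = "DEFAULT";

  static String renewLeaseKey(Configuration conf, String namespace, long fileId) {
    String prefix = (namespace == null)
        ? conf.get(UNIQ_DEFAULT_KEY_PROP, UNIQ_DEFAULT_KEY)
        : namespace;
    return prefix + "_" + fileId;
  }
}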

@ZanderXu (Contributor, Author) commented on Sep 1, 2022

Just thinking: if the second file, the one on top that overwrote the previous entry, gets closed, then there would be no entry at all, and in that case renewLease would not be triggered for the entry that got overwritten, even without the previous patch?

@ayushtkn Nice example. @Hexiaoqiao Maybe I misled you. An incorrect filesBeingWritten map can cause a lot of problems, not just with renewLease; for example, the DFSClient can lose some of the files being written when closing all of them.

@Hexiaoqiao (Contributor) commented:

if the second file, the one on top that overwrote the previous entry, gets closed, then there would be no entry at all, and in that case renewLease would not be triggered for the entry that got overwritten, even without the previous patch

Nice case, thanks for correcting me. I want to confirm that this case can only occur in an RBF deployment, right?

@ZanderXu (Contributor, Author) commented on Sep 1, 2022

I want to confirm that this case can only occur in an RBF deployment, right?

Yes.

@Hexiaoqiao (Contributor) commented:

Great, +1 from my side. Let's wait to see what Yetus says.

@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 23s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 4 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 15m 31s Maven dependency ordering for branch
+1 💚 mvninstall 28m 38s trunk passed
+1 💚 compile 6m 53s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 6m 25s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 31s trunk passed
+1 💚 mvnsite 3m 34s trunk passed
+1 💚 javadoc 2m 54s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 37s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 59s trunk passed
+1 💚 shadedclient 24m 19s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 26s Maven dependency ordering for patch
+1 💚 mvninstall 2m 54s the patch passed
+1 💚 compile 6m 45s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javac 6m 45s the patch passed
+1 💚 compile 6m 20s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 6m 20s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 1m 18s the patch passed
+1 💚 mvnsite 3m 3s the patch passed
+1 💚 javadoc 2m 20s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 3s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 8m 3s the patch passed
+1 💚 shadedclient 24m 55s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 38s hadoop-hdfs-client in the patch passed.
-1 ❌ unit 405m 16s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
+1 💚 unit 35m 11s hadoop-hdfs-rbf in the patch passed.
+1 💚 asflicense 0m 59s The patch does not generate ASF License warnings.
608m 18s
Reason Tests
Failed junit tests hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/3/artifact/out/Dockerfile
GITHUB PR #4813
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname Linux 7a485005fbbe 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / cc89a52
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/3/testReport/
Max. process+thread count 2308 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4813/3/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@ZanderXu (Contributor, Author) commented on Sep 1, 2022

The failed UTs hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes and hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes work well locally.

@ayushtkn (Member) left a comment:

LGTM

@ayushtkn merged commit be4c638 into apache:trunk on Sep 5, 2022
@ZanderXu (Contributor, Author) commented on Sep 5, 2022

@ayushtkn @Hexiaoqiao Masters, thank you very much for helping me review this patch.

HarshitGupta11 pushed a commit to HarshitGupta11/hadoop that referenced this pull request Nov 28, 2022
…namespace id and iNodeId via RBF (apache#4813). Contributed by ZanderXu.

Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>