
HADOOP-16150. ChecksumFileSystem doesn't wrap concat() #525


Conversation

steveloughran
Contributor

@steveloughran steveloughran commented Feb 27, 2019

HADOOP-16150. ChecksumFileSystem doesn't wrap concat()

This intercepts concat() to throw an UnsupportedOperationException.
Without this, the concat() call passes straight down to the wrapped
FS, so if the underlying FS does support concat(), concatenated files don't have checksums.

It also disables the test TestLocalFSContractMultipartUploader, as the service-loader mechanism used to create an MPU uploader needs to be replaced by an API call in the filesystems, as proposed by HDFS-13934.

Contributed by Steve Loughran.

Change-Id: I85fc1fc9445ca0b7d325495d3bc55fe9f5e5ce52
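The fix described above can be sketched as a wrapper filesystem that refuses the operation instead of silently delegating it. The classes below are simplified stand-ins for illustration only (the real Hadoop `ChecksumFileSystem`/`FilterFileSystem` APIs are far larger); only the intercept-and-throw pattern matches the patch:

```java
import java.util.Arrays;

// Simplified stand-in for a raw filesystem that natively supports concat().
class RawFs {
    public void concat(String target, String[] sources) {
        System.out.println("concatenated " + Arrays.toString(sources)
                + " into " + target);
    }
}

// Simplified stand-in for ChecksumFileSystem: wraps an inner FS and
// overrides concat() so the call no longer falls through to it.
class ChecksumFs extends RawFs {
    private final RawFs inner;

    ChecksumFs(RawFs inner) {
        this.inner = inner;
    }

    @Override
    public void concat(String target, String[] sources) {
        // Without this override, the call would pass straight to the inner FS
        // and the concatenated file would end up without a checksum file.
        throw new UnsupportedOperationException(
                "Concat is not supported by ChecksumFs");
    }
}

public class ConcatDemo {
    public static void main(String[] args) {
        ChecksumFs fs = new ChecksumFs(new RawFs());
        try {
            fs.concat("/out", new String[] {"/a", "/b"});
            System.out.println("unexpected: concat succeeded");
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing loudly here is the safer contract: a caller that needs concat() can still use the raw filesystem directly, but cannot corrupt the checksum invariant by accident.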

@steveloughran steveloughran changed the title HADOOP-16150. checksumFS doesn't wrap concat() HADOOP-16150. ChecksumFileSystem doesn't wrap concat() Feb 27, 2019
@hadoop-yetus
🎊 +1 overall

| Vote | Subsystem | Runtime | Comment |
|:----:|:----------|--------:|:--------|
| 0 | reexec | 25 | Docker mode activated. |
| | _ Prechecks _ | | |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
| | _ trunk Compile Tests _ | | |
| +1 | mvninstall | 1148 | trunk passed |
| +1 | compile | 950 | trunk passed |
| +1 | checkstyle | 58 | trunk passed |
| +1 | mvnsite | 79 | trunk passed |
| +1 | shadedclient | 812 | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 105 | trunk passed |
| +1 | javadoc | 65 | trunk passed |
| | _ Patch Compile Tests _ | | |
| +1 | mvninstall | 43 | the patch passed |
| +1 | compile | 888 | the patch passed |
| +1 | javac | 888 | the patch passed |
| +1 | checkstyle | 58 | the patch passed |
| +1 | mvnsite | 77 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 667 | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 104 | the patch passed |
| +1 | javadoc | 66 | the patch passed |
| | _ Other Tests _ | | |
| +1 | unit | 526 | hadoop-common in the patch passed. |
| +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
| | | 5770 | |

| Subsystem | Report/Notes |
|:----------|:-------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-525/1/artifact/out/Dockerfile |
| GITHUB PR | #525 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 98d54feea8c0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / ea3cdc6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-525/1/testReport/ |
| Max. process+thread count | 1357 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-525/1/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.

@eyanghwx

eyanghwx commented Mar 5, 2019

@steveloughran I committed this to trunk using cherry-pick on gitbox. I don't have access to my Apache-linked GitHub account from work, which makes the GitHub workflow a pain to use. Please close this pull request. Thanks.
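The "cherry-pick on gitbox" workflow mentioned above can be demonstrated locally: fetch the contributor's branch, cherry-pick its commit onto trunk, and push to the gitbox remote. The sketch below simulates the whole flow in a throwaway repository (branch names and the final push target are placeholders, not the real ASF setup):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b trunk repo && cd repo
git config user.email committer@example.com && git config user.name committer
echo base > file.txt && git add . && git commit -qm "base"

# Simulate the contributor's PR branch carrying the fix commit.
git checkout -qb pr-525
echo fix > fix.txt && git add . && git commit -qm "HADOOP-16150. ChecksumFileSystem doesn't wrap concat()"
fix_sha=$(git rev-parse HEAD)

# Committer side: cherry-pick the PR commit onto trunk. In the real
# workflow this would be followed by: git push <gitbox-remote> trunk
git checkout -q trunk
git cherry-pick "$fix_sha" >/dev/null
test -f fix.txt && echo "cherry-pick applied: $(git log -1 --format=%s)"
```

Because the cherry-pick rewrites the commit onto trunk directly, GitHub only closes the PR automatically if the pushed commit matches the PR head; otherwise the PR has to be closed by hand, which is why the request above was made.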

@steveloughran
Contributor Author

done

> I don't have access to my Apache-linked GitHub account from work, which makes the GitHub workflow a pain to use.

No? Not even from a different browser window?

shanthoosh pushed a commit to shanthoosh/hadoop that referenced this pull request Oct 15, 2019
1. Currently, coordination-related state is spread across several Zookeeper classes, and back-and-forth flows exist between ZkJobCoordinator, ZkControllerImpl, ZkControllerListener and ZkLeaderElector. This PR removes unnecessary interfaces (and their implementation classes), simplifies state management, and unifies state in the ZkJobCoordinator class.

2. Clearly defined life-cycle hooks on events:
- Protocol validations happen once during the lifecycle of a StreamProcessor (instead of at each new session)
- New subscriptions to listeners happen at each new Zk session

Author: Jagadish <jvenkatraman@linkedin.com>

Reviewers: Prateek M <pmaheshw@linkedin.com>

Closes apache#525 from vjagadish/zk-simplify
@steveloughran steveloughran deleted the filesystem/HADOOP-16150-checksumfs-concat branch October 15, 2021 19:50