
HADOOP-16823. Large DeleteObject requests are their own Thundering Herd #1826


Closed

Conversation

steveloughran
Contributor

Currently AWS S3 throttling is initially handled in the AWS SDK, only reaching the S3 client code after it has given up.

This means we don't always directly observe when throttling is taking place.

  • disable throttling retries in the AWS client library, for S3 only
  • add a quantile for the S3 throttle events, as DDB has
  • isolate counters of S3 and DDB throttle events to classify issues better
  • improvements to DDB throttling handling and testing
  1. Because we are taking over the AWS retries, we need to expand the initial delay on retries and the number of retries we should support before giving up.
  2. I can split the DDB and S3 sides of this patch... they came in together: once I turned off throttling across all AWS client configs, scale tests against a 10 TPS DDB table showed we weren't retrying adequately in some of the tests, and were retrying inefficiently in listChildren.

(reinstatement of #1814, which was accidentally closed)
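For context, a minimal sketch of the SDK-side switch the first bullet relies on (assuming the AWS SDK for Java v1 that hadoop-aws uses; an illustration, not the actual S3A wiring):

```java
import com.amazonaws.ClientConfiguration;

public class SdkThrottleSketch {
  /** Turn off the SDK's built-in throttle retries so 503/SlowDown
   *  responses surface to the S3A code, which can then count them
   *  and apply its own retry policy. */
  public static ClientConfiguration withoutSdkThrottleRetries() {
    ClientConfiguration awsConf = new ClientConfiguration();
    awsConf.setUseThrottleRetries(false); // SDK no longer hides throttling
    return awsConf;
  }
}
```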

@steveloughran steveloughran requested a review from bgaborg January 31, 2020 11:48
@steveloughran steveloughran added the labels bug and fs/s3 (changes related to hadoop-aws; submitter must declare test endpoint) on Jan 31, 2020
@steveloughran steveloughran force-pushed the s3/HADOOP-16823-throttling branch from 810bad4 to 74553b0 on February 1, 2020 12:38
@steveloughran steveloughran force-pushed the s3/HADOOP-16823-throttling branch from 464d789 to dacc401 on February 4, 2020 18:54
@steveloughran
Contributor Author

./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:216:   @InterfaceStability.Unstable: 'member def modifier' has incorrect indentation level 3, expected level should be 2. [Indentation]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:218:      "fs.s3a.experimental.optimized.directory.operations";: '"fs.s3a.experimental.optimized.directory.operations"' has incorrect indentation level 6, expected level should be 7. [Indentation]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:285:   public static final String BULK_DELETE_PAGE_SIZE =: 'member def modifier' has incorrect indentation level 3, expected level should be 2. [Indentation]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:286:      "fs.s3a.bulk.delete.page.size";: '"fs.s3a.bulk.delete.page.size"' has incorrect indentation level 6, expected level should be 7. [Indentation]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:322:  public void initialize(URI name, Configuration originalConf):3: Method length is 154 lines (max allowed is 150). [MethodLength]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java:50:import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion;:15: Unused import - org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion. [UnusedImports]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StandardInvokeRetryHandler.java:180:    THROTTLE_LOG.debug("Request throttled on {}", metastore ? "S3" : "DynamoDB");: Line is longer than 80 characters (found 81). [LineLength]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:58:import org.apache.hadoop.fs.s3a.S3ATestUtils;:8: Unused import - org.apache.hadoop.fs.s3a.S3ATestUtils. [UnusedImports]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:207:   * @return true if the DDB table has prepaid IO and is small enough to throttle.: Line is longer than 80 characters (found 82). [LineLength]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:519:  public void test_999_delete_all_entries() throws Throwable {:15: Name 'test_999_delete_all_entries' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ThrottleTracker.java:113:      LOG.warn("No throttling detected in {} against {}", this, ddbms.toString());: Line is longer than 80 characters (found 82). [LineLength]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:117:  @Parameterized.Parameters(name = "bulk-delete-client-retry={0}-requests={2}-size={1}"): Line is longer than 80 characters (found 88). [LineLength]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:184:  public void test_010_Reset() throws Throwable {:15: Name 'test_010_Reset' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:189:  public void test_020_DeleteThrottling() throws Throwable {:15: Name 'test_020_DeleteThrottling' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:203:  public void test_030_Sleep() throws Throwable {:15: Name 'test_030_Sleep' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:222:  private File deleteFiles(final int requests, final int entries):38: 'requests' hides a field. [HiddenField]

@steveloughran
Contributor Author

BTW, the latest patch adds an experimental "optimize directory markers" switch which tells the S3A client to be less strict about creating and deleting directory markers. I'm deliberately vague about what that means, but currently, on file creation, it only ever looks one level up to delete markers. There's trouble there if applications don't call mkdir() on a path before creating the file, but otherwise it avoids the tombstone and scale problems on deep trees.
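A hypothetical sketch of that "one level up" cleanup (the names here, such as deleteMarkerIfExists(), are illustrative, not the S3A code):

```java
import org.apache.hadoop.fs.Path;

class MarkerCleanupSketch {
  void finishedCreate(Path file) {
    Path parent = file.getParent();
    if (parent != null && !parent.isRoot()) {
      // Only the immediate parent's marker is deleted; markers further up
      // the tree are left alone, trading spec strictness for fewer requests.
      deleteMarkerIfExists(parent);
    }
  }

  void deleteMarkerIfExists(Path dir) {
    // issue a DELETE for the marker object at dir + "/" (elided here)
  }
}
```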

Currently AWS S3 throttling is initially handled in the AWS SDK, only reaching the S3 client code after it has given up.

This means we don't always directly observe when throttling is taking place.

Proposed:

* disable throttling retries in the AWS client library
* add a quantile for the S3 throttle events, as DDB has
* isolate counters of s3 and DDB throttle events to classify issues better

Because we are taking over the AWS retries, we will need to expand the initial delay on retries and the number of retries we should support before giving up.

Also: should we log throttling events? It could be useful, but there is a risk of log overload, especially if many threads in the same process are triggering the problem.

Change-Id: I386928cd478a6a9fbb91f15b9185a1ea91878680
Proposed: log at debug.
fix checkstyle

Change-Id: I19f3848b298a8656ee5f986a2ba1cde50a106814
Turning off throttling in the AWS client causes problems for the DDB metastore, including showing where tests were making non-retrying operations against the table.

Mostly addressed, though ITestDynamoDBMetadataStoreScale is still petulant: either it takes too long to finish or it doesn't throttle. Oh, and lag means that while a test may fail because throttling wasn't raised, the next IO may fail.

Change-Id: I37bbcb67023f4cb3ebdcba978602be58099ad306
* Split out where/how we retry listChildren
* Trying to speed up the DDB scale tests

(though the latest change there triggers an NPE...)

For anyone curious why the tests take so long: it's probably the setup of the per-test-case FS instance, because that has full retry, and once one test has throttled, that spin/wait goes on until DDB is letting the client at it.

Which is a PITA but it does at least mean that "usually" each test case
is in a recovered state. Do we care? Should we just run them back to
back and be happy overloading things? I think so

Change-Id: Ib35d450449fffaa2379d62ca12180eaa70c38584
- Moving RetryingCollection to top level;
- DDBMS.listChildren() unwraps IOEs raised in the iterator (see the sketch below).
- Throttling scale test is happier if throttling doesn't surface in a test run, as it may mean that the problem will surface later.

Change-Id: Ibf55e6ab257269b55230eedacae6a17586d91211
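A minimal sketch of that IOE-unwrapping pattern (an illustration under assumed wrapping behaviour, not the actual DDBMS code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;

final class IoeUnwrapSketch {
  /** Iterate a listing whose iterator smuggles IOExceptions out in an
   *  unchecked wrapper, and hand callers back the original exception. */
  static <T> void consumeAll(Iterable<T> listing) throws IOException {
    try {
      for (T entry : listing) {
        // process entry...
      }
    } catch (UncheckedIOException e) {
      throw e.getCause(); // restore the original checked IOException
    }
  }
}
```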
Implies that, from the (remote) test machine, multiple smaller bulk deletes are handled better than a few large ones from a retry perspective.

Also implied: we should make sure that the backoff strategy we use in our own code doesn't back off over-aggressively.
Proposed: retryUpToMaximumCountWithProportionalSleep rather than exponential for throttling (see the sketch below).

oh, and the throttle detection code doesn't seem to be updating counters
here...

Change-Id: I163d128aa5ad5c203ade66bd4f049d3357d6a9d4
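For reference, a sketch of the two policies being compared, using Hadoop's own RetryPolicies (the counts and sleep times are illustrative, not the values chosen in this patch):

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

public class ThrottleRetrySketch {
  // Proportional: attempt n sleeps n * 500ms, up to 10 attempts.
  static final RetryPolicy PROPORTIONAL =
      RetryPolicies.retryUpToMaximumCountWithProportionalSleep(
          10, 500, TimeUnit.MILLISECONDS);

  // Exponential: sleeps grow roughly as 500ms * 2^n, which can back off
  // far longer than a briefly-throttled store actually needs.
  static final RetryPolicy EXPONENTIAL =
      RetryPolicies.exponentialBackoffRetry(10, 500, TimeUnit.MILLISECONDS);
}
```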
-trying to set up an explicit callback on retries in bulk deletes

Change-Id: I456680bbbedf3f135508ae3960e83eb1baefbfc6
* pulling out retry logic into its own class; only using this in delete()
* tests are parameterized (see the sketch below)

The configured page size is coming in as 0; no idea why. To debug.

Change-Id: I7e45b897c0a8d09167e6e6148a8f3930f31ec5b0
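The parameterization pattern looks roughly like this (a JUnit 4 sketch; the name template matches the @Parameterized.Parameters line quoted in the checkstyle output above, while the tuples themselves are illustrative):

```java
import java.util.Arrays;
import java.util.Collection;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class BulkDeleteParamsSketch {
  @Parameterized.Parameters(
      name = "bulk-delete-client-retry={0}-requests={2}-size={1}")
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][]{
        {true, 1000, 10},   // SDK retries on, large pages, few requests
        {false, 250, 40},   // SDK retries off, smaller pages, more requests
    });
  }

  public BulkDeleteParamsSketch(boolean clientRetry, int size, int requests) {
    // fields elided; test methods would drive deleteFiles(requests, size)
  }
}
```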
This adds an option "fs.s3a.experimental.optimized.directory.operations"

which says "optimize directory IO" without being explicit about
what it does.

In this release it only looks for and deletes the parent dir entry when creating a file (we still do it for mkdir though).

This is dangerous as it goes against the FS spec for create (it is consistent with createNonRecursive though :). If you create a file two levels under an empty dir, that empty dir marker stays there. But consider: if you create a file two levels under a file, S3A is happy, and nobody has noticed.

Also
* directory cleanup/parent marker recreation is done async
* page size set to 250; seems to balance out better in the load tests.
* HADOOP-16613. dir marker contentType = application/x-directory (sketched below)

Change-Id: Id88a198f61beb3719aa4202d26f3634e5e9cc194
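A sketch of that HADOOP-16613 detail with the AWS SDK v1 (the helper and its names are illustrative):

```java
import java.io.ByteArrayInputStream;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;

final class DirMarkerSketch {
  /** Write a zero-byte directory marker with an explicit content type
   *  so tools can recognise it as a directory, not a file. */
  static void putDirMarker(AmazonS3 s3, String bucket, String dirKey) {
    ObjectMetadata md = new ObjectMetadata();
    md.setContentType("application/x-directory");
    md.setContentLength(0);
    s3.putObject(bucket, dirKey, new ByteArrayInputStream(new byte[0]), md);
  }
}
```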
Change-Id: I7bb9a4a9cc0b5e1ee7a54be7c5f463621ca66bc1
Change-Id: I4df1d47c0865604bbb17083b08cd5a3bc4e1d9f4
These tests could fail even before this patch went in, but it was while I was working on this that some of the problems were happening often enough for me to track them down.

Key: testDeleteTable() was not deleting a test table, it was deleting
whichever table the bucket was bonded to, so potentially interfering
with every other test. The stack traces were actually appearing in
the next test which was run, testAncestorOverwriteConflict(), which would
spin for 30-60s in setup before failing.

Change-Id: I5e942d3854a5e1e496405c5be620768d2f81a83a
@steveloughran steveloughran force-pushed the s3/HADOOP-16823-throttling branch from 6b0a67c to 10f459e on February 6, 2020 18:55
@steveloughran
Contributor Author

The experimental directory marker optimization feature has been removed. It was really broken, and its very presence would only encourage people to turn it on, or at least start demanding it be production ready within a short period of time... and be very disappointed when that couldn't happen.

@bgaborg bgaborg left a comment

LGTM +1, the only note I added is that I would add the EXPERIMENTAL_AWS_INTERNAL_THROTTLING default value constant.

*/
@InterfaceStability.Unstable
public static final String EXPERIMENTAL_AWS_INTERNAL_THROTTLING =
"fs.s3a.experimental.aws.internal.throttling";
Where is the default value for this defined?

@steveloughran steveloughran Feb 10, 2020
Contributor Author

it's true, but yes

Contributor Author

Updating the value, and also changing the name to "fs.s3a.experimental.aws.s3.throttling" to make clear it's S3 only
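A sketch of what the constant plus the requested default could look like (the default value itself is an assumption here, not confirmed by the patch):

```java
@InterfaceStability.Unstable
public static final String EXPERIMENTAL_AWS_INTERNAL_THROTTLING =
    "fs.s3a.experimental.aws.s3.throttling";

/** Assumed default: leave the SDK's own throttle handling enabled
 *  unless it is explicitly turned off. */
public static final boolean EXPERIMENTAL_AWS_INTERNAL_THROTTLING_DEFAULT =
    true;
```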

@bgaborg

bgaborg commented Feb 10, 2020

(also this is not a bug imho, more like an improvement)

The new load test was not picking up throttle events in the retry handler.

This is because the failure we are seeing is actually the XML parser error we've seen before: when an open connection is broken, the AWS SDK client's XML parser simply sees and reports a failure of XML parsing, rather than a change in the network or remote system.

Until now we had assumed this was a sign of network issues. The fact that it happens consistently when performing bulk delete operations makes me suspect that it is actually the S3 front end rejecting the caller. We are still retrying on it, but now treating it as a symptom of throttling and updating the relevant counters.

Change-Id: I5b6907ddd7d3eaec65d12064b10c89d953d85e46
Change-Id: I55a7d6d77accdf7393e147db2866300495d11f5b
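A hedged sketch of that classification (the message probe below is an assumption for illustration, not the actual S3A predicate):

```java
import com.amazonaws.SdkClientException;

final class XmlThrottleSketch {
  /** Treat the SDK's client-side XML parse failure on a broken
   *  connection as a probable throttle event. */
  static boolean probablyThrottled(Exception e) {
    return e instanceof SdkClientException
        && e.getMessage() != null
        && e.getMessage().contains("Failed to parse XML document");
  }
}
```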
@steveloughran
Contributor Author

Gabor, thanks for the review.

yeah, you are right. Improvement.

Before I merge, do you want to look at BulkDeleteRetryHandler and see if you agree what I'm doing there?

XML parser errors are being treated as retry failures, as that is what I'm seeing during the load tests (i.e. not 503/slow down). https://issues.apache.org/jira/browse/HADOOP-13811 shows the history there (and yes, my test bucket is versioned for 24h).

@steveloughran
Contributor Author

(retested against S3 Ireland; got the failure in testListingDeleteauth=true, which is from my auth mode patch against versioned buckets; will do a quick followup for that patch)

-Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo -Dauth

@steveloughran steveloughran changed the title HADOOP-16823. Manage S3 Throttling exclusively in S3A client. HADOOP-16823. Improve S3A Throttling in S3Guard and S3 bulk delete operations Feb 10, 2020
@steveloughran steveloughran changed the title HADOOP-16823. Improve S3A Throttling in S3Guard and S3 bulk delete operations HADOOP-16823. Large DeleteObject requests are their own Thundering Herd Feb 10, 2020
@steveloughran
Contributor Author

style

./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:278:   public static final String BULK_DELETE_PAGE_SIZE =: 'member def modifier' has incorrect indentation level 3, expected level should be 2. [Indentation]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:279:      "fs.s3a.bulk.delete.page.size";: '"fs.s3a.bulk.delete.page.size"' has incorrect indentation level 6, expected level should be 7. [Indentation]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:1965:   * with the counter set to the number of keys, rather than the number of invocations: Line is longer than 80 characters (found 86). [LineLength]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:1967:   * This is because S3 considers each key as one mutating operation on the store: Line is longer than 80 characters (found 81). [LineLength]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java:50:import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion;:15: Unused import - org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion. [UnusedImports]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:206:   * @return true if the DDB table has prepaid IO and is small enough to throttle.: Line is longer than 80 characters (found 82). [LineLength]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:518:  public void test_999_delete_all_entries() throws Throwable {:15: Name 'test_999_delete_all_entries' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ThrottleTracker.java:113:      LOG.warn("No throttling detected in {} against {}", this, ddbms.toString());: Line is longer than 80 characters (found 82). [LineLength]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:117:  @Parameterized.Parameters(name = "bulk-delete-client-retry={0}-requests={2}-size={1}"): Line is longer than 80 characters (found 88). [LineLength]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:184:  public void test_010_Reset() throws Throwable {:15: Name 'test_010_Reset' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:189:  public void test_020_DeleteThrottling() throws Throwable {:15: Name 'test_020_DeleteThrottling' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:202:  public void test_030_Sleep() throws Throwable {:15: Name 'test_030_Sleep' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]

@steveloughran
Contributor Author

Did checkstyle changes and a diff with trunk to (a) reduce the diff and (b) see what I needed to improve with javadocs; mainly the RetryingCollection.

I got a failure on a -Dscale auth run

[ERROR]   ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:257->Assert.assertTrue:41->Assert.fail:88 files mismatch: between 
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-1"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-25"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-16"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-11"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-7"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-54"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-14"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-35"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-48"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-56"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-29"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-52"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-40"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-2"
  "s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-24"
  "s3a:

Now, I've been playing with older branch-2 versions recently, and could blame that, but "bulk" and "delete" describe exactly what I was working on in this patch.

It wasn't, but while working on these tests, with better renames, I managed to create a deadlock in the new code:

  1. S3ABlockOutputStream was waiting for space in the bounded thread pool so it can do an async put.
  2. But that thread pool was blocked by threads waiting for their async directory operations to complete.
  3. Outcome: total deadlock.

Surfaced in ITestS3ADeleteManyFiles during parallel file creation.

Actions

  • remove the async stuff from the end of rename()
  • keep dir marker delete operations in finishedWrite() async, but use the unbounded thread pool.
  • Cleanup + enhancement of ITestS3ADeleteManyFiles so that it tests src and dest paths more rigorously, and sets a page size of 50 for better coverage of the paged rename sequence.

Makes me think we should do more parallel IO tests within the same process.
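That deadlock pattern is easy to reproduce in isolation; a minimal, self-contained sketch (not the S3A code; the pool size and tasks are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDeadlockSketch {
  public static void main(String[] args) {
    ExecutorService bounded = Executors.newFixedThreadPool(2);
    for (int i = 0; i < 2; i++) {
      bounded.submit(() -> {
        // The parent task occupies a worker thread, then blocks waiting
        // on a child task submitted to the SAME pool. With every worker
        // stuck here, no child can ever start: total deadlock.
        return bounded.submit(() -> "child").get();
      });
    }
    // Never completes: dependent work must not share a bounded pool
    // with the tasks that wait on it.
  }
}
```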

* remove the async stuff from the end of rename()
* keep dir marker delete operations in finishedWrite() async,
 but use the unbounded thread pool.
* Cleanup + enhancement of ITestS3ADeleteManyFiles so that it tests src
and dest paths more rigorously,
* and sets a page size of 50 for better coverage of the paged rename sequence.

Change-Id: I334d70cc52c73bd926ccd1414e11a0ba740d9b89
Change-Id: I5fe8caab3b490904ef50522ca1dc0c7888fc79dc
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 1m 12s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 10 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 0m 26s Maven dependency ordering for branch
+1 💚 mvninstall 21m 31s trunk passed
+1 💚 compile 17m 55s trunk passed
+1 💚 checkstyle 2m 49s trunk passed
+1 💚 mvnsite 2m 8s trunk passed
+1 💚 shadedclient 21m 48s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 27s trunk passed
+0 🆗 spotbugs 1m 6s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 3m 9s trunk passed
-0 ⚠️ patch 1m 28s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 21s Maven dependency ordering for patch
+1 💚 mvninstall 1m 23s the patch passed
+1 💚 compile 18m 39s the patch passed
+1 💚 javac 18m 39s the patch passed
-0 ⚠️ checkstyle 3m 35s root: The patch generated 4 new + 75 unchanged - 2 fixed = 79 total (was 77)
+1 💚 mvnsite 2m 12s the patch passed
-1 ❌ whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 xml 0m 1s The patch has no ill-formed XML file.
+1 💚 shadedclient 15m 19s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 27s the patch passed
+1 💚 findbugs 3m 27s the patch passed
_ Other Tests _
+1 💚 unit 9m 25s hadoop-common in the patch passed.
+1 💚 unit 1m 31s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 46s The patch does not generate ASF License warnings.
130m 19s
Subsystem Report/Notes
Docker Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/artifact/out/Dockerfile
GITHUB PR #1826
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint
uname Linux accd65b42e91 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 9b8a78d
Default Java 1.8.0_242
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/artifact/out/diff-checkstyle-root.txt
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/artifact/out/whitespace-eol.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/testReport/
Max. process+thread count 1348 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@bgaborg bgaborg left a comment

Tests were running without errors against ireland.
+1

@steveloughran
Contributor Author

ooh, thanks for this!

Change-Id: I833700b25f4c8cfb16a89f843d441edfbf440e59
@steveloughran
Contributor Author

merged

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 1m 25s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 1s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 10 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 1m 10s Maven dependency ordering for branch
+1 💚 mvninstall 23m 4s trunk passed
+1 💚 compile 18m 4s trunk passed
+1 💚 checkstyle 2m 48s trunk passed
+1 💚 mvnsite 2m 8s trunk passed
+1 💚 shadedclient 21m 51s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 29s trunk passed
+0 🆗 spotbugs 1m 14s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 3m 40s trunk passed
-0 ⚠️ patch 1m 36s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
+1 💚 mvninstall 1m 28s the patch passed
+1 💚 compile 19m 8s the patch passed
+1 💚 javac 19m 8s the patch passed
-0 ⚠️ checkstyle 3m 4s root: The patch generated 4 new + 75 unchanged - 2 fixed = 79 total (was 77)
+1 💚 mvnsite 2m 24s the patch passed
-1 ❌ whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 xml 0m 1s The patch has no ill-formed XML file.
+1 💚 shadedclient 16m 15s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 37s the patch passed
+1 💚 findbugs 4m 3s the patch passed
_ Other Tests _
+1 💚 unit 10m 17s hadoop-common in the patch passed.
+1 💚 unit 1m 33s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 51s The patch does not generate ASF License warnings.
136m 52s
Subsystem Report/Notes
Docker Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/12/artifact/out/Dockerfile
GITHUB PR #1826
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint
uname Linux dc8447e3a59b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / da99ac7
Default Java 1.8.0_242
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/12/artifact/out/diff-checkstyle-root.txt
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/12/artifact/out/whitespace-eol.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/12/testReport/
Max. process+thread count 1393 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/12/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.
