HADOOP-18637. S3A to support upload of files greater than 2 GB using DiskBlocks #5543
Conversation
* disk block size for allocation requests => -1
* this turns off capacity checks on the allocator
* disk blocks no longer worry about/report lack of space
* the block output stream knows not to worry about running out of space
* tests to show this; had to edit pom.xml to always get the full stack trace

Change-Id: I97374a046481165489274fa83202f6b1ebc3bafa
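A minimal sketch of the "unlimited block" idea under assumed names; the real S3A classes (`S3ADataBlocks` and friends) differ in detail:

```java
// Sketch only: a limit of -1 means "unlimited", so capacity checks are
// skipped and the block never reports a lack of space.
class DiskBlockSketch {
  private final long limit;
  private long bytesWritten;

  DiskBlockSketch(long limit) {
    this.limit = limit;
  }

  boolean unlimited() {
    return limit < 0;
  }

  /** Unlimited blocks never report lack of space. */
  boolean hasCapacity(long bytes) {
    return unlimited() || bytesWritten + bytes <= limit;
  }
}
```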
Test failures: the bucket ones look unrelated, more likely because my endpoint is set to eu-west-2. Nothing has changed there, and I don't see any explicit setting of the region other than for the explicit buckets; I will need to test on hadoop-trunk to see if something else has changed.

OK, the trunk run failed too, with the same bucket probe errors. The other one did not; maybe it's a timing issue.
🎊 +1 overall
This message was automatically generated.
fixing checkstyle
```diff
   @Override
   long remainingCapacity() {
-    return limit - bytesWritten;
+    return unlimited()
```
remainingCapacity is long, so shouldn't it be Long.MAX_VALUE?
Although I see we always cast to int, so it should be fine. I think it is like that because we write the big file to disk in a loop.
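A sketch of the pattern under discussion (hypothetical helper, not the S3A code): the remaining capacity is a long, but every write narrows it to an int-sized chunk, so capping "unlimited" at an int-range value never overflows a single write call.

```java
import java.io.IOException;
import java.io.OutputStream;

// Illustrative only: write a large buffer in a loop, narrowing the long
// remaining capacity to an int chunk on each iteration.
final class ChunkedWriteSketch {
  static void writeFully(OutputStream out, byte[] buf, long remainingCapacity)
      throws IOException {
    int off = 0;
    while (off < buf.length && remainingCapacity > 0) {
      // safe narrowing: the chunk is bounded by the int-sized buffer slice
      int chunk = (int) Math.min(buf.length - off, remainingCapacity);
      out.write(buf, off, chunk);
      off += chunk;
      remainingCapacity -= chunk;
    }
  }
}
```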
LGTM +1. Ran AWS tests in us-west-1. All good.
I have a follow-up for this feature, primarily to reject multipart copy requests when multipart is disabled, plus a test to verify that, for a large enough threshold, calls don't get rejected.
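For context, a hedged sketch of the disabled-multipart scenario; the property name below is assumed from this feature's description, not verified against the merged code:

```java
import org.apache.hadoop.conf.Configuration;

// Illustrative only: with multipart uploads disabled, large files are
// uploaded as a single PUT, which is what makes >2 GB disk blocks necessary.
public class DisableMultipartSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // assumed property name for turning multipart uploads off
    conf.setBoolean("fs.s3a.multipart.uploads.enabled", false);
  }
}
```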
…DiskBlocks (apache#5543) Contributed By: HarshitGupta and Steve Loughran
@HarshitGupta11 create a new PR with your change for Yetus to review; then we can merge through the GitHub UI. No code reviews needed, unless related to the backport itself.
Description of PR
#5481 with an extra commit to wrap up unlimited disk block size.
How was this patch tested?
In progress against S3 London.
For code changes:
* If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?