v4 signing #419
Conversation
Merge 1.5.0-alpha2
Thank you! We need to cache bucket locations during an invocation, so you don't have to do the redirect on every operation, only on the first (failing) operation. For example, invoking the info operation causes the bucket location to be looked up 4 times, rather than the once I would expect:

$ ./s3cmd info s3://eu-central-1.domsch.com/
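As a concrete illustration of the caching asked for here, a minimal Python sketch follows. It is not s3cmd's actual code: the class, method and attribute names are invented for the example, and the stand-in lookup simply pretends every bucket lives in eu-central-1 so the snippet stays self-contained.

```python
class S3Connection:
    def __init__(self):
        self._bucket_region_cache = {}   # bucket name -> region, lives for one invocation
        self.location_lookups = 0        # how many GetBucketLocation calls went out

    def _get_bucket_location(self, bucket):
        # Stand-in for the real GetBucketLocation request / redirect handling.
        self.location_lookups += 1
        return 'eu-central-1'

    def bucket_region(self, bucket):
        """Resolve the bucket's region once, then serve later calls from the cache."""
        if bucket not in self._bucket_region_cache:
            self._bucket_region_cache[bucket] = self._get_bucket_location(bucket)
        return self._bucket_region_cache[bucket]


conn = S3Connection()
for _ in range(4):                       # e.g. the four lookups an 'info' run triggered
    conn.bucket_region('eu-central-1.domsch.com')
print(conn.location_lookups)             # -> 1, not 4
```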
Thanks for the suggestion, I added it.
Thanks. That did resolve re-issuing the "get bucket location" call. Sync into eu-central-1 did work for me. Try issuing an 'info' command to a file in such a bucket. I get a failure:

DEBUG: Response: {'status': 400, 'headers': {'x-amz-id-2': ...
That's weird, the 400 response doesn't give me any error data. The expected API output should contain the correct region in the 400 reply, but in this case the data is empty. I will check it out. Thanks for crashing it.
Fixed it now.
It seems that it is an API problem: the "Invalid region" response is the one expected, but it is not returned. What I am doing in this case is calling get_bucket_location to get the correct region, which leads to a second redirect: it redirects the get_bucket_location call first and then the previous (in this case, info) call. I could do some trick to avoid running get_bucket_location a second time, since we already get the region from the erroneous response, but this might increase the complexity of the handling a bit. Please check my last commit and let me know what you think regarding my comments above. Thanks!
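The "trick" mentioned above could look roughly like this sketch: pull the region straight out of the erroneous reply instead of issuing a second get_bucket_location call. This is not the PR's actual code; the response dict layout follows the DEBUG output quoted earlier, and the x-amz-bucket-region header and <Region> element are the places S3's documented error format can carry the region.

```python
import re

def region_from_error_response(response):
    """Return the region hinted at in an S3 error reply, or None if it isn't there."""
    # Newer endpoints put the right region in an x-amz-bucket-region header.
    region = response.get('headers', {}).get('x-amz-bucket-region')
    if region:
        return region
    # Otherwise look for a <Region> element in the XML error body, which -- as
    # seen in this thread -- may be missing or empty.
    data = response.get('data') or b''
    if isinstance(data, str):
        data = data.encode('utf-8')
    match = re.search(rb'<Region>(.*?)</Region>', data)
    return match.group(1).decode('utf-8') if match else None
```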
Some S3 API calls return HTTP 400 responses due to incorrectly-signed headers. These should also include 'data' in the response, but alas, some do not. Don't crash just because S3 didn't give us what we expected.
… so we can see what is actually being sent.
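A minimal sketch of the defensive handling the first commit message describes, i.e. tolerating a 400 reply whose body is empty instead of crashing while parsing it. The function name and the shape of the response dict are assumptions for illustration, not s3cmd's real internals.

```python
import xml.etree.ElementTree as ET

def parse_error_response(response):
    """Extract (code, message, region) from an S3 error reply without assuming a body exists."""
    data = response.get('data')
    if not data:
        # Some 400 replies arrive with no XML body at all; report what little we
        # know instead of blowing up while trying to parse nothing.
        return 'Unknown', 'HTTP %s with empty error body' % response.get('status'), None
    root = ET.fromstring(data)
    return root.findtext('Code'), root.findtext('Message'), root.findtext('Region')
```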
All tests run fine now. Still have an issue with multi-part upload.
@mdomsch file put and multi-part upload are fixed.
Excellent, thank you. For grins, can you remove (or rename, so it doesn't get used) the old v2 signing code and confirm the tests still pass?
It cannot be removed since it's used in cases where the bucket name doesn't conform with DNS naming conventions. The cause is that old datacenters still support names like xxx-Autotest-3 (capital characters) while new ones do not. Signing v4 doesn't support these bucket names, so in these edge cases I am still using v2 signing (S3/S3.py:161). I renamed the sign_string_v2() method and all the tests passed. But I found a call in S3/CloudFront.py which is not covered by tests. Is it still supported and/or used? If yes, v4 support should be added for CloudFront for the new regions.
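For illustration, the fallback described above might look like the sketch below: bucket names that don't meet DNS naming rules (such as the capital letters in xxx-Autotest-3) keep the old v2 signing path, everything else gets v4. The regex and function names are assumptions, not the exact code at S3/S3.py:161.

```python
import re

# Lowercase letters, digits, dots and hyphens only, 3-63 characters, starting
# and ending with a letter or digit -- the usual DNS-compatible bucket rules.
_DNS_BUCKET_RE = re.compile(r'^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$')

def is_dns_compatible(bucket):
    return bool(_DNS_BUCKET_RE.match(bucket)) and '..' not in bucket

def choose_signature_version(bucket):
    """Fall back to v2 signing for legacy, non-DNS-compatible bucket names."""
    return 'v4' if is_dns_compatible(bucket) else 'v2'

print(choose_signature_version('eu-central-1.domsch.com'))  # -> v4
print(choose_signature_version('xxx-Autotest-3'))           # -> v2
```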
Yes, CloudFront is still used, though I myself haven't done any significant ...
Do you think it is a better idea to roll out these changes first? People can help with debugging until the release. On the other hand, I have the impression that CloudFront is not heavily used with s3cmd, so it could be a different PR with lower priority than this one.
Let's merge the V4 work as is now, and then fix up CF's usage with a new PR.
We have a --default-location option (now aliased to --region) that I'd like to use ...
It's actually "bucket_location", not default_location.
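For reference, a small sketch of how --region and --default-location could both feed the bucket_location setting discussed here, written with optparse since that is what s3cmd's command line is built on. The default value and the wiring are illustrative assumptions, not the project's actual option table.

```python
from optparse import OptionParser

parser = OptionParser()
# Both spellings land in the same "bucket_location" destination, which also
# serves as the initial signing region before any redirect-based discovery.
parser.add_option('--region', '--default-location', dest='bucket_location',
                  default='us-east-1',
                  help='Region used for bucket creation and as the initial signing region')

options, args = parser.parse_args(['--region', 'eu-central-1'])
print(options.bucket_location)   # -> eu-central-1
```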
Please feel free to try it out and help me debug it if needed. If you need any explanation, please be my guest.
It is based on @mludvig's branch. Thanks a lot for kick-starting it.