AWS: Refresh vended credentials #11389
Conversation
Force-pushed from 1e200fd to 5e51c5b
import software.amazon.awssdk.utils.cache.CachedSupplier;
import software.amazon.awssdk.utils.cache.RefreshResult;

public class VendedCredentialsProvider implements AwsCredentialsProvider, SdkAutoCloseable {
@jackye1995 is this credential provider similar to the one you mentioned a while ago where you guys had a custom credential provider that would always call loadTable()?
Yes this looks similar.
Force-pushed from 5e51c5b to 51424d1
aws/src/main/java/org/apache/iceberg/aws/s3/S3FileIOProperties.java (resolved, outdated)
aws/src/main/java/org/apache/iceberg/aws/s3/VendedCredentialsProvider.java (resolved)
Minor comments, but +1
Optional<Credential> credentialWithPrefix =
    s3Credentials.stream().max(Comparator.comparingInt(c -> c.prefix().length()));
[doubt] How will this work for credentials that span different prefixes?
For example:
s3://abc/123
s3://xyz/1
and the request is for s3://xyz?
Nevertheless, how will this work for cases with overlapping prefixes:
- s3://abc/prefix-1/
- s3://abc/prefix-123/
We would use prefix-123 here instead of prefix-1? For these calls, length alone is not the correct way to find the credential imho.
Please let me know your thoughts.
This is actually a good callout and I think an oversight in what we're trying to define in the spec. Here we are taking the most selective prefix, but that doesn't necessarily correspond to the prefix we're using the credential against.
In order to get the behavior we're expecting, we would probably need to update S3FileIO::client to take the path as an argument so that the client can be configured/returned with the correct credential. Thoughts @nastra? I assume there is a similar issue on the GCP side.
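To make the idea concrete, here is a minimal sketch (not this PR's code) of what a per-path client lookup could look like; the class and method names are hypothetical and the prefix resolution is left as a pluggable function.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import software.amazon.awssdk.services.s3.S3Client;

// Hypothetical sketch only: cache one client per credential prefix so that an
// S3FileIO-style client(path) lookup can hand back a client configured with the
// credential that covers the requested path.
class PerPrefixS3Clients {
  private final Map<String, S3Client> clientsByPrefix = new ConcurrentHashMap<>();
  private final Function<String, S3Client> clientFactory; // builds a client for a given prefix
  private final Function<String, String> prefixResolver; // maps a path to its credential prefix

  PerPrefixS3Clients(
      Function<String, S3Client> clientFactory, Function<String, String> prefixResolver) {
    this.clientFactory = clientFactory;
    this.prefixResolver = prefixResolver;
  }

  // Analogue of an S3FileIO::client overload that takes the requested path.
  S3Client client(String path) {
    String prefix = prefixResolver.apply(path);
    return clientsByPrefix.computeIfAbsent(prefix, clientFactory);
  }
}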
@singhpk234 to answer your specific question, it should fail because there is no prefix in your example that covers the requested path.
@danielcweeks got it. This has always been a challenge when using a credentials provider: resolveCredentials() doesn't take any arguments, so there is no way to wire in the correct credentials for specific paths, which we might not know at the time of creating the S3 client.
Yeah, the credential to use is the credential with the longest matching prefix for a given requested path. In the case that there's no credential with a matching prefix for a given path, I think we'd want to just fail on the client side; S3 or whatever storage system would fail the request anyway, but I think we may as well avoid the request if we know that no credential could cover the requested path.
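As an illustrative sketch of that selection rule (not the code in this PR): filter to the credentials whose prefix actually covers the requested path, then take the most specific one. Credential is the same type used in the diff above; the helper name is hypothetical and it would live in something like VendedCredentialsProvider.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative only: pick the credential whose prefix covers the path and is the
// most specific (longest). An empty result means no vended credential covers the
// path, so the client could fail fast instead of issuing a doomed request.
static Optional<Credential> credentialFor(List<Credential> credentials, String path) {
  return credentials.stream()
      .filter(c -> path.startsWith(c.prefix()))
      .max(Comparator.comparingInt(c -> c.prefix().length()));
}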
I think what we might rather want to do going forward is only allow the server to return a single credential per provider. That way it's easier for the client.
Hm, sure, that's easier for the client, but I'm not sure about making the protocol unnecessarily restrictive. Though I do understand that in most cases a single credential per provider is enough, I just want to make sure we're not putting the spec in an awkward spot in case there are legitimate use cases for different credentials per prefix. In general, we've always followed a pattern of avoiding unnecessary restrictions on the protocol. Also, I don't know if it's really that much more complex for clients to resolve the longest matching prefix for a given path.
I agree that it's reasonable to enforce a single prefix per provider since that was the initial ask. There's an opportunity to improve this, but we can follow up with added support. I don't think it's too difficult, but there are a few edge cases that we'd need to work through. I'm ok proceeding with this implementation.
I think I'm good as long as this is just a constraint in the current implementation and we're not requiring/changing any server-side expectations in the spec.
Force-pushed from 51424d1 to 0551ea6
LoadCredentialsResponse.class,
OAuth2Util.authHeaders(properties.get(OAuth2Properties.TOKEN)),
Assuming the token here is the same short-lived token used by the RESTCatalog instance, are we planning to handle token refresh after its expiration within the FileIO?
This is indeed an issue that I was planning to address when I wrote the first version of this many months ago but simply forgot to get back to. Thanks for raising this @ChaladiMohanVamsi
I think for the initial support for refresh, we can assume that we're bound by the token lifetime for refreshes. It's not perfect, but I'd like to see the resolution of some of the AuthManager refactor before settling on a solution. We're not regressing at this point, so I'm ok with leaving this as is and addressing it in the future.
FYI, this is on my todo list for after the AuthManager refactor.
aws/src/main/java/org/apache/iceberg/aws/s3/VendedCredentialsProvider.java (resolved, outdated)
Force-pushed from 0551ea6 to 7ce08bd
aws/src/main/java/org/apache/iceberg/aws/s3/VendedCredentialsProvider.java (resolved)
  return client;
}

private LoadCredentialsResponse fetchCredentials() {
I think there are two concepts that would be beneficial to add: a staleTime and a prefetchTime. You could check the AWS SDK's StsCredentialsProvider for how that is implemented. This prevents edge cases where the credentials are loaded almost at their expiration time and cause errors downstream.
This is already being done further below:
return RefreshResult.builder(
(AwsCredentials)
AwsSessionCredentials.builder()
.accessKeyId(accessKeyId)
.secretAccessKey(secretAccessKey)
.sessionToken(sessionToken)
.expirationTime(expiresAt)
.build())
.staleTime(expiresAt)
.prefetchTime(prefetchAt)
.build();
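For context, here is a simplified, hypothetical provider showing how a RefreshResult like the one above typically plugs into the SDK's CachedSupplier; the concrete expiration and prefetch values are placeholders, whereas the real provider derives them from the catalog's credentials response.

import java.time.Duration;
import java.time.Instant;
import software.amazon.awssdk.auth.credentials.AwsCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.AwsSessionCredentials;
import software.amazon.awssdk.utils.SdkAutoCloseable;
import software.amazon.awssdk.utils.cache.CachedSupplier;
import software.amazon.awssdk.utils.cache.RefreshResult;

// Simplified sketch: CachedSupplier serves the cached credentials from get() and
// re-invokes refreshCredentials() once the prefetch time passes, well before the
// stale (expiration) time, avoiding requests with nearly expired credentials.
public class CachedCredentialsProviderSketch implements AwsCredentialsProvider, SdkAutoCloseable {
  private final CachedSupplier<AwsCredentials> credentialCache =
      CachedSupplier.builder(this::refreshCredentials).build();

  @Override
  public AwsCredentials resolveCredentials() {
    return credentialCache.get();
  }

  private RefreshResult<AwsCredentials> refreshCredentials() {
    // Placeholder values; a real provider would read these from the catalog response.
    Instant expiresAt = Instant.now().plus(Duration.ofHours(1));
    Instant prefetchAt = expiresAt.minus(Duration.ofMinutes(5));
    AwsCredentials credentials =
        AwsSessionCredentials.builder()
            .accessKeyId("accessKeyId")
            .secretAccessKey("secretAccessKey")
            .sessionToken("sessionToken")
            .expirationTime(expiresAt)
            .build();
    return RefreshResult.builder(credentials)
        .staleTime(expiresAt)
        .prefetchTime(prefetchAt)
        .build();
  }

  @Override
  public void close() {
    credentialCache.close();
  }
}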
 * <p>When set, the {@link VendedCredentialsProvider} will be used to fetch and refresh vended
 * credentials.
 */
public static final String REFRESH_CREDENTIALS_ENDPOINT = "client.refresh-credentials-endpoint";
Shouldn't this be an S3 FileIO property?
This property needs to be set before FileIO is actually being configured and is similar to the client region. VendedCredentialsProvider is configured when AwsClientProperties#credentialsProvider(..) is called, so I don't think this property should be a FileIO property.
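As a minimal illustration (catalog name and URLs are placeholders), the endpoint travels with the other catalog/client properties at initialization time, before any FileIO instance exists:

import java.util.Map;
import org.apache.iceberg.CatalogProperties;
import org.apache.iceberg.rest.RESTCatalog;

// Placeholder URLs; client.refresh-credentials-endpoint is passed alongside the
// other client properties when the catalog is initialized, before FileIO is built.
public class RefreshEndpointConfigExample {
  public static void main(String[] args) {
    RESTCatalog catalog = new RESTCatalog();
    catalog.initialize(
        "demo",
        Map.of(
            CatalogProperties.URI, "https://rest-catalog.example.com",
            "client.refresh-credentials-endpoint",
            "https://rest-catalog.example.com/v1/credentials"));
  }
}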
Force-pushed from a770941 to a5b58ed
aws/src/main/java/org/apache/iceberg/aws/AwsClientProperties.java (resolved, outdated)
@@ -136,6 +156,12 @@ public <T extends AwsClientBuilder> void applyClientCredentialConfigurations(T b
  @SuppressWarnings("checkstyle:HiddenField")
  public AwsCredentialsProvider credentialsProvider(
      String accessKeyId, String secretAccessKey, String sessionToken) {
    if (refreshCredentialsEnabled && !Strings.isNullOrEmpty(refreshCredentialsEndpoint)) {
nit: Should we log a warning if the endpoint is set but refreshCredentialsEnabled is false? I don't think we should fail but this is probably something a user would want to be aware of.
I'm not sure adding a warning adds a lot of value. It's valid to have the server send you back an endpoint plus a refresh-enabled flag that you then override for cases like Kafka Connect. I'd say let's go without a warning for now unless this becomes a point of confusion.
This looks fine to me; I left a few nits, but mainly I would like someone who is closer to this part of the code to also approve this PR before we merge.
I think there are two future improvements that we've identified (catalog token refresh and multi-prefix support within cloud provider), but this support goes a long way to improving current use cases and even enables new ones, so I'm +1.
Force-pushed from a5b58ed to 41fe921
List<Credential> s3Credentials =
    response.credentials().stream()
        .filter(c -> c.prefix().startsWith("s3"))
        .collect(Collectors.toList());
This imho still doesn't address the problem of wiring the credential to the right prefix. For example, the REST server returned the prefix "s3://bucket/prefix-1" but the call was for "s3://bucket/prefix-2"; unless there is an enforcement from REST that it will return only the longest common prefix in the response, this can still mismatch. I would recommend that rather than filtering on prefixes starting with "s3" we make the check equal to "s3" for now, to make sure the client doesn't mess around. What do you think?
@singhpk234 for now the implementation enforces that there's really only a single credential being sent back by the server. I'll be working on supporting and selecting the "right" credential when the server sends back multiple in a follow-up.
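A minimal sketch of what such a single-credential check could look like (illustrative only; the real validation in VendedCredentialsProvider may differ in wording and placement):

import java.util.List;
import org.apache.iceberg.relocated.com.google.common.base.Preconditions;

// Illustrative only: reject responses carrying zero or more than one S3 credential,
// so the provider never has to guess which prefix a credential belongs to.
static Credential singleCredential(List<Credential> s3Credentials) {
  Preconditions.checkState(!s3Credentials.isEmpty(), "Invalid S3 Credentials: empty");
  Preconditions.checkState(
      s3Credentials.size() == 1, "Invalid S3 Credentials: only one credential should exist");
  return s3Credentials.get(0);
}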
for now the implementation enforces that there's really only a single credential being sent back by the server
[doubt] How are we enforcing this? For example, a REST server can send a credential for only one prefix but still for a different prefix than what is being asked for. Are you suggesting that for now the REST server should only send back a credential for exactly the "s3" prefix? If yes, how are we enforcing this in REST in the meantime?
Is the strategy for now to let it fail?
The catalog has the responsibility of returning a credential with a scoped policy that provides the appropriate access for all prefixes. This isn't about the spec or client trying to enforce that behavior. Either the client will have access or not, that's up to the catalog.
The catalog has the responsibility of returning a credential with a scoped policy that provides the appropriate access for all prefixes
I see, so the fact that only one credential is being returned from the catalog itself means that it is the best prefix that can fit all the requests the client is allowed to make. Makes sense! So we would indeed fail at the client when trying an inaccessible prefix, from the S3 end.