Feature request:
Please support using the AWS default credential chain for standard s3:// downloads in dfget when explicit S3 credentials are not provided.
Today, dfget s3://... requires --storage-access-key-id and --storage-access-key-secret up front, so it fails on EC2/EKS nodes that already have valid AWS credentials from the normal chain, such as:
- instance profile / node role
- IRSA / pod role
- environment variables
- shared AWS config/credentials files
This request is only for the standard dfget s3://... download path. It does not need to change P2P task identity or scheduling behavior.
Expected behavior:
- if --storage-access-key-id and --storage-access-key-secret are both omitted, use the default AWS credential chain
- if only one of them is provided, return a validation error
- if --storage-session-token is provided, require a full explicit key pair
- keep existing explicit-credential behavior unchanged
This would align the CLI behavior with the underlying S3 backend capability, while making S3 downloads work naturally on AWS-hosted nodes.
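For illustration, the expected resolution rules could be sketched as follows. This is only a behavioral sketch, not actual dfget code; the function name and error messages are hypothetical:

```shell
# Hypothetical sketch of the proposed credential-resolution rules.
# Prints which mode would be used, or errors out on invalid combinations.
resolve_s3_creds() {
  key_id="$1"; secret="$2"; token="$3"
  if [ -n "$token" ] && { [ -z "$key_id" ] || [ -z "$secret" ]; }; then
    # session token requires the full explicit key pair
    echo "error: --storage-session-token requires both --storage-access-key-id and --storage-access-key-secret" >&2
    return 1
  elif { [ -n "$key_id" ] && [ -z "$secret" ]; } || { [ -z "$key_id" ] && [ -n "$secret" ]; }; then
    # only one half of the key pair was provided
    echo "error: --storage-access-key-id and --storage-access-key-secret must be provided together" >&2
    return 1
  elif [ -n "$key_id" ]; then
    echo "explicit"      # current behavior: use the provided static credentials
  else
    echo "default-chain" # requested behavior: fall back to the AWS default credential chain
  fi
}
```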
Use case:
We run Dragonfly clients on EKS/EC2 nodes that already have permission to read from S3 through the node role or service account role.
We want workload pods to be able to do:
dfget s3://bucket/path -O /tmp/file --storage-region us-east-1
and let Dragonfly use the existing AWS credentials on the node automatically.
Right now, we have to add a wrapper script that calls IMDS, fetches temporary credentials, and then passes:
--storage-access-key-id
--storage-access-key-secret
--storage-session-token
This adds complexity and unnecessary credential handling in user workloads.
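For reference, the wrapper is roughly equivalent to the following. This is an illustrative sketch that assumes IMDSv2 and jq are available; the bucket, object path, and token TTL are placeholders, and it only runs on a host with instance-metadata access:

```shell
# Sketch of the current workaround: fetch temporary role credentials from
# IMDSv2, then pass them to dfget explicitly.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/")
CREDS=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE")

dfget s3://bucket/path -O /tmp/file --storage-region us-east-1 \
  --storage-access-key-id "$(echo "$CREDS" | jq -r .AccessKeyId)" \
  --storage-access-key-secret "$(echo "$CREDS" | jq -r .SecretAccessKey)" \
  --storage-session-token "$(echo "$CREDS" | jq -r .Token)"
```

With the requested behavior, this entire script collapses to the single dfget command shown in the use case above.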
From a P2P perspective, this feature is valuable because it only changes how back-to-source S3 authentication is resolved. The task is still identified by the object URL and normal Dragonfly task ID logic, so peer-to-peer distribution should continue to work the same way.
UI Example:
No new flag is required. The current UI can support both modes:
# Existing behavior: explicit credentials
dfget s3://my-bucket/path/to/object -O /tmp/object \
  --storage-region us-east-1 \
  --storage-access-key-id "$AWS_ACCESS_KEY_ID" \
  --storage-access-key-secret "$AWS_SECRET_ACCESS_KEY" \
  --storage-session-token "$AWS_SESSION_TOKEN"
# Requested behavior: default AWS credential chain
dfget s3://my-bucket/path/to/object -O /tmp/object \
  --storage-region us-east-1