Supported Clients
Here are configuration snippets for various S3 clients in different languages.
AWS-S3
AWS::S3::Base.establish_connection!(
  :access_key_id => "123",
  :secret_access_key => "abc",
  :server => "localhost",
  :port => "10001")
Right AWS
RightAws::S3Interface.new('1E3GDYEOGFJPIT7', 'hgTHt68JY07JKUY08ftHYtERkjgtfERn57',
  { :multi_thread => false, :server => 'localhost',
    :port => 10453, :protocol => 'http', :no_subdomains => true })
AWS-SDK
AWS::S3.new(
  :access_key_id => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
  :s3_endpoint => 'localhost',
  :s3_port => 10001,
  :use_ssl => false)
If you've disabled SSL as part of an AWS.config call and attempt to use services that have not been redirected (such as STS), you will need to re-enable SSL for those services. Note that this configuration has not been extensively tested with non-S3 services from the AWS-SDK gem.
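For example, a minimal sketch of that setup (untested; re-enabling SSL by passing :use_ssl to the individual client, as with AWS::STS.new below, is an assumption about where you'd do the override):
# disable SSL globally and point S3 at fakes3
AWS.config(
  :access_key_id => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
  :s3_endpoint => 'localhost',
  :s3_port => 10001,
  :use_ssl => false)

# re-enable SSL for a service that still talks to real AWS, e.g. STS
sts = AWS::STS.new(:use_ssl => true)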
I would recommend using a hostname other than localhost. You will need to create DNS entries for somebucket.s3_endpoint
in order to use fakes3.
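For example, assuming fakes3 is reachable as fakes3.dev (a placeholder hostname) and your code uses a bucket called somebucket, an /etc/hosts entry would look like:
127.0.0.1 somebucket.fakes3.dev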
As an alternative to creating DNS entries, at least with aws-sdk, you can use a configuration like so:
AWS::S3.new(
  :access_key_id => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
  :s3_endpoint => 'localhost',
  :s3_force_path_style => true,
  :s3_port => 10001,
  :use_ssl => false)
AWS-SDK V2
Aws::S3::Client.new(
  :access_key_id => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
  :region => 'YOUR_REGION',
  :endpoint => 'http://localhost:10001/',
  :force_path_style => true)
Fog
connection = Fog::Storage::AWS.new(
  aws_access_key_id: '123',
  aws_secret_access_key: 'asdf',
  port: 10001,
  host: 'localhost',
  scheme: 'http')
I also needed the following monkeypatch to make it work.
require 'fog/aws/models/storage/files'
# fog always expects the Last-Modified and ETag headers to be present
# We relax this requirement to support fakes3
class Fog::Storage::AWS::Files
  def normalize_headers(headers)
    headers['Last-Modified'] = Time.parse(headers['Last-Modified']) if headers['Last-Modified']
    headers['ETag'].gsub!('"', '') if headers['ETag']
  end
end
AWS SDK (Android)
Clone from S3_Uploader
Modify S3UploaderActivity.java
s3Client.setEndpoint("http://your-server-ip");
Change ACCESS_KEY_ID and SECRET_KEY in Constants.java
AWS SDK (Java)
BasicAWSCredentials credentials = new BasicAWSCredentials("foo", "bar");
AmazonS3Client s3Client = new AmazonS3Client(credentials);
s3Client.setEndpoint("http://localhost:4567");
s3Client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
If you do not set path style access (and use the default virtual-host style), you will have to set up your DNS or hosts file to contain subdomain buckets. On Unix, edit /etc/hosts and add:
127.0.0.1 bucketname.localhost
aws-cli
Using the aws-cli (version aws-cli/1.10.61 Python/2.7.9 Linux/4.4.15-moby botocore/1.4.51)
A few notes from my tests:
- make the client machine resolve s3.amazonaws.com to wherever fakes3 is running (e.g. edit /etc/hosts). I haven't tried the fakes3 server options to specify address and/or hostname
- aws-cli will look for credentials. In my tests I only used the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables
- I've only needed to test creating a bucket (mb) and then pushing files to it (cp), both single files and recursively. YMMV with other commands
Example Commands:
$ fakes3 -r /tmp/s3 -p 80 &
Loading FakeS3 with /tmp/s3 on port 80 with hostname s3.amazonaws.com
[2016-09-06 04:24:13] INFO WEBrick 1.3.1
[2016-09-06 04:24:13] INFO ruby 2.3.1 (2016-04-26) [x86_64-linux]
[2016-09-06 04:24:13] INFO WEBrick::HTTPServer#start: pid=655 port=80
$ export AWS_ACCESS_KEY_ID=1234
$ export AWS_SECRET_ACCESS_KEY=1234
# Make Bucket required the region parameter to work
$ aws --endpoint-url='http://s3.amazonaws.com' s3 mb s3://fakes3 --region us-west-1
$ aws --endpoint-url='http://s3.amazonaws.com' s3 cp ./tmp/data s3://fakes3/data --recursive
s3cmd
For s3cmd you need to set up your DNS to contain subdomain buckets, since it doesn't do path-style S3 requests. You can use a config like this to make it work: Gist
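For reference, a minimal sketch of what such a config can look like (the host and port below are assumptions for a local fakes3; the option names are s3cmd's standard .s3cfg keys, see the Gist for a complete example):
[default]
access_key = 123
secret_key = abc
host_base = fakes3.dev:10001
host_bucket = %(bucket)s.fakes3.dev:10001
use_https = False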
Then just run
s3cmd -c myconfig mb s3://my_bucket
bash
For very simple cases, you can use the following shell commands to generate the metadata file:
echo ":md5: $(md5 -q "$1")"
echo ":content_type: $(file -b --mime-type "$1")"
echo ":size: $(du "$1"| awk '{print $1;}')"
echo ":modified_date: '$(date +%Y-%m-%dT%T.000Z)'"
echo ":custom_metadata: {}"
Knox
$ npm install --save knox
var knox = require('knox');
var client = knox.createClient({
  key: '123',
  secret: 'abc',
  bucket: 'my_bucket',
  endpoint: 'localhost',
  style: 'path',
  port: 10001
});
aws-sdk v2
$ npm install --save aws-sdk
var fs = require('fs')
var AWS = require('aws-sdk')
var config = {
  s3ForcePathStyle: true,
  accessKeyId: 'ACCESS_KEY_ID',
  secretAccessKey: 'SECRET_ACCESS_KEY',
  endpoint: new AWS.Endpoint('http://localhost:10001')
}
var client = new AWS.S3(config)
var params = {
  Key: 'Key',
  Bucket: 'Bucket',
  Body: fs.createReadStream('./image.png')
}
client.upload(params, function uploadCallback (err, data) {
  console.log(err, data)
})
aws-sdk v3
$ npm install --save @aws-sdk/client-s3
var fs = require('fs')
var { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3')
var config = {
  forcePathStyle: true,
  credentials: {
    accessKeyId: 'S3RVER',
    secretAccessKey: 'S3RVER',
  },
  endpoint: 'http://localhost:4569'
}
var client = new S3Client(config)
var params = {
  Key: 'Key',
  Bucket: 'Bucket',
  Body: fs.createReadStream('./image.png')
}
client.send(new PutObjectCommand(params))
  .then(
    data => {
      console.log(data)
    },
    err => {
      console.error(err)
    }
  );
pkgcloud
$ npm install --save pkgcloud
var fs = require('fs')
var pkgcloud = require('pkgcloud')
var config = {
  provider: 'amazon',
  protocol: 'http://',
  serversUrl: 'localhost:8000',
  accessKeyId: 'ACCESS_KEY_ID',
  accessKey: 'SECRET_ACCESS_KEY'
}
var client = pkgcloud.storage.createClient(config)
var params = {
  remote: 'Key',
  container: 'Bucket'
}
fs.createReadStream('./image.png').pipe(client.upload(params));
ex_aws
Add ex_aws, sweet_xml and hackney as dependencies:
# mix.exs
def application do
  # Specify extra applications you'll use from Erlang/Elixir
  [extra_applications: [:logger, :ex_aws, :sweet_xml, :hackney]]
end

defp deps do
  [
    {:ex_aws, "~> 1.1"},
    {:sweet_xml, "~> 0.6.5"},
    {:hackney, "~> 1.7"}
  ]
end
Install dependencies:
mix deps.get
Configure ex_aws in config/config.exs:
config :ex_aws,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, :instance_role],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}, :instance_role],
  region: "fakes3"

config :ex_aws, :s3,
  scheme: "http://",
  host: "localhost",
  port: 4567
That's it!
export AWS_ACCESS_KEY_ID=123
export AWS_SECRET_ACCESS_KEY=asdf
iex -S mix
iex(1)> ExAws.S3.put_bucket("bukkit", "fakes3") |> ExAws.request
{:ok,
%{body: "",
headers: [{"Content-Type", "text/xml"}, {"Access-Control-Allow-Origin", "*"},
{"Server", "WEBrick/1.3.1 (Ruby/2.4.0/2016-12-24) OpenSSL/1.0.2k"},
{"Date", "Wed, 22 Mar 2017 14:11:22 GMT"}, {"Content-Length", "0"},
{"Connection", "Keep-Alive"}], status_code: 200}}
iex(2)> ExAws.S3.list_buckets |> ExAws.request
{:ok,
%{body: %{buckets: [%{creation_date: "2017-03-22T15:11:22.000Z",
name: "bukkit"}], owner: %{display_name: "FakeS3", id: "123"}},
headers: [{"Content-Type", "application/xml"},
{"Server", "WEBrick/1.3.1 (Ruby/2.4.0/2016-12-24) OpenSSL/1.0.2k"},
{"Date", "Wed, 22 Mar 2017 14:16:07 GMT"}, {"Content-Length", "303"},
{"Connection", "Keep-Alive"}], status_code: 200}}
boto3
import boto3

s3 = boto3.resource('s3', endpoint_url='http://localhost:4567',
                    aws_access_key_id='123', aws_secret_access_key='abc')
s3.create_bucket(Bucket='my_bucket')
for bucket in s3.buckets.all():
    print(bucket.name)
aws-sdk-go
import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

var conf = &aws.Config{
    Credentials:      credentials.NewStaticCredentials("id", "secret", "token"),
    Endpoint:         aws.String("http://localhost:4569"),
    Region:           aws.String("us-west-2"),
    DisableSSL:       aws.Bool(true),
    S3ForcePathStyle: aws.Bool(true),
}
// named svc rather than s3 so it does not shadow the imported s3 package
var svc = s3.New(session.New(conf))