A Serverless Framework-based AWS Lambda function, triggered by S3 events, that resizes images with the excellent Sharp module. Because Sharp uses the libvips library, image processing can be 3x-5x faster than with ImageMagick, reducing the time your function spends running and thus potentially cutting your Lambda costs dramatically. The function's behaviour is controlled entirely through configuration.
A tool that takes images uploaded to an S3 bucket and produces one or more derivative images of varying sizes, optimizations, and other transformations, all controlled from a simple configuration file. It does this by creating an AWS Lambda function with the help of the Serverless Framework.
Install the service with the following commands:
git clone https://github.com/adieuadieu/serverless-sharp-image
cd serverless-sharp-image
yarn install
(It is possible to exchange `yarn` for `npm` if `yarn` is too hipster for your taste. No problem.)
Or, if you have `serverless` installed globally:
serverless install -u https://github.com/adieuadieu/serverless-sharp-image
Then, modify the `config.json` and `event.json` files, adapting them to your needs. More on configuration below.
You must configure your AWS credentials either by defining the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, or by using an AWS profile. You can read more about this in the Serverless Credentials Guide. It's a bit of a pain in the ass if you have many projects/credentials.
In short, either:
export AWS_PROFILE=<your-profile-name>
or
export AWS_ACCESS_KEY_ID=<your-key-here>
export AWS_SECRET_ACCESS_KEY=<your-secret-key-here>
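If you go the profile route, a minimal sketch of what the corresponding entry in `~/.aws/credentials` might look like (the profile name and the placeholder values are yours to fill in):

```ini
[my-profile-name]
aws_access_key_id = <your-key-here>
aws_secret_access_key = <your-secret-key-here>
```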
Make sure the bucket in `config.json` exists.
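If it doesn't exist yet, you can create it with the AWS CLI (using the bucket name from the example configuration below; substitute your own):

```bash
aws s3 mb s3://my-sweet-unicorn-media
```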
Then:
yarn test
You can also try out the service by invoking it. First deploy it with `yarn run deploy`, then invoke your function with `yarn run invoke`. This will invoke the function with the test event in `event.json`. You may need to tweak this file to match your setup.
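For reference, `event.json` follows the shape of an S3 put-event notification. A trimmed sketch (the bucket name and object key here are placeholders; the file shipped in the repository is authoritative):

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "my-sweet-unicorn-media" },
        "object": { "key": "originals/omg.jpg" }
      }
    }
  ]
}
```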
yarn run deploy
This package bundles a Lambda-execution-environment-ready build of the Sharp library, which allows you to deploy the lambda function from any OS.
TODO-ish:
Write something here about the need to compile Sharp on an AWS AMI that matches the one run by Lambda, because Sharp includes a native Node.js addon. When deploying into production, it would be prudent to deploy from an environment which is similar to that of AWS Lambda. More on that is available here.
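One common approach (a sketch, assuming you have Docker and use the community-maintained lambci/lambda build images, which mirror the Lambda runtime environment; the Node version tag should match your Lambda runtime) is to install dependencies inside such a container so native addons like Sharp are compiled against the right environment:

```bash
docker run --rm -v "$PWD":/var/task lambci/lambda:build-nodejs8.10 npm install
```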
The lambda service is designed to be controlled by configuration. From the configuration you can set up how one or more images will be manipulated, with direct access to the underlying methods of Sharp for full control.
{
"sourceBucket": "my-sweet-unicorn-media",
"sourcePrefix": "originals/",
"destinationBucket": "my-sweet-unicorn-media",
"destinationPrefix": "web-ready/",
"s3": {
"params": {}
},
"all": [
["rotate"],
["toFormat", "jpeg", { "quality": 80 }]
],
"outputs": [
{
"key": "%(filename)s-200x200.jpg",
"params": {
"ACL": "public-read"
},
"operations": [
["resize", 200, 200],
["max"],
["withoutEnlargement"]
]
},
{
"key": "%(filename)s-100x100.jpg",
"operations": [
["resize", 100, 100],
["max"],
["withoutEnlargement"]
]
}
]
}
TODO: document configuration in more detail
`all` - applied to the image before creating all the outputs
`outputs` - define the files you wish to generate from the source
Outputs are lists of Sharp's methods you want performed on your image. For example, if you want to perform the Sharp method `sharp(image).resize(200, 300)`, you would define this in your configuration as `["resize", 200, 300]`. Note that methods are performed in the order they appear in the configuration, and a different order can produce different results.
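As an illustration, here is a minimal sketch of how such an operations list could map onto Sharp's chainable API (`applyOperations` is a hypothetical helper invented for this example, not part of this project's code; `max()` and `withoutEnlargement()` are chainable methods in the Sharp versions this project targets):

```javascript
const sharp = require('sharp')

// Hypothetical helper: applies a configured operations list in order.
// Each entry is [methodName, ...args], e.g. ["resize", 200, 200].
function applyOperations (image, operations) {
  return operations.reduce(
    (pipeline, [method, ...args]) => pipeline[method](...args),
    image
  )
}

// Equivalent to: sharp('./omg.jpg').resize(200, 200).max().withoutEnlargement()
applyOperations(sharp('./omg.jpg'), [
  ['resize', 200, 200],
  ['max'],
  ['withoutEnlargement']
])
  .toFile('./omg-200x200.jpg')
  .then(() => console.log('wrote omg-200x200.jpg'))
```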
- `key`: the destination object key for the generated image; uses sprintf internally (see the template variables below)
- `params`: set specific S3 options for the image when uploaded to the destination S3 bucket. See more about the param options in the AWS S3 `upload` method documentation
- `key` - The full object key with which the service was invoked.
  Example: given the object key `unicorns/and/pixie/sticks/omg.jpg`, `%(key)s` yields `"unicorns/and/pixie/sticks/omg.jpg"`.
- `type` - The Content-Type of the object, as returned by S3.
  Example: given the Content-Type `image/jpeg`, `%(type)s` yields `"image/jpeg"`.
- `crumbs` - The crumbs of the S3 object as an array (i.e. the object key split by "/", not including the filename).
  Example: given the object key `unicorns/and/pixie/sticks/omg.jpg`, `%(crumbs[0])s` yields `"unicorns"` and `%(crumbs[2])s` yields `"pixie"`.
- `directory` - The "directory" of the S3 object.
  Example: given the object key `unicorns/and/pixie/sticks/omg.jpg`, `%(directory)s` yields `"unicorns/and/pixie/sticks"`.
- `filename` - The file name (minus the last extension).
  Example: given the object key `unicorns/and/pixie/sticks/omg.jpg`, `%(filename)s` yields `"omg"`.
- `extension` - The file's extension, determined by the Content-Type returned by S3.
  Example: given the Content-Type `image/png`, `%(extension)s` yields `"png"`.
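To see how the templating behaves, here's a small sketch using the sprintf-js package (an assumption for illustration; the README only states that sprintf is used internally), with the template variables pre-computed from the example object key above:

```javascript
const { sprintf } = require('sprintf-js')

// Template variables as they would be derived from the object key
// 'unicorns/and/pixie/sticks/omg.jpg' with Content-Type 'image/jpeg'.
const vars = {
  key: 'unicorns/and/pixie/sticks/omg.jpg',
  type: 'image/jpeg',
  crumbs: ['unicorns', 'and', 'pixie', 'sticks'],
  directory: 'unicorns/and/pixie/sticks',
  filename: 'omg',
  extension: 'jpg'
}

console.log(sprintf('%(filename)s-200x200.jpg', vars)) // => omg-200x200.jpg
console.log(sprintf('%(crumbs[0])s/%(filename)s.%(extension)s', vars)) // => unicorns/omg.jpg
```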
How can I use an existing bucket for my original images and processed output images?
By default, Serverless tries to provision all the necessary resources required by the lambda function by creating a stack in AWS CloudFormation. To use existing buckets, first remove the `s3` event section from the `functions.sharpImage.events` configuration in `serverless.yml`, then remove the entire `resources` section from the same file. Alternatively, if you'd like to use an existing bucket for the original images but have a new processed-images output bucket created, remove only the `s3` event section in `serverless.yml`.

How can I use the same bucket for both the source and destination?
To do this, remove the `imageDestinationBucket` section from the `resources` section in `serverless.yml`.

I keep getting a timeout error when deploying and it's really annoying.
Indeed, that is annoying. I had the same problem, which is why it's now here in this troubleshooting section. This may be an issue in the underlying AWS SDK when using a slower Internet connection. Try setting the `AWS_CLIENT_TIMEOUT` environment variable to a higher value. For example, enter the following in your command prompt and try deploying again:

export AWS_CLIENT_TIMEOUT=3000000