4 changes: 2 additions & 2 deletions README.md
@@ -21,13 +21,13 @@ This project offers an open source implementation of the [Turborepo custom remot
📚 For detailed documentation, please refer to our [official website](https://adirishi.github.io/turborepo-remote-cache-cloudflare)

> [!IMPORTANT]
> You can now store your build artifacts in either Cloudflare 🪣 R2 or 🔑 KV storage. Find out how in our [official documentation](https://adirishi.github.io/turborepo-remote-cache-cloudflare/configuration/kv-storage)
> You can now store your build artifacts in Cloudflare 🪣 R2, 🔑 KV, or ☁️ S3 storage. Find out how in our [official documentation](https://adirishi.github.io/turborepo-remote-cache-cloudflare/configuration/project-configuration)

## 🤔 Why should I use this?

If you're a Turborepo user, this project offers compelling advantages:

- 💿 **Storage Options**: Choose between 🪣 [R2](https://adirishi.github.io/turborepo-remote-cache-cloudflare/configuration/r2-storage) or 🔑 [KV](https://adirishi.github.io/turborepo-remote-cache-cloudflare/configuration/kv-storage) storage for your build artifacts. This gives you the flexibility to choose the storage option that best fits your needs.
- 💿 **Storage Options**: Choose between 🪣 [R2](https://adirishi.github.io/turborepo-remote-cache-cloudflare/configuration/r2-storage), 🔑 [KV](https://adirishi.github.io/turborepo-remote-cache-cloudflare/configuration/kv-storage), or ☁️ [S3](https://adirishi.github.io/turborepo-remote-cache-cloudflare/configuration/s3-storage) storage for your build artifacts. This gives you the flexibility to choose the storage option that best fits your needs.
- 🚀 **Faster Builds**: Harness the power of remote caching to significantly speed up your builds
- 🌐 **Independence from Vercel**: Use Turborepo without tying your project to Vercel. This gives you flexibility in hosting decisions.
- 🌍 **Global Deployment**: Code deploys instantly across the globe in over 300 cities, ensuring unmatched performance and reliability.
12 changes: 11 additions & 1 deletion docs/configuration/project-configuration.md
@@ -4,9 +4,19 @@ layout: doc

# ⚙️ Project Configuration

## Storage Options

This project supports multiple storage backends for your build artifacts:

- **🪣 [R2 Storage](/configuration/r2-storage)**: Cloudflare's object storage with zero egress fees
- **🔑 [KV Storage](/configuration/kv-storage)**: Cloudflare's key-value storage with global distribution
- **☁️ [S3 Storage](/configuration/s3-storage)**: Amazon S3 for maximum compatibility and flexibility

The storage priority order is S3 > KV > R2: when more than one backend is configured, the highest-priority one is used.
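The priority rule above can be sketched in plain TypeScript. Note this is an illustrative sketch, not the project's actual `StorageManager` implementation; the `resolveBackend` helper and its return values are hypothetical, while the field names mirror the documented environment variables:

```typescript
// Hypothetical sketch of the S3 > KV > R2 priority order described above.
// The Env fields mirror the documented variables; resolveBackend itself
// is illustrative, not the project's real API.
type Env = {
  R2_STORE?: unknown;
  KV_STORE?: unknown;
  S3_ACCESS_KEY_ID?: string;
  S3_SECRET_ACCESS_KEY?: string;
  S3_BUCKET_NAME?: string;
};

function resolveBackend(env: Env): 's3' | 'kv' | 'r2' {
  // S3 wins when fully configured, then KV, then R2.
  if (env.S3_ACCESS_KEY_ID && env.S3_SECRET_ACCESS_KEY && env.S3_BUCKET_NAME) {
    return 's3';
  }
  if (env.KV_STORE) return 'kv';
  if (env.R2_STORE) return 'r2';
  throw new Error('No storage backend configured');
}
```

For example, a worker with both an R2 bucket bound and full S3 credentials set would resolve to S3.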

## Automatic deletion of old cache files

This project sets up a [cron trigger](https://developers.cloudflare.com/workers/platform/triggers/cron-triggers/) for Cloudflare workers, which automatically deletes old cache files within the bound R2 bucket. This behavior can be customized:
This project sets up a [cron trigger](https://developers.cloudflare.com/workers/platform/triggers/cron-triggers/) for Cloudflare Workers, which automatically deletes old cache files within the configured storage backend. This behavior can be customized:

- To disable the automatic deletion, remove the [triggers] configuration in [wrangler.jsonc](https://github.com/AdiRishi/turborepo-remote-cache-cloudflare/blob/master/wrangler.jsonc)
- To change the retention period for objects, adjust the `BUCKET_OBJECT_EXPIRATION_HOURS` option in [wrangler.jsonc](https://github.com/AdiRishi/turborepo-remote-cache-cloudflare/blob/master/wrangler.jsonc) or set it via [workers environment variables](https://developers.cloudflare.com/workers/platform/environment-variables/)
114 changes: 114 additions & 0 deletions docs/configuration/s3-storage.md
@@ -0,0 +1,114 @@
---
layout: doc
---

# ☁️ Storing artifacts in Amazon S3

[Amazon S3](https://aws.amazon.com/s3/) provides scalable, reliable object storage that can be used to store your build artifacts. With S3, you can leverage AWS's global infrastructure for storing and retrieving your build cache.

Follow these steps to store your build artifacts in Amazon S3:

## 1. Create an S3 Bucket

An S3 bucket is a container for objects in Amazon S3. You can create a bucket via the [AWS Management Console](https://console.aws.amazon.com/) or using the [AWS CLI](https://aws.amazon.com/cli/).

### Using the AWS CLI

```sh
aws s3 mb s3://your-bucket-name --region your-region
```

### Using the AWS Management Console

1. Navigate to the [AWS S3 console](https://console.aws.amazon.com/s3/)
2. Click the `Create bucket` button
3. Enter a name for your bucket and select the region
4. Configure bucket settings as needed
5. Click `Create bucket`

::: tip
Choose a region closest to where the bulk of your API requests will be coming from. For example, if you want to optimize for requests coming from GitHub Actions in the US, pick a US region.

You can find the list of available regions [here](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).
:::

## 2. Create IAM User and Access Keys

You'll need AWS access credentials to authenticate with S3. Follow these steps to create an IAM user with S3 permissions:

### Using the AWS Management Console

1. Navigate to the [IAM console](https://console.aws.amazon.com/iam/)
2. Click `Users` in the left sidebar
3. Click `Create user`
4. Enter a username and select `Programmatic access`
5. Click `Next: Permissions`
6. Click `Attach existing policies directly`
7. Search for and select `AmazonS3FullAccess` (or create a custom policy with minimal S3 permissions)
8. Complete the user creation process
9. Save the Access Key ID and Secret Access Key
Comment on lines +47 to +49

🛠️ Refactor suggestion: avoid recommending `AmazonS3FullAccess`; prefer a minimal, bucket-scoped policy.

Granting `AmazonS3FullAccess` is overly permissive. Recommend attaching the minimal policy (shown below) and explicitly discouraging FullAccess in production. Apply this diff to Step 7:

-7. Search for and select `AmazonS3FullAccess` (or create a custom policy with minimal S3 permissions)
+7. Create and attach a custom policy with minimal S3 permissions (recommended). Avoid using `AmazonS3FullAccess` in production.
### Minimal S3 Policy

If you prefer to grant only minimal permissions, create a custom policy like the following:

```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
"Resource": ["arn:aws:s3:::your-bucket-name", "arn:aws:s3:::your-bucket-name/*"]
}
]
}
```

## 3. Update Your Configuration

Update your `wrangler.jsonc` file to include the S3 configuration. Since S3 credentials are sensitive, they should be set as secrets:

```jsonc{5-15}
{
"name": "turborepo-remote-cache",
// Other settings...

"vars": {
"ENVIRONMENT": "production",
"BUCKET_OBJECT_EXPIRATION_HOURS": 720,
"S3_BUCKET_NAME": "your-bucket-name",
"S3_REGION": "us-east-1"
},

// Comment out R2 and KV configurations when using S3
// "r2_buckets": [...],
// "kv_namespaces": [...]
}
```

Set the sensitive S3 credentials as secrets:

```sh
echo "your-access-key-id" | wrangler secret put S3_ACCESS_KEY_ID
echo "your-secret-access-key" | wrangler secret put S3_SECRET_ACCESS_KEY
```

## 4. Deploy Your Worker

Once you've updated your Worker script and `wrangler.jsonc` file, deploy your Worker using the Wrangler CLI or your GitHub Actions workflow.

And that's it! Your build artifacts will now be stored in Amazon S3.

## Configuration Options

| Variable | Required | Description | Default |
| ---------------------- | -------- | ---------------------------- | ----------- |
| `S3_ACCESS_KEY_ID` | Yes | AWS access key ID | - |
| `S3_SECRET_ACCESS_KEY` | Yes | AWS secret access key | - |
| `S3_BUCKET_NAME` | Yes | S3 bucket name | - |
| `S3_REGION` | No | AWS region for the S3 bucket | `us-east-1` |

::: info
When S3 storage is configured, it takes priority over R2 and KV storage. The storage priority order is: S3 > KV > R2.
:::
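The required/optional split in the table above can be enforced up front before any S3 request is made. The sketch below uses a hypothetical `validateS3Config` helper (not part of this project's API); the variable names and the `us-east-1` default come from the table:

```typescript
// Hypothetical up-front validation of the S3 variables documented above.
// S3_REGION is optional and defaults to us-east-1, matching the table.
type S3Config = {
  S3_ACCESS_KEY_ID?: string;
  S3_SECRET_ACCESS_KEY?: string;
  S3_BUCKET_NAME?: string;
  S3_REGION?: string;
};

function validateS3Config(env: S3Config): { bucket: string; region: string } {
  const required = ['S3_ACCESS_KEY_ID', 'S3_SECRET_ACCESS_KEY', 'S3_BUCKET_NAME'] as const;
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    // Fail fast with a message naming every missing variable at once.
    throw new Error(`Missing required S3 configuration: ${missing.join(', ')}`);
  }
  return { bucket: env.S3_BUCKET_NAME!, region: env.S3_REGION ?? 'us-east-1' };
}
```

Failing fast like this surfaces a misconfigured secret at deploy/startup time rather than as an opaque signing error on the first cache request.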
2 changes: 1 addition & 1 deletion docs/index.md
@@ -24,7 +24,7 @@ features:
details: Use Turborepo without tying your project to Vercel. This gives you flexibility in hosting decisions.
- icon: 🪣
title: Multiple Storage Options
details: Choose between R2 or KV storage for your build artifacts. This gives you the flexibility to choose the storage option that best fits your needs.
details: Choose between R2, KV, or S3 storage for your build artifacts. This gives you the flexibility to choose the storage option that best fits your needs.
- icon: 💰
title: Affordable Start
details: With Cloudflare Workers' generous free tier and zero egress fees, you can make up to 100,000 requests every day at no cost.
31 changes: 31 additions & 0 deletions examples/README.md
@@ -0,0 +1,31 @@
# Configuration Examples

This directory contains example configuration files for different storage backends.

## Available Examples

- **`s3-config.jsonc`** - Configuration for Amazon S3 storage
- **`r2-config.jsonc`** - Configuration for Cloudflare R2 storage (if you have one)
- **`kv-config.jsonc`** - Configuration for Cloudflare KV storage (if you have one)

## Usage

1. Copy the appropriate configuration file to your project root
2. Rename it to `wrangler.jsonc`
3. Update the values to match your setup
4. Set the required secrets using `wrangler secret put`

## Setting Secrets

For S3 storage, you'll need to set these secrets:

```bash
echo "your-access-key-id" | wrangler secret put S3_ACCESS_KEY_ID
echo "your-secret-access-key" | wrangler secret put S3_SECRET_ACCESS_KEY
```

For all storage types, you'll also need:

```bash
echo "your-turbo-token" | wrangler secret put TURBO_TOKEN
```
19 changes: 19 additions & 0 deletions examples/s3-config.jsonc
@@ -0,0 +1,19 @@
{
"$schema": "node_modules/wrangler/config-schema.json",
"name": "turborepo-remote-cache",
"main": "src/index.ts",
"compatibility_date": "2025-02-24",
"upload_source_maps": true,
"observability": {
"enabled": true,
},
"vars": {
"ENVIRONMENT": "production",
"BUCKET_OBJECT_EXPIRATION_HOURS": 720,
"S3_BUCKET_NAME": "your-turborepo-cache-bucket",
"S3_REGION": "us-east-1",
},
"triggers": {
"crons": ["0 3 * * *"],
},
}
5 changes: 4 additions & 1 deletion package.json
@@ -7,7 +7,9 @@
"cloudflare-workers",
"vercel",
"turborepo",
"cloudflare-r2"
"cloudflare-r2",
"amazon-s3",
"aws-s3"
],
"version": "3.2.0",
"author": {
@@ -72,6 +74,7 @@
},
"dependencies": {
"@hono/valibot-validator": "^0.5.3",
"aws4fetch": "^1.0.20",
"hono": "^4.9.2",
"valibot": "^1.1.0"
},
8 changes: 8 additions & 0 deletions pnpm-lock.yaml


4 changes: 4 additions & 0 deletions src/index.ts
@@ -6,6 +6,10 @@ export type Env = {
ENVIRONMENT: 'development' | 'production';
R2_STORE?: R2Bucket;
KV_STORE?: KVNamespace;
S3_ACCESS_KEY_ID?: string;
S3_SECRET_ACCESS_KEY?: string;
S3_BUCKET_NAME?: string;
S3_REGION?: string;
TURBO_TOKEN: string;
BUCKET_OBJECT_EXPIRATION_HOURS: number;
STORAGE_MANAGER: StorageManager;