“We turn giant uploads into tiny, recoverable pieces, process them safely, and lock results behind encryption only our app can unlock.”
This repository contains a reference implementation of a resumable media upload stack built with NestJS. It mirrors the production architecture that orchestrates S3 multipart uploads, asynchronous processing, application-level encryption, and secure delivery endpoints. In addition to the backend, the repo ships reference React and Flutter clients that demonstrate how mobile and web apps cooperate with the API.
```
[React / Flutter]
      | 1. init upload (filename, size, mime)
      v
[NestJS API] ----> S3-compatible storage (createMultipartUpload)
      |<-- uploadId, key, partSize, presigned PUT URLs
      |
      | 2. client uploads parts concurrently via presigned PUTs
      | 3. client resumes by asking API which parts exist
      | 4. complete
      v
[Queue] enqueue processing job
      v
[Worker]
      download original (stubbed locally)
      -> transcode/transform (MediaConvert / Sharp abstractions)
      -> encrypt (AES-256-GCM, app-level keys)
      -> persist to secure bucket
      v
[DB] mark READY (stores wrapped data key, IV, tag)
```
Delivery: the API streams decrypted bytes or produces signed manifests (Bitmovin Player ready).
The implementation is intentionally modular: storage providers, encryption implementations, and queue backends can be swapped without touching the upload flows.
| Module | Description |
|---|---|
| `modules/uploads` | REST API for creating upload sessions, requesting presigned URLs, completing uploads, and resuming stalled sessions. Upload metadata is persisted via the `UploadSessionRepository` abstraction. |
| `modules/storage` | Storage adapters. The default configuration wraps the AWS SDK so multipart uploads and processed assets are stored in S3 buckets. |
| `modules/processing` | Queue contract, deterministic processing service, and worker bootstrap. The default worker consumes an in-memory queue and emits monitoring callbacks (wired to CloudWatch/SNS in production). |
| `modules/security` | App-level encryption interfaces. The provided `LocalKmsEncryptionService` simulates GenerateDataKey/Encrypt behaviour so encrypted assets can be produced without KMS. |
| `modules/persistence` | Repository bindings. In-memory storage keeps the sample app stateless; in production, back these contracts with Postgres + Redis. |
- Initiate – `POST /uploads/sessions` stores metadata (including the `uploadedBy` attribution) and calls the multipart storage adapter. The response includes `sessionId`, `uploadId`, the normalized S3 object key, and the resolved uploader so clients can immediately start uploading.
- Upload Parts – Clients request presigned PUT URLs for individual parts (`POST /uploads/sessions/:id/parts`). If a network error occurs, only the failing part is retried.
- Complete – `POST /uploads/sessions/:id/complete` records the uploaded ETags, finalizes the multipart upload, and enqueues a processing job.
- Resume – `GET /uploads/sessions/:id` (or `POST /uploads/sessions/:id/sync`) returns the authoritative part list so web/mobile apps can resume interrupted uploads.
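The part math behind Upload Parts and Resume can be sketched as below. The range-splitting and diffing logic follows from the flow above; the helper names and the exact shape of the part list are illustrative assumptions, not the repo's DTOs.

```typescript
interface PartRange {
  partNumber: number; // S3 part numbers start at 1
  start: number;      // inclusive byte offset
  end: number;        // exclusive byte offset
}

// Split a file of `fileSize` bytes into sequential ranges of `partSize`.
function computeParts(fileSize: number, partSize: number): PartRange[] {
  const parts: PartRange[] = [];
  for (let start = 0, n = 1; start < fileSize; start += partSize, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + partSize, fileSize) });
  }
  return parts;
}

// Resuming: given the authoritative part list from the API, only the
// parts not yet uploaded need to be retried.
function missingParts(all: PartRange[], uploadedPartNumbers: number[]): PartRange[] {
  const done = new Set(uploadedPartNumbers);
  return all.filter((p) => !done.has(p.partNumber));
}
```

Each `PartRange` maps directly to one presigned PUT: the client slices the file at `[start, end)` and sends that slice to the URL issued for `partNumber`.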
- Jobs contain the upload session ID, bucket, key, mime type, uploader, and the location/etag of the staged object.
- The worker marks the session `PROCESSING`, calls the `ProcessingService`, and persists completion or failure.
- `DeterministicProcessingService` emulates Sharp/MediaConvert pipelines by producing deterministic manifests (HLS playlists for video, rendition metadata for images) before passing them through the encryption layer. The manifests include the uploader attribution so overlays can display usernames alongside the watermark.
- `LocalKmsEncryptionService` performs AES-256-GCM encryption with locally wrapped data keys. Replace it with an AWS KMS-backed implementation by conforming to the `EncryptionService` contract.
- `MonitoringReporter` is the hook for emitting CloudWatch metrics or SNS notifications; in the sample it logs to the console.
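The envelope-encryption shape described above (a fresh data key encrypts the asset, a master key wraps the data key, and the DB keeps the wrapped key, IV, and tag) can be sketched with Node's `crypto` module. The function and field names here are illustrative, not the actual `LocalKmsEncryptionService` API.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

interface EncryptedAsset {
  ciphertext: Buffer;
  wrappedKey: Buffer; // data key encrypted under the master key
  iv: Buffer;
  tag: Buffer;        // GCM auth tag for the asset ciphertext
}

function encryptAsset(plaintext: Buffer, masterKey: Buffer): EncryptedAsset {
  const dataKey = randomBytes(32); // simulated KMS GenerateDataKey
  const iv = randomBytes(12);      // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", dataKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = cipher.getAuthTag();

  // Wrap the data key under the master key (also AES-256-GCM here),
  // packing wrapIv + encrypted key + wrap tag into one buffer.
  const wrapIv = randomBytes(12);
  const wrapper = createCipheriv("aes-256-gcm", masterKey, wrapIv);
  const wrappedKey = Buffer.concat([
    wrapIv,
    wrapper.update(dataKey),
    wrapper.final(),
    wrapper.getAuthTag(),
  ]);
  return { ciphertext, wrappedKey, iv, tag };
}

function decryptAsset(asset: EncryptedAsset, masterKey: Buffer): Buffer {
  // Unpack and unwrap the data key first.
  const wrapIv = asset.wrappedKey.subarray(0, 12);
  const tagStart = asset.wrappedKey.length - 16;
  const unwrapper = createDecipheriv("aes-256-gcm", masterKey, wrapIv);
  unwrapper.setAuthTag(asset.wrappedKey.subarray(tagStart));
  const dataKey = Buffer.concat([
    unwrapper.update(asset.wrappedKey.subarray(12, tagStart)),
    unwrapper.final(),
  ]);
  // Then decrypt the asset with the recovered data key.
  const decipher = createDecipheriv("aes-256-gcm", dataKey, asset.iv);
  decipher.setAuthTag(asset.tag);
  return Buffer.concat([decipher.update(asset.ciphertext), decipher.final()]);
}
```

Swapping in AWS KMS means replacing the local wrap/unwrap steps with `GenerateDataKey` and `Decrypt` calls; the stored `{ wrappedKey, iv, tag }` shape stays the same.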
Located in `clients/web`. The heart of the implementation is a `useResumableUpload` hook that:

- Calls the API to initiate a session.
- Collects the uploader's display name and embeds it into every request so watermark overlays can display attribution.
- Splits the file into parts and uploads them concurrently via `fetch` using the provided presigned URLs.
- Stores progress in `localStorage`, allowing browser refreshes to resume uploads.
- Calls the completion endpoint and polls until the asset is marked `READY`.

Run the demo inside any React project by copying the hook/component pair and wiring the `UPLOADS_API_BASE_URL` constant to your backend.
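The `localStorage` bookkeeping that makes refresh-resume work can be sketched as below. The key format and stored fields are illustrative assumptions; it is written against a minimal get/set interface so it also runs outside the browser (pass `window.localStorage` in a real app).

```typescript
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface StoredProgress {
  sessionId: string;
  uploadedParts: number[]; // part numbers already PUT successfully
}

// Hypothetical key scheme: one entry per file fingerprint (e.g. name+size).
const keyFor = (fileFingerprint: string) => `resumable-upload:${fileFingerprint}`;

function saveProgress(store: KeyValueStore, fingerprint: string, p: StoredProgress): void {
  store.setItem(keyFor(fingerprint), JSON.stringify(p));
}

function loadProgress(store: KeyValueStore, fingerprint: string): StoredProgress | null {
  const raw = store.getItem(keyFor(fingerprint));
  return raw ? (JSON.parse(raw) as StoredProgress) : null;
}
```

On refresh, the hook loads the stored progress, asks the API for the authoritative part list, and uploads only the difference.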
Located in `clients/flutter`. The `ResumableUploadService` class handles chunking, retries, and background-safe persistence using the `shared_preferences` package. The included Riverpod-powered widget displays upload progress and retries failed chunks.
Both reference clients prompt for a username before starting an upload so the processing pipeline can stamp the attribution into the overlay metadata.
Environment variables drive the adapters:
| Variable | Description | Default |
|---|---|---|
| `SOURCE_BUCKET` | Bucket that stores raw multipart uploads | `uploads` |
| `PROCESSED_BUCKET` | Bucket for encrypted, processed assets | `secure-processed` |
| `LOCAL_MASTER_KEY` | 32+ byte string used by `LocalKmsEncryptionService` | Generated fallback |
Provide AWS credentials and replace the local adapters with production implementations to connect to S3, KMS, MediaConvert, BullMQ/Redis, and Postgres.
Provide AWS credentials and bucket names via environment variables before starting the service (or place them in a .env file that your NestJS bootstrap reads):
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_SESSION_TOKEN` (only when using temporary credentials)
- `AWS_REGION`
- `SOURCE_BUCKET` (name of the multipart upload bucket)
- `PROCESSED_BUCKET` (name of the processed output bucket)
The included S3 adapters respect these settings in every environment.
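One way to collect these variables into a single typed config object is sketched below. The helper and its `us-east-1` fallback are illustrative assumptions, not code from `modules/storage`; the bucket defaults match the table above.

```typescript
interface StorageConfig {
  region: string;
  sourceBucket: string;
  processedBucket: string;
}

// Read the storage-related env vars, falling back to the documented defaults.
function loadStorageConfig(
  env: Record<string, string | undefined> = process.env
): StorageConfig {
  return {
    region: env.AWS_REGION ?? "us-east-1", // illustrative fallback region
    sourceBucket: env.SOURCE_BUCKET ?? "uploads",
    processedBucket: env.PROCESSED_BUCKET ?? "secure-processed",
  };
}
```

Centralizing the lookup this way keeps the adapters free of scattered `process.env` reads and makes the config easy to stub in tests.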
```sh
# Start the NestJS API + background worker
npm run start:dev

# Run unit tests (includes upload happy-path coverage)
npm test
```

- Swap `InMemoryProcessingQueue` for BullMQ (Redis) or SQS. Only the queue provider needs to change because the worker relies on the abstract `ProcessingQueue`.
- Implement a TypeORM (or Prisma) repository for `UploadSessionRepository` so uploads survive process restarts.
- Hook `MonitoringReporter` into CloudWatch metrics and SNS topics for alerting.
- Update the React + Flutter clients to surface processed HLS manifests in Bitmovin Player.
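The queue swap works because the worker only depends on the queue contract. A hedged sketch of what that contract might look like, with an in-memory implementation, is below; the actual `ProcessingQueue` interface in `modules/processing` may differ.

```typescript
// Illustrative job shape (a subset of the fields listed earlier).
interface ProcessingJob {
  sessionId: string;
  bucket: string;
  key: string;
}

interface ProcessingQueue {
  enqueue(job: ProcessingJob): Promise<void>;
  onJob(handler: (job: ProcessingJob) => Promise<void>): void;
}

// Minimal in-memory provider: jobs enqueued before a handler is
// registered are buffered and drained once one arrives.
class InMemoryQueue implements ProcessingQueue {
  private handler: ((job: ProcessingJob) => Promise<void>) | null = null;
  private backlog: ProcessingJob[] = [];

  async enqueue(job: ProcessingJob): Promise<void> {
    if (this.handler) await this.handler(job);
    else this.backlog.push(job);
  }

  onJob(handler: (job: ProcessingJob) => Promise<void>): void {
    this.handler = handler;
    const pending = this.backlog.splice(0);
    for (const job of pending) void handler(job);
  }
}
```

A BullMQ or SQS provider would implement the same two methods (`enqueue` publishing to Redis/SQS, `onJob` registering a consumer), leaving the worker untouched.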
```
src/
  modules/
    uploads/       // Controllers, services, DTOs for upload sessions
    processing/    // Queue contracts, worker bootstrap, monitoring hooks
    storage/       // Abstract + S3 storage adapters
    security/      // Encryption contracts + local KMS simulator
    persistence/   // Repository provider bindings
clients/
  web/             // React hook + component for resumable uploads
  flutter/         // Dart service and widgets mirroring the web workflow
```
MIT