Adokiye/ShardSafe-Upload-Platform
Resumable Media Upload Service

“We turn giant uploads into tiny, recoverable pieces, process them safely, and lock results behind encryption only our app can unlock.”

This repository contains a reference implementation of a resumable media upload stack built with NestJS. It mirrors the production architecture that orchestrates S3 multipart uploads, asynchronous processing, application-level encryption, and secure delivery endpoints. In addition to the backend, the repo ships reference React and Flutter clients that demonstrate how mobile and web apps cooperate with the API.

System Architecture

[React / Flutter]
   | 1. init upload (filename, size, mime)
   v
[NestJS API] ----> S3-compatible storage (createMultipartUpload)
   |<-- uploadId, key, partSize, presigned PUT URLs
   |
   | 2. client uploads parts concurrently via presigned PUTs
   | 3. client resumes by asking API which parts exist
   | 4. complete
   v
[Queue] enqueue processing job
   v
[Worker]
   download original (stubbed locally)
     -> transcode/transform (MediaConvert / Sharp abstractions)
     -> encrypt (AES-256-GCM, app level keys)
     -> persist to secure bucket
   v
[DB] mark READY (stores wrapped data key, IV, tag)

Delivery: API streams decrypted bytes or produces signed manifests (Bitmovin Player-ready).

The implementation is intentionally modular: storage providers, encryption implementations, and queue backends can be swapped without touching the upload flows.
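As a sketch of that modularity, the queue contract and its in-memory stand-in might look like the following. The field and interface shapes here are illustrative, not copied from the repo; the actual contracts live under src/modules/processing.

```typescript
// Illustrative job payload; the repo's real job shape carries more fields
// (mime type, uploader, etag) as described in the processing flow below.
interface ProcessingJob {
  sessionId: string;
  bucket: string;
  key: string;
}

// The abstraction the worker depends on; swapping BullMQ or SQS in means
// providing another implementation of this interface.
interface ProcessingQueue {
  enqueue(job: ProcessingJob): Promise<void>;
  dequeue(): Promise<ProcessingJob | undefined>;
}

// Minimal in-memory stand-in, in the spirit of the sample's
// InMemoryProcessingQueue.
class InMemoryQueue implements ProcessingQueue {
  private jobs: ProcessingJob[] = [];

  async enqueue(job: ProcessingJob): Promise<void> {
    this.jobs.push(job);
  }

  async dequeue(): Promise<ProcessingJob | undefined> {
    return this.jobs.shift();
  }
}
```

Because the worker only sees ProcessingQueue, the upload flow never changes when the backing queue does.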

Backend Modules

modules/uploads – REST API for creating upload sessions, requesting presigned URLs, completing uploads, and resuming stalled sessions. Upload metadata is persisted via the UploadSessionRepository abstraction.
modules/storage – Storage adapters. The default configuration wraps the AWS SDK so multipart uploads and processed assets are stored in S3 buckets.
modules/processing – Queue contract, deterministic processing service, and worker bootstrap. The default worker consumes an in-memory queue and emits monitoring callbacks (wired to CloudWatch/SNS in production).
modules/security – App-level encryption interfaces. The provided LocalKmsEncryptionService simulates GenerateDataKey/Encrypt behaviour so encrypted assets can be produced without KMS.
modules/persistence – Repository bindings. In-memory storage keeps the sample app stateless; in production, back these contracts with Postgres + Redis.

Upload flow

  1. Initiate – POST /uploads/sessions stores metadata (including the uploadedBy attribution) and calls the multipart storage adapter. The response includes sessionId, uploadId, the normalized S3 object key, and the resolved uploader so clients can immediately start uploading.
  2. Upload Parts – Clients request presigned PUT URLs for individual parts (POST /uploads/sessions/:id/parts). If a network error occurs, only the failing part is retried.
  3. Complete – POST /uploads/sessions/:id/complete records the uploaded ETags, finalizes the multipart upload, and enqueues a processing job.
  4. Resume – GET /uploads/sessions/:id (or POST /uploads/sessions/:id/sync) returns the authoritative part list so web/mobile apps can resume interrupted uploads.
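The part math behind steps 2 and 4 can be sketched with two pure helpers: one that derives the part count from the file size and the partSize returned at initiation, and one that diffs the server's authoritative part list against the plan. Both functions are hypothetical illustrations, not the repo's client code; part numbers are 1-based, matching S3 multipart semantics.

```typescript
// Number of multipart parts for a file, given the server-chosen part size.
// A zero-byte file still occupies one (empty) part.
function planParts(fileSize: number, partSize: number): number {
  return Math.max(1, Math.ceil(fileSize / partSize));
}

// On resume, the API returns which parts already exist; everything else
// still needs a presigned PUT.
function remainingParts(totalParts: number, uploaded: number[]): number[] {
  const done = new Set(uploaded);
  const pending: number[] = [];
  for (let part = 1; part <= totalParts; part++) {
    if (!done.has(part)) pending.push(part);
  }
  return pending;
}
```

A client resuming a 25 MB upload with 10 MB parts would plan 3 parts and, if parts 1 and 3 survived, request a presigned URL only for part 2.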

Processing flow

  • Jobs contain the upload session ID, bucket, key, mime type, uploader, and the location/etag of the staged object.
  • The worker marks the session PROCESSING, calls the ProcessingService, and persists completion or failure.
  • DeterministicProcessingService emulates Sharp/MediaConvert pipelines by producing deterministic manifests (HLS playlists for video, rendition metadata for images) before passing them through the encryption layer. The manifests include the uploader attribution so overlays can display usernames alongside the watermark.
  • LocalKmsEncryptionService performs AES-256-GCM encryption with locally wrapped data keys. Replace with an AWS KMS-backed implementation by conforming to the EncryptionService contract.
  • MonitoringReporter is the hook to emit CloudWatch metrics or SNS notifications; in the sample it logs to the console.
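The envelope-encryption step can be sketched with Node's built-in crypto module. This is a minimal illustration of the AES-256-GCM pattern described above, not the repo's LocalKmsEncryptionService: a per-asset data key encrypts the bytes, and the IV and auth tag are returned for persistence alongside the (wrapped) key. Wrapping the data key under a master key or KMS is elided here.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt an asset under a fresh 256-bit data key. In the real pipeline the
// data key would then be wrapped (locally or via KMS GenerateDataKey) before
// the DB stores it with the IV and tag.
function encryptAsset(plaintext: Buffer) {
  const dataKey = randomBytes(32); // per-asset data key
  const iv = randomBytes(12);      // 96-bit GCM nonce
  const cipher = createCipheriv("aes-256-gcm", dataKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = cipher.getAuthTag(); // integrity tag, persisted with the IV
  return { dataKey, iv, tag, ciphertext };
}

// Delivery path: unwrap the data key, then decrypt and authenticate.
function decryptAsset(dataKey: Buffer, iv: Buffer, tag: Buffer, ciphertext: Buffer): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", dataKey, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}
```

GCM makes the tag mandatory: a tampered ciphertext or wrong key fails authentication at final() instead of yielding garbage bytes.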

Frontend Reference Clients

React (web)

Located in clients/web. The heart of the implementation is a useResumableUpload hook that:

  1. Calls the API to initiate a session.
  2. Collects the uploader's display name and embeds it into every request so watermark overlays can display attribution.
  3. Splits the file into parts and uploads them concurrently via fetch using the provided presigned URLs.
  4. Stores progress in localStorage, allowing browser refreshes to resume uploads.
  5. Calls the completion endpoint and polls until the asset is marked READY.

Run the demo inside any React project by copying the hook/component pair and wiring the UPLOADS_API_BASE_URL constant to your backend.
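Step 4 of the hook (resumable progress across refreshes) can be sketched as a pair of pure functions behind a minimal Storage-like interface, so the same logic works against localStorage in the browser and a plain map in tests. The key format and UploadProgress shape are illustrative assumptions, not the repo's actual hook internals.

```typescript
// Narrow slice of the Web Storage API that the helpers need.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Hypothetical progress record persisted per upload session.
interface UploadProgress {
  uploadId: string;
  completedParts: number[];
}

function saveProgress(store: KeyValueStore, sessionId: string, progress: UploadProgress): void {
  store.setItem(`upload:${sessionId}`, JSON.stringify(progress));
}

function loadProgress(store: KeyValueStore, sessionId: string): UploadProgress | null {
  const raw = store.getItem(`upload:${sessionId}`);
  return raw ? (JSON.parse(raw) as UploadProgress) : null;
}
```

In the browser the hook would pass window.localStorage as the store; after a refresh, loadProgress tells the client which parts to skip before it asks the API for the authoritative list.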

Flutter (mobile)

Located in clients/flutter. The ResumableUploadService class handles chunking, retries, and background-safe persistence using the shared_preferences package. The included Riverpod-powered widget displays upload progress and retries failed chunks.

Both reference clients prompt for a username before starting an upload so the processing pipeline can stamp the attribution into the overlay metadata.

Configuration

Environment variables drive the adapters:

SOURCE_BUCKET – Bucket that stores raw multipart uploads (default: uploads)
PROCESSED_BUCKET – Bucket for encrypted, processed assets (default: secure-processed)
LOCAL_MASTER_KEY – 32+ byte string used by LocalKmsEncryptionService (default: generated fallback)

Provide AWS credentials and replace the local adapters with production implementations to connect to S3, KMS, MediaConvert, BullMQ/Redis, and Postgres.

AWS prerequisites

Provide AWS credentials and bucket names via environment variables before starting the service (or place them in a .env file that your NestJS bootstrap reads):

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN (only when using temporary credentials)
  • AWS_REGION
  • SOURCE_BUCKET (name of the multipart upload bucket)
  • PROCESSED_BUCKET (name of the processed output bucket)

The included S3 adapters respect these settings in every environment.
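A .env file covering these variables might look like the following. All values are placeholders; substitute your own credentials and bucket names, and add AWS_SESSION_TOKEN only when using temporary credentials.

```
# Illustrative placeholder values only
AWS_ACCESS_KEY_ID=AKIAEXAMPLE
AWS_SECRET_ACCESS_KEY=example-secret-key
AWS_REGION=us-east-1
SOURCE_BUCKET=uploads
PROCESSED_BUCKET=secure-processed
```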

Running Locally

# Start the NestJS API + background worker
npm run start:dev

# Run unit tests (includes upload happy-path coverage)
npm test

Extending to Production

  • Swap InMemoryProcessingQueue for BullMQ (Redis) or SQS. Only the queue provider needs to change because the worker relies on the abstract ProcessingQueue.
  • Implement a TypeORM (or Prisma) repository for UploadSessionRepository so uploads survive process restarts.
  • Hook MonitoringReporter into CloudWatch metrics and SNS topics for alerting.
  • Update the React + Flutter clients to surface processed HLS manifests in Bitmovin Player.

Repository Layout

src/
  modules/
    uploads/        // Controllers, services, DTOs for upload sessions
    processing/     // Queue contracts, worker bootstrap, monitoring hooks
    storage/        // Abstract + S3 storage adapters
    security/       // Encryption contracts + local KMS simulator
    persistence/    // Repository provider bindings
clients/
  web/             // React hook + component for resumable uploads
  flutter/         // Dart service and widgets mirroring the web workflow

License

MIT
