FastAPI transparent HTTP proxy with Redis caching. Utilizes MD5 request hashing, per-request TTL, and graceful degradation to dedupe API calls and save costs.

yigitkonur/transparent-cache-proxy


# 🚀 FastAPI Transparent Proxy 🚀

Stop paying for duplicate API calls. Start caching like a pro.

The ultimate transparent HTTP proxy for no-code platforms. It sits between your automations and expensive APIs, caching responses based on MD5 hashes so identical requests return instantly.


FastAPI Transparent Proxy is the caching layer your no-code automations wish they had. Stop making the same API calls over and over. This proxy sits between your n8n/Make/Zapier workflows and expensive third-party APIs, returning cached responses for identical requests, which saves bandwidth, reduces latency, and cuts your API bills.

- 🧠 **MD5 Deduplication**: same request = same cache key
- ⚡ **Sub-ms Response**: cache hits are instant
- 🔌 **Zero Config**: works without Redis too

**How it slaps:**

  • You: Point your n8n HTTP Request node to this proxy
  • Proxy: Hashes the request, checks cache, returns or forwards
  • Result: First call hits the API, next 1000 identical calls return instantly
  • Your wallet: 📈

## 💥 Why This Slaps Other Methods

Manually deduplicating API calls in no-code is a nightmare. This proxy makes other approaches look ancient.

โŒ The Old Way (Pain) โœ… The Proxy Way (Glory)
  1. Build complex "check if already fetched" logic
  2. Store results in Airtable/Notion/Sheets
  3. Add branches: "if cached then skip"
  4. Debug why your workflow is 47 nodes
  5. Pay for 1000 duplicate API calls anyway
  1. Deploy this proxy (one command)
  2. Change your API URL to proxy URL
  3. Done. Caching is automatic.
  4. Watch your API costs drop 90%
  5. Go grab a coffee. โ˜•

We're not just forwarding requests. We're building deterministic cache keys from MD5 hashes of method + URL + headers + body, so identical business requests always hit the same cache entry, even across different workflow runs.
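A minimal sketch of how such a deterministic key can be built (the function name, the header canonicalization, and the 16-character truncation are illustrative assumptions, not the repo's exact code):

```python
import hashlib
import json

def build_cache_key(method: str, url: str, headers: dict, body: bytes) -> str:
    # Canonicalize before hashing: uppercase the method and lowercase/sort
    # the headers so dict ordering and casing can't change the key.
    canonical_headers = json.dumps(
        {k.lower(): v for k, v in headers.items()}, sort_keys=True
    )
    payload = b"\x00".join(
        [method.upper().encode(), url.encode(), canonical_headers.encode(), body]
    )
    # Truncated hex digest, similar in shape to the cache_key values the API returns.
    return hashlib.md5(payload).hexdigest()[:16]
```

Because every input is normalized, retries and re-runs of the same workflow step always land on the same cache entry.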


## 🚀 Get Started in 60 Seconds

| Platform | One-liner |
|---|---|
| 🐳 Docker | `docker run -p 8000:8000 ghcr.io/yigitkonur/fastapi-proxy` |
| 🐍 Python | `pip install -r requirements.txt && uvicorn main:app` |
| ☁️ Railway/Render | Deploy from GitHub, set `REDIS_URL` env var |

### Quick Install (Python)

```bash
# Clone and enter
git clone https://github.com/yigitkonur/fastapi-http-proxy-with-caching.git
cd fastapi-http-proxy-with-caching

# Set up a virtual environment
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt

# Run (works immediately, even without Redis!)
uvicorn main:app --host 0.0.0.0 --port 8000
```

✨ **Zero Config:** The proxy starts in degraded mode without Redis: requests still work, just without caching. Add Redis when you're ready for the full experience.
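The degraded-mode behavior can be sketched like this (`SafeCache` is a hypothetical name; the repo's actual logic lives in `app/services/cache.py`):

```python
from typing import Optional

class SafeCache:
    """Falls back to a no-op cache when Redis is unavailable (degraded mode)."""

    def __init__(self, redis_url: Optional[str] = None):
        self.client = None
        if redis_url:
            try:
                import redis  # optional dependency; absent in degraded mode
                client = redis.Redis.from_url(redis_url, socket_connect_timeout=2)
                client.ping()  # fail fast if the server is unreachable
                self.client = client
            except Exception:
                self.client = None  # degrade instead of crashing at startup

    def get(self, key: str) -> Optional[bytes]:
        # Without Redis every lookup is a miss, so requests are simply forwarded.
        return self.client.get(key) if self.client else None

    def set(self, key: str, value: bytes, ttl: int = 3600) -> None:
        if self.client:
            self.client.setex(key, ttl, value)
```

The key design choice: a cache miss and "no cache at all" look identical to the proxy, so the request path never has to special-case Redis being down.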


## 🎮 Usage: Fire and Forget

### Basic Proxy Request

```bash
# Instead of calling the API directly...
curl -X POST "https://expensive-api.com/data" -d '{"query": "foo"}'

# ...route through the proxy:
curl -X POST "http://localhost:8000/proxy?url=https://expensive-api.com/data" \
  -H "Content-Type: application/json" \
  -d '{"query": "foo"}'
```

### Response Format

```json
{
  "success": true,
  "cached": true,
  "cache_key": "a1b2c3d4e5f67890",
  "status_code": 200,
  "data": { "your": "api response" }
}
```

The `cached: true` means you just saved an API call. 🎉
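On the client side, unwrapping this envelope might look like the following (`unwrap` is a hypothetical helper; the error convention for `success: false` is an assumption):

```python
import json

def unwrap(envelope_json: str):
    """Return (data, was_cached) from the proxy's response envelope."""
    envelope = json.loads(envelope_json)
    if not envelope.get("success"):
        # Assumed convention: surface the upstream status on failure.
        raise RuntimeError(f"proxy call failed with status {envelope.get('status_code')}")
    return envelope["data"], envelope.get("cached", False)

sample = (
    '{"success": true, "cached": true, "cache_key": "a1b2c3d4e5f67890",'
    ' "status_code": 200, "data": {"your": "api response"}}'
)
data, was_cached = unwrap(sample)  # data == {"your": "api response"}
```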

### In n8n

  1. Add an HTTP Request node
  2. Set URL to: `http://your-proxy:8000/proxy?url=https://actual-api.com/endpoint`
  3. Configure method, headers, body as normal
  4. Every identical request now returns from cache

### Advanced Options

```bash
# Force a fresh request (bypass the cache)
curl "http://localhost:8000/proxy?url=https://api.com/data&bypass_cache=true"

# Custom cache TTL (2 hours instead of the default 1 hour)
curl "http://localhost:8000/proxy?url=https://api.com/data&cache_ttl=7200"
```
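Since the target URL travels as a query parameter, it's safest to percent-encode it when it carries its own query string; a small stdlib-only helper (hypothetical, not part of the repo) might be:

```python
from typing import Optional
from urllib.parse import urlencode

def proxy_url(base: str, target: str, bypass_cache: bool = False,
              cache_ttl: Optional[int] = None) -> str:
    """Compose a proxy URL with the optional knobs shown above."""
    params = {"url": target}
    if bypass_cache:
        params["bypass_cache"] = "true"
    if cache_ttl is not None:
        params["cache_ttl"] = str(cache_ttl)
    # urlencode percent-escapes the target, so nested ?a=b params survive intact
    return f"{base}/proxy?{urlencode(params)}"
```

The bare curl examples above work for simple URLs; encoding only becomes essential when the upstream URL has its own `?key=value` parts.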

### Health & Admin Endpoints

```bash
# Health check (great for load balancers)
curl http://localhost:8000/health
# → {"status": "healthy", "redis_connected": true, "version": "2.0.0"}

# Cache statistics
curl http://localhost:8000/cache/stats
# → {"total_keys": 1547, "memory_usage": "2.3M", "prefix": "proxy:cache:"}

# Nuclear option: clear all cache
curl -X DELETE http://localhost:8000/cache
# → {"deleted": 1547, "message": "Cleared 1547 cached entries"}
```

## ✨ Feature Breakdown: The Secret Sauce

| Feature | What It Does | Why You Care |
|---|---|---|
| 🧠 **MD5 Hashing** (deterministic keys) | Hashes method + URL + headers + body into the cache key | Identical requests always return the same cached response |
| ⚡ **Graceful Degradation** (no Redis? no problem) | Starts without Redis, just skips caching | Deploy anywhere, add Redis later |
| 🔄 **All HTTP Methods** (not just POST) | GET, POST, PUT, DELETE, PATCH all supported | Works with any API pattern |
| ⏰ **Flexible TTL** (per-request control) | Default 1 hour, override per request | Cache static data longer, dynamic shorter |
| 🎯 **Cache Bypass** (when you need fresh) | `bypass_cache=true` skips the cache | Force a refresh when needed |
| 📊 **Health Checks** (production ready) | `/health` endpoint with Redis status | Perfect for k8s liveness probes |
| 🔧 **Legacy Support** (drop-in replacement) | `/webhook-test/post-response` still works | Migrate existing workflows gradually |

โš™๏ธ Configuration

All settings are read from environment variables. Copy `.env.example` to `.env`:

```bash
cp .env.example .env
```

| Variable | Default | Description |
|---|---|---|
| `REDIS_URL` | `redis://localhost:6379/0` | Redis connection (or Upstash URL) |
| `CACHE_TTL_SECONDS` | `3600` | Default cache lifetime (1 hour) |
| `CACHE_PREFIX` | `proxy:cache:` | Redis key prefix |
| `PROXY_TIMEOUT_SECONDS` | `30` | Timeout for proxied requests |
| `DEBUG` | `false` | Enable verbose logging |
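The repo loads these via Pydantic settings in `app/config.py`; a dependency-free sketch of the same env-with-defaults idea (field names here mirror the table, but are illustrative):

```python
import os
from dataclasses import dataclass, field

def _env(name: str, default: str) -> str:
    # Fall back to the documented default when the variable is unset.
    return os.environ.get(name, default)

@dataclass
class Settings:
    redis_url: str = field(default_factory=lambda: _env("REDIS_URL", "redis://localhost:6379/0"))
    cache_ttl_seconds: int = field(default_factory=lambda: int(_env("CACHE_TTL_SECONDS", "3600")))
    cache_prefix: str = field(default_factory=lambda: _env("CACHE_PREFIX", "proxy:cache:"))
    proxy_timeout_seconds: int = field(default_factory=lambda: int(_env("PROXY_TIMEOUT_SECONDS", "30")))
    debug: bool = field(default_factory=lambda: _env("DEBUG", "false").lower() in ("1", "true", "yes"))
```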

### Using Upstash (Serverless Redis)

Upstash is perfect for this, since you pay only for what you use:

  1. Create a database at console.upstash.com
  2. Copy your Redis URL
  3. Set it in `.env`:

```bash
REDIS_URL=redis://default:YOUR_PASSWORD@YOUR_ENDPOINT.upstash.io:6379
```

Cost: ~$0.20 per 100K cached requests. If you're making 1M duplicate calls/month, that's $2 vs whatever you're paying now.


๐Ÿ—๏ธ Project Structure

```text
├── main.py                # Entry point (thin wrapper)
├── app/
│   ├── __init__.py        # Package metadata
│   ├── main.py            # FastAPI app factory + lifespan
│   ├── config.py          # Pydantic settings from env
│   ├── models.py          # Request/response schemas
│   ├── dependencies.py    # Service injection
│   ├── services/
│   │   ├── cache.py       # Redis + MD5 hashing logic
│   │   └── proxy.py       # HTTP forwarding logic
│   └── routes/
│       ├── proxy.py       # /proxy endpoint
│       └── health.py      # /health, /cache/stats
├── requirements.txt       # Pinned dependencies
├── Dockerfile             # Multi-stage production build
├── .env.example           # Configuration template
└── README.md
```

๐Ÿณ Deployment

### Docker (Recommended)

```bash
# Build
docker build -t fastapi-proxy .

# Run (without Redis: degraded mode)
docker run -p 8000:8000 fastapi-proxy

# Run with Redis
docker run -p 8000:8000 -e REDIS_URL=redis://host:6379 fastapi-proxy
```

### Docker Compose (with Redis)

```yaml
version: '3.8'
services:
  proxy:
    build: .
    ports:
      - "8000:8000"
    environment:
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - redis
  redis:
    image: redis:alpine
    volumes:
      - redis_data:/data
volumes:
  redis_data:
```

### Systemd (Linux Server)

```ini
[Unit]
Description=FastAPI Transparent Proxy
After=network.target

[Service]
User=www-data
WorkingDirectory=/opt/fastapi-proxy
Environment="PATH=/opt/fastapi-proxy/venv/bin"
ExecStart=/opt/fastapi-proxy/venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000
Restart=always

[Install]
WantedBy=multi-user.target
```

## 🔥 Common Issues & Quick Fixes

| Problem | Solution |
|---|---|
| "Redis unavailable" warning | Expected without Redis. The proxy still works, just without caching. Add `REDIS_URL` when ready. |
| Cache not working | Check for `redis_connected: true` in `/health`. Verify your `REDIS_URL` is correct. |
| Timeout errors | Increase `PROXY_TIMEOUT_SECONDS`. Some APIs are slow. |
| Cache key collisions | Shouldn't happen: MD5 is deterministic. If you're seeing wrong cached responses, check whether you're modifying headers unintentionally. |
| High memory usage | Set `CACHE_TTL_SECONDS` lower, or use the `DELETE /cache` endpoint to clear. |

๐Ÿ› ๏ธ Development

```bash
# Clone
git clone https://github.com/yigitkonur/fastapi-http-proxy-with-caching.git
cd fastapi-http-proxy-with-caching

# Setup
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Run with hot reload
uvicorn main:app --reload

# Run tests (coming soon)
pytest
```

Built with 🔥 because paying for duplicate API calls is a soul-crushing waste of money.

MIT © Yiğit Konur
