
πŸš€ Caching Proxy Server

A high-performance, intelligent HTTP caching proxy server built with FastAPI that dramatically reduces latency and bandwidth usage by caching HTTP responses. Supports both in-memory and Redis-based caching backends with automatic cache invalidation and smart caching strategies.



✨ Features

  • 🎯 Smart Caching - Intelligent response caching with configurable TTL
  • ⚑ High Performance - Built on FastAPI with async/await support
  • πŸ”„ Dual Backend Support - Choose between in-memory or Redis caching
  • πŸ“Š Real-time Statistics - Track cache hits, misses, and hit rates
  • πŸ›‘οΈ Production Ready - Docker containerized with health checks
  • πŸ” Cache Management - Full CRUD operations on cached items
  • 🌐 Direct Proxy Mode - Transparent HTTP proxy via /http/ paths
  • πŸ“ˆ LRU Eviction - Automatic cache eviction when memory limits are reached
  • πŸ”’ Security First - Non-root Docker user and secure defaults

πŸ—οΈ Architecture

```mermaid
graph LR
    A[Client] -->|HTTP Request| B[FastAPI Proxy]
    B -->|Check Cache| C{Cache Hit?}
    C -->|Yes| D[Return Cached Response]
    C -->|No| E[Forward to Origin]
    E -->|Response| F[Cache Response]
    F --> D
    B -.->|Memory Backend| G[(In-Memory Cache)]
    B -.->|Redis Backend| H[(Redis)]
```

Components

  • FastAPI Application - Async HTTP proxy with middleware support
  • Cache Manager - Abstraction layer for different cache backends
  • Memory Backend - Fast in-memory LRU cache with TTL support
  • Redis Backend - Distributed caching with persistence
  • Statistics Tracker - Real-time metrics collection
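
The Memory Backend's combination of LRU eviction and per-entry TTL can be sketched with the standard library. This is an illustrative sketch of the technique, not the project's actual `cache_backends.py` implementation:

```python
import time
from collections import OrderedDict

class MemoryCache:
    """Illustrative in-memory LRU cache with per-entry TTL."""

    def __init__(self, max_size=1000, default_ttl=300):
        self.max_size = max_size
        self.default_ttl = default_ttl
        self._store = OrderedDict()  # key -> (value, expires_at)

    def set(self, key, value, ttl=None):
        expires_at = time.time() + (ttl if ttl is not None else self.default_ttl)
        self._store[key] = (value, expires_at)
        self._store.move_to_end(key)           # mark as most recently used
        while len(self._store) > self.max_size:
            self._store.popitem(last=False)    # evict least recently used

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.time() >= expires_at:
            del self._store[key]               # expired entry counts as a miss
            return None
        self._store.move_to_end(key)           # a hit refreshes recency
        return value
```

Because `OrderedDict` preserves insertion order and `move_to_end` is O(1), both eviction and recency updates stay cheap even at the default 1000-entry limit.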

πŸš€ Quick Start

Using Docker Compose (Recommended)

  1. Clone and navigate to the project:

    git clone https://github.com/Maheshnath09/caching-proxy-server.git
    cd caching-proxy-server
  2. Start the services:

    docker-compose up -d
  3. Verify it's running:

    curl http://localhost:8000/health

That's it! πŸŽ‰ The proxy server is now running on http://localhost:8000

Using Docker Only

# Build the image
docker build -t caching-proxy-server .

# Run with memory cache
docker run -d -p 8000:8000 \
  -e CACHE_BACKEND=memory \
  --name proxy-server \
  caching-proxy-server

# Run with Redis (requires Redis running separately)
docker run -d -p 8000:8000 \
  -e CACHE_BACKEND=redis \
  -e REDIS_URL=redis://your-redis-host:6379 \
  --name proxy-server \
  caching-proxy-server

Local Development

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the server
python main.py

πŸ“– API Documentation

πŸ”Ή Proxy Request

Endpoint: POST /proxy

Proxy an HTTP request through the caching layer.

Request Body:

{
  "url": "https://api.example.com/data",
  "method": "GET",
  "headers": {
    "User-Agent": "CachingProxy/1.0"
  },
  "params": {
    "page": 1,
    "limit": 10
  }
}

Query Parameters:

  • ttl (optional): Custom cache TTL in seconds

Example:

curl -X POST http://localhost:8000/proxy \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://jsonplaceholder.typicode.com/posts/1",
    "method": "GET"
  }'

Response:

{
  "status_code": 200,
  "content": "...",
  "headers": {...},
  "from_cache": false,
  "cache_key": "abc123..."
}
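
The `cache_key` in the response identifies the cached entry for the management endpoints below. The project's actual key scheme isn't shown here; a typical (hypothetical) derivation hashes the normalized request, so identical requests always map to the same key:

```python
import hashlib
import json

def make_cache_key(method, url, params=None):
    """Hypothetical cache-key derivation: hash the normalized request.

    The real server's scheme may differ; sorting params and uppercasing
    the method just makes equivalent requests produce identical keys.
    """
    normalized = json.dumps(
        {"method": method.upper(), "url": url, "params": params or {}},
        sort_keys=True,
    )
    return hashlib.sha256(normalized.encode()).hexdigest()
```

With a scheme like this, a client can compute the key locally and call `DELETE /cache/{cache_key}` to invalidate a specific entry.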

πŸ”Ή Direct Proxy Mode

Endpoint: GET|POST /http/{target_url}

Transparent proxy mode - just prefix your URL with /http/.

Examples:

# Proxy a GET request
curl http://localhost:8000/http/jsonplaceholder.typicode.com/posts/1

# Proxy with query parameters
curl "http://localhost:8000/http/api.github.com/users/octocat?per_page=5"

# Proxy a POST request
curl -X POST http://localhost:8000/http/httpbin.org/post \
  -H "Content-Type: application/json" \
  -d '{"key": "value"}'
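
Internally, a route like this has to turn the captured path suffix and query string back into a full origin URL before forwarding. A sketch of that reconstruction, assuming the server defaults to `https` when the path carries no scheme (the actual behavior may differ):

```python
from urllib.parse import urlencode

def rebuild_target_url(path, query_params=None):
    """Rebuild the origin URL from the /http/{target_url} path suffix.

    Assumption: a path without an explicit scheme defaults to https;
    the real server's choice may differ.
    """
    if not path.startswith(("http://", "https://")):
        path = "https://" + path
    if query_params:
        path += "?" + urlencode(query_params)
    return path
```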

πŸ”Ή Cache Statistics

Endpoint: GET /stats

Get real-time caching statistics.

Example:

curl http://localhost:8000/stats

Response:

{
  "hits": 150,
  "misses": 50,
  "total_requests": 200,
  "hit_rate": 75.0,
  "cache_backend": "memory"
}
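
The `hit_rate` figure follows directly from the counters: hits divided by total requests, as a percentage. A minimal tracker sketch (illustrative, not the project's actual Statistics Tracker):

```python
class CacheStats:
    """Track cache hits/misses and derive the hit rate."""

    def __init__(self, backend="memory"):
        self.backend = backend
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def snapshot(self):
        total = self.hits + self.misses
        return {
            "hits": self.hits,
            "misses": self.misses,
            "total_requests": total,
            # Guard against division by zero before any requests arrive.
            "hit_rate": round(100 * self.hits / total, 1) if total else 0.0,
            "cache_backend": self.backend,
        }
```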

πŸ”Ή Cache Information

Endpoint: GET /cache/info/{cache_key}

Get details about a specific cached item.

Example:

curl http://localhost:8000/cache/info/abc123def456

πŸ”Ή Delete Cache Item

Endpoint: DELETE /cache/{cache_key}

Remove a specific item from cache.

Example:

curl -X DELETE http://localhost:8000/cache/abc123def456

πŸ”Ή Clear All Cache

Endpoint: DELETE /cache/clear

Clear the entire cache (memory backend only).

Example:

curl -X DELETE http://localhost:8000/cache/clear

πŸ”Ή Health Check

Endpoint: GET /health

Check if the service is healthy.

Example:

curl http://localhost:8000/health

Response:

{
  "status": "healthy",
  "cache_backend": "memory",
  "timestamp": 1701234567.89
}

βš™οΈ Configuration

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `HOST` | `0.0.0.0` | Server bind address |
| `PORT` | `8000` | Server port |
| `CACHE_BACKEND` | `memory` | Cache backend: `memory` or `redis` |
| `CACHE_TTL` | `300` | Default cache TTL in seconds (5 minutes) |
| `MAX_CACHE_SIZE` | `1000` | Maximum cache entries (memory backend) |
| `REDIS_URL` | `redis://localhost:6379` | Redis connection URL |
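
These variables would typically be read once at startup. A stdlib-only sketch of that loading, with the documented defaults baked in (the project's `config.py` may use a different mechanism, e.g. pydantic settings):

```python
import os

def load_settings(env=os.environ):
    """Read proxy settings from the environment, with documented defaults."""
    return {
        "host": env.get("HOST", "0.0.0.0"),
        "port": int(env.get("PORT", "8000")),
        "cache_backend": env.get("CACHE_BACKEND", "memory"),
        "cache_ttl": int(env.get("CACHE_TTL", "300")),
        "max_cache_size": int(env.get("MAX_CACHE_SIZE", "1000")),
        "redis_url": env.get("REDIS_URL", "redis://localhost:6379"),
    }
```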

Configuration Examples

Memory Cache (Default):

CACHE_BACKEND=memory
CACHE_TTL=300
MAX_CACHE_SIZE=1000

Redis Cache:

CACHE_BACKEND=redis
REDIS_URL=redis://redis:6379
CACHE_TTL=600

Custom Configuration:

# Copy example env file
cp .env.example .env

# Edit configuration
nano .env

# Restart services
docker-compose restart

🐳 Docker Commands

Build & Run

# Build the image
docker build -t caching-proxy-server .

# Run with docker-compose
docker-compose up -d

# View logs
docker-compose logs -f proxy

# Stop services
docker-compose down

# Stop and remove volumes
docker-compose down -v

Switching Cache Backends

To Memory Cache:

# Edit docker-compose.yml or set environment variable
export CACHE_BACKEND=memory
docker-compose up -d

To Redis Cache:

export CACHE_BACKEND=redis
docker-compose up -d

πŸ“Š Performance & Benchmarks

Cache Performance

  • Memory Backend: Sub-millisecond cache hits
  • Redis Backend: 1-5ms cache hits (network dependent)
  • Cache Miss: Depends on origin server response time

Typical Use Cases

βœ… API Response Caching - Cache expensive API calls
βœ… Static Content Proxy - Reduce bandwidth for static assets
βœ… Rate Limit Protection - Serve cached responses during rate limits
βœ… Microservices - Cache inter-service communication
βœ… Development - Mock external APIs with cached responses


πŸ”§ Troubleshooting

Common Issues

1. Container won't start

# Check logs
docker-compose logs proxy

# Verify port availability
netstat -an | grep 8000

2. Redis connection failed

# Check Redis is running
docker-compose ps redis

# Test Redis connection
docker-compose exec redis redis-cli ping

3. Cache not working

# Check cache statistics
curl http://localhost:8000/stats

# Verify cache backend setting
docker-compose exec proxy env | grep CACHE_BACKEND

4. High memory usage

# Reduce MAX_CACHE_SIZE
# Edit docker-compose.yml or .env
MAX_CACHE_SIZE=500

# Restart
docker-compose restart proxy

πŸ› οΈ Development

Project Structure

caching-proxy-server/
β”œβ”€β”€ main.py              # FastAPI application & endpoints
β”œβ”€β”€ cache_backends.py    # Cache backend implementations
β”œβ”€β”€ config.py            # Configuration management
β”œβ”€β”€ models.py            # Pydantic models
β”œβ”€β”€ requirements.txt     # Python dependencies
β”œβ”€β”€ Dockerfile           # Docker image definition
β”œβ”€β”€ docker-compose.yml   # Multi-container setup
β”œβ”€β”€ .dockerignore        # Docker build exclusions
β”œβ”€β”€ .env.example         # Environment template
└── README.md            # This file

Running Tests

# Install dev dependencies
pip install pytest httpx pytest-asyncio

# Run tests (create test file first)
pytest tests/

API Documentation

Once the server is running, FastAPI's auto-generated interactive docs are available at:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

πŸ“ License

This project is open source and available under the MIT License.


🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.


πŸ“§ Support

For issues and questions:

  • Open an issue on GitHub
  • Check existing documentation
  • Review logs: docker-compose logs -f

🎯 Roadmap

  • Authentication & API keys
  • Cache warming strategies
  • Prometheus metrics export
  • GraphQL support
  • WebSocket proxying
  • Advanced cache invalidation rules

Built with ❀️ using FastAPI

⭐ Star this repo if you find it useful!
