A high-performance, intelligent HTTP caching proxy server built with FastAPI that dramatically reduces latency and bandwidth usage by caching HTTP responses. Supports both in-memory and Redis-based caching backends with automatic cache invalidation and smart caching strategies.
- Smart Caching - Intelligent response caching with configurable TTL
- High Performance - Built on FastAPI with async/await support
- Dual Backend Support - Choose between in-memory or Redis caching
- Real-time Statistics - Track cache hits, misses, and hit rates
- Production Ready - Docker containerized with health checks
- Cache Management - Full CRUD operations on cached items
- Direct Proxy Mode - Transparent HTTP proxy via `/http/` paths
- LRU Eviction - Automatic cache eviction when memory limits are reached
- Security First - Non-root Docker user and secure defaults
```mermaid
graph LR
    A[Client] -->|HTTP Request| B[FastAPI Proxy]
    B -->|Check Cache| C{Cache Hit?}
    C -->|Yes| D[Return Cached Response]
    C -->|No| E[Forward to Origin]
    E -->|Response| F[Cache Response]
    F --> D
    B -.->|Memory Backend| G[(In-Memory Cache)]
    B -.->|Redis Backend| H[(Redis)]
```
- FastAPI Application - Async HTTP proxy with middleware support
- Cache Manager - Abstraction layer for different cache backends
- Memory Backend - Fast in-memory LRU cache with TTL support
- Redis Backend - Distributed caching with persistence
- Statistics Tracker - Real-time metrics collection
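To make the memory backend's behavior concrete, here is a minimal sketch of an LRU cache with per-entry TTL. It is illustrative only - the class name and structure are assumptions, not the project's actual `cache_backends.py`:

```python
import time
from collections import OrderedDict


class MemoryCache:
    """Minimal LRU cache with per-entry TTL (illustrative sketch)."""

    def __init__(self, max_size: int = 1000, default_ttl: int = 300):
        self.max_size = max_size
        self.default_ttl = default_ttl
        self._store: OrderedDict[str, tuple[float, object]] = OrderedDict()

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.time() >= expires_at:       # expired: drop the entry, report a miss
            del self._store[key]
            return None
        self._store.move_to_end(key)        # mark as most recently used
        return value

    def set(self, key: str, value, ttl: int | None = None):
        self._store[key] = (time.time() + (ttl or self.default_ttl), value)
        self._store.move_to_end(key)
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)  # evict the least recently used entry
```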
1. Clone and navigate to the project:

   ```bash
   cd caching-proxy-server
   ```

2. Start the services:

   ```bash
   docker-compose up -d
   ```

3. Verify it's running:

   ```bash
   curl http://localhost:8000/health
   ```

That's it! The proxy server is now running on http://localhost:8000.
```bash
# Build the image
docker build -t caching-proxy-server .

# Run with memory cache
docker run -d -p 8000:8000 \
  -e CACHE_BACKEND=memory \
  --name proxy-server \
  caching-proxy-server

# Run with Redis (requires Redis running separately)
docker run -d -p 8000:8000 \
  -e CACHE_BACKEND=redis \
  -e REDIS_URL=redis://your-redis-host:6379 \
  --name proxy-server \
  caching-proxy-server
```

Or run locally without Docker:

```bash
# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the server
python main.py
```
Endpoint: `POST /proxy`

Proxy an HTTP request through the caching layer.
Request Body:

```json
{
  "url": "https://api.example.com/data",
  "method": "GET",
  "headers": {
    "User-Agent": "CachingProxy/1.0"
  },
  "params": {
    "page": 1,
    "limit": 10
  }
}
```

Query Parameters:

- `ttl` (optional): Custom cache TTL in seconds
Example:

```bash
curl -X POST http://localhost:8000/proxy \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://jsonplaceholder.typicode.com/posts/1",
    "method": "GET"
  }'
```

Response:
```json
{
  "status_code": 200,
  "content": "...",
  "headers": {...},
  "from_cache": false,
  "cache_key": "abc123..."
}
```
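To see caching take effect, issue the same request twice and compare the `from_cache` flag. The sketch below uses httpx (installed with the dev dependencies), but any HTTP client works:

```python
import httpx

payload = {"url": "https://jsonplaceholder.typicode.com/posts/1", "method": "GET"}

with httpx.Client() as client:
    first = client.post("http://localhost:8000/proxy", json=payload)
    second = client.post("http://localhost:8000/proxy", json=payload)

print(first.json()["from_cache"])   # False - fetched from the origin
print(second.json()["from_cache"])  # True  - served from the cache
```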
Endpoint: `GET|POST /http/{target_url}`

Transparent proxy mode - just prefix the target URL with `/http/`.
Examples:

```bash
# Proxy a GET request
curl http://localhost:8000/http/jsonplaceholder.typicode.com/posts/1

# Proxy with query parameters
curl "http://localhost:8000/http/api.github.com/users/octocat?per_page=5"

# Proxy a POST request
curl -X POST http://localhost:8000/http/httpbin.org/post \
  -H "Content-Type: application/json" \
  -d '{"key": "value"}'
```
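Under the hood, transparent mode has to rebuild the origin URL from the request path. A plausible sketch of that rewriting follows; the scheme default is an assumption, so check the actual handler in main.py:

```python
def rewrite_target(path: str, query_string: str = "", scheme: str = "https") -> str:
    # Hypothetical rewrite: "/http/host/path" -> "<scheme>://host/path".
    # Whether the proxy defaults to http or https here is an assumption.
    target = path.removeprefix("/http/")
    url = f"{scheme}://{target}"
    return f"{url}?{query_string}" if query_string else url


print(rewrite_target("/http/api.github.com/users/octocat", "per_page=5"))
# https://api.github.com/users/octocat?per_page=5
```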
Endpoint: `GET /stats`

Get real-time caching statistics.
Example:

```bash
curl http://localhost:8000/stats
```

Response:

```json
{
  "hits": 150,
  "misses": 50,
  "total_requests": 200,
  "hit_rate": 75.0,
  "cache_backend": "memory"
}
```
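The `hit_rate` field is hits divided by total requests, expressed as a percentage: here, 150 / 200 = 75.0.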
Endpoint: `GET /cache/info/{cache_key}`

Get details about a specific cached item.
Example:

```bash
curl http://localhost:8000/cache/info/abc123def456
```

Endpoint: `DELETE /cache/{cache_key}`
Remove a specific item from the cache.
Example:

```bash
curl -X DELETE http://localhost:8000/cache/abc123def456
```

Endpoint: `DELETE /cache/clear`
Clear the entire cache (memory backend only).
Example:

```bash
curl -X DELETE http://localhost:8000/cache/clear
```

Endpoint: `GET /health`
Check if the service is healthy.
Example:

```bash
curl http://localhost:8000/health
```

Response:

```json
{
  "status": "healthy",
  "cache_backend": "memory",
  "timestamp": 1701234567.89
}
```

Configuration is controlled through environment variables:

| Variable | Default | Description |
|---|---|---|
| `HOST` | `0.0.0.0` | Server bind address |
| `PORT` | `8000` | Server port |
| `CACHE_BACKEND` | `memory` | Cache backend: `memory` or `redis` |
| `CACHE_TTL` | `300` | Default cache TTL in seconds (5 minutes) |
| `MAX_CACHE_SIZE` | `1000` | Maximum cache entries (memory backend) |
| `REDIS_URL` | `redis://localhost:6379` | Redis connection URL |
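A minimal sketch of how these variables might be read at startup; the names mirror the table above, but the project's actual config.py may differ:

```python
import os

# Hypothetical settings module mirroring the table above.
HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8000"))
CACHE_BACKEND = os.getenv("CACHE_BACKEND", "memory")        # "memory" or "redis"
CACHE_TTL = int(os.getenv("CACHE_TTL", "300"))              # seconds
MAX_CACHE_SIZE = int(os.getenv("MAX_CACHE_SIZE", "1000"))   # memory backend only
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")
```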
Memory Cache (Default):

```bash
CACHE_BACKEND=memory
CACHE_TTL=300
MAX_CACHE_SIZE=1000
```

Redis Cache:

```bash
CACHE_BACKEND=redis
REDIS_URL=redis://redis:6379
CACHE_TTL=600
```

Custom Configuration:
```bash
# Copy the example env file
cp .env.example .env

# Edit configuration
nano .env

# Restart services
docker-compose restart
```

Common Docker commands:

```bash
# Build the image
docker build -t caching-proxy-server .

# Run with docker-compose
docker-compose up -d

# View logs
docker-compose logs -f proxy

# Stop services
docker-compose down

# Stop and remove volumes
docker-compose down -v
```

To Memory Cache:
```bash
# Edit docker-compose.yml or set the environment variable
export CACHE_BACKEND=memory
docker-compose up -d
```

To Redis Cache:

```bash
export CACHE_BACKEND=redis
docker-compose up -d
```

- Memory Backend: Sub-millisecond cache hits
- Redis Backend: 1-5ms cache hits (network dependent)
- Cache Miss: Depends on origin server response time
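One rough way to observe the difference locally is to time the same request twice - the first populates the cache, the second should return noticeably faster. This uses httpx (installed with the dev dependencies); numbers will vary with your machine and network:

```python
import time

import httpx

url = "http://localhost:8000/http/jsonplaceholder.typicode.com/posts/1"

with httpx.Client() as client:
    for label in ("cold (cache miss)", "warm (cache hit)"):
        start = time.perf_counter()
        client.get(url)
        print(f"{label}: {(time.perf_counter() - start) * 1000:.1f} ms")
```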
Typical use cases:

- API Response Caching - Cache expensive API calls
- Static Content Proxy - Reduce bandwidth for static assets
- Rate Limit Protection - Serve cached responses during rate limits
- Microservices - Cache inter-service communication
- Development - Mock external APIs with cached responses
1. Container won't start

```bash
# Check logs
docker-compose logs proxy

# Verify port availability
netstat -an | grep 8000
```

2. Redis connection failed

```bash
# Check that Redis is running
docker-compose ps redis

# Test the Redis connection
docker-compose exec redis redis-cli ping
```

3. Cache not working

```bash
# Check cache statistics
curl http://localhost:8000/stats

# Verify the cache backend setting
docker-compose exec proxy env | grep CACHE_BACKEND
```

4. High memory usage

```bash
# Reduce MAX_CACHE_SIZE in docker-compose.yml or .env
MAX_CACHE_SIZE=500

# Restart
docker-compose restart proxy
```
```
caching-proxy-server/
├── main.py              # FastAPI application & endpoints
├── cache_backends.py    # Cache backend implementations
├── config.py            # Configuration management
├── models.py            # Pydantic models
├── requirements.txt     # Python dependencies
├── Dockerfile           # Docker image definition
├── docker-compose.yml   # Multi-container setup
├── .dockerignore        # Docker build exclusions
├── .env.example         # Environment template
└── README.md            # This file
```
```bash
# Install dev dependencies
pip install pytest httpx pytest-asyncio

# Run tests (create a test file first)
pytest tests/
```
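As a starting point, a minimal test could exercise the health endpoint. This assumes main.py exposes the FastAPI instance as `app` (an assumption - adjust the import to match the project):

```python
# tests/test_health.py
from fastapi.testclient import TestClient

from main import app  # assumes main.py exposes the FastAPI app as `app`

client = TestClient(app)


def test_health_reports_healthy():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json()["status"] == "healthy"
```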
Once running, visit:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
This project is open source and available under the MIT License.
Contributions are welcome! Please feel free to submit a Pull Request.
For issues and questions:
- Open an issue on GitHub
- Check existing documentation
- Review logs: `docker-compose logs -f`
- Authentication & API keys
- Cache warming strategies
- Prometheus metrics export
- GraphQL support
- WebSocket proxying
- Advanced cache invalidation rules
Built with ❤️ using FastAPI
⭐ Star this repo if you find it useful!