A comprehensive hands-on project to learn Envoy Proxy fundamentals using GitHub Codespaces. This project demonstrates key Envoy concepts including load balancing, routing, health checks, and observability.
- Envoy Proxy Basics: Configuration, listeners, clusters, and routes
- Load Balancing: Round-robin distribution across multiple backends
- Path-based Routing: Route requests based on URL paths
- Health Checking: Automatic service health monitoring
- Rate Limiting: Control request rates to protect services
- Observability: Admin interface, stats, and logging
- Service Discovery: How Envoy discovers and manages backends
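As a mental model for the round-robin policy listed above: Envoy simply cycles through the available backends in order. A minimal Python sketch of the idea (not Envoy's implementation; backend names are placeholders):

```python
from itertools import cycle

# Hypothetical backend names; Envoy tracks real upstream hosts.
backends = ["backend1", "backend2", "backend3"]
next_backend = cycle(backends)

def pick_backend():
    """Return the next backend in round-robin order."""
    return next(next_backend)

# Six requests are spread evenly: each backend is picked twice.
picks = [pick_backend() for _ in range(6)]
print(picks)
```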
```
Internet/Codespace → Envoy Proxy (Port 8000) → Backend Services
                            ↓
                Admin Interface (Port 8080)
```
Backend Services:
- Backend 1: Node.js service (routes: `/api/v1/`)
- Backend 2: Node.js service (routes: `/api/v2/`)
- Backend 3: Python Flask service (routes: `/python/`)
- Load Balancing: All services available at `/` and `/loadbalance`
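The routing above is plain prefix matching: the request path decides which cluster handles it. A small sketch of that logic (the route table mirrors this project; the catch-all cluster name `all_backends_cluster` is an assumption, not taken from the config):

```python
# Hypothetical route table, most specific prefixes first, mirroring this project.
ROUTES = [
    ("/api/v1/", "backend1_cluster"),
    ("/api/v2/", "backend2_cluster"),
    ("/python/", "backend3_cluster"),
    ("/", "all_backends_cluster"),  # catch-all: load balanced across all backends
]

def match_route(path: str) -> str:
    """Return the cluster for the first matching prefix, like Envoy's route order."""
    for prefix, cluster in ROUTES:
        if path.startswith(prefix):
            return cluster
    raise LookupError(path)

print(match_route("/api/v1/users"))  # backend1_cluster
print(match_route("/loadbalance"))   # all_backends_cluster (falls through to "/")
```

Because routes are checked in order, the `/` catch-all must come last, exactly as in a real Envoy route configuration.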
- Clone this repository in GitHub Codespaces
- Start all services: `docker-compose up --build`
- Wait for services to start (about 30 seconds)
- Access Envoy at the forwarded port 8000
- Check the admin interface at port 8080
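Rather than waiting a fixed 30 seconds in step 3, you can poll until the health endpoint answers. A hedged sketch of such a readiness wait, with a stub standing in for a real HTTP call against `http://localhost:8000/health`:

```python
import time

def wait_until(check, timeout=30.0, interval=1.0):
    """Poll `check()` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Stub: pretends the health endpoint starts answering on the third probe.
attempts = {"n": 0}
def fake_health_check():
    attempts["n"] += 1
    return attempts["n"] >= 3

ready = wait_until(fake_health_check, timeout=10.0, interval=0.01)
print(ready)  # True
```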
```shell
# Test main endpoint (load balanced)
curl http://localhost:8000/

# Test health endpoint
curl http://localhost:8000/health

# Route to Backend 1 (Node.js)
curl http://localhost:8000/api/v1/

# Route to Backend 2 (Node.js)
curl http://localhost:8000/api/v2/

# Route to Backend 3 (Python Flask)
curl http://localhost:8000/python/

# Make multiple requests to see load balancing
for i in {1..5}; do
  curl http://localhost:8000/loadbalance
  echo "---"
done

# Test slow endpoints
curl http://localhost:8000/api/v1/slow
curl http://localhost:8000/api/v2/slow
curl http://localhost:8000/python/slow

# Test error handling
curl http://localhost:8000/api/v1/error

# Test rate limiting (configured for 100 requests per minute)
for i in {1..10}; do
  curl -w "%{http_code}\n" http://localhost:8000/ -o /dev/null -s
done
```

The Envoy admin interface is available at port 8080 and provides:
- Stats: `http://localhost:8080/stats` - Detailed metrics
- Clusters: `http://localhost:8080/clusters` - Backend service status
- Config: `http://localhost:8080/config_dump` - Current configuration
- Listeners: `http://localhost:8080/listeners` - Active listeners
- Server Info: `http://localhost:8080/server_info` - Envoy version and build info
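The `/stats` endpoint returns plain text, one `name: value` per line, which is why piping it through `grep` works so well. A small parsing sketch over a sample of that format (the sample values are invented):

```python
# Sample lines in the `name: value` format Envoy's /stats endpoint emits
# for counters and gauges (values here are made up).
SAMPLE = """\
cluster.backend1_cluster.upstream_cx_total: 42
cluster.backend1_cluster.upstream_rq_total: 120
cluster.backend1_cluster.health_check.success: 57
"""

def parse_stats(text):
    """Parse numeric `name: value` lines into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(": ")
        if value.strip().isdigit():
            stats[name] = int(value)
    return stats

stats = parse_stats(SAMPLE)
print(stats["cluster.backend1_cluster.upstream_rq_total"])  # 120
```

Note that histogram stats use a richer format; this sketch only handles simple counters and gauges.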
```shell
# Connection stats
curl http://localhost:8080/stats | grep "cluster.*cx_"

# Request stats
curl http://localhost:8080/stats | grep "cluster.*rq_"

# Health check stats
curl http://localhost:8080/stats | grep "health_check"

# Rate limiting stats
curl http://localhost:8080/stats | grep "rate_limit"
```

The main config file (`envoy-config/envoy.yaml`) demonstrates:
- Admin Interface: Management and monitoring
- Listeners: Accept incoming connections
- Routes: Path-based request routing
- Clusters: Backend service definitions
- Health Checks: Automatic service monitoring
- Filters: Request processing (rate limiting, logging)
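The rate-limiting filter mentioned above is commonly implemented as a token bucket: tokens refill at a steady rate, and each admitted request spends one. A minimal sketch of the idea (not Envoy's actual rate limit filter; the 100-per-minute figure mirrors this project's configuration):

```python
import time

class TokenBucket:
    """Allow up to `capacity` requests per `period` seconds."""
    def __init__(self, capacity=100, period=60.0, clock=time.monotonic):
        self.capacity = capacity
        self.rate = capacity / period      # tokens refilled per second
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # a proxy would respond 429 Too Many Requests here

# With a frozen clock, only the initial burst of 100 is admitted.
bucket = TokenBucket(capacity=100, period=60.0, clock=lambda: 0.0)
results = [bucket.allow() for _ in range(110)]
print(results.count(True))  # 100
```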
Listeners: Define how Envoy accepts connections

```yaml
listeners:
  - name: main_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8000
```

Clusters: Define backend services

```yaml
clusters:
  - name: backend1_cluster
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    lb_policy: ROUND_ROBIN
```

Routes: Define request routing logic

```yaml
routes:
  - match:
      prefix: "/api/v1/"
    route:
      cluster: backend1_cluster
```

Run the automated load generator to test Envoy features:

```shell
# Start load testing container
docker-compose --profile tools up load-generator
```

This will automatically test:
- Basic connectivity
- Path-based routing
- Load balancing
- Health checks
- Performance under load
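The health checks being exercised here follow Envoy's threshold model: a host is marked unhealthy after a number of consecutive failed probes and healthy again after a number of consecutive successes. A simplified sketch (the thresholds are illustrative, not this project's config):

```python
class HealthTracker:
    """Track one upstream host using consecutive-result thresholds."""
    def __init__(self, unhealthy_threshold=3, healthy_threshold=2):
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy_threshold = healthy_threshold
        self.healthy = True
        self.streak = 0  # consecutive probes contradicting the current state

    def record(self, success: bool):
        if success == self.healthy:
            self.streak = 0  # probe agrees with current state; reset the streak
            return
        self.streak += 1
        needed = self.unhealthy_threshold if self.healthy else self.healthy_threshold
        if self.streak >= needed:
            self.healthy = not self.healthy
            self.streak = 0

host = HealthTracker()
for ok in [False, False, False]:   # three failed probes in a row
    host.record(ok)
print(host.healthy)  # False: host is ejected from load balancing
for ok in [True, True]:            # two successful probes in a row
    host.record(ok)
print(host.healthy)  # True: host is back in rotation
```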
- Change the load balancing policy from `ROUND_ROBIN` to `LEAST_REQUEST`, restart Envoy and observe the behavior, then check admin stats to see the difference
- Add circuit breaker configuration to a cluster, test with the `/error` endpoint, and monitor circuit breaker stats
- Add custom headers to requests, modify routing based on headers, and test with curl using custom headers
- Configure request timeouts, test with slow endpoints, and observe timeout behavior in logs
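For the first exercise, the behavioral difference is easy to picture: `LEAST_REQUEST` sends traffic to the host with the fewest in-flight requests instead of cycling in order. A toy sketch (Envoy's real `LEAST_REQUEST` policy uses a power-of-two-choices sampling variant by default, which this simplifies away):

```python
# Hypothetical in-flight request counts per backend.
active = {"backend1": 5, "backend2": 0, "backend3": 2}

def least_request(active_counts):
    """Pick the host with the fewest outstanding requests."""
    return min(active_counts, key=active_counts.get)

print(least_request(active))  # backend2: the idle host gets the next request
```

Under round-robin, `backend1` would still receive every third request despite being the busiest; after switching policies, the admin stats should show the slow or busy backend receiving fewer requests.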