fix: add LRU cache with size limits to prevent memory exhaustion (#39) #40

willibrandon merged 3 commits into main from
Conversation
Replace unbounded template cache with sharded LRU implementation:

- Configurable max size (default: 10,000 templates)
- O(1) get/put operations with proper eviction
- 64 shards with dynamic sizing based on capacity
- Atomic stats tracking (hits/misses/evictions/expirations)
- Optional TTL support with lazy cleanup
- Thread-safe global cache with one-time configuration

Security: Fixes a potential DoS via dynamic template generation that could cause unbounded memory growth. The cache now strictly enforces size limits.

Performance: 57 ns/op for Get, 82 ns/op for Put, zero allocations on hits. Benchmarks show a 100% hit rate for repeated templates.

Testing: Added tests including specific validation of the memory exhaustion scenario (1M unique templates → 100 cached entries).
- Document template cache configuration in README.md
- Add cache implementation details to CLAUDE.md (v0.8.1+)
- Create examples/template-cache demonstrating usage and protection
- Explain security fix for memory exhaustion vulnerability
Pull Request Overview
This PR replaces the unbounded template cache with a thread-safe, sharded LRU cache implementation to prevent memory exhaustion from dynamic template generation. The new cache enforces strict size limits while maintaining high performance through lock sharding and atomic statistics, fixing a potential DoS vulnerability where applications generating templates dynamically could cause unbounded memory growth.
Key Changes
- Implemented sharded LRU cache with configurable size limits and optional TTL
- Replaced simple map-based cache with bounded, thread-safe implementation
- Added comprehensive test coverage including security tests and benchmarks
Reviewed Changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| internal/parser/lru_cache.go | Core LRU cache implementation with sharding, TTL support, and thread-safe operations |
| internal/parser/lru_cache_test.go | Comprehensive test suite covering basic operations, concurrency, eviction, and benchmarks |
| internal/parser/cache_security_test.go | Security-focused tests validating protection against memory exhaustion attacks |
| internal/parser/cache.go | Updated cache interface to use new LRU implementation instead of unbounded map |
| examples/template-cache/main.go | Example demonstrating cache configuration and monitoring capabilities |
| README.md | Documentation updates explaining template cache configuration and monitoring |
| CLAUDE.md | Development notes documenting the new template cache feature |
- Check globalCacheConfigured flag in initGlobalCache to prevent race
- Replace magic number with calculated constant in memory test
- Optimize time conversions using direct nanosecond arithmetic (2 places)
Description
Replaces the unbounded template cache with a thread-safe, sharded LRU cache implementation to prevent memory exhaustion from dynamic template generation. The cache enforces strict size limits while maintaining high performance through lock sharding and atomic statistics.
The previous implementation used an unbounded map that could grow indefinitely when applications generated templates dynamically (e.g., fmt.Sprintf("User %d: {Action}", id)), creating a potential DoS vulnerability.

Type of change
Checklist

- Tests pass (go test ./...)
- Linter passes (golangci-lint run)

Additional notes
Performance metrics:
Security validation:
Test confirms 1,000,000 unique templates are safely bounded to configured max size (100 entries), with 999,900 evictions and minimal memory growth (<100KB).
Key implementation details:
Fixes #39