⚡️ Speed up function lru_cache by 9%
#24
Open
📄 **9% (0.09x) speedup** for `lru_cache` in `src/anthropic/_utils/_utils.py`

⏱️ **Runtime:** 36.7 microseconds → 33.6 microseconds (best of 372 runs)

📝 **Explanation and details**

The optimization caches the default decorator instance to avoid repeated, relatively expensive calls to `functools.lru_cache()`.

**Key change:** When `maxsize=128` (the default), the optimized version stores and reuses a single `functools.lru_cache` instance via function attributes (`hasattr`/`setattr`/`getattr`), rather than creating a new one on every call.

**Why this speeds up performance:**

- `functools.lru_cache()` has non-trivial overhead for instance creation.
- `maxsize=128` is used frequently in practice (as evidenced by the test cases), so reusing one decorator instance eliminates repeated `functools.lru_cache()` construction.

**Performance characteristics:**

- The gain is largest when the `lru_cache` function is called repeatedly with the default `maxsize=128`: the original pays the cost of `functools.lru_cache()` creation on every call, while the optimized version pays it once and reuses the result on repeat calls.
- Non-default `maxsize` values still use the original code path, so there is no regression for edge cases.

**Test case effectiveness:** This optimization particularly benefits scenarios with multiple function decorations using default cache settings, which is common in production codebases where `@lru_cache()` is applied frequently across different functions.

✅ **Correctness verification report:**
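The described approach can be sketched as follows. This is a minimal illustration of the technique, not the exact code in `src/anthropic/_utils/_utils.py`: the attribute name `_default_decorator` and the keyword-only signature are assumptions for the sketch.

```python
import functools


def lru_cache(*, maxsize: int = 128):
    """Decorator factory that memoizes the default-sized decorator.

    For the common default (maxsize=128), a single functools.lru_cache
    decorator instance is stored on this function and reused, avoiding
    repeated functools.lru_cache() construction. Other sizes fall back
    to the original path and build a fresh decorator each time.
    """
    if maxsize == 128:
        # hypothetical attribute name for the cached decorator instance
        if not hasattr(lru_cache, "_default_decorator"):
            setattr(lru_cache, "_default_decorator", functools.lru_cache(maxsize=128))
        return getattr(lru_cache, "_default_decorator")
    # Non-default maxsize: original code path, no caching of the decorator.
    return functools.lru_cache(maxsize=maxsize)
```

Note that a single `functools.lru_cache(...)` decorator instance can safely decorate multiple functions — each decorated function receives its own wrapper and its own cache — which is what makes reusing the instance correct.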
🌀 Generated Regression Tests and Runtime
To edit these changes, `git checkout codeflash/optimize-lru_cache-mhe1qnmd` and push.