Improve runnability of perf scripts#294
Merged
masklinn merged 2 commits into ua-parser:master on Feb 23, 2026
The perf scripts do multiple passes over the input[^1], so they need the entire input in memory. However, they don't need to hold every line in memory individually: UA logs tend to be pretty redundant (80 to 95% duplicates depending on the site), and the strings are the vast majority of the payload (averaging 100~150 bytes each). We can memoize the inputs in order to dedup them. While *usually* that's not a good idea since the content has to live for the entire program lifetime, we can abuse `sys.intern` for it (it reduces the amount of change necessary, and Python GCs `sys.intern`ed strings anyway).

This reduces memory consumption of the UAs list by an order of magnitude or so[^2], which is *very* significant for large logs. Note however that the Belady simulator is heinously costly: running hitrates on the 174M-UA "sample 2" dataset, memory use falls by 10~15GB when the Belady sim completes. It might be a good idea to try to find out which of its collections is the source of the problem and see if it can be improved upon.

This also makes hitrates significantly faster on sample 2, likely as a combination of two factors (though that has not been confirmed in any way, so YMMV):

- lower memory / cache thrashing from having to trawl less memory
- much more efficient dict hits (a pointer comparison is sufficient to validate a key after the hashcode check), especially combined with sample 2 having significantly higher hit rates than sample 1 (dailymotion), as a cache hit is a dict hit first (though there are costs associated with metadata maintenance afterwards)

[^1]: technically they could do just one pass by interleaving all the parser configurations, but currently that's not the case. I also worry that this would affect CPU-level branch prediction, although I guess since this is Python it's not that much of a worry, and UA parsing is a pretty unpredictable workload anyway...
[^2]: UA strings are 100~150 bytes on average; dedup'ing them means storing an 8-byte pointer per occurrence, plus the UA's own length amortised over its duplicates, which averages out to a handful of bytes per entry.
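The dedup scheme can be sketched as follows; this is a minimal illustration of the technique, not the PR's actual code, and the function name and file handling are assumptions:

```python
import sys

def read_uas(path):
    """Load a UA log, deduplicating repeated lines via sys.intern.

    Interned strings are shared: a UA that appears thousands of times
    is stored once, and every list slot is just an 8-byte pointer.
    As a bonus, dict lookups on interned keys can validate a match
    with a pointer comparison after the hash check.
    """
    with open(path, encoding="utf-8", errors="replace") as f:
        return [sys.intern(line.rstrip("\n")) for line in f]
```

Since CPython garbage-collects interned strings once they are no longer referenced, this doesn't pin memory beyond the lifetime of the list itself, which is what makes the "abuse" acceptable here.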
While checking things with sample 2, I realised the current configuration system is too inflexible to easily run bench just once when some of the configurations are simply not acceptable or sensible. A more flexible selector system, rather than 3 separate options being combined as a cartesian product, allows cleaner parser configurations, which makes running things easier.
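To illustrate the difference (with made-up option names, not the actual bench flags or selector syntax), the old scheme products three independent option lists, generating combinations that may not make sense, while a selector names only the configurations the caller actually wants:

```python
import itertools

# Hypothetical option axes (names invented for illustration):
resolvers = ["basic", "re2"]
caches = ["none", "lru", "s3fifo"]
sizes = [0, 1000]

# Old scheme: everything is producted together, including nonsensical
# combinations such as an "lru" cache of size 0 or "none" with a size.
producted = list(itertools.product(resolvers, caches, sizes))

# Selector scheme: each spec names exactly one sensible configuration.
def parse_selector(spec):
    resolver, cache, size = spec.split("-")
    return resolver, cache, int(size)

selected = [parse_selector(s) for s in ("basic-none-0", "re2-lru-1000")]
```

The point of the selector shape is that adding one more value to any axis no longer multiplies the number of runs; the run list grows only by the specs you explicitly write.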