
Improve runnability of perf scripts #294

Merged
masklinn merged 2 commits into ua-parser:master from masklinn:scripts-opt
Feb 23, 2026
Conversation

@masklinn
Contributor

No description provided.

The perf scripts do multiple passes over the input[^1] so they need
the entire input in memory. However they don't need to hold every line
in memory individually: UA logs tend to be pretty redundant (80 to 95%
depending on the site), and the strings are the vast majority of the
payload (as they average 100~150 bytes each).

We can memoize the inputs in order to dedup them. While that's
*usually* not a good idea, since the memoized content has to live for
the entire program lifetime, here we can abuse `sys.intern` for it:
it reduces the amount of change necessary, and CPython
garbage-collects interned strings anyway.
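
As a sketch of the idea (the `load_uas` helper is illustrative, not the actual script code): interning each line on read means duplicate UAs collapse to a single shared `str` object.

```python
import sys

def load_uas(path):
    # sys.intern dedupes repeated UA strings: every duplicate line ends
    # up as a reference to one shared str object rather than a private
    # copy, and CPython can still collect it once all references drop.
    with open(path, encoding="utf-8") as f:
        return [sys.intern(line.rstrip("\n")) for line in f]
```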

This reduces memory consumption of the UAs list by an order of
magnitude or so[^2] which is *very* significant for large logs,
although note that the Belady simulator is heinously costly: running
hitrates on the 174M-UAs "sample 2" dataset, memory use falls by
10~15GB when the Belady sim completes. It might be a good idea to try
and find out which of its collections is the source of the problem and
see if it can be improved upon.

This also makes hitrates significantly faster on sample 2, likely as a
combination of two factors (though that has not been confirmed in any
way so YMMV):

- lower memory pressure / cache thrashing from having to trawl less memory
- much more efficient dict hits (a pointer comparison is sufficient to
  validate a key after the hashcode check), especially combined with
  sample 2 having significantly higher hit rates than sample 1
  (dailymotion), as a cache hit is a dict hit first (though there are
  costs associated with metadata maintenance afterwards)
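
A minimal illustration of that interned-key dict hit (the names here are illustrative, not from the scripts): two independently built copies of the same UA intern to the same object, so CPython's dict probe can validate the key by pointer identity before ever falling back to a character-by-character `__eq__`.

```python
import sys

cache = {}
key1 = sys.intern("Mozilla/5.0 (X11; Linux x86_64)")
cache[key1] = "parsed result"

# Build the same UA at runtime so it starts life as a distinct object,
# then intern it: we get back the *same* object as key1.
prefix = "Mozilla/5.0"
key2 = sys.intern(prefix + " (X11; Linux x86_64)")

assert key2 is key1                     # pointer-identical after interning
assert cache[key2] == "parsed result"   # dict hit validated by identity
```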

[^1]: technically they could do just one pass by interleaving all the
  parser configurations, but currently that's not the case. I also
  worry that this would affect CPU-level branch prediction, although I
  guess since this is Python it's not that much of a worry, and UA
  parsing is a pretty unpredictable workload anyway...
[^2]: UA strings are 100~150 bytes on average; dedup'ing them means
  storing an 8-byte pointer, plus the average of the UA length over
  its dupes, which averages out to a handful of bytes
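
A back-of-envelope version of that footnote's arithmetic (the 125-byte average and the duplication factors are assumed round numbers for illustration, not measurements from the datasets):

```python
avg_len = 125       # assumed average UA string size in bytes (100~150 range)
dup_factor = 10     # assumed: each unique UA appears ~10 times (90% redundancy)

naive_per_line = avg_len                      # a full private copy per log line
deduped_per_line = 8 + avg_len / dup_factor   # 8-byte pointer + amortised body

# ~6x at 90% redundancy; at 95% (dup_factor = 20) it passes 8x, i.e.
# "an order of magnitude or so" for heavily redundant logs.
savings = naive_per_line / deduped_per_line
```
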

In checking things with sample 2, I realised the current
configuration system is too inflexible to allow easily running bench
just once when some of the configurations are simply not acceptable
or sensible.

A more flexible selector system, rather than 3 separate options being
combined as a cartesian product, allows cleaner parser configurations,
which makes running things easier.
@masklinn masklinn enabled auto-merge (rebase) February 23, 2026 17:52
@masklinn masklinn merged commit c3c3d96 into ua-parser:master Feb 23, 2026
29 of 30 checks passed
@masklinn masklinn deleted the scripts-opt branch February 24, 2026 16:51
