Optimize qubit hash for Set operations #6908
base: main
Conversation
Improves amortized `Set` operations perf by around 50%, though with the caveat that sets with qudits of different dimensions but the same index will always have the same key (not just the same bucket), and thus have to check `__eq__`, causing degenerate perf impact. It seems unlikely that anyone would intentionally do this, though.

```python
s = set()
for q in cirq.GridQubit.square(100):
    s = s.union({q})
```
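To make the caveat concrete, here is a minimal toy sketch (a hypothetical class, not Cirq's actual implementation) of how qudits that differ only in dimension would share a hash key and force `__eq__` checks:

```python
# Hypothetical toy class illustrating the caveat above; not Cirq code.
class ToyQid:
    def __init__(self, index: int, dimension: int) -> None:
        self.index = index
        self.dimension = dimension

    def __eq__(self, other) -> bool:
        return (
            isinstance(other, ToyQid)
            and (self.index, self.dimension) == (other.index, other.dimension)
        )

    def __hash__(self) -> int:
        # Dimension is deliberately left out of the hash, mirroring the
        # trade-off described in the caveat.
        return self.index


# Same key, not just the same bucket, so set membership falls back to __eq__:
assert hash(ToyQid(5, 2)) == hash(ToyQid(5, 3))
assert ToyQid(5, 2) != ToyQid(5, 3)
```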
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff           @@
##             main    #6908   +/-   ##
=======================================
  Coverage   97.87%   97.87%
=======================================
  Files        1084     1084
  Lines       94406    94408       +2
=======================================
+ Hits        92396    92398       +2
  Misses       2010     2010
```

☔ View full report in Codecov by Sentry.
cirq-core/cirq/devices/grid_qubit.py (Outdated)
```python
# This approach seems to perform better than traditional "random" hash in `Set`
# operations for typical circuits, as it reduces bucket collisions. Caveat: it does not
```
How did you evaluate this reduction in bucket collisions? Would be good to show this explicitly before we decide to abandon the standard tuple hash.
Test code is up in the description. It's about 50% faster with this implementation.

One note is that it seems like it's only faster for copy-on-change ops like `s = s.union({q})`. It doesn't seem to have any effect when we operate on sets mutably like `s |= {q}`. But given most of our stuff is immutable, we see a lot more of the former in our codebase.
I can also see an improvement for set construction from grid qubits; there is no significant difference for an in-place set update.

```python
sq = cirq.GridQubit.square(100)
[hash(q) for q in sq]

# set from ordered qubits
%timeit set(sq)
# OLD: 1.54 ms ± 58.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# NEW: 1.32 ms ± 75.5 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

# set update in place
%%timeit s = set()
for q in sq:
    s.add(q)
# OLD: 1.43 ms ± 15.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# NEW: 1.44 ms ± 60.6 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

# new sets created via union
%%timeit s = set()
for q in sq:
    s = s.union({q})
# OLD: 820 ms ± 3.25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# NEW: 347 ms ± 1.86 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
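As a side note, one rough way to probe the bucket-collision claim directly is sketched below. This is an assumption-laden illustration, not part of the PR: it masks hashes into a power-of-two table the way CPython sizes its sets, but ignores the open-addressing probe sequence.

```python
from collections import Counter

import cirq


def colliding_entries(hashes: list[int], table_size: int = 2**15) -> int:
    """Count entries that land in an already-occupied slot of a
    power-of-two table -- a rough proxy for set bucket collisions."""
    slots = Counter(h & (table_size - 1) for h in hashes)
    return sum(n - 1 for n in slots.values() if n > 1)


sq = cirq.GridQubit.square(100)
print(colliding_entries([hash(q) for q in sq]))
```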
Change the hash function from the tuple hash to manually multiplying each term by `1_000_003`, which is also the multiplier Python uses internally for strings and complex numbers. This hashes at the same speed as the tuple hash, but maintains a linear relationship with each term, which reduces the number of bucket collisions in the hash tables underlying sets and dicts for line and grid qubits. Improves amortized `Set` operations perf, such as the example in the description above, by around 50%.

Fixes #6886