Add uncorrelated readout error mitigation for arbitrarily long Pauli strings #6654
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff            @@
##             main    #6654    +/-   ##
=======================================
  Coverage   97.81%   97.81%
=======================================
  Files        1066     1066
  Lines       91779    91832      +53
=======================================
+ Hits        89773    89826      +53
  Misses       2006     2006
```
Thanks, Eliott. Can you add a reference link to the methods (e.g. to arXiv)?
```diff
@@ -1,4 +1,4 @@
-# Copyright 2022 The Cirq Developers
+# Copyright 2024 The Cirq Developers
```
Modifications to files don't change their year.
```python
class TensoredConfusionMatrices:
    """Store and use confusion matrices for readout error mitigation on sets of qubits.

    The confusion matrix (CM) for one qubit is:

        [ Pr(0|0) Pr(1|0) ]
        [ Pr(0|1) Pr(1|1) ]
```
Was the old documentation wrong, or is this a result of the changes?
The old documentation was wrong.
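For concreteness, here is a small sketch of a one-qubit confusion matrix built under the corrected docstring's convention (row = true bit, column = measured bit). The function name and the error-rate parameters `e0`/`e1` are illustrative, not part of the Cirq API:

```python
import numpy as np

def one_qubit_confusion_matrix(e0: float, e1: float) -> np.ndarray:
    """Entry [t, m] holds Pr(measure m | true bit t).

    e0 = Pr(1|0): probability of reading 1 when the true bit is 0.
    e1 = Pr(0|1): probability of reading 0 when the true bit is 1.
    """
    return np.array([[1.0 - e0, e0], [e1, 1.0 - e1]])

cm = one_qubit_confusion_matrix(0.02, 0.05)
# Each row is a probability distribution, so rows sum to 1.
print(cm.sum(axis=1))
```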
""" | ||
num_bitstrs, n = measurements.shape | ||
assert len(zero_errors) == len(one_errors) == n | ||
p1 = np.einsum("ij,j->ij", measurements, 1 - one_errors) + np.einsum( |
Isn't this `p1 = measurements @ (1 - one_errors) + (1 - measurements) @ zero_errors`? If so, then the matrix-product form is clearer to software engineers who are not necessarily familiar with Einstein summation.
No, it's different, because in matrix multiplication the index `j` would be summed over, and here it is not.
Is there a way to do this without einsum? What is the function being calculated? Or is this a reweighting of the matrix?
It's computing the probability that each bit should be 1 after adding noise, for each of the measured bitstrings. The einsum is just saying

```python
p1[i, j] = measurements[i, j] * (1 - one_errors[j]) + (1 - measurements[i, j]) * zero_errors[j]
```

which you could also implement with a loop, but that would be less efficient.
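To make the equivalence concrete, here is a small self-contained sketch (with made-up shapes and error rates) showing that the vectorized einsum expression matches the explicit loop form of the same formula:

```python
import numpy as np

rng = np.random.default_rng(1234)
# Hypothetical small example: 4 measured bitstrings over 3 qubits,
# plus illustrative per-qubit error rates.
measurements = rng.integers(0, 2, size=(4, 3)).astype(float)
zero_errors = np.array([0.01, 0.02, 0.03])  # Pr(read 1 | true 0)
one_errors = np.array([0.05, 0.04, 0.06])   # Pr(read 0 | true 1)

# Vectorized: probability that each bit reads 1 after noise is added.
p1 = np.einsum("ij,j->ij", measurements, 1 - one_errors) + np.einsum(
    "ij,j->ij", 1 - measurements, zero_errors
)

# The same formula written as an explicit loop, for comparison.
p1_loop = np.empty_like(p1)
for i in range(measurements.shape[0]):
    for j in range(measurements.shape[1]):
        p1_loop[i, j] = (
            measurements[i, j] * (1 - one_errors[j])
            + (1 - measurements[i, j]) * zero_errors[j]
        )

print(np.allclose(p1, p1_loop))  # → True
```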
I prefer to keep the einsum if you don't mind.
Can't this be achieved by the broadcast behaviour of array multiplication? That is, `p1 = measurements * (1 - one_errors) + (1 - measurements) * zero_errors`, using `*` instead of `@`. I tested this below, and array multiplication is faster than einsum. You can make the choice here, but if it's based on performance then array multiplication is faster. Either way, please add a comment explaining what the line does.
```python
In [1]: import numpy as np

In [2]: A = np.random.random((5000, 1 << 18))

In [3]: B = np.random.random(1 << 18)

In [4]: want = np.einsum('ij,j->ij', A, B)

In [5]: got = A * B

In [6]: np.allclose(want, got)
Out[6]: True

In [7]: %timeit A * B
2.61 s ± 61.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [8]: %timeit np.einsum('ij,j->ij', A, B)
3.38 s ± 71 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
Ok, you're right. I will change it, thanks.
```python
    measurements: np.ndarray,
    zero_errors: np.ndarray,
    one_errors: np.ndarray,
    rng: np.random.Generator = np.random.default_rng(),
```
Prefer a `None` default value. The default value of an argument is resolved only once; that means all places that call the method without specifying an `rng` will use the same object. With `None`, we create a new object for each of those calls, which prevents any possible correlation between different calls:
```diff
-    rng: np.random.Generator = np.random.default_rng(),
+    rng: Optional[np.random.Generator] = None,
     ...
+    if rng is None:
+        rng = np.random.default_rng()
```
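The pitfall being described can be demonstrated directly: a default argument is evaluated once at function definition time, so every call that omits `rng` shares a single generator object, whereas the `None` pattern constructs a fresh one per call. A minimal sketch with hypothetical helper names:

```python
import numpy as np
from typing import Optional

# Default evaluated once at definition time: one shared generator.
def get_rng_shared(
    rng: np.random.Generator = np.random.default_rng(),
) -> np.random.Generator:
    return rng

# None default: a new generator is constructed on each call.
def get_rng_fresh(rng: Optional[np.random.Generator] = None) -> np.random.Generator:
    if rng is None:
        rng = np.random.default_rng()
    return rng

print(get_rng_shared() is get_rng_shared())  # True: same object every call
print(get_rng_fresh() is get_rng_fresh())    # False: a new object per call
```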
```python
    return np.tile(rng.integers(0, 2, size=repetitions), (n, 1)).T


def _add_noise_and_mitigate_ghz(
```
There is a significant amount of logic in this method. Tests should be simple and DRY (but not too DRY). Can you simplify the tests?
Well, I want to demonstrate not only that the code runs but also that it works on large Pauli operators.
If you replace the random part of the function, `rng = np.random.default_rng()`, with a deterministic one (e.g. `rng = np.random.default_rng(0)`), then the function becomes deterministic, so the logic isn't needed. In the test below, instead of calling the method, just iterate over the values `z_mit, dz_mit, z_raw, dz_raw` from a table. So just make the test deterministic.
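The point about determinism is easy to verify: two generators constructed with the same seed produce identical streams, so a seeded `default_rng` makes the test fully reproducible:

```python
import numpy as np

# Same seed -> same stream of random bits on every run.
a = np.random.default_rng(0).integers(0, 2, size=8)
b = np.random.default_rng(0).integers(0, 2, size=8)
print(np.array_equal(a, b))  # → True
```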
almost there
Co-authored-by: Noureldin <noureldinyosri@google.com>
```python
) -> tuple[float, float]:
    r"""Uncorrelated readout error mitigation for a multi-qubit Pauli operator.

    This function
```
can you reformat this paragraph?
…strings (quantumlib#6654)

* Add uncorrelated readout error mitigation for arbitrarily long Pauli strings

This adds the method `readout_mitigation_pauli_uncorrelated` to `cirq.experiments.readout_confusion_matrix.TensoredConfusionMatrices`, which allows one to scalably perform readout error mitigation on arbitrarily long Pauli strings without having to construct any exponentially large matrices. For now, this is restricted to the case when the confusion matrix is a tensor product of single-qubit confusion matrices, but it could be generalized if desired. The test illustrates its usage. It is a reimplementation of work that I did in grad school (a special case of https://github.com/eliottrosenberg/correlated_SPAM).
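A minimal sketch of the underlying idea, not the Cirq API: when readout errors are uncorrelated, the inverse of the tensored confusion matrix factorizes over qubits, so each qubit's ±1 outcome can be corrected per shot and the per-shot products averaged. All names and parameters below are illustrative:

```python
import numpy as np

def mitigated_z_string(bits: np.ndarray, e0: np.ndarray, e1: np.ndarray) -> float:
    """Unbiased estimate of <Z x ... x Z> from noisy measured bitstrings.

    Assumes uncorrelated readout errors with e0[j] = Pr(1|0) and
    e1[j] = Pr(0|1) on qubit j. Each qubit's +-1 outcome is passed
    through the inverse of its 2x2 confusion matrix (an affine map on
    {+1, -1}); because the noise factorizes over qubits, the per-shot
    product of corrected outcomes averages to the noiseless value.
    """
    z = 1.0 - 2.0 * bits                    # map bits {0, 1} -> {+1, -1}
    g = (z - (e1 - e0)) / (1.0 - e0 - e1)   # per-qubit inverse correction
    return float(np.mean(np.prod(g, axis=1)))

# Quick check: true state is all zeros, so <Z...Z> = 1 exactly.
rng = np.random.default_rng(0)
n, shots = 5, 100_000
e0, e1 = np.full(n, 0.02), np.full(n, 0.05)
noisy = (rng.random((shots, n)) < e0).astype(float)  # 0 flips to 1 w.p. e0
print(mitigated_z_string(noisy, e0, e1))  # close to 1.0
```

Note that no 2^n-dimensional matrix is ever built; the cost is linear in the number of qubits per shot, which is what makes the method scale to arbitrarily long Pauli strings.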