Add uncorrelated readout error mitigation for arbitrarily long Pauli strings #6654

Merged
merged 15 commits into main from u/eliottrosenbrg/readout_mit on Jul 1, 2024

Conversation

eliottrosenberg
Collaborator

@eliottrosenberg eliottrosenberg commented Jun 27, 2024

This adds the method readout_mitigation_pauli_uncorrelated to cirq.experiments.readout_confusion_matrix.TensoredConfusionMatrices, which allows one to scalably perform readout error mitigation on arbitrarily long Pauli strings without having to construct any exponentially large matrices. For now, this is restricted to the case when the confusion matrix is a tensor product of single-qubit confusion matrices but could be generalized if desired. The test illustrates its usage. It is a reimplementation of work that I did in grad school (a special case of https://github.com/eliottrosenberg/correlated_SPAM).


codecov bot commented Jun 27, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.81%. Comparing base (ac63c60) to head (8fb724e).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #6654   +/-   ##
=======================================
  Coverage   97.81%   97.81%           
=======================================
  Files        1066     1066           
  Lines       91779    91832   +53     
=======================================
+ Hits        89773    89826   +53     
  Misses       2006     2006           


Collaborator

@NoureldinYosri NoureldinYosri left a comment

Thanks, Eliott. Can you add a reference link to the methods (e.g. to arXiv)?

@@ -1,4 +1,4 @@
# Copyright 2022 The Cirq Developers
# Copyright 2024 The Cirq Developers
Collaborator

Modifications to files don't change their copyright year.

class TensoredConfusionMatrices:
"""Store and use confusion matrices for readout error mitigation on sets of qubits.

The confusion matrix (CM) for one qubit is:

[ Pr(0|0) Pr(1|0) ]
[ Pr(0|1) Pr(1|1) ]
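For concreteness, here is a minimal numpy sketch of a single-qubit confusion matrix built from per-qubit error rates; the variable names and numbers are illustrative, not taken from the PR:

```python
import numpy as np

# Illustrative single-qubit readout error rates (hypothetical values).
e0 = 0.01  # Pr(measure 1 | prepared 0)
e1 = 0.03  # Pr(measure 0 | prepared 1)

# Confusion matrix with rows indexed by the true state and columns by
# the measured outcome: row i is [Pr(0|i), Pr(1|i)].
cm = np.array([[1 - e0, e0],
               [e1, 1 - e1]])

# Each row is a probability distribution over measured outcomes.
assert np.allclose(cm.sum(axis=1), 1.0)
```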
Collaborator

was the old documentation wrong or is this a result of the changes?

Collaborator Author

The old documentation was wrong.

"""
num_bitstrs, n = measurements.shape
assert len(zero_errors) == len(one_errors) == n
p1 = np.einsum("ij,j->ij", measurements, 1 - one_errors) + np.einsum(
Collaborator

Isn't this p1 = measurements @ (1 - one_errors) + (1 - measurements) @ zero_errors? If it is, then the matrix-product way is clearer to software engineers who are not necessarily familiar with Einstein summation.

Collaborator Author

No, it's different because in matrix multiplication the index j would be summed over, and here it is not.

Collaborator

Is there a way to do this without einsum? What is the function being calculated? Or is this some reweighting of the matrix?

Collaborator Author

It's computing the probability that the bit should be 1 after adding noise, for each of the measured bitstrings. The einsum is just saying
p1[i,j] = measurements[i,j] * (1 - one_errors)[j] + (1 - measurements)[i,j] * zero_errors[j],
which you could also implement with a loop, but that would be less efficient.
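A small self-contained check of this point, on toy data (the arrays below are made up, with names mirroring the snippet): the index j appears on both sides of the einsum, so nothing is summed and plain broadcasting produces the same array.

```python
import numpy as np

# Toy data: 3 measured bitstrings over 2 qubits, with per-qubit error rates.
measurements = np.array([[0, 1], [1, 1], [0, 0]])
zero_errors = np.array([0.01, 0.02])  # Pr(read 1 | true 0), per qubit
one_errors = np.array([0.03, 0.05])   # Pr(read 0 | true 1), per qubit

# p1[i, j] = probability that bit j of bitstring i reads as 1 after noise.
p1 = np.einsum("ij,j->ij", measurements, 1 - one_errors) + np.einsum(
    "ij,j->ij", 1 - measurements, zero_errors
)

# Index j is not summed over, so elementwise broadcasting is equivalent.
p1_broadcast = measurements * (1 - one_errors) + (1 - measurements) * zero_errors
assert np.allclose(p1, p1_broadcast)
```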

Collaborator Author

I prefer to keep the einsum if you don't mind.

Collaborator

Can't this be achieved by the broadcast behavior of array multiplication? p1 = measurements * (1 - one_errors) + (1 - measurements) * zero_errors, i.e. * instead of @. I tested this below, and array multiplication is faster than einsum.

You can make the choice here, but if it's based on performance then array multiplication is faster. Either way, please add a comment explaining what the line does.

In [1]: import numpy as np

In [2]: A = np.random.random((5000, 1 << 18))

In [3]: B = np.random.random(1 << 18)

In [4]: want = np.einsum('ij,j->ij', A, B)

In [5]: got = A * B

In [6]: np.allclose(want, got)
Out[6]: True

In [7]: %timeit A * B
2.61 s ± 61.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [8]: %timeit np.einsum('ij,j->ij', A, B)
3.38 s ± 71 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Collaborator Author

Ok, you're right. I will change it, thanks.

measurements: np.ndarray,
zero_errors: np.ndarray,
one_errors: np.ndarray,
rng: np.random.Generator = np.random.default_rng(),
Collaborator

Prefer a None default value. The default value of an argument is resolved only once, which means that every call site that doesn't specify an rng will share the same object. With None we create a new object for each of those calls, which prevents any possible correlation between different calls.

Suggested change
rng: np.random.Generator = np.random.default_rng(),
rng: Optional[np.random.Generator] = None,
...
if rng is None: rng = np.random.default_rng()
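A minimal sketch of the difference between the two patterns (the function names here are hypothetical, and the None-default branch is seeded only so the demo is checkable):

```python
from typing import Optional

import numpy as np


def sample_shared(n: int, rng: np.random.Generator = np.random.default_rng(0)) -> np.ndarray:
    # The default Generator is created once, at function definition time,
    # so successive calls share (and advance) the same random stream.
    return rng.integers(0, 2, size=n)


def sample_fresh(n: int, rng: Optional[np.random.Generator] = None) -> np.ndarray:
    # A None default defers creation to call time, so each call that
    # omits rng gets its own independent Generator.
    if rng is None:
        rng = np.random.default_rng(0)  # seeded here only for the demo
    return rng.integers(0, 2, size=n)


# Each None-default call starts a fresh Generator from the same seed,
# so repeated calls are independent of one another:
assert np.array_equal(sample_fresh(8), sample_fresh(8))
```

With the shared default, a second call to `sample_shared` continues the first call's stream instead of starting over, which is the hidden coupling the review comment warns about.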

return np.tile(rng.integers(0, 2, size=repetitions), (n, 1)).T


def _add_noise_and_mitigate_ghz(
Collaborator

There is a significant amount of logic in this method. Tests should be simple and DRY (but not too DRY). Can you simplify the tests?

Collaborator Author

Well, I want to demonstrate not only that the code runs but also that it works on large Pauli operators.

Collaborator

If you replace the random part of the function (rng = np.random.default_rng()) with a deterministic one (e.g. rng = np.random.default_rng(0)), then the function becomes deterministic, so the randomness isn't needed: in the test below, instead of calling the method, just iterate over the values z_mit, dz_mit, z_raw, dz_raw from a table.

So just make the test deterministic.
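A minimal illustration of the suggestion: a seeded Generator reproduces the same bits on every run, so expected values like z_mit and dz_mit can be tabulated once and compared directly in the test (the sizes and seed below are illustrative):

```python
import numpy as np

# Two Generators with the same seed produce identical streams, so a test
# built on them is fully deterministic across runs and machines.
rng_a = np.random.default_rng(0)
rng_b = np.random.default_rng(0)
bits_a = rng_a.integers(0, 2, size=16)
bits_b = rng_b.integers(0, 2, size=16)
assert np.array_equal(bits_a, bits_b)
```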

@CirqBot CirqBot added the size: M 50< lines changed <250 label Jun 29, 2024
Collaborator

@NoureldinYosri NoureldinYosri left a comment

almost there

cirq-core/cirq/experiments/readout_confusion_matrix.py (outdated, resolved)
eliottrosenberg and others added 2 commits July 1, 2024 15:55
Co-authored-by: Noureldin <noureldinyosri@google.com>
) -> tuple[float, float]:
r"""Uncorrelated readout error mitigation for a multi-qubit Pauli operator.

This function
Collaborator

Can you reformat this paragraph?

@NoureldinYosri NoureldinYosri enabled auto-merge (squash) July 1, 2024 20:27
@NoureldinYosri NoureldinYosri merged commit da5c3b5 into main Jul 1, 2024
37 checks passed
harry-phasecraft pushed a commit to PhaseCraft/Cirq that referenced this pull request Oct 31, 2024
…strings (quantumlib#6654)

* Add uncorrelated readout error mitigation for arbitrarily long pauli strings
@pavoljuhas pavoljuhas deleted the u/eliottrosenbrg/readout_mit branch January 22, 2025 07:29
Labels
size: M 50< lines changed <250
3 participants