Commit 7aecd0b

Sid Madhuk (smadhuk) and oscarhiggott authored
Implement correlated matching (#156)
- Use UserGraph for MWPM creation instead of IntermediateWeightedGraph
- Format user_graph.h
- Flag-protect (unimplemented) correlated matching
- Don't throw if correlations are enabled; add perf tests
- PR comment; ensure backwards compatibility with C++ API
- Remove pymatching binary
- Fix CI
- Fix CI
- Fix pybind source files
- Refactor io.* into user_graph.* (#125) (Co-authored-by: Sid Madhuk <smadhuk@google.com>)
- Add method to iterate through a DEM which decomposes hyperedges
- Small cleanup
- PR comments: remove comments and rename variable
- Throw on decomposed errors with hyperedge components
- Extend: throw on probabilities greater than half
- Track joint probabilities of correlated errors
- Cleanup: remove extra newline
- PR feedback: keep edges sorted in the user_graph and joint_probabilities
- Calculate implied edge weights from joint probabilities
- Remove unnecessary test
- Small cleanup
- Populate implied edge weights in edges in the UserGraph
- Populate unconverted implied weights during matching graph creation
- Convert implied weights for decoding
- Implement reweight logic for the search and matching graphs
- Pipe reweighting logic all the way to the pymatching binary CLI
- Pipe reweighting logic all the way to the pymatching binary CLI
- Clean up search graph logic
- Fix tests
- Add correlated matching branches to methods missed in earlier PRs
- Fix pybind
- Update pybindings to handle correlations
- Two fixes: undo weights at end of shots; reweight search flooder
- Fix edge reweighting logic
- Fix test failures
- Update src/pymatching/sparse_blossom/driver/user_graph.cc (Co-authored-by: oscarhiggott <29460323+oscarhiggott@users.noreply.github.com>)
- Update src/pymatching/sparse_blossom/driver/user_graph.cc
- Update src/pymatching/sparse_blossom/flooder/graph.cc
- Update src/pymatching/sparse_blossom/search/search_graph.cc
- Update src/pymatching/sparse_blossom/search/search_graph.h
- Update src/pymatching/sparse_blossom/driver/user_graph.cc
- Fix tests
- Only discretize weights when converting to MatchingGraphs
- Update src/pymatching/matching.py
- Update src/pymatching/matching.py
- Update src/pymatching/matching.py
- Update src/pymatching/matching.py
- Fix tests
- Fix formatting
- Remove unused method
- Update src/pymatching/sparse_blossom/driver/user_graph.h
- Update src/pymatching/sparse_blossom/driver/user_graph.h
- Update src/pymatching/sparse_blossom/driver/user_graph.h
- Make the previously free edges-to-implied-weights-unconverted variable a class member
- Add default values for new parameters to prevent breaking API changes
- Remove unused method
- Use const ref for std::vector<ImpliedWeightUnconverted>
- Check implied weight validity when computing normalizing constant
- Return bool in get_edge_or_boundary_edge_weight and add test
- PR comment: use SIZE_MAX instead of -1 for consistency
- Add tests for correlations with pybind flow
- Remove unused blob of code
- Decode to edges array also allows correlated matching (#157):
  - Decode to edges array also allows correlated matching
  - Format
  - Update src/pymatching/matching.py
- Check for invalid pymatching.Matching configuration when decoding with enable_correlations=True (#160):
  - Catch attempted decoding using enable_correlations=True when pymatching.Matching is not configured for correlations
  - Add to docstrings on correlations; expose enable_correlations flags to more loading methods
  - Fix docstring
  - Turn on decompose_errors in perf
  - Use decompose_errors in DEM loading tests
  - Test for exception with undecomposed hyperedge
  - flake8
  - Add enable_correlations to more docstrings
  - flake8
  - Update undecomposed-hyperedges error
- Ensure that pymatching CLI includes a search flooder for predict (#163) (Co-authored-by: Sid Madhuk <smadhuk@google.com>)
- Correlated matching docs (#164):
  - Correlated matching docs
  - README

Co-authored-by: Sid Madhuk <smadhuk@google.com>
Co-authored-by: oscarhiggott <29460323+oscarhiggott@users.noreply.github.com>
Co-authored-by: Oscar Higgott <oscarhiggott@google.com>
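Several of the commits above ("Track joint probabilities of correlated errors", "Calculate implied edge weights from joint probabilities", "Implement reweight logic for the search and matching graphs") revolve around one idea: once a matching pass assumes an edge occurred, edges correlated with it get a new weight computed from a conditional probability. A minimal sketch of that arithmetic, assuming the usual log-likelihood weight convention w = log((1 − p)/p); the numbers below are invented for illustration and are not taken from this PR:

```python
import math

def llr_weight(p: float) -> float:
    """Log-likelihood weight of a matching-graph edge with error probability p."""
    assert 0.0 < p < 0.5, "weights are only well defined for p in (0, 0.5)"
    return math.log((1.0 - p) / p)

# Toy numbers: edges e1 and e2 come from decomposing one hyperedge,
# so their errors are correlated.
p_e1 = 0.01      # marginal probability of edge e1
p_joint = 0.004  # joint probability that e1 and e2 both fire

# If a first matching pass decides e1 occurred, the implied probability
# of e2 is its conditional probability given e1.
p_e2_given_e1 = p_joint / p_e1       # 0.4

w_before = llr_weight(0.01)          # weight of e2 before reweighting
w_after = llr_weight(p_e2_given_e1)  # implied weight once e1 is assumed

print(round(w_before, 3), round(w_after, 3))  # prints: 4.595 0.405
```

Note how assuming the correlated edge makes e2 drastically cheaper, which is what lets the second pass choose paths the first pass would have rejected.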
1 parent 36b26e8 · commit 7aecd0b

34 files changed: +4521 −458 lines

CMakeLists.txt

Lines changed: 1 addition & 3 deletions

```diff
@@ -39,7 +39,6 @@ endif ()
 
 set(SOURCE_FILES_NO_MAIN
     src/pymatching/sparse_blossom/driver/namespaced_main.cc
-    src/pymatching/sparse_blossom/driver/io.cc
     src/pymatching/sparse_blossom/driver/mwpm_decoding.cc
     src/pymatching/sparse_blossom/flooder/graph.cc
     src/pymatching/sparse_blossom/flooder/detector_node.cc
@@ -63,7 +62,6 @@ set(SOURCE_FILES_NO_MAIN
 
 set(TEST_FILES
     src/pymatching/sparse_blossom/driver/namespaced_main.test.cc
-    src/pymatching/sparse_blossom/driver/io.test.cc
     src/pymatching/sparse_blossom/driver/mwpm_decoding.test.cc
     src/pymatching/sparse_blossom/flooder_matcher_interop/varying.test.cc
     src/pymatching/sparse_blossom/flooder/graph.test.cc
@@ -89,7 +87,7 @@ set(PERF_FILES
     src/pymatching/perf/main.perf.cc
     src/pymatching/perf/util.perf.cc
     src/pymatching/sparse_blossom/driver/mwpm_decoding.perf.cc
-    src/pymatching/sparse_blossom/driver/io.perf.cc
+    src/pymatching/sparse_blossom/driver/user_graph.perf.cc
     src/pymatching/sparse_blossom/flooder_matcher_interop/varying.perf.cc
     src/pymatching/sparse_blossom/tracker/radix_heap_queue.perf.cc
 )
```

README.md

Lines changed: 26 additions & 17 deletions

````diff
@@ -23,6 +23,7 @@ PyMatching can be configured using arbitrary weighted graphs, with or without a
 Craig Gidney's [Stim](https://github.com/quantumlib/Stim) library to simulate and decode error correction circuits
 in the presence of circuit-level noise. The [sinter](https://pypi.org/project/sinter/) package combines Stim and
 PyMatching to perform fast, parallelised monte-carlo sampling of quantum error correction circuits.
+As of a recent update (v2.3), pymatching also supports [correlated matching](https://arxiv.org/abs/1310.0863).
 
 Documentation for PyMatching can be found at: [pymatching.readthedocs.io](https://pymatching.readthedocs.io/en/stable/)
 
@@ -70,6 +71,14 @@ in a similar way to how clusters are grown in Union-Find, whereas our approach i
 and uses a global priority queue to grow alternating trees.
 Yue also has a paper coming soon, so stay tuned for that as well.
 
+## Correlated matching
+
+As of PyMatching version 2.3, [correlated matching](https://arxiv.org/abs/1310.0863) is now also available in pymatching! Thank you to Sid Madhuk, who was the primary contributor for this new feature.
+
+Correlated matching has better accuracy than standard (uncorrelated) matching for many decoding problems where hyperedge errors are present. When these hyperedge errors are decomposed into edges (graphlike errors), they amount to correlations between these edges in the matching graph. A common example of such a hyperedge error is a $Y$ error in the surface code.
+
+The "two-pass" correlated matching decoder implemented in pymatching works by running sparse blossom twice. The first pass is a standard (uncorrelated) run of sparse blossom, to predict a set of edges in the matching graph. Correlated matching then assumes these errors (edges) occurred and reweights edges that are correlated with them based on this assumption. Matching is then run a second time on this reweighted graph.
+
 ## Installation
 
 The latest version of PyMatching can be downloaded and installed from [PyPI](https://pypi.org/project/PyMatching/)
@@ -100,10 +109,12 @@ First, we generate a stim circuit. Here, we use a surface code circuit included
 import numpy as np
 import stim
 import pymatching
-circuit = stim.Circuit.generated("surface_code:rotated_memory_x",
-                                 distance=5,
-                                 rounds=5,
-                                 after_clifford_depolarization=0.005)
+circuit = stim.Circuit.generated(
+    "surface_code:rotated_memory_x",
+    distance=5,
+    rounds=5,
+    after_clifford_depolarization=0.005
+)
 ```
 
 Next, we use stim to generate a `stim.DetectorErrorModel` (DEM), which is effectively a
@@ -125,28 +136,26 @@ sampler = circuit.compile_detector_sampler()
 syndrome, actual_observables = sampler.sample(shots=1000, separate_observables=True)
 ```
 
-Now we can decode! We compare PyMatching's predictions of the logical observables with the actual observables sampled
-with stim, in order to count the number of mistakes and estimate the logical error rate:
+Now we can decode! We compare PyMatching's predictions of the logical observables with the actual observables sampled with stim, in order to count the number of mistakes and estimate the logical error rate:
 
 ```python
-num_errors = 0
-for i in range(syndrome.shape[0]):
-    predicted_observables = matching.decode(syndrome[i, :])
-    num_errors += not np.array_equal(actual_observables[i, :], predicted_observables)
+predicted_observables = matching.decode_batch(syndrome)
+num_errors = np.sum(np.any(predicted_observables != actual_observables, axis=1))
 
 print(num_errors)  # prints 8
 ```
 
-As of PyMatching v2.1.0, you can use `matching.decode_batch` to decode a batch of shots instead.
-Since `matching.decode_batch` iterates over the shots in C++, it's faster than iterating over calls
-to `matching.decode` in Python. The following cell is therefore a faster
-equivalent to the cell above:
+To decode instead with correlated matching, set `enable_correlations=True` both when configuring the `pymatching.Matching` object:
+```python
+matching_corr = pymatching.Matching.from_detector_error_model(dem, enable_correlations=True)
+```
 
+as well as when decoding:
 ```python
-predicted_observables = matching.decode_batch(syndrome)
-num_errors = np.sum(np.any(predicted_observables != actual_observables, axis=1))
+predicted_observables_corr = matching_corr.decode_batch(syndrome, enable_correlations=True)
+num_errors = np.sum(np.any(predicted_observables_corr != actual_observables, axis=1))
 
-print(num_errors)  # prints 8
+print(num_errors)  # prints 3
 ```
 
 ### Loading from a parity check matrix
````
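The README text above describes the two-pass decoder only at a high level. The following self-contained toy illustrates the mechanism end to end: a first pass predicts edges, edges correlated with those predictions are reweighted, and a second pass can then reach a different answer. Everything here — the chain graph, the probabilities, the correlation between edges 0 and 2, and the two-option brute-force "matching" — is invented for illustration and is not pymatching's actual implementation:

```python
import math

def llr(p):
    # log-likelihood weight for an edge with error probability p
    return math.log((1 - p) / p)

# Toy chain: nodes 0..4, edge i joins nodes i and i+1; nodes 0 and 4 are
# boundary nodes. Detection events (defects) sit at nodes 1 and 3.
edge_p = [0.12, 0.27, 0.03, 0.12]
defects = (1, 3)

# Invented correlation: edges 0 and 2 come from one decomposed hyperedge,
# so seeing edge 0 makes edge 2 much more likely.
p_joint_02 = 0.054
p2_given_0 = p_joint_02 / edge_p[0]  # 0.45

def decode(weights):
    """Brute-force 'matching' on the chain: either pair the two defects
    along the path between them, or send each to its nearest boundary.
    Returns the set of edges in the cheaper correction."""
    a, b = defects
    pair_edges = list(range(a, b))                                 # path a..b
    split_edges = list(range(0, a)) + list(range(b, len(weights))) # a->0, b->4
    pair_cost = sum(weights[i] for i in pair_edges)
    split_cost = sum(weights[i] for i in split_edges)
    return set(pair_edges) if pair_cost < split_cost else set(split_edges)

w = [llr(p) for p in edge_p]
first_pass = decode(w)        # both defects go to the boundary: {0, 3}

# Second pass: assume the first-pass edges occurred and reweight
# correlated edges with their conditional probabilities.
w2 = list(w)
if 0 in first_pass:
    w2[2] = llr(p2_given_0)   # edge 2 becomes far cheaper
second_pass = decode(w2)      # defects now pair up through edge 2: {1, 2}

print(first_pass, second_pass)
```

Here the cheap reweighted edge reroutes the correction entirely, which is the effect the two-pass decoder exploits on real matching graphs.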
