**Update on the performance**
The code seems to be bottlenecked by the multiplication of `QubitOperator` objects. I tried several things, like using `MultiformOperator`, the https://github.com/IntelLabs/mat2qubit package and several other options.
The very last thing I tried is splitting the multiplication with a divide-and-conquer strategy.
```python
# Assumed import path; QubitOperator here is the operator class used in the project
# (openfermion's QubitOperator behaves the same way).
from tangelo.toolboxes.operators import QubitOperator


def element_to_qubitop(n_qubits, i, j, coeff=1.):
    """Map the matrix element coeff * |i><j| to a QubitOperator on n_qubits qubits."""
    # Must add 2 to the padding because of the "0b" prefix.
    bin_i = format(i, f"#0{n_qubits+2}b")
    bin_j = format(j, f"#0{n_qubits+2}b")

    qu_ops = [QubitOperator("", coeff)]
    # Walk through the bit strings from the least- to the most-significant qubit.
    for qubit, (bi, bj) in enumerate(zip(bin_i[2:][::-1], bin_j[2:][::-1])):
        if bi == "0" and bj == "0":
            qu_ops += [0.5 + QubitOperator(f"Z{qubit}", 0.5)]
        elif bi == "0" and bj == "1":
            qu_ops += [QubitOperator(f"X{qubit}", 0.5) + QubitOperator(f"Y{qubit}", 0.5j)]
        elif bi == "1" and bj == "0":
            qu_ops += [QubitOperator(f"X{qubit}", 0.5) + QubitOperator(f"Y{qubit}", -0.5j)]
        # The remaining case is 11.
        else:
            qu_ops += [0.5 + QubitOperator(f"Z{qubit}", -0.5)]

    qu_op = multiply_ops(qu_ops)
    return qu_op


def multiply_ops(qu_ops):
    """Multiply a list of QubitOperators with a balanced divide-and-conquer split."""
    if len(qu_ops) == 2:
        return qu_ops[0] * qu_ops[1]
    elif len(qu_ops) == 1:
        return qu_ops[0]
    else:
        return multiply_ops(qu_ops[:len(qu_ops)//2]) * multiply_ops(qu_ops[len(qu_ops)//2:])
```
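For context, `element_to_qubitop` builds the Pauli decomposition of a single matrix element `coeff * |i><j|`. A minimal sanity check (not part of the original post) on one qubit:

```python
# |0><1| on a single qubit should decompose as 0.5 X + 0.5j Y.
op = element_to_qubitop(1, 0, 1)
print(op)  # expected: 0.5 [X0] + 0.5j [Y0]
```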
However, this code does not speed things up; in fact, it is slightly slower according to my manual tests. This suggests that multiplying a big `QubitOperator` by a small one is faster than multiplying two medium-sized ones.
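For reference, a rough timing sketch like the one below (not from the original post) can be used to reproduce the comparison between left-to-right accumulation and the divide-and-conquer split; the operator list, number of qubits and repetition count are arbitrary choices.

```python
from functools import reduce
from timeit import timeit

n_qubits = 10
ops = [0.5 + QubitOperator(f"Z{q}", 0.5) for q in range(n_qubits)]

def sequential():
    # Left-to-right accumulation: one big operator times one small factor at each step.
    return reduce(lambda acc, op: acc * op, ops)

def balanced():
    # Balanced divide-and-conquer split: two medium-sized operators at each step.
    return multiply_ops(ops)

print("sequential:", timeit(sequential, number=10))
print("balanced:  ", timeit(balanced, number=10))
```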
The next step is to try leveraging a faster language (like Julia or C). I have already begun working on a Julia implementation.
Originally posted by @AlexandreF-1qbit in #286 (comment)