Pytorch advanced examples #1427

Closed · wants to merge 6 commits
54 changes: 45 additions & 9 deletions doc/source/code-examples/advancedexamples.rst
@@ -289,12 +289,13 @@ The following gates support parameter setting:
* :class:`qibo.gates.fSim`: Accepts a tuple of two parameters ``(theta, phi)``.
* :class:`qibo.gates.GeneralizedfSim`: Accepts a tuple of two parameters
``(unitary, phi)``. Here ``unitary`` should be a unitary matrix given as an
array or ``tf.Tensor`` of shape ``(2, 2)``.
array or ``tf.Tensor`` of shape ``(2, 2)``. A ``torch.Tensor`` is required when using the PyTorch backend.
* :class:`qibo.gates.Unitary`: Accepts a single ``unitary`` parameter. This
should be an array or ``tf.Tensor`` of shape ``(2, 2)``.
should be an array or ``tf.Tensor`` of shape ``(2, 2)``. A ``torch.Tensor`` is required when using the PyTorch backend.

Note that a ``np.ndarray`` or a ``tf.Tensor`` may also be used in the place of
a flat list. Using :meth:`qibo.models.circuit.Circuit.set_parameters` is more
a flat list (a ``torch.Tensor`` is required when using the PyTorch backend).
Using :meth:`qibo.models.circuit.Circuit.set_parameters` is more
efficient than recreating a new circuit with new parameter values. The inverse
method :meth:`qibo.models.circuit.Circuit.get_parameters` is also available
and returns a list, dictionary or flat list with the current parameter values
@@ -551,9 +552,9 @@ Here is a simple example using the Heisenberg XXZ model Hamiltonian:
For more information on the available options of the ``vqe.minimize`` call we
refer to the :ref:`Optimizers <Optimizers>` section of the documentation.
Note that if the Stochastic Gradient Descent optimizer is used then the user
has to use a backend based on tensorflow primitives and not the default custom
has to use a backend based on TensorFlow or PyTorch primitives and not the default custom
backend, as custom operators currently do not support automatic differentiation.
To switch the backend one can do ``qibo.set_backend("tensorflow")``.
To switch the backend one can do ``qibo.set_backend("tensorflow")`` or ``qibo.set_backend("pytorch")``.
Check the :ref:`How to use automatic differentiation? <autodiff-example>`
section for more details.
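The role of a differentiable backend can be illustrated without qibo at all: a gradient-based optimizer needs ``d(loss)/d(theta)`` through every gate, which is exactly what custom (non-autodiff) operators cannot provide. The sketch below is a plain-numpy illustration only (standard RY matrix convention, with a finite-difference gradient standing in for automatic differentiation; none of it is qibo API): it optimizes a single rotation angle so that ``RY(theta)|0>`` matches the ``|+>`` state.

```python
import numpy as np

def ry(theta):
    """Standard RY rotation matrix acting on a single qubit."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def loss(theta):
    """Infidelity of RY(theta)|0> with respect to the |+> state."""
    target = np.array([1.0, 1.0]) / np.sqrt(2)
    state = ry(theta) @ np.array([1.0, 0.0])
    return 1 - abs(target @ state)

# Finite-difference gradient descent: this manual step is what
# automatic differentiation replaces in the TensorFlow/PyTorch backends.
theta, lr, eps = 0.1, 0.5, 1e-6
for _ in range(200):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

# theta converges towards pi/2, where RY(pi/2)|0> = |+>
```

Replacing the finite-difference step with ``tape.gradient`` or ``loss.backward()`` is precisely what switching to a TensorFlow or PyTorch backend enables.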

@@ -695,12 +696,13 @@ the model. For example the previous example would have to be modified as:
How to use automatic differentiation?
-------------------------------------

The parameters of variational circuits can be optimized using the TensorFlow
or PyTorch frameworks.

As a deep learning framework, Tensorflow supports
`automatic differentiation <https://www.tensorflow.org/tutorials/customization/autodiff>`_.
This can be used to optimize the parameters of variational circuits. For example
the following script optimizes the parameters of two rotations so that the circuit
output matches a target state using the fidelity as the corresponding loss
function.
The following script optimizes the parameters of two rotations so that the
circuit output matches a target state using the fidelity as the corresponding loss function.

Note that, as in the following example, the rotation angles have to assume real values
to ensure the rotational gates are representing unitary operators.
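This requirement can be verified directly. With the standard RX matrix convention (used here purely as an illustration, independently of qibo's internals), a real angle yields a unitary matrix while a complex angle does not:

```python
import numpy as np

def rx(theta):
    """Standard RX rotation; unitary only when ``theta`` is real."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

identity = np.eye(2)

# Real angle: RX(theta) @ RX(theta)^dagger == I
real_gate = rx(0.7)
print(np.allclose(real_gate @ real_gate.conj().T, identity))  # True

# Complex angle: the matrix is no longer unitary
complex_gate = rx(0.7 + 0.5j)
print(np.allclose(complex_gate @ complex_gate.conj().T, identity))  # False
```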
@@ -777,6 +779,40 @@ that is supported by Tensorflow, such as defining
and using the `Sequential model API <https://www.tensorflow.org/api_docs/python/tf/keras/Sequential>`_
to train them.

Similarly, PyTorch supports `automatic differentiation <https://pytorch.org/tutorials/beginner/basics/autogradqs_tutorial.html>`_.
The following script optimizes the parameters of the variational circuit of the first example using the PyTorch framework.

.. code-block:: python

    import qibo
    qibo.set_backend("pytorch")
    import torch
    from qibo import gates, models

    # Optimization parameters
    nepochs = 1000
    target_state = torch.ones(4, dtype=torch.complex128) / 2.0

    # Define circuit ansatz with trainable rotation angles
    params = torch.rand(2, dtype=torch.float64, requires_grad=True)
    c = models.Circuit(2)
    c.add(gates.RX(0, params[0]))
    c.add(gates.RY(1, params[1]))

    optimizer = torch.optim.Adam([params])

    for _ in range(nepochs):
        optimizer.zero_grad()
        c.set_parameters(params)
        final_state = c().state()
        fidelity = torch.abs(torch.sum(torch.conj(target_state) * final_state))
        loss = 1 - fidelity
        loss.backward()
        optimizer.step()


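As a cross-check of the quantity this loop minimizes, the fidelity can also be evaluated in plain numpy from the tensor-product structure of the circuit (assuming the standard RX/RY matrix conventions; the ``fidelity`` helper below is illustrative, not qibo API):

```python
import numpy as np

def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def fidelity(a, b):
    """|<target|psi>| for psi = (RX(a) x RY(b)) |00>."""
    zero = np.array([1.0, 0.0], dtype=complex)
    psi = np.kron(rx(a) @ zero, ry(b) @ zero)
    target = np.ones(4, dtype=complex) / 2.0
    return abs(np.vdot(target, psi))

# The overlap of RX(a)|0> with |+> is exp(-i a / 2) / sqrt(2), so the
# fidelity magnitude is independent of a and peaks at b = pi / 2
print(round(fidelity(0.3, np.pi / 2), 4))  # 0.7071
print(round(fidelity(1.9, np.pi / 2), 4))  # 0.7071
```

Under these conventions the best reachable fidelity is ``1/sqrt(2)``, so the loss in the loop above plateaus near ``1 - 1/sqrt(2)``.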
.. _noisy-example:

38 changes: 38 additions & 0 deletions doc/source/code-examples/test.py
@@ -0,0 +1,38 @@
import qibo

qibo.set_backend("pytorch")
import torch

from qibo import gates, models

torch.autograd.set_detect_anomaly(True)

# Optimization parameters
nepochs = 1
optimizer = torch.optim.Adam
target_state = torch.ones(4, dtype=torch.complex128) / 2.0

# Define circuit ansatz
params = torch.rand(2, dtype=torch.float64, requires_grad=True)
print(params)
optimizer = optimizer([params])
c = models.Circuit(2)
c.add(gates.RX(0, params[0]))
c.add(gates.RY(1, params[1]))
gate = gates.RY(0, params[1])

print("Gate", gate.matrix())
# ``.grad`` is only populated on leaf tensors after ``backward()``;
# on the intermediate result of ``torch.norm`` it prints ``None``.
print(torch.norm(gate.matrix()).grad)

# for _ in range(nepochs):
# optimizer.zero_grad()
# c.set_parameters(params)
# final_state = c().state()
# print("state", final_state)
# fidelity = torch.abs(torch.sum(torch.conj(target_state) * final_state))
# loss = 1 - fidelity
# loss.backward()
# optimizer.step()
# print("state", final_state)
# print("params", params)
# print("loss", loss.grad)
5 changes: 3 additions & 2 deletions src/qibo/backends/pytorch.py
@@ -1,5 +1,7 @@
"""PyTorch backend."""

from typing import Optional

import numpy as np

from qibo import __version__
@@ -85,7 +87,7 @@ def cast(
x,
dtype=None,
copy: bool = False,
requires_grad: bool = None,
requires_grad: Optional[bool] = None,
):
"""Casts input as a Torch tensor of the specified dtype.

@@ -117,7 +119,6 @@ def cast(
# check if dtype is an integer to remove gradients
if dtype in [self.np.int32, self.np.int64, self.np.int8, self.np.int16]:
requires_grad = False

if isinstance(x, self.np.Tensor):
x = x.to(dtype)
elif isinstance(x, list) and all(isinstance(row, self.np.Tensor) for row in x):
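The switch to ``Optional[bool]`` is the correct annotation because ``None`` serves as a meaningful third state ("infer the setting from the data"), distinct from an explicit ``True`` or ``False``. A hypothetical standalone sketch of that convention (not the actual backend code; the helper name and dtype handling are invented for illustration):

```python
from typing import Optional

def resolve_requires_grad(dtype: str, requires_grad: Optional[bool] = None) -> bool:
    """Three-valued convention: None = infer, True/False = forced by caller."""
    if dtype.startswith("int"):
        # Integer tensors can never carry gradients, overriding the caller
        return False
    if requires_grad is None:
        # Unspecified: default to no gradient tracking
        return False
    return requires_grad

print(resolve_requires_grad("int32", True))    # False
print(resolve_requires_grad("float64", None))  # False
print(resolve_requires_grad("float64", True))  # True
```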