Merge branch 'master' of https://github.com/mcpeixoto/ReinforcementQML
Showing 1 changed file with 13 additions and 0 deletions.
@@ -0,0 +1,13 @@

# Quantum Reinforcement Learning to Solve Cart Pole Environment

Developed for the Quantum Data Science course, University of Minho

Master's in Physics Engineering - Information Physics Branch - 2022/23

## Abstract
We conducted a comprehensive investigation into the application of quantum reinforcement learning for solving the Cart Pole environment, comparing it with a classical model based on deep neural networks. Our study explored various quantum models, examining the impact of different entanglement layer configurations and the utilization of data re-uploading. Additionally, we varied the number of layers in the deep neural network.
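To give a concrete feel for the kind of quantum model compared here, the sketch below builds a variational circuit that re-encodes the CartPole state in every layer (data re-uploading) and uses a ring of CNOTs as one possible entanglement layer configuration. It is a minimal illustration assuming a PennyLane-style implementation, one qubit per observation dimension, and Pauli-Z expectations as action-value proxies; the repository's actual framework, circuit layout, and hyperparameters may differ.

```python
# Minimal sketch of a data re-uploading variational circuit for CartPole
# (PennyLane is an assumption, not necessarily the repository's framework).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4   # one qubit per CartPole observation dimension (assumption)
n_layers = 3   # number of variational layers to compare

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def q_values(state, weights):
    """Return two expectation values used as Q-value proxies for the two actions."""
    for layer in range(n_layers):
        # Data re-uploading: encode the (scaled) state again in every layer.
        for wire in range(n_qubits):
            qml.RX(state[wire], wires=wire)
        # Trainable single-qubit rotations.
        for wire in range(n_qubits):
            qml.Rot(*weights[layer, wire], wires=wire)
        # Entangling layer: a ring of CNOTs (one of several possible configurations).
        for wire in range(n_qubits):
            qml.CNOT(wires=[wire, (wire + 1) % n_qubits])
    return [qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))]

weights = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits, 3), requires_grad=True)
state = np.array([0.02, -0.01, 0.03, 0.01])  # example CartPole observation
print(q_values(state, weights))
```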
Our findings indicate that, for models with fewer than four layers, the classical model performs comparably to the quantum model. However, as the number of layers increases, the quantum model outperforms the classical one. Remarkably, the quantum model exhibited the best performance during both the training and testing stages.
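For the classical side of the comparison, the sketch below shows how a fully connected Q-network with a configurable number of hidden layers might be built, mirroring the layer-count variation described above. The PyTorch framework, hidden width, and layer sizes are illustrative assumptions rather than the repository's exact architecture.

```python
# Minimal sketch of a configurable-depth Q-network for CartPole
# (PyTorch and the layer sizes are assumptions for illustration only).
import torch
import torch.nn as nn

def make_q_network(n_layers: int, hidden: int = 64) -> nn.Sequential:
    """Build an MLP mapping the 4-dim CartPole state to 2 action values."""
    layers, in_dim = [], 4
    for _ in range(n_layers):
        layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
        in_dim = hidden
    layers.append(nn.Linear(in_dim, 2))  # one output per action
    return nn.Sequential(*layers)

net = make_q_network(n_layers=3)
q_values = net(torch.zeros(1, 4))  # forward pass on a dummy state
print(q_values.shape)              # torch.Size([1, 2])
```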
Overall, our study highlights the advantages of employing quantum reinforcement learning for the Cart Pole environment, showcasing the superiority of certain quantum models over classical counterparts.