This repository was archived by the owner on Jun 8, 2020. It is now read-only.

Commit c48d195: Update README.md (parent: 53051f2)

1 file changed: 1 addition, 1 deletion


README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -22,7 +22,7 @@ Implementations of important policy gradient algorithms in deep reinforcement learning
 
 - Soft Actor Critic (SAC)
 
-  - Papaer (`sac2018.py`): ["Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor" (Haarnoja et al., 2018)](https://arxiv.org/abs/1801.01290)
+  - Paper (`sac2018.py`): ["Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor" (Haarnoja et al., 2018)](https://arxiv.org/abs/1801.01290)
   - Paper (`sac2019.py`): ["Soft Actor-Critic Algorithms and Applications" (Haarnoja et al., 2019)](https://arxiv.org/abs/1812.05905)
   - Algorithm in 2018 paper uses value network, double Q networks, and Gaussian policy. Algorithm in 2019 paper uses double Q networks and Gaussian policy, and adds automatic entropy tuning.
   - TODO: SAC for discrete action space
```
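As context for the diff above: the automatic entropy tuning that the 2019 paper adds adjusts the temperature alpha by minimizing J(alpha) = E[-alpha * (log pi(a|s) + H_target)] against a fixed target entropy H_target. Below is a minimal PyTorch-style sketch of that single update step, assuming a continuous Gaussian policy; the names (`log_alpha`, `target_entropy`, `log_prob`) and the hyperparameters are illustrative assumptions, not taken from this repository's `sac2019.py`.

```python
import torch

# Illustrative sketch of the automatic temperature (entropy) tuning step
# from the 2019 SAC paper; names and values are assumptions, not this
# repository's code.

action_dim = 2
# Common heuristic: target entropy = -dim(action space).
target_entropy = -float(action_dim)

# Optimize alpha in log space so the temperature stays positive.
log_alpha = torch.zeros(1, requires_grad=True)
alpha_optimizer = torch.optim.Adam([log_alpha], lr=3e-4)

# log_prob would come from actions sampled from the Gaussian policy;
# a random placeholder batch stands in for it here.
log_prob = torch.randn(64, 1)

# Temperature loss: J(alpha) = E[-alpha * (log pi(a|s) + target_entropy)].
# The policy term is detached so this loss updates only alpha.
alpha_loss = -(log_alpha.exp() * (log_prob + target_entropy).detach()).mean()

alpha_optimizer.zero_grad()
alpha_loss.backward()
alpha_optimizer.step()

# The updated temperature scales the entropy term in the actor/critic losses.
alpha = log_alpha.exp().detach()
```

Optimizing in log space keeps alpha positive without explicit constraints; in the 2018 algorithm, alpha would instead be a fixed hyperparameter.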
