
# Asynchronous Methods for Deep Reinforcement Learning

(Version 1.0, last updated: 2016.12.12)

### 1. Introduction

This is a TensorFlow implementation of 'Asynchronous Methods for Deep Reinforcement Learning' (https://arxiv.org/abs/1602.01783).

You can also check out the batch version of A3C: https://github.com/gliese581gg/batch-A3C_tensorflow
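
For reference, below is a minimal sketch of the A3C objective from the paper: n-step returns, a policy-gradient loss scaled by the advantage, a value regression loss, and an entropy bonus. It is written against the TensorFlow 1.x API for illustration only; the tensor names, network heads, and loss coefficients are assumptions, not this repository's actual code (see Net_A3C.py for that).

```python
# Minimal A3C loss sketch (TF 1.x style). Names and coefficients are
# illustrative, not taken from this repo's implementation.
import tensorflow as tf

n_actions = 6  # hypothetical, e.g. the Pong action set

features = tf.placeholder(tf.float32, [None, 256])  # shared trunk features
actions  = tf.placeholder(tf.int32,   [None])       # actions taken
returns  = tf.placeholder(tf.float32, [None])       # n-step returns R_t

logits = tf.layers.dense(features, n_actions)            # policy head
value  = tf.squeeze(tf.layers.dense(features, 1), axis=1)  # value head

policy     = tf.nn.softmax(logits)
log_policy = tf.nn.log_softmax(logits)
log_pi_a   = tf.reduce_sum(log_policy * tf.one_hot(actions, n_actions), axis=1)

advantage = returns - value
# The advantage is treated as a constant in the policy-gradient term.
policy_loss = -tf.reduce_sum(log_pi_a * tf.stop_gradient(advantage))
value_loss  = 0.5 * tf.reduce_sum(tf.square(advantage))
entropy     = -tf.reduce_sum(policy * log_policy)  # encourages exploration

total_loss = policy_loss + 0.5 * value_loss - 0.01 * entropy


def nstep_returns(rewards, bootstrap_value, gamma=0.99):
    """R_t = r_t + gamma * R_{t+1}, bootstrapped from V of the last state."""
    R = bootstrap_value
    out = []
    for r in reversed(rewards):
        R = r + gamma * R
        out.append(R)
    return out[::-1]
```

In the asynchronous setup, each worker thread computes gradients of this loss on its own short rollout and applies them to the shared parameters without locking.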

### 2. Usage

```
python run.py (args)
```

where args:

- `-log (log directory name)` : the TensorBoard event file will be created in `logs/(your_input)/` (default: `A3C`)
- `-net (A3C or AnDQN)` : network type (A3C or asynchronous n-step DQN)
- `-ckpt (ckpt file path)` : checkpoint file (including path)
- `-LSTM (True or False)` : whether or not to use an LSTM layer

Usage for TensorBoard:

```
tensorboard --logdir (your_log_directory) --port (your_port_number)
```

The URL for TensorBoard will appear in the terminal. :)
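
For example, a plausible invocation for training LSTM A3C and monitoring it in TensorBoard (the flag values here are illustrative, not defaults):

```
python run.py -net A3C -LSTM True -log pong_lstm
tensorboard --logdir logs/pong_lstm --port 6006
```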

### 3. Requirements

### 4. Test results on 'Pong'

Result for Feed-Forward A3C (took about 12 hours, 20 million frames)


Result for LSTM A3C (took about 20 hours, 28 million frames)

AnDQN has not been tested yet!

### 5. Changelog

- 2016.12.12 : First upload!
- 2016.12.19 : Bug fix in Net_A3C.py (LSTM state bug)
