Windows
This page describes the steps to get Reaver working on a fresh Windows 10 install; no additional software is necessary.
Note that the main difference between the Linux and Windows builds is the headless functionality of the StarCraft II Linux client, meaning the full game can be launched without any graphical interface. This significantly speeds up the game, especially in the parallelized case.
To install Reaver on Windows, follow these steps one by one:
- ensure you have the latest StarCraft II and NVIDIA drivers installed
- install Anaconda (not strictly necessary, but it will make the steps below easier)
  - when prompted, check the Add Anaconda to my PATH environment variable option
- open a command prompt as Administrator (press Windows -> type cmd -> right click -> Run as Administrator)
- create a new environment for Reaver:
conda create -n reaver-env python=3.6
- activate the environment:
activate reaver-env
- install TensorFlow + CUDA toolkit:
conda install tensorflow-gpu
- install PySC2 from source:
pip install https://github.com/deepmind/pysc2/archive/master.zip
- install Reaver:
pip install reaver
- open a command prompt as Administrator (press Windows -> type cmd -> right click -> Run as Administrator)
- navigate to the base folder for your experiments (e.g. cd C:\Users\inoryy)
- activate the environment:
activate reaver-env
- run reaver:
python -m reaver.run --env MoveToBeacon --envs 4
After the last command you should see four StarCraft II games open, followed by a training log line for the first update:
| T 2 | Fr 512 | Ep 0 | Up 1 | RMe 0.00 | RSd 0.00 | RMa 0.00 | RMi 0.00 | Pl -0.086 | Vl 0.043 | El 0.0156 | Gr 84.766 | Fps 256 |
Subsequent training logs will appear at the interval set by the log_freq flag, which defaults to 100. To see logging information on every update, run:
python -m reaver.run --env MoveToBeacon --envs 4 --log_freq 1
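The training log is a line of pipe-separated key/value fields. If you want to track these metrics programmatically (for example, to plot the reward curve over time), a small parser can turn each line into a dict. This is a hypothetical helper sketched here for convenience, not part of Reaver's API; the field names (T, Fr, RMe, ...) are taken from the sample line above:

```python
def parse_log_line(line):
    """Parse a pipe-separated Reaver training log line into a dict.

    Each field looks like "RMe 0.00"; values are converted to float
    where possible, otherwise kept as strings.
    """
    fields = {}
    for chunk in line.strip().strip("|").split("|"):
        chunk = chunk.strip()
        if not chunk:
            continue
        key, value = chunk.split(None, 1)  # split on the first whitespace run
        try:
            fields[key] = float(value)
        except ValueError:
            fields[key] = value
    return fields

sample = ("| T 2 | Fr 512 | Ep 0 | Up 1 | RMe 0.00 | RSd 0.00 | RMa 0.00 "
          "| RMi 0.00 | Pl -0.086 | Vl 0.043 | El 0.0156 | Gr 84.766 | Fps 256 |")
metrics = parse_log_line(sample)
print(metrics["RMe"], metrics["Fps"])  # 0.0 256.0
```

You could feed the stdout of the reaver.run command through such a parser to log RMe per update to a CSV file or a plotting tool.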
During training you should see your agent experiment with various commands: left/right clicking, queuing commands, and so on. It might also seem that your agent is doing nothing at all; that is fine too, unless it lasts for more than 200-300 updates. In that case, the agent is most likely trying out less noticeable commands (e.g. control group assignments, or using Stop, Halt, and so on).
With four games running in parallel, you should see your agent converge to baseline results (RMe = 25-26) in about 30 minutes, depending on your hardware.
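For a rough sense of what that 30-minute figure means in terms of training volume, you can plug in the throughput numbers from the sample log line above. This is only a back-of-the-envelope sketch; the actual Fps depends heavily on your hardware:

```python
fps = 256                # Fps field from the sample log line above
frames_per_update = 512  # Fr (frames consumed per update) from the sample line
minutes = 30

total_frames = fps * 60 * minutes            # frames processed in ~30 minutes
updates = total_frames // frames_per_update  # corresponding number of updates

print(total_frames, updates)  # 460800 900
```

So convergence in this setup corresponds to roughly half a million game frames, or on the order of a thousand updates.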