Small improvements to using ray on large cluster documentation. (ray-…
robertnishihara authored and pcmoritz committed May 19, 2017
1 parent b62693c commit 3d2f1b1
Showing 1 changed file with 4 additions and 2 deletions.
6 changes: 4 additions & 2 deletions doc/source/using-ray-on-a-large-cluster.rst
@@ -36,7 +36,7 @@ Additional assumptions:
* The head node will run Redis and the global scheduler.
* The head node has ssh access to all other nodes.
* All nodes are accessible via ssh keys
-* Ray is checked out on each node at the location `$HOME/ray`.
+* Ray is checked out on each node at the location ``$HOME/ray``.

**Note:** The commands below will probably need to be customized for your
specific setup.
@@ -248,7 +248,9 @@ Next run the upgrade script on the worker nodes.
parallel-ssh -h workers.txt -P -t 0 -I < upgrade.sh
-Note here that we use the ``-t 0`` option to set the timeout to infinite.
+Note here that we use the ``-t 0`` option to set the timeout to infinite. You
+may also want to use the ``-p`` flag, which controls the degree of parallelism
+used by parallel ssh.

It is probably a good idea to ssh to one of the other nodes and verify that the
upgrade script ran as expected.
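
As a rough sketch of how the newly mentioned ``-p`` flag fits together with the
command shown above (the parallelism value 16 is an arbitrary illustration, not
part of the commit), the upgrade step might be run as:

    # Limit parallel-ssh to 16 concurrent ssh connections (-p), keep an
    # infinite timeout (-t 0), print output (-P), and pipe upgrade.sh to
    # each worker's shell via stdin (-I).
    parallel-ssh -h workers.txt -P -t 0 -p 16 -I < upgrade.sh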
