
Commit be4a37b

robertnishihara authored and pcmoritz committed
Various cleanups: remove start_ray_local from ray.init, remove unused code, fix "pip install numbuf". (ray-project#193)
* Remove start_ray_local from ray.init and change default number of workers to 10.
* Remove alexnet example.
* Move array methods to experimental.
* Remove TRPO example.
* Remove old files.
* Compile plasma when we build numbuf.
* Address comments.
1 parent b9d6135 commit be4a37b

32 files changed: +90 −892 lines changed

README.md

+2-3
@@ -17,8 +17,8 @@ machines).
 import ray
 import numpy as np
 
-# Start a scheduler, an object store, and some workers.
-ray.init(start_ray_local=True, num_workers=10)
+# Start Ray with some workers.
+ray.init(num_workers=10)
 
 # Define a remote function for estimating pi.
 @ray.remote
@@ -63,4 +63,3 @@ estimate of pi (waiting until the computation has finished if necessary).
 - [Hyperparameter Optimization](examples/hyperopt/README.md)
 - [Batch L-BFGS](examples/lbfgs/README.md)
 - [Learning to Play Pong](examples/rl_pong/README.md)
-- [Training AlexNet](examples/alexnet/README.md)
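The README hunk above ends just before the body of the pi example. For context, a minimal Monte Carlo sketch along those lines could look like the following; the function body, sample count, and task count are illustrative assumptions, not the repository's actual example:

```python
import ray
import numpy as np

# Start Ray with some workers.
ray.init(num_workers=10)

# Hypothetical remote function for estimating pi by sampling points in the
# unit square and checking how many fall inside the quarter circle.
@ray.remote
def estimate_pi(num_samples):
    x = np.random.uniform(size=num_samples)
    y = np.random.uniform(size=num_samples)
    return 4 * np.mean(x ** 2 + y ** 2 < 1)

# Launch ten sampling tasks in parallel and average their estimates.
estimates = ray.get([estimate_pi.remote(100000) for _ in range(10)])
print(sum(estimates) / 10)
```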

build-webui.sh

-35
This file was deleted.

doc/reusable-variables.md

+1-1
@@ -31,7 +31,7 @@ Python wrapper for an Atari simulator.
 import gym
 import ray
 
-ray.init(start_ray_local=True, num_workers=5)
+ray.init(num_workers=10)
 
 # Define a function to create the gym environment.
 def env_initializer():
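The hunk stops right as the environment initializer is introduced. A minimal sketch of such an initializer, assuming a Pong Atari environment (the specific environment is not shown in this diff):

```python
import gym

# Create the Atari gym environment; "Pong-v0" is only an illustrative choice.
def env_initializer():
    return gym.make("Pong-v0")
```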

doc/serialization.md

+1-1
@@ -76,7 +76,7 @@ This can be addressed by calling `ray.register_class(Foo)`.
 ```python
 import ray
 
-ray.init(start_ray_local=True, num_workers=1)
+ray.init(num_workers=10)
 
 # Define a custom class.
 class Foo(object):
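The hunk cuts off at the class definition. A rough sketch of how the registered class might then be passed through a remote function, assuming a simple constructor and accessor that are not part of the diff:

```python
import ray

ray.init(num_workers=10)

# Define a custom class (illustrative body).
class Foo(object):
    def __init__(self, value):
        self.value = value

# Tell Ray how to serialize Foo instances so they can cross process boundaries.
ray.register_class(Foo)

@ray.remote
def get_value(foo):
    return foo.value

# The Foo instance is serialized, shipped to a worker, and deserialized there.
print(ray.get(get_value.remote(Foo(42))))  # 42
```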

doc/services-api.rst

-5
This file was deleted.

doc/tutorial.md

+24-15
@@ -7,28 +7,37 @@ To use Ray, you need to understand the following:
 
 ## Overview
 
-Ray is a distributed extension of Python. When using Ray, several processes are
-involved.
-
-- A **scheduler**: The scheduler assigns tasks to workers. It is its own
-process.
-- Multiple **workers**: Workers execute tasks and store the results in object
-stores. Each worker is a separate process.
-- One **object store** per node: The object store enables the sharing of Python
-objects between worker processes so each worker does not have to have a separate
-copy.
-- A **driver**: The driver is the Python process that the user controls and
-which submits tasks to the scheduler. For example, if the user is running a
-script or using a Python shell, then the driver is the process that runs the
-script or the shell.
+Ray is a Python-based distributed execution engine. It can be used on a single
+machine to achieve effective multiprocessing, and it can be used on a cluster
+for large computations.
+
+When using Ray, several processes are involved.
+
+- Multiple **worker** processes execute tasks and store results in object stores.
+Each worker is a separate process.
+- One **object store** per node stores immutable objects in shared memory and
+allows workers to efficiently share objects on the same node with minimal
+copying and deserialization.
+- One **local scheduler** per node assigns tasks to workers on the same node.
+- A **global scheduler** receives tasks from local schedulers and assigns them
+to other local schedulers.
+- A **driver** is the Python process that the user controls. For example, if the
+user is running a script or using a Python shell, then the driver is the Python
+process that runs the script or the shell. A driver is similar to a worker in
+that it can submit tasks to its local scheduler and get objects from the object
+store, but it is different in that the local scheduler will not assign tasks to
+the driver to be executed.
+- A **Redis server** maintains much of the system's state. For example, it keeps
+track of which objects live on which machines and of the task specifications. It
+can also be queried directly for debugging purposes.
 
 ## Starting Ray
 
 To start Ray, start Python, and run the following commands.
 
 ```python
 import ray
-ray.init(start_ray_local=True, num_workers=10)
+ray.init(num_workers=10)
 ```
 
 That command starts a scheduler, one object store, and ten workers. Each of
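To make the revised overview concrete, here is a minimal sketch of the driver-side flow it describes; the function and arguments are illustrative, not taken from the tutorial:

```python
import ray

# Starting Ray launches the object store, the schedulers, and ten workers.
ray.init(num_workers=10)

# The driver submits this task to its local scheduler, which assigns it to a
# worker (possibly via the global scheduler on a cluster).
@ray.remote
def add(a, b):
    return a + b

# Submitting a task returns an object ID immediately; the worker writes the
# result to the object store when it finishes.
result_id = add.remote(1, 2)

# The driver fetches the result from the object store.
print(ray.get(result_id))  # 3
```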

doc/using-ray-with-tensorflow.md

+1-1
@@ -79,7 +79,7 @@ import tensorflow as tf
 import numpy as np
 import ray
 
-ray.init(start_ray_local=True, num_workers=5)
+ray.init(num_workers=5)
 
 BATCH_SIZE = 100
 NUM_BATCHES = 1

examples/alexnet/README.md

-108
This file was deleted.
