[docs] Move all /latest links to /master #11897

Merged: 6 commits, Nov 10, 2020
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.md
Original file line number Diff line number Diff line change
@@ -19,4 +19,4 @@ Please provide a script that can be run to reproduce the issue. The script shoul
If we cannot run your script, we cannot fix your issue.

- [ ] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/master/installation.html).
22 changes: 11 additions & 11 deletions README.rst
@@ -1,7 +1,7 @@
.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png

.. image:: https://readthedocs.org/projects/ray/badge/?version=latest
:target: http://docs.ray.io/en/latest/?badge=latest
.. image:: https://readthedocs.org/projects/ray/badge/?version=master
:target: http://docs.ray.io/en/master/?badge=master

.. image:: https://img.shields.io/badge/Ray-Join%20Slack-blue
:target: https://forms.gle/9TSdDYUgxYs8SA9e8
@@ -15,7 +15,7 @@ Ray is packaged with the following libraries for accelerating machine learning w

- `Tune`_: Scalable Hyperparameter Tuning
- `RLlib`_: Scalable Reinforcement Learning
- `RaySGD <https://docs.ray.io/en/latest/raysgd/raysgd.html>`__: Distributed Training Wrappers
- `RaySGD <https://docs.ray.io/en/master/raysgd/raysgd.html>`__: Distributed Training Wrappers
- `Ray Serve`_: Scalable and Programmable Serving

There are also many `community integrations <https://docs.ray.io/en/master/ray-libraries.html>`_ with Ray, including `Dask`_, `MARS`_, `Modin`_, `Horovod`_, `Hugging Face`_, `Scikit-learn`_, and others. Check out the `full list of Ray distributed libraries here <https://docs.ray.io/en/master/ray-libraries.html>`_.
@@ -78,7 +78,7 @@ Ray programs can run on a single machine, and can also seamlessly scale to large

``ray submit [CLUSTER.YAML] example.py --start``

Read more about `launching clusters <https://docs.ray.io/en/latest/cluster/index.html>`_.
Read more about `launching clusters <https://docs.ray.io/en/master/cluster/index.html>`_.
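
The ``[CLUSTER.YAML]`` argument above is an autoscaler config file. As a rough, hypothetical sketch (field names assumed from the cluster launcher documentation, values purely illustrative):

```yaml
# Illustrative sketch only; consult the cluster launcher docs for the real schema.
cluster_name: minimal
min_workers: 0
max_workers: 2
provider:
  type: aws            # assumed provider; other clouds use different fields
  region: us-west-2
auth:
  ssh_user: ubuntu     # assumed default user for the chosen machine image
```

``ray up`` and ``ray submit`` read this file to start the head node and attach workers.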

Tune Quick Start
----------------
@@ -140,10 +140,10 @@ If TensorBoard is installed, automatically visualize all trial results:

tensorboard --logdir ~/ray_results

.. _`Tune`: https://docs.ray.io/en/latest/tune.html
.. _`Population Based Training (PBT)`: https://docs.ray.io/en/latest/tune-schedulers.html#population-based-training-pbt
.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/latest/tune-schedulers.html#median-stopping-rule
.. _`HyperBand/ASHA`: https://docs.ray.io/en/latest/tune-schedulers.html#asynchronous-hyperband
.. _`Tune`: https://docs.ray.io/en/master/tune.html
.. _`Population Based Training (PBT)`: https://docs.ray.io/en/master/tune-schedulers.html#population-based-training-pbt
.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/master/tune-schedulers.html#median-stopping-rule
.. _`HyperBand/ASHA`: https://docs.ray.io/en/master/tune-schedulers.html#asynchronous-hyperband

RLlib Quick Start
-----------------
@@ -189,7 +189,7 @@ RLlib Quick Start
"num_workers": 4,
"env_config": {"corridor_length": 5}})

.. _`RLlib`: https://docs.ray.io/en/latest/rllib.html
.. _`RLlib`: https://docs.ray.io/en/master/rllib.html


Ray Serve Quick Start
@@ -264,7 +264,7 @@ This example serves a scikit-learn gradient boosting classifier.
# }


.. _`Ray Serve`: https://docs.ray.io/en/latest/serve/index.html
.. _`Ray Serve`: https://docs.ray.io/en/master/serve/index.html

More Information
----------------
@@ -282,7 +282,7 @@ More Information
- `Ray HotOS paper`_
- `Blog (old)`_

.. _`Documentation`: http://docs.ray.io/en/latest/index.html
.. _`Documentation`: http://docs.ray.io/en/master/index.html
.. _`Tutorial`: https://github.com/ray-project/tutorial
.. _`Blog (old)`: https://ray-project.github.io/
.. _`Blog`: https://medium.com/distributed-computing-with-ray
@@ -13,7 +13,7 @@ import { sum } from "../../../common/util";
import ActorStateRepr from "./ActorStateRepr";

const memoryDebuggingDocLink =
"https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory";
"https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory";

type ActorDatum = {
label: string;
2 changes: 1 addition & 1 deletion dashboard/client/src/pages/dashboard/tune/Tune.tsx
@@ -143,7 +143,7 @@ class Tune extends React.Component<
You can use this tab to monitor Tune jobs, their statuses,
hyperparameters, and more. For more information, read the
documentation{" "}
<a href="https://docs.ray.io/en/latest/ray-dashboard.html#tune">
<a href="https://docs.ray.io/en/master/ray-dashboard.html#tune">
here
</a>
.
2 changes: 1 addition & 1 deletion doc/examples/cython/ray-project/project.yaml
@@ -4,7 +4,7 @@ name: ray-example-cython

description: "Example of how to use Cython with ray"
tags: ["ray-example", "cython"]
documentation: https://docs.ray.io/en/latest/advanced.html#cython-code-in-ray
documentation: https://docs.ray.io/en/master/advanced.html#cython-code-in-ray

cluster:
config: ray-project/cluster.yaml
2 changes: 1 addition & 1 deletion doc/examples/lbfgs/ray-project/project.yaml
@@ -4,7 +4,7 @@ name: ray-example-lbfgs

description: "Parallelizing the L-BFGS algorithm in ray"
tags: ["ray-example", "optimization", "lbfgs"]
documentation: https://docs.ray.io/en/latest/auto_examples/plot_lbfgs.html
documentation: https://docs.ray.io/en/master/auto_examples/plot_lbfgs.html

cluster:
config: ray-project/cluster.yaml
2 changes: 1 addition & 1 deletion doc/examples/newsreader/ray-project/project.yaml
@@ -4,7 +4,7 @@ name: ray-example-newsreader

description: "A simple news reader example that uses ray actors to serve requests"
tags: ["ray-example", "flask", "rss", "newsreader"]
documentation: https://docs.ray.io/en/latest/auto_examples/plot_newsreader.html
documentation: https://docs.ray.io/en/master/auto_examples/plot_newsreader.html
Contributor comment: "we should just get rid of all of these ray-project/* things"


cluster:
config: ray-project/cluster.yaml
2 changes: 1 addition & 1 deletion doc/examples/overview.rst
@@ -90,7 +90,7 @@ Machine Learning Examples
Reinforcement Learning Examples
-------------------------------

These are simple examples that show you how to leverage Ray Core. For Ray's production-grade reinforcement learning library, see `RLlib <http://docs.ray.io/en/latest/rllib.html>`__.
These are simple examples that show you how to leverage Ray Core. For Ray's production-grade reinforcement learning library, see `RLlib <http://docs.ray.io/en/master/rllib.html>`__.

.. raw:: html

2 changes: 1 addition & 1 deletion doc/examples/plot_example-a3c.rst
@@ -13,7 +13,7 @@ View the `code for this example`_.

.. note::

For an overview of Ray's reinforcement learning library, see `RLlib <http://docs.ray.io/en/latest/rllib.html>`__.
For an overview of Ray's reinforcement learning library, see `RLlib <http://docs.ray.io/en/master/rllib.html>`__.

To run the application, first install **ray** and then some dependencies:

2 changes: 1 addition & 1 deletion doc/examples/plot_hyperparameter.py
@@ -16,7 +16,7 @@
hyperparameter tuning, use `Tune`_, a scalable hyperparameter
tuning library built using Ray's Actor API.

.. _`Tune`: https://docs.ray.io/en/latest/tune.html
.. _`Tune`: https://docs.ray.io/en/master/tune.html
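
Setting Ray and Tune aside, the loop that such a tuning library scales out — sample a configuration, score it, keep the best — can be sketched in plain Python. The objective below is a made-up stand-in, not a real training run:

```python
import random

def evaluate(config):
    # Stand-in objective: peaks at lr = 0.1; a real run would train a model.
    return -(config["lr"] - 0.1) ** 2

def random_search(num_trials, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = {"lr": rng.uniform(0.001, 1.0)}  # sample a configuration
        score = evaluate(config)                  # score it
        if score > best_score:                    # keep the best so far
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search(100)
print(best, score)
```

Tune replaces this serial loop with distributed trials, schedulers such as PBT and ASHA, and result logging.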

Setup: Dependencies
-------------------
2 changes: 1 addition & 1 deletion doc/examples/plot_streaming.rst
@@ -87,7 +87,7 @@ the top 10 words in these articles together with their word count:
Note that this example uses `distributed actor handles`_, which are still
considered experimental.

.. _`distributed actor handles`: http://docs.ray.io/en/latest/actors.html
.. _`distributed actor handles`: http://docs.ray.io/en/master/actors.html

There is a ``Mapper`` actor, which has a method ``get_range`` used to retrieve
word counts for words in a certain range:
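
That code is collapsed in this diff view; a plain-Python stand-in for the described actor (hypothetical names, no Ray dependency) might look like:

```python
from collections import defaultdict

class Mapper:
    """Plain-Python stand-in for the Ray actor described in the example."""

    def __init__(self, articles):
        self.counts = defaultdict(int)
        for article in articles:
            for word in article.split():
                self.counts[word.lower()] += 1

    def get_range(self, start, end):
        # Return counts for words whose first letter falls in [start, end).
        return {w: c for w, c in self.counts.items() if start <= w[0] < end}

m = Mapper(["the cat sat", "the dog barked"])
print(m.get_range("a", "e"))
```

In the real example, each ``Mapper`` would be a ``@ray.remote`` actor and ``get_range`` would be invoked via ``.remote(...)`` on an actor handle.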
2 changes: 1 addition & 1 deletion doc/site/_posts/2017-05-17-announcing-ray.markdown
@@ -8,7 +8,7 @@ date: 2017-05-20 14:00:00
This post announces Ray, a framework for efficiently running Python code on
clusters and large multi-core machines. The project is open source.
You can check out [the code](https://github.com/ray-project/ray) and
[the documentation](http://docs.ray.io/en/latest/?badge=latest).
[the documentation](http://docs.ray.io/en/master/?badge=latest).

Many AI algorithms are computationally intensive and exhibit complex
communication patterns. As a result, many researchers spend most of their
10 changes: 5 additions & 5 deletions doc/site/_posts/2017-09-30-ray-0.2-release.markdown
@@ -134,12 +134,12 @@ state of the actor. We are working on improving the speed of recovery by
enabling actor state to be restored from checkpoints. See [an overview of fault
tolerance in Ray][4].

[1]: http://docs.ray.io/en/latest/plasma-object-store.html
[2]: http://docs.ray.io/en/latest/webui.html
[3]: http://docs.ray.io/en/latest/rllib.html
[4]: http://docs.ray.io/en/latest/fault-tolerance.html
[1]: http://docs.ray.io/en/master/plasma-object-store.html
[2]: http://docs.ray.io/en/master/webui.html
[3]: http://docs.ray.io/en/master/rllib.html
[4]: http://docs.ray.io/en/master/fault-tolerance.html
[5]: https://github.com/apache/arrow
[6]: http://docs.ray.io/en/latest/example-a3c.html
[6]: http://docs.ray.io/en/master/example-a3c.html
[7]: https://github.com/openai/baselines
[8]: https://github.com/ray-project/ray/blob/b020e6bf1fb00d0745371d8674146d4a5b75d9f0/python/ray/rllib/test/tuned_examples.sh#L11
[9]: https://arrow.apache.org/docs/python/ipc.html#arbitrary-object-serialization
@@ -271,7 +271,7 @@ for i in range(len(test_objects)):
plot(*benchmark_object(test_objects[i]), titles[i], i)
```

[1]: http://docs.ray.io/en/latest/index.html
[1]: http://docs.ray.io/en/master/index.html
[2]: https://arrow.apache.org/
[3]: https://en.wikipedia.org/wiki/Serialization
[4]: https://github.com/cloudpipe/cloudpickle/
10 changes: 5 additions & 5 deletions doc/site/_posts/2017-11-30-ray-0.3-release.markdown
@@ -134,14 +134,14 @@ This feature is still considered experimental, but we've already found
distributed actor handles useful for implementing [**parameter server**][10] and
[**streaming MapReduce**][11] applications.

[1]: http://docs.ray.io/en/latest/actors.html#passing-around-actor-handles-experimental
[2]: http://docs.ray.io/en/latest/tune.html
[3]: http://docs.ray.io/en/latest/rllib.html
[1]: http://docs.ray.io/en/master/actors.html#passing-around-actor-handles-experimental
[2]: http://docs.ray.io/en/master/tune.html
[3]: http://docs.ray.io/en/master/rllib.html
[4]: https://research.google.com/pubs/pub46180.html
[5]: https://arxiv.org/abs/1603.06560
[6]: https://www.tensorflow.org/get_started/summaries_and_tensorboard
[7]: https://media.readthedocs.org/pdf/rllab/latest/rllab.pdf
[8]: https://en.wikipedia.org/wiki/Parallel_coordinates
[9]: https://github.com/ray-project/ray/tree/master/python/ray/tune
[10]: http://docs.ray.io/en/latest/example-parameter-server.html
[11]: http://docs.ray.io/en/latest/example-streaming.html
[10]: http://docs.ray.io/en/master/example-parameter-server.html
[11]: http://docs.ray.io/en/master/example-streaming.html
8 changes: 4 additions & 4 deletions doc/site/_posts/2018-03-27-ray-0.4-release.markdown
@@ -78,10 +78,10 @@ Training][9].

[1]: https://github.com/ray-project/ray
[2]: https://rise.cs.berkeley.edu/blog/pandas-on-ray/
[3]: http://docs.ray.io/en/latest/rllib.html
[4]: http://docs.ray.io/en/latest/tune.html
[3]: http://docs.ray.io/en/master/rllib.html
[4]: http://docs.ray.io/en/master/tune.html
[5]: https://rise.cs.berkeley.edu/blog/distributed-policy-optimizers-for-scalable-and-reproducible-deep-rl/
[6]: http://docs.ray.io/en/latest/resources.html
[6]: http://docs.ray.io/en/master/resources.html
[7]: https://pandas.pydata.org/
[8]: https://arxiv.org/abs/1803.00933
[9]: http://docs.ray.io/en/latest/pbt.html
[9]: http://docs.ray.io/en/master/pbt.html
6 changes: 3 additions & 3 deletions doc/site/_posts/2018-07-06-ray-0.5-release.markdown
@@ -76,8 +76,8 @@ Ray now supports Java thanks to contributions from [Ant Financial][4]:


[1]: https://github.com/ray-project/ray
[2]: http://docs.ray.io/en/latest/rllib.html
[3]: http://docs.ray.io/en/latest/tune.html
[2]: http://docs.ray.io/en/master/rllib.html
[3]: http://docs.ray.io/en/master/tune.html
[4]: https://www.antfin.com/
[5]: https://github.com/modin-project/modin
[6]: http://docs.ray.io/en/latest/autoscaling.html
[6]: http://docs.ray.io/en/master/autoscaling.html
@@ -321,12 +321,12 @@ Questions should be directed to *ray-dev@googlegroups.com*.


[1]: https://github.com/ray-project/ray
[2]: http://docs.ray.io/en/latest/resources.html
[2]: http://docs.ray.io/en/master/resources.html
[3]: http://www.sysml.cc/doc/206.pdf
[4]: http://docs.ray.io/en/latest/rllib.html
[5]: http://docs.ray.io/en/latest/tune.html
[6]: http://docs.ray.io/en/latest
[7]: http://docs.ray.io/en/latest/api.html
[4]: http://docs.ray.io/en/master/rllib.html
[5]: http://docs.ray.io/en/master/tune.html
[6]: http://docs.ray.io/en/master
[7]: http://docs.ray.io/en/master/api.html
[8]: https://github.com/modin-project/modin
[9]: https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html
[10]: https://ray-project.github.io/2017/08/08/plasma-in-memory-object-store.html
2 changes: 1 addition & 1 deletion doc/site/get_ray.html
@@ -25,7 +25,7 @@ <h1>Getting Started with Ray</h1>
</p>
<ul>
<li>Ray Project <a href="https://ray.io">web site</a></li>
<li><a href="https://docs.ray.io/en/latest/">Documentation</a></li>
<li><a href="https://docs.ray.io/en/master/">Documentation</a></li>
<li><a href="https://github.com/ray-project/">GitHub project</a></li>
<li><a href="https://github.com/ray-project/tutorial">Tutorials</a></li>
</ul>
2 changes: 1 addition & 1 deletion doc/site/index.html
@@ -33,6 +33,6 @@
</ul>

<p>
To get started, visit the Ray Project <a href="https://ray.io">web site</a>, <a href="https://docs.ray.io/en/latest/">documentation</a>, <a href="https://github.com/ray-project/">GitHub project</a>, or <a href="https://github.com/ray-project/tutorial">Tutorials</a>.
To get started, visit the Ray Project <a href="https://ray.io">web site</a>, <a href="https://docs.ray.io/en/master/">documentation</a>, <a href="https://github.com/ray-project/">GitHub project</a>, or <a href="https://github.com/ray-project/tutorial">Tutorials</a>.
</p>
</div>
2 changes: 1 addition & 1 deletion doc/source/cluster/cloud.rst
@@ -302,4 +302,4 @@ Now that you have a working understanding of the cluster launcher, check out:
Questions or Issues?
--------------------

.. include:: /_help.rst
.. include:: /_help.rst
2 changes: 1 addition & 1 deletion doc/source/cluster/index.rst
@@ -76,7 +76,7 @@ on each machine. To install Ray, follow the `installation instructions`_.

To configure the Ray cluster to run Java code, you need to add the ``--code-search-path`` option. See :ref:`code_search_path` for more details.

.. _`installation instructions`: http://docs.ray.io/en/latest/installation.html
.. _`installation instructions`: http://docs.ray.io/en/master/installation.html

Starting Ray on each machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4 changes: 0 additions & 4 deletions doc/source/conf.py
@@ -119,10 +119,6 @@ def __getattr__(cls, name):
versionwarning_admonition_type = "tip"

versionwarning_messages = {
"master": (
"This document is for the master branch. "
'Visit the <a href="/en/latest/">latest pip release documentation here</a>.'
),
"latest": (
"This document is for the latest pip release. "
'Visit the <a href="/en/master/">master branch documentation here</a>.'
2 changes: 1 addition & 1 deletion doc/source/debugging.rst
@@ -97,4 +97,4 @@ This will print any ``RAY_LOG(DEBUG)`` lines in the source code to the


.. _`issues`: https://github.com/ray-project/ray/issues
.. _`Temporary Files`: http://docs.ray.io/en/latest/tempfile.html
.. _`Temporary Files`: http://docs.ray.io/en/master/tempfile.html
4 changes: 2 additions & 2 deletions doc/source/ray-dashboard.rst
@@ -137,7 +137,7 @@ You can view information for Ray objects in the memory tab. It is useful to debu

One common cause of these memory errors is that there are objects which never go out of scope. To find these, you can go to the Memory View, then select "Group By Stack Trace." This groups memory entries by their stack traces up to three frames deep. If you see a group which is growing without bound, you might want to examine that line of code to see if you intend to keep that reference around.

Note that this is the same information as displayed in the `ray memory command <https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory>`_. For details about the information contained in the table, please see the `ray memory` documentation.
Note that this is the same information as displayed in the `ray memory command <https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory>`_. For details about the information contained in the table, please see the `ray memory` documentation.

Inspect Memory Usage
~~~~~~~~~~~~~~~~~~~~
@@ -283,7 +283,7 @@ Memory

**Object Size**: Size of a Ray object in bytes.

**Reference Type**: Reference types of Ray objects. Check out the `ray memory command <https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory>`_ to learn about each reference type.
**Reference Type**: Reference types of Ray objects. Checkout the `ray memory command <https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory>`_ to learn each reference type.

**Call Site**: Call site where this Ray object is referenced, up to three stack frames deep.

2 changes: 1 addition & 1 deletion doc/source/rllib-algorithms.rst
@@ -262,7 +262,7 @@ Deep Deterministic Policy Gradients (DDPG, TD3)
-----------------------------------------------
|pytorch| |tensorflow|
`[paper] <https://arxiv.org/abs/1509.02971>`__ `[implementation] <https://github.com/ray-project/ray/blob/master/rllib/agents/ddpg/ddpg.py>`__
DDPG is implemented similarly to DQN (below). The algorithm can be scaled by increasing the number of workers or using Ape-X. The improvements from `TD3 <https://spinningup.openai.com/en/latest/algorithms/td3.html>`__ are available as ``TD3``.
DDPG is implemented similarly to DQN (below). The algorithm can be scaled by increasing the number of workers or using Ape-X. The improvements from `TD3 <https://spinningup.openai.com/en/master/algorithms/td3.html>`__ are available as ``TD3``.

.. figure:: dqn-arch.svg

2 changes: 1 addition & 1 deletion doc/source/rllib-dev.rst
@@ -4,7 +4,7 @@ Contributing to RLlib
Development Install
-------------------

You can develop RLlib locally without needing to compile Ray by using the `setup-dev.py <https://github.com/ray-project/ray/blob/master/python/ray/setup-dev.py>`__ script. This sets up links between the ``rllib`` dir in your git repo and the one bundled with the ``ray`` package. However if you have installed ray from source using [these instructions](https://docs.ray.io/en/latest/installation.html) then do not this as these steps should have already created this symlink. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on `master <https://github.com/ray-project/ray>`__ and have the latest `wheel <https://docs.ray.io/en/latest/installation.html>`__ installed.)
You can develop RLlib locally without needing to compile Ray by using the `setup-dev.py <https://github.com/ray-project/ray/blob/master/python/ray/setup-dev.py>`__ script. This sets up links between the ``rllib`` dir in your git repo and the one bundled with the ``ray`` package. However, if you have installed Ray from source using `these instructions <https://docs.ray.io/en/master/installation.html>`__, do not do this, as those steps have already created this symlink. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on `master <https://github.com/ray-project/ray>`__ and have the latest `wheel <https://docs.ray.io/en/master/installation.html>`__ installed).

API Stability
-------------
2 changes: 1 addition & 1 deletion doc/source/rllib-examples.rst
@@ -123,5 +123,5 @@ Community Examples
Example of using the multi-agent API to model several `social dilemma games <https://arxiv.org/abs/1702.03037>`__.
- `StarCraft2 <https://github.com/oxwhirl/smac>`__:
Example of training in StarCraft2 maps with RLlib / multi-agent.
- `Traffic Flow <https://berkeleyflow.readthedocs.io/en/latest/flow_setup.html>`__:
- `Traffic Flow <https://berkeleyflow.readthedocs.io/en/master/flow_setup.html>`__:
Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.
2 changes: 1 addition & 1 deletion doc/source/troubleshooting.rst
@@ -105,7 +105,7 @@ on what Ray functionalities we use, let us see what cProfile's output might look
like if our example involved Actors (for an introduction to Ray actors, see our
`Actor documentation here`_).

.. _`Actor documentation here`: http://docs.ray.io/en/latest/actors.html
.. _`Actor documentation here`: http://docs.ray.io/en/master/actors.html

Now, instead of looping over five calls to a remote function like in ``ex1``,
let's create a new example and loop over five calls to a remote function