[docs] link to master version of docs in source #10866

Merged: 1 commit, Sep 19, 2020
Use master for links to docs in source
sumanthratna committed Sep 17, 2020
commit cc1ed1624478c470583ba0d528718bb83b9d1b2f
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -13,7 +13,7 @@
## Checks

- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
-- [ ] I've included any doc changes needed for https://docs.ray.io/en/latest/.
+- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
2 changes: 1 addition & 1 deletion README.rst
@@ -18,7 +18,7 @@ Ray is packaged with the following libraries for accelerating machine learning w
There are also many `community integrations <https://docs.ray.io/en/master/ray-libraries.html>`_ with Ray, including `Dask`_, `MARS`_, `Modin`_, `Horovod`_, `Hugging Face`_, `Scikit-learn`_, and others. Check out the `full list of Ray distributed libraries here <https://docs.ray.io/en/master/ray-libraries.html>`_.

Install Ray with: ``pip install ray``. For nightly wheels, see the
-`Installation page <https://docs.ray.io/en/latest/installation.html>`__.
+`Installation page <https://docs.ray.io/en/master/installation.html>`__.

.. _`Modin`: https://github.com/modin-project/modin
.. _`Hugging Face`: https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.hyperparameter_search
2 changes: 1 addition & 1 deletion python/ray/setup-dev.py
@@ -63,7 +63,7 @@ def do_link(package, force=False, local_path=""):
print("Created links.\n\nIf you run into issues initializing Ray, please "
"ensure that your local repo and the installed Ray are in sync "
"(pip install -U the latest wheels at "
"https://docs.ray.io/en/latest/installation.html, "
"https://docs.ray.io/en/master/installation.html, "
"and ensure you are up-to-date on the master branch on git).\n\n"
"Note that you may need to delete the package symlinks when pip "
"installing new Ray versions to prevent pip from overwriting files "
2 changes: 1 addition & 1 deletion python/ray/tests/test_metrics.py
@@ -402,7 +402,7 @@ def test_memory_dashboard(shutdown_only):
"""Test Memory table.

These tests verify examples in this document.
-https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory
+https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory
"""
addresses = ray.init(num_cpus=2)
webui_url = addresses["webui_url"].replace("127.0.0.1", "http://127.0.0.1")
2 changes: 1 addition & 1 deletion rllib/README.md
@@ -27,4 +27,4 @@ If you've found RLlib useful for your research, you can cite the [paper](https:/
Development Install
-------------------

-You can develop RLlib locally without needing to compile Ray by using the [setup-dev.py](https://github.com/ray-project/ray/blob/master/python/ray/setup-dev.py) script. This sets up links between the ``rllib`` dir in your git repo and the one bundled with the ``ray`` package. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on [master](https://github.com/ray-project/ray) and have the latest [wheel](https://docs.ray.io/en/latest/installation.html) installed.)
+You can develop RLlib locally without needing to compile Ray by using the [setup-dev.py](https://github.com/ray-project/ray/blob/master/python/ray/setup-dev.py) script. This sets up links between the ``rllib`` dir in your git repo and the one bundled with the ``ray`` package. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on [master](https://github.com/ray-project/ray) and have the latest [wheel](https://docs.ray.io/en/master/installation.html) installed.)
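As an aside, the symlink idea that this README describes can be sketched in a few lines of Python. This is a simplified illustration only, not the actual `setup-dev.py` script; the `link_rllib` helper and the example path are hypothetical, and the real script handles more packages and safety checks.

```python
# Minimal sketch (assumption: not the real setup-dev.py) of replacing the
# rllib copy bundled with the installed `ray` package by a symlink to the
# rllib dir in your git checkout.
import os
import shutil

import ray


def link_rllib(repo_root: str) -> None:
    """Symlink the installed ray/rllib directory to <repo_root>/rllib."""
    installed = os.path.join(os.path.dirname(ray.__file__), "rllib")
    local = os.path.join(repo_root, "rllib")

    if os.path.islink(installed):
        return  # already pointing at a local checkout
    shutil.rmtree(installed)      # drop the pip-installed copy
    os.symlink(local, installed)  # point it at the git repo instead


# Example usage (hypothetical path to a local Ray checkout):
# link_rllib(os.path.expanduser("~/ray"))
```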
2 changes: 1 addition & 1 deletion rllib/agents/dqn/apex.py
@@ -9,7 +9,7 @@
distributed prioritization of experience prior to storage in replay buffers.

Detailed documentation:
-https://docs.ray.io/en/latest/rllib-algorithms.html#distributed-prioritized-experience-replay-ape-x
+https://docs.ray.io/en/master/rllib-algorithms.html#distributed-prioritized-experience-replay-ape-x
""" # noqa: E501

import collections
2 changes: 1 addition & 1 deletion rllib/agents/dqn/dqn.py
@@ -6,7 +6,7 @@
algorithm. See `dqn_[tf|torch]_policy.py` for the definition of the policies.

Detailed documentation:
-https://docs.ray.io/en/latest/rllib-algorithms.html#deep-q-networks-dqn-rainbow-parametric-dqn
+https://docs.ray.io/en/master/rllib-algorithms.html#deep-q-networks-dqn-rainbow-parametric-dqn
""" # noqa: E501

import logging
2 changes: 1 addition & 1 deletion rllib/agents/pg/pg.py
@@ -5,7 +5,7 @@
This file defines the distributed Trainer class for policy gradients.
See `pg_[tf|torch]_policy.py` for the definition of the policy loss.

-Detailed documentation: https://docs.ray.io/en/latest/rllib-algorithms.html#pg
+Detailed documentation: https://docs.ray.io/en/master/rllib-algorithms.html#pg
"""

from typing import Optional, Type
6 changes: 3 additions & 3 deletions rllib/agents/ppo/README.md
@@ -5,19 +5,19 @@ Implementations of:

1) Proximal Policy Optimization (PPO).

-**[Detailed Documentation](https://docs.ray.io/en/latest/rllib-algorithms.html#ppo)**
+**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#ppo)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/agents/ppo/ppo.py)**

2) Asynchronous Proximal Policy Optimization (APPO).

-**[Detailed Documentation](https://docs.ray.io/en/latest/rllib-algorithms.html#appo)**
+**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#appo)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/agents/ppo/appo.py)**

3) Decentralized Distributed Proximal Policy Optimization (DDPPO)

-**[Detailed Documentation](https://docs.ray.io/en/latest/rllib-algorithms.html#decentralized-distributed-proximal-policy-optimization-dd-ppo)**
+**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#decentralized-distributed-proximal-policy-optimization-dd-ppo)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/agents/ppo/ddppo.py)**

2 changes: 1 addition & 1 deletion rllib/agents/ppo/appo.py
@@ -7,7 +7,7 @@
See `appo_[tf|torch]_policy.py` for the definition of the policy loss.

Detailed documentation:
-https://docs.ray.io/en/latest/rllib-algorithms.html#appo
+https://docs.ray.io/en/master/rllib-algorithms.html#appo
"""
from typing import Optional, Type

2 changes: 1 addition & 1 deletion rllib/agents/ppo/ppo.py
@@ -6,7 +6,7 @@
optimization.
See `ppo_[tf|torch]_policy.py` for the definition of the policy loss.

-Detailed documentation: https://docs.ray.io/en/latest/rllib-algorithms.html#ppo
+Detailed documentation: https://docs.ray.io/en/master/rllib-algorithms.html#ppo
"""

import logging