
[RLlib] Fix broken links in docs. (ray-project#25013)
maxpumperla authored May 20, 2022
1 parent 8d6548a commit c4aa5a4
Showing 3 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion doc/source/rllib/core-concepts.rst
@@ -320,7 +320,7 @@ Examples
return StandardMetricsReporting(train_op, workers, config)
- Note that here we set ``output_indexes=[1]`` for the ``Concurrently`` operator, which makes it only return results from the replay op. See also the `DQN implementation of replay <https://github.com/ray-project/ray/blob/master/rllib/agents/dqn/dqn.py>`__ for a complete example including the implementation of options such as *training intensity*.
+ Note that here we set ``output_indexes=[1]`` for the ``Concurrently`` operator, which makes it only return results from the replay op. See also the `DQN implementation of replay <https://github.com/ray-project/ray/blob/master/rllib/algorithms/dqn/dqn.py>`__ for a complete example including the implementation of options such as *training intensity*.


.. dropdown:: **Example: Multi-agent**
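
The hunk above only re-points the DQN link, but for context, here is a rough sketch of the replay pattern the surrounding passage describes: a store-to-buffer op and a replay-and-train op run concurrently, with ``output_indexes=[1]`` so that only the training op's results are reported. This is a hedged reconstruction based on the Ray 1.x execution-plan API; module paths and signatures may differ across Ray versions, and the ``local_replay_buffer`` argument stands in for a buffer you would construct yourself.

# Sketch of the replay-style execution plan discussed above
# (Ray 1.x-era execution-plan API; module paths may differ across versions).
from ray.rllib.execution.concurrency_ops import Concurrently
from ray.rllib.execution.metric_ops import StandardMetricsReporting
from ray.rllib.execution.replay_ops import Replay, StoreToReplayBuffer
from ray.rllib.execution.rollout_ops import ParallelRollouts
from ray.rllib.execution.train_ops import TrainOneStep


def execution_plan(workers, config, local_replay_buffer, **kwargs):
    rollouts = ParallelRollouts(workers, mode="bulk_sync")

    # Op 1: sample rollouts and store them in the replay buffer.
    store_op = rollouts.for_each(
        StoreToReplayBuffer(local_buffer=local_replay_buffer))

    # Op 2: sample from the buffer and train on the sampled batches.
    replay_op = Replay(local_buffer=local_replay_buffer).for_each(
        TrainOneStep(workers))

    # Run both ops round-robin; output_indexes=[1] makes only the
    # replay/train op's results count as training output.
    train_op = Concurrently(
        [store_op, replay_op], mode="round_robin", output_indexes=[1])

    return StandardMetricsReporting(train_op, workers, config)
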
2 changes: 1 addition & 1 deletion doc/source/rllib/package_ref/trainer.rst
@@ -44,7 +44,7 @@ and override one or more of its methods. Those are in particular:
* :py:meth:`~ray.rllib.agents.trainer.Trainer.step_attempt`
* :py:meth:`~ray.rllib.agents.trainer.Trainer.execution_plan`

- `See here for an example on how to override Trainer <https://github.com/ray-project/ray/blob/master/rllib/agents/pg/pg.py>`_.
+ `See here for an example on how to override Trainer <https://github.com/ray-project/ray/blob/master/rllib/algorithms/pg/pg.py>`_.


Trainer base class (ray.rllib.agents.trainer.Trainer)
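
Again, the hunk above only fixes the PG link. Purely as an illustration of the kind of override the surrounding text talks about, here is a hypothetical sketch of subclassing ``Trainer``. It is not the actual PG source; ``MyTrainer`` is invented, and the ``get_default_config`` and ``execution_plan`` hooks follow the Trainer API of that era, whose exact signatures vary between Ray versions.

# Hypothetical sketch of a Trainer subclass, loosely in the spirit of
# rllib/algorithms/pg/pg.py (not the actual source).
from ray.rllib.agents.trainer import Trainer
from ray.rllib.execution.metric_ops import StandardMetricsReporting
from ray.rllib.execution.rollout_ops import ParallelRollouts
from ray.rllib.execution.train_ops import TrainOneStep


class MyTrainer(Trainer):
    @classmethod
    def get_default_config(cls):
        # Start from the base config and tweak a few fields.
        config = dict(Trainer.get_default_config())
        config["lr"] = 5e-4
        return config

    @staticmethod
    def execution_plan(workers, config, **kwargs):
        # Simplest on-policy plan: sample synchronously, train once per batch.
        rollouts = ParallelRollouts(workers, mode="bulk_sync")
        train_op = rollouts.for_each(TrainOneStep(workers))
        return StandardMetricsReporting(train_op, workers, config)
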
6 changes: 3 additions & 3 deletions doc/source/rllib/rllib-concepts.rst
@@ -9,8 +9,8 @@ This page describes the internal concepts used to implement algorithms in RLlib.
You might find this useful if modifying or adding new algorithms to RLlib.

Policy classes encapsulate the core numerical components of RL algorithms.
- This typically includes the policy model that determines actions to take, a trajectory postprocessor for experiences, and a loss function to improve the policy given postprocessed experiences.
- For a simple example, see the policy gradients `policy definition <https://github.com/ray-project/ray/blob/master/rllib/agents/pg/pg_tf_policy.py>`__.
+ This typically includes the policy model that determines actions to take, a trajectory postprocessor for experiences, and a loss function to improve the policy given post-processed experiences.
+ For a simple example, see the policy gradients `policy definition <https://github.com/ray-project/ray/blob/master/rllib/algorithms/pg/pg_tf_policy.py>`__.

Most interaction with deep learning frameworks is isolated to the `Policy interface <https://github.com/ray-project/ray/blob/master/rllib/policy/policy.py>`__, allowing RLlib to support multiple frameworks.
To simplify the definition of policies, RLlib includes `Tensorflow <#building-policies-in-tensorflow>`__ and `PyTorch-specific <#building-policies-in-pytorch>`__ templates.
@@ -375,7 +375,7 @@ In PPO we run ``setup_mixins`` before the loss function is called (i.e., ``befor

**Example 2: Deep Q Networks**

- Let's look at how to implement a different family of policies, by looking at the `SimpleQ policy definition <https://github.com/ray-project/ray/blob/master/rllib/agents/dqn/simple_q_tf_policy.py>`__:
+ Let's look at how to implement a different family of policies, by looking at the `SimpleQ policy definition <https://github.com/ray-project/ray/blob/master/rllib/algorithms/dqn/simple_q_tf_policy.py>`__:

.. code-block:: python
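
For reference, here is a compressed, simplified sketch of what the policy-gradients policy mentioned earlier in this file roughly looks like when assembled with ``build_tf_policy``. The loss and postprocessing are condensed, and the name ``MyPGTFPolicy`` is illustrative; the real code lives in rllib/algorithms/pg/pg_tf_policy.py.

# Simplified sketch of a policy-gradients TF policy via build_tf_policy
# (illustrative only; see rllib/algorithms/pg/pg_tf_policy.py for the
# actual implementation).
from ray.rllib.agents.pg import DEFAULT_CONFIG
from ray.rllib.evaluation.postprocessing import Postprocessing, compute_advantages
from ray.rllib.policy.sample_batch import SampleBatch
from ray.rllib.policy.tf_policy_template import build_tf_policy
from ray.rllib.utils.framework import try_import_tf

tf1, tf, tfv = try_import_tf()


def post_process_advantages(policy, sample_batch, other_agent_batches=None, episode=None):
    # Plain Monte-Carlo returns; no GAE, no value-function baseline.
    return compute_advantages(
        sample_batch, 0.0, policy.config["gamma"], use_gae=False)


def pg_tf_loss(policy, model, dist_class, train_batch):
    # REINFORCE-style loss: -E[log pi(a|s) * advantage].
    logits, _ = model(train_batch)
    action_dist = dist_class(logits, model)
    return -tf.reduce_mean(
        action_dist.logp(train_batch[SampleBatch.ACTIONS])
        * tf.cast(train_batch[Postprocessing.ADVANTAGES], tf.float32))


MyPGTFPolicy = build_tf_policy(
    name="MyPGTFPolicy",
    get_default_config=lambda: DEFAULT_CONFIG,
    postprocess_fn=post_process_advantages,
    loss_fn=pg_tf_loss,
)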
