[tune/docs] Remove more reference to AIR concepts #39569

Merged: 5 commits, Sep 12, 2023
Changes from 1 commit
2 changes: 1 addition & 1 deletion doc/source/tune/api/execution.rst
@@ -29,7 +29,7 @@ Tuner Configuration

.. seealso::

-The `Tuner` constructor also takes in a :class:`air.RunConfig <ray.train.RunConfig>`.
+The `Tuner` constructor also takes in a :class:`RunConfig <ray.train.RunConfig>`.

Restoring a Tuner
~~~~~~~~~~~~~~~~~
9 changes: 5 additions & 4 deletions doc/source/tune/api/reporters.rst
@@ -31,14 +31,15 @@ Here's an example:

.. code-block:: python

+from ray.train import RunConfig
from ray.tune import CLIReporter

# Limit the number of rows.
reporter = CLIReporter(max_progress_rows=10)
# Add a custom metric column, in addition to the default metrics.
# Note that this must be a metric that is returned in your training results.
reporter.add_metric_column("custom_metric")
-tuner = tune.Tuner(my_trainable, run_config=air.RunConfig(progress_reporter=reporter))
+tuner = tune.Tuner(my_trainable, run_config=RunConfig(progress_reporter=reporter))
results = tuner.fit()

Extending ``CLIReporter`` lets you control reporting frequency. For example:
@@ -52,7 +53,7 @@ Extending ``CLIReporter`` lets you control reporting frequency. For example:
"""Reports only on experiment termination."""
return done

-tuner = tune.Tuner(my_trainable, run_config=air.RunConfig(progress_reporter=ExperimentTerminationReporter()))
+tuner = tune.Tuner(my_trainable, run_config=RunConfig(progress_reporter=ExperimentTerminationReporter()))
results = tuner.fit()

class TrialTerminationReporter(CLIReporter):
@@ -66,7 +67,7 @@ Extending ``CLIReporter`` lets you control reporting frequency. For example:
self.num_terminated = len([t for t in trials if t.status == Trial.TERMINATED])
return self.num_terminated > old_num_terminated

-tuner = tune.Tuner(my_trainable, run_config=air.RunConfig(progress_reporter=TrialTerminationReporter()))
+tuner = tune.Tuner(my_trainable, run_config=RunConfig(progress_reporter=TrialTerminationReporter()))
results = tuner.fit()

The default reporting style can also be overridden more broadly by extending the ``ProgressReporter`` interface directly. Note that you can print to any output stream, file etc.
@@ -84,7 +85,7 @@ The default reporting style can also be overridden more broadly by extending the
print(*sys_info)
print("\n".join([str(trial) for trial in trials]))

-tuner = tune.Tuner(my_trainable, run_config=air.RunConfig(progress_reporter=CustomReporter()))
+tuner = tune.Tuner(my_trainable, run_config=RunConfig(progress_reporter=CustomReporter()))
results = tuner.fit()


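The throttling pattern in ``TrialTerminationReporter`` above, reporting only when the count of terminated trials grows, is independent of Ray. A minimal plain-Python sketch (``Trial`` here is a hypothetical stand-in, not Ray's ``Trial`` class):

```python
# Sketch of the TrialTerminationReporter idea without Ray installed:
# report only when the number of terminated trials has increased.

class Trial:
    """Hypothetical stand-in for a trial with a status field."""
    def __init__(self, status):
        self.status = status

TERMINATED = "TERMINATED"

class TrialTerminationReporter:
    def __init__(self):
        self.num_terminated = 0

    def should_report(self, trials, done=False):
        old_num_terminated = self.num_terminated
        self.num_terminated = len([t for t in trials if t.status == TERMINATED])
        return self.num_terminated > old_num_terminated

reporter = TrialTerminationReporter()
trials = [Trial("RUNNING"), Trial("RUNNING")]
print(reporter.should_report(trials))  # nothing terminated yet
trials[0].status = TERMINATED
print(reporter.should_report(trials))  # one trial just terminated
print(reporter.should_report(trials))  # count unchanged, stay quiet
```

The same gate generalizes to any "report on state change" policy: remember the last observed count, report only when it moves.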
6 changes: 3 additions & 3 deletions doc/source/tune/api/result_grid.rst
@@ -24,13 +24,13 @@ ResultGrid (tune.ResultGrid)

.. _result-docstring:

-Result (air.Result)
--------------------
+Result (train.Result)
+---------------------

.. autosummary::
:template: autosummary/class_without_autosummary.rst

-~air.Result
+~train.Result

.. _exp-analysis-docstring:

5 changes: 3 additions & 2 deletions doc/source/tune/api/suggestion.rst
@@ -12,6 +12,7 @@ You can utilize these search algorithms as follows:
.. code-block:: python

from ray import train, tune
+from ray.train import RunConfig
from ray.tune.search.optuna import OptunaSearch

def train_fn(config):
@@ -69,7 +70,7 @@ See ``Result logdir: ...`` in the output logs for this location.

Note that if you have two Tune runs with the same experiment folder,
the previous state checkpoint will be overwritten. You can
-avoid this by making sure ``air.RunConfig(name=...)`` is set to a unique
+avoid this by making sure ``RunConfig(name=...)`` is set to a unique
identifier:

.. code-block:: python
@@ -81,7 +82,7 @@ identifier:
num_samples=5,
search_alg=search_alg,
),
-run_config=air.RunConfig(
+run_config=RunConfig(
name="my-experiment-1",
storage_path="~/my_results",
)
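One common way to guarantee a unique experiment ``name``, so a second run never overwrites the first run's searcher state, is to append a timestamp plus a short random suffix. A small sketch independent of Ray (the helper name is illustrative, not part of any API):

```python
import time
import uuid

def unique_experiment_name(prefix="my-experiment"):
    """Build an experiment name that is unique across repeated runs."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    # Random suffix guards against two runs starting in the same second.
    suffix = uuid.uuid4().hex[:6]
    return f"{prefix}-{stamp}-{suffix}"

print(unique_experiment_name())
```

The resulting string can then be passed as ``RunConfig(name=...)`` so each run gets its own experiment folder.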
2 changes: 1 addition & 1 deletion doc/source/tune/api/trainable.rst
@@ -166,7 +166,7 @@ then you should use :func:`tune.with_resources <ray.tune.with_resources>` like t
{"GPU": 1},
{"GPU": 1}
])),
-run_config=air.RunConfig(name="my_trainable")
+run_config=RunConfig(name="my_trainable")
)

The ``Trainable`` also provides the ``default_resource_requests`` interface to automatically
8 changes: 4 additions & 4 deletions doc/source/tune/tutorials/tune-distributed.rst
@@ -52,7 +52,7 @@ Analyze your results on TensorBoard by starting TensorBoard on the remote head m
ray exec tune-default.yaml 'tensorboard --logdir=~/ray_results/ --port 6006' --port-forward 6006


-Note that you can customize the directory of results by specifying: ``air.RunConfig(storage_path=..)``, taken in by ``Tuner``. You can then point TensorBoard to that directory to visualize results. You can also use `awless <https://github.com/wallix/awless>`_ for easy cluster management on AWS.
+Note that you can customize the directory of results by specifying: ``RunConfig(storage_path=..)``, taken in by ``Tuner``. You can then point TensorBoard to that directory to visualize results. You can also use `awless <https://github.com/wallix/awless>`_ for easy cluster management on AWS.


Running a Distributed Tune Experiment
@@ -102,7 +102,7 @@ Storage Options in a Distributed Tune Run

In a distributed experiment, you should try to use :ref:`cloud checkpointing <tune-cloud-checkpointing>` to
reduce synchronization overhead. For this, you just have to specify a remote ``storage_path`` in the
-:class:`air.RunConfig <ray.air.RunConfig>`.
+:class:`RunConfig <ray.train.RunConfig>`.

`my_trainable` is a user-defined :ref:`Tune Trainable <tune_60_seconds_trainables>` in the following example:

@@ -212,7 +212,7 @@ To summarize, here are the commands to run:

You should see Tune eventually continue the trials on a different worker node. See the :ref:`Fault Tolerance <tune-fault-tol>` section for more details.

-You can also specify ``storage_path=...``, as part of ``air.RunConfig``, which is taken in by ``Tuner``, to upload results to cloud storage like S3, allowing you to persist results in case you want to start and stop your cluster automatically.
+You can also specify ``storage_path=...``, as part of ``RunConfig``, which is taken in by ``Tuner``, to upload results to cloud storage like S3, allowing you to persist results in case you want to start and stop your cluster automatically.

.. _tune-fault-tol:

@@ -254,7 +254,7 @@ Below are some commonly used commands for submitting experiments. Please see the

# Start a cluster and run an experiment in a detached tmux session,
# and shut down the cluster as soon as the experiment completes.
-# In `tune_experiment.py`, set `air.RunConfig(storage_path="s3://...")`
+# In `tune_experiment.py`, set `RunConfig(storage_path="s3://...")`
# to persist results
$ ray submit CLUSTER.YAML --tmux --start --stop tune_experiment.py -- --address=localhost:6379

5 changes: 3 additions & 2 deletions doc/source/tune/tutorials/tune-metrics.rst
@@ -7,13 +7,14 @@ How to work with Callbacks in Ray Tune?
---------------------------------------

Ray Tune supports callbacks that are called during various times of the training process.
-Callbacks can be passed as a parameter to ``air.RunConfig``, taken in by ``Tuner``, and the sub-method you provide will be invoked automatically.
+Callbacks can be passed as a parameter to ``RunConfig``, taken in by ``Tuner``, and the sub-method you provide will be invoked automatically.

This simple callback just prints a metric each time a result is received:

.. code-block:: python

from ray import train, tune
+from ray.train import RunConfig
from ray.tune import Callback


@@ -29,7 +30,7 @@ This simple callback just prints a metric each time a result is received:

tuner = tune.Tuner(
train_fn,
-run_config=air.RunConfig(callbacks=[MyCallback()]))
+run_config=RunConfig(callbacks=[MyCallback()]))
tuner.fit()

For more details and available hooks, please :ref:`see the API docs for Ray Tune callbacks <tune-callbacks-docs>`.
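The dispatch mechanism this page describes (you implement a sub-method, the framework invokes it at the right point in the loop) can be sketched without Ray installed. The hook name mirrors the Tune example above, but nothing here depends on Ray:

```python
# Plain-Python sketch of callback dispatch: the runner calls each
# callback's hook at the matching point in the training loop.

class Callback:
    def on_trial_result(self, iteration, trial, result, **kwargs):
        pass  # default hook: do nothing

class MyCallback(Callback):
    def __init__(self):
        self.seen = []

    def on_trial_result(self, iteration, trial, result, **kwargs):
        self.seen.append(result["metric"])
        print(f"Got result: {result['metric']}")

def run_trial(callbacks, num_iters=3):
    """Toy runner: produce a result per iteration and fan it out."""
    for i in range(num_iters):
        result = {"metric": i * 10}
        for cb in callbacks:
            cb.on_trial_result(i, trial="trial_1", result=result)

cb = MyCallback()
run_trial([cb])
print(cb.seen)  # [0, 10, 20]
```

Subclassing a base class with no-op hooks is what lets you override only the events you care about.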
17 changes: 9 additions & 8 deletions doc/source/tune/tutorials/tune-output.rst
@@ -13,12 +13,12 @@ How to configure logging in Tune?

Tune will log the results of each trial to a sub-folder under a specified local dir, which defaults to ``~/ray_results``.

-.. code-block:: bash
+.. code-block:: python

# This logs to two different trial folders:
# ~/ray_results/trainable_name/trial_name_1 and ~/ray_results/trainable_name/trial_name_2
# trainable_name and trial_name are autogenerated.
-tuner = tune.Tuner(trainable, run_config=air.RunConfig(num_samples=2))
+tuner = tune.Tuner(trainable, run_config=RunConfig(num_samples=2))
results = tuner.fit()

You can specify the ``storage_path`` and ``trainable_name``:
@@ -30,7 +30,7 @@ You can specify the ``storage_path`` and ``trainable_name``:
# Only trial_name is autogenerated.
tuner = tune.Tuner(trainable,
tune_config=tune.TuneConfig(num_samples=2),
-run_config=air.RunConfig(storage_path="./results", name="test_experiment"))
+run_config=RunConfig(storage_path="./results", name="test_experiment"))
results = tuner.fit()


@@ -118,14 +118,14 @@ However, if you wish to collect Trainable logs in files for analysis, Tune offer
``log_to_file`` for this.
This applies to print statements, ``warnings.warn`` and ``logger.info`` etc.

-By passing ``log_to_file=True`` to ``air.RunConfig``, which is taken in by ``Tuner``, stdout and stderr will be logged
+By passing ``log_to_file=True`` to ``RunConfig``, which is taken in by ``Tuner``, stdout and stderr will be logged
to ``trial_logdir/stdout`` and ``trial_logdir/stderr``, respectively:

.. code-block:: python

tuner = tune.Tuner(
trainable,
-run_config=air.RunConfig(log_to_file=True)
+run_config=RunConfig(log_to_file=True)
)
results = tuner.fit()

@@ -137,13 +137,13 @@ respectively:

tuner = tune.Tuner(
trainable,
-run_config=air.RunConfig(log_to_file="std_combined.log")
+run_config=RunConfig(log_to_file="std_combined.log")
)
tuner.fit()

tuner = tune.Tuner(
trainable,
-run_config=air.RunConfig(log_to_file=("my_stdout.log", "my_stderr.log")))
+run_config=RunConfig(log_to_file=("my_stdout.log", "my_stderr.log")))
results = tuner.fit()

The file names are relative to the trial's logdir. You can pass absolute paths,
@@ -320,10 +320,11 @@ You can then pass in your own logger as follows:
.. code-block:: python

from ray import tune
+from ray.train import RunConfig

tuner = tune.Tuner(
MyTrainableClass,
-run_config=air.RunConfig(name="experiment_name", callbacks=[CustomLoggerCallback("log_test.txt")])
+run_config=RunConfig(name="experiment_name", callbacks=[CustomLoggerCallback("log_test.txt")])
)
results = tuner.fit()
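What ``log_to_file=True`` does conceptually, capturing a trial's stdout and stderr into files under its logdir, can be approximated with the standard library. This is a hedged sketch of the idea, not Tune's actual implementation:

```python
import contextlib
import os
import tempfile

def run_with_log_capture(fn, logdir):
    """Run fn, capturing stdout/stderr to files, like log_to_file=True."""
    out_path = os.path.join(logdir, "stdout")
    err_path = os.path.join(logdir, "stderr")
    with open(out_path, "w") as out, open(err_path, "w") as err:
        with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
            fn()  # all prints and stderr writes inside land in the files
    return out_path, err_path

def trainable():
    print("iteration 1: loss=0.5")

logdir = tempfile.mkdtemp()
out_path, err_path = run_with_log_capture(trainable, logdir)
with open(out_path) as f:
    print(f.read().strip())  # iteration 1: loss=0.5
```

Passing a single filename or a pair of filenames, as in the diff above, simply changes which file paths the two streams are redirected to.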

4 changes: 2 additions & 2 deletions doc/source/tune/tutorials/tune-search-spaces.rst
@@ -110,7 +110,7 @@ for a total of 90 trials, each with randomly sampled values of ``alpha`` and ``b

tuner = tune.Tuner(
my_trainable,
-run_config=air.RunConfig(name="my_trainable"),
+run_config=RunConfig(name="my_trainable"),
# num_samples will repeat the entire config 10 times.
tune_config=tune.TuneConfig(num_samples=10),
param_space={
@@ -173,7 +173,7 @@ This lets you specify conditional parameter distributions.

tuner = tune.Tuner(
my_trainable,
-run_config=air.RunConfig(name="my_trainable"),
+run_config=RunConfig(name="my_trainable"),
param_space={
"alpha": tune.sample_from(lambda spec: np.random.uniform(100)),
"beta": tune.sample_from(lambda spec: spec.config.alpha * np.random.normal()),
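The conditional sampling shown above (``beta`` depends on the already-sampled ``alpha``) boils down to resolving dependent parameters in order. A plain-Python sketch using the standard library in place of numpy:

```python
import random

def sample_config():
    """Sample a config where beta depends on the sampled alpha."""
    config = {}
    # Resolve in dependency order: beta reads the already-sampled
    # alpha, mirroring the tune.sample_from lambdas above.
    config["alpha"] = random.uniform(0, 100)
    config["beta"] = config["alpha"] * random.gauss(0, 1)
    return config

for _ in range(3):
    print(sample_config())
```

Each call draws a fresh ``alpha`` and then derives ``beta`` from it, which is exactly the ordering guarantee ``sample_from`` relies on.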