From d53d0e0f50312bf1094b83eb284a49c94a770498 Mon Sep 17 00:00:00 2001
From: Max Pumperla
Date: Wed, 2 Mar 2022 10:34:46 +0100
Subject: [PATCH] [docs] Typo - fixes #22761 (#22763)

Signed-off-by: Max Pumperla
---
 doc/source/ray-overview/index.md | 2 +-
 doc/source/train/train.rst       | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/source/ray-overview/index.md b/doc/source/ray-overview/index.md
index 3e768779bc12..08e67d5160c6 100644
--- a/doc/source/ray-overview/index.md
+++ b/doc/source/ray-overview/index.md
@@ -127,7 +127,7 @@ All you have to do is use the ``ray.train.torch.prepare_model`` and
 ``ray.train.torch.prepare_data_loader`` utility functions to easily
 setup your model & data for distributed training.
 This will automatically wrap your model with ``DistributedDataParallel``
-and place it on the right device, and add ``DisributedSampler`` to your DataLoaders.
+and place it on the right device, and add ``DistributedSampler`` to your DataLoaders.
 
 ```{literalinclude} /../../python/ray/train/examples/torch_quick_start.py
 :language: python

diff --git a/doc/source/train/train.rst b/doc/source/train/train.rst
index 31c914a469f6..abee6ea86a63 100644
--- a/doc/source/train/train.rst
+++ b/doc/source/train/train.rst
@@ -87,7 +87,7 @@ system. Let's take following simple examples:
    ``ray.train.torch.prepare_data_loader`` utility functions to easily
    setup your model & data for distributed training.
    This will automatically wrap your model with ``DistributedDataParallel``
-   and place it on the right device, and add ``DisributedSampler`` to your DataLoaders.
+   and place it on the right device, and add ``DistributedSampler`` to your DataLoaders.
 
    .. literalinclude:: /../../python/ray/train/examples/torch_quick_start.py
      :language: python
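
The doc text being patched describes the `ray.train.torch.prepare_model` / `ray.train.torch.prepare_data_loader` workflow. As a rough sketch (not the actual `torch_quick_start.py` the docs literalinclude), the pattern looks roughly like the snippet below; the toy model and dataset are illustrative, and the `Trainer` driver API shown follows the Ray 1.x-era docs this patch touches, so it may differ in later releases.

```python
# Sketch of the pattern the patched docs describe: wrap an ordinary PyTorch
# training loop with Ray Train's prepare_model / prepare_data_loader helpers
# so DistributedDataParallel and DistributedSampler are handled for you.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

import ray.train.torch
from ray.train import Trainer  # Ray 1.x-era driver API (assumption)


def train_func():
    # Toy dataset and model; any torch Dataset / nn.Module works here.
    dataset = TensorDataset(torch.randn(128, 4), torch.randn(128, 1))
    data_loader = DataLoader(dataset, batch_size=16)
    model = nn.Linear(4, 1)

    # prepare_model wraps the model in DistributedDataParallel and moves it
    # to the right device; prepare_data_loader adds a DistributedSampler.
    model = ray.train.torch.prepare_model(model)
    data_loader = ray.train.torch.prepare_data_loader(data_loader)

    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    for _ in range(3):  # a few epochs
        for features, labels in data_loader:
            loss = loss_fn(model(features), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


trainer = Trainer(backend="torch", num_workers=2)
trainer.start()
trainer.run(train_func)
trainer.shutdown()
```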