[docs] website refresh #2123

Merged
merged 71 commits on Jul 21, 2022

71 commits
0a4f2de
update with Samyam's new structure/content.
awan-10 Jul 19, 2022
d9d33ee
add 3p figure and pages
awan-10 Jul 19, 2022
df788b1
add ds-compression part
yaozhewei Jul 20, 2022
cfe642f
DeepSpeed-Inference, And software artifact updates
samyam Jul 20, 2022
f880419
Extreme Speed and Scale
samyam Jul 20, 2022
e43c177
add links and logos
jeffra Jul 20, 2022
f82ee26
add hf logo png
jeffra Jul 20, 2022
5b23670
Update index.md
jeffra Jul 20, 2022
53d8abc
add hf logos
jeffra Jul 20, 2022
d1849b6
lightning
jeffra Jul 20, 2022
6af3545
lightning dark
jeffra Jul 20, 2022
dde699d
trim logo
jeffra Jul 20, 2022
3080349
add lightning svg
jeffra Jul 20, 2022
775d3cc
Update navigation.yml
jeffra Jul 20, 2022
1747a78
Update config-json.md
jeffra Jul 20, 2022
3fb2760
Update navigation.yml
jeffra Jul 20, 2022
8c9caa0
add logos
jeffra Jul 20, 2022
c05ec38
Update README.md
jeffra Jul 20, 2022
c72a154
Merge branch 'master' of github.com:jeffra/DeepSpeed
jeffra Jul 20, 2022
dab86d0
Update README.md
jeffra Jul 20, 2022
c88ccc4
Update index.md
jeffra Jul 20, 2022
30ce281
Update index.md
jeffra Jul 21, 2022
19b9395
Update index.md
jeffra Jul 21, 2022
befa33a
Update index.md
jeffra Jul 21, 2022
ef516be
fix logos
jeffra Jul 21, 2022
059506a
Update index.md
samyam Jul 21, 2022
079a391
Merge pull request #3 from jeffra/samyam-rebranding
samyam Jul 21, 2022
be1f254
Update index.md
samyam Jul 21, 2022
773374b
Update index.md
samyam Jul 21, 2022
5079e02
Update index.md
samyam Jul 21, 2022
3541dab
Update index.md
jeffra Jul 21, 2022
00cf764
Update index.md
awan-10 Jul 21, 2022
af0d36c
add model links
jeffra Jul 21, 2022
028aa1f
Update index.md
samyam Jul 21, 2022
8a8fa50
Update index.md
samyam Jul 21, 2022
4219d3d
Update index.md
samyam Jul 21, 2022
b51faa8
add tocs
jeffra Jul 21, 2022
e63f66c
Update index.md
samyam Jul 21, 2022
b6a4c38
Update inference.md
samyam Jul 21, 2022
e0b8fb7
Update index.md
jeffra Jul 21, 2022
7daf5f8
Update _config.yml
jeffra Jul 21, 2022
0d64d8d
Update _config.yml
jeffra Jul 21, 2022
7cab2c2
Update README.md
samyam Jul 21, 2022
4eddbb6
Update README.md
samyam Jul 21, 2022
e64e8d3
Update README.md
samyam Jul 21, 2022
352daf2
Update README.md
samyam Jul 21, 2022
73e8142
update compression part
yaozhewei Jul 21, 2022
7a65ea0
Update README.md
jeffra Jul 21, 2022
04bcb22
Update inference.md
awan-10 Jul 21, 2022
82bd53a
Update inference.md
awan-10 Jul 21, 2022
3ed553e
update link
yaozhewei Jul 21, 2022
5f19194
Update README.md
samyam Jul 21, 2022
18b0bab
Update README.md
samyam Jul 21, 2022
305910b
Update README.md
jeffra Jul 21, 2022
a1a3fb8
Update index.md
samyam Jul 21, 2022
99b5689
changes
jeffra Jul 21, 2022
8c46e4c
Update README.md
jeffra Jul 21, 2022
02e0b0b
Update README.md
jeffra Jul 21, 2022
f4957fd
Update index.md
samyam Jul 21, 2022
ab78e91
Update index.md
samyam Jul 21, 2022
0c41339
Update README.md
jeffra Jul 21, 2022
6607ede
Update index.md
samyam Jul 21, 2022
6330188
Update README.md
jeffra Jul 21, 2022
9001616
Update index.md
samyam Jul 21, 2022
5ea10e5
Update README.md
jeffra Jul 21, 2022
f362c95
Update index.md
samyam Jul 21, 2022
fce5356
Update index.md
jeffra Jul 21, 2022
3f0d950
Update index.md
jeffra Jul 21, 2022
b0f25e9
Update README.md
jeffra Jul 21, 2022
496941f
Merge branch 'master' into new-docs
jeffra Jul 21, 2022
e8c5034
formatting
jeffra Jul 21, 2022

Files changed
195 changes: 79 additions & 116 deletions README.md

Large diffs are not rendered by default.

2 changes: 2 additions & 0 deletions docs/_config.yml
@@ -80,6 +80,8 @@ defaults:
path: "_pages"
values:
permalink: /docs/:basename/
toc: true
toc_label: "Contents"
- scope:
path: ""
type: posts
37 changes: 7 additions & 30 deletions docs/_data/navigation.yml
@@ -11,20 +11,15 @@ main:
url: https://github.com/microsoft/DeepSpeed

lnav:
- title: 'Feature Overview'
url: /features/
- title: 'Training'
url: /training/
- title: 'Inference'
url: /inference/
- title: 'Compression'
url: /compression/
- title: 'Getting Started'
url: /getting-started/
children:
- title: 'Installation'
url: /getting-started/#installation
- title: 'Writing models'
url: /getting-started/#writing-deepspeed-models
- title: 'Training'
url: /getting-started/#training
- title: 'Launching'
url: /getting-started/#launching-deepspeed-training
- title: 'Configuration'
- title: 'ds_config'
url: /docs/config-json/
children:
- title: 'Autotuning'
@@ -33,34 +28,16 @@
url: /docs/config-json/#batch-size-related-parameters
- title: 'Optimizer'
url: /docs/config-json/#optimizer-parameters
- title: 'Scheduler'
url: /docs/config-json/#scheduler-parameters
- title: 'Communication'
url: /docs/config-json/#communication-options
- title: 'FP16'
url: /docs/config-json/#fp16-training-options
- title: 'BFLOAT16'
url: /docs/config-json/#bfloat16-training-options
- title: 'Gradient Clipping'
url: /docs/config-json/#gradient-clipping
- title: 'ZeRO optimizations'
url: /docs/config-json/#zero-optimizations-for-fp16-training
- title: 'Parameter Offloading'
url: /docs/config-json/#parameter-offloading
- title: 'Optimizer Offloading'
url: /docs/config-json/#optimizer-offloading
- title: 'Asynchronous I/O'
url: /docs/config-json/#asynchronous-io
- title: 'Logging'
url: /docs/config-json/#logging
- title: 'Flops Profiler'
url: /docs/config-json/#flops-profiler
- title: 'PyTorch Profiler'
url: /docs/config-json/#pytorch-profiler
- title: 'Activation checkpointing'
url: /docs/config-json/#activation-checkpointing
- title: 'Sparse Attention'
url: /docs/config-json/#sparse-attention
- title: 'Monitoring'
url: /docs/config-json/#monitoring-module-tensorboard-wandb-csv
- title: 'Model Compression'
12 changes: 12 additions & 0 deletions docs/_pages/compression.md
@@ -0,0 +1,12 @@
---
title: "Compression Overview and Features"
layout: single
permalink: /compression/
toc: true
toc_label: "Contents"
---


DeepSpeed Compression is a library purpose-built to make it easy for researchers and practitioners to compress models while delivering faster speed, smaller model size, and significantly reduced compression cost. Please refer to our [blog](https://www.microsoft.com/en-us/research/blog/deepspeed-compression-a-composable-library-for-extreme-compression-and-zero-cost-quantization/) for more details.

DeepSpeed Compression offers novel state-of-the-art compression techniques that achieve faster model compression with better model quality and lower compression cost. It also takes an end-to-end approach to improving the computational efficiency of compressed models via a highly optimized inference engine. Furthermore, the library has multiple built-in state-of-the-art compression methods and supports the synergistic composition of these methods with system optimizations, offering the best of both worlds while providing a seamless, easy-to-use pipeline for efficient DL model inference. We highly recommend that you also read our blog to learn, at a high level, why we built DeepSpeed Compression and what benefits it provides to users. To try compressing your model with the DeepSpeed Compression library, please check out our [tutorial](https://www.deepspeed.ai/tutorials/model-compression/).
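
For illustration, here is a minimal sketch of the workflow covered in that tutorial. The `init_compression`/`redundancy_clean` entry points, the import path, and the config file name are assumptions drawn from the tutorial rather than a guaranteed stable API.

```python
import torch
import deepspeed
# Import path and function names are assumed from the model-compression tutorial;
# verify them against your installed DeepSpeed version.
from deepspeed.compression.compress import init_compression, redundancy_clean

# Tiny placeholder network; in practice this is your trained model.
model = torch.nn.Sequential(
    torch.nn.Linear(768, 768),
    torch.nn.ReLU(),
    torch.nn.Linear(768, 2),
)

# Placeholder config path; assumed to contain a "compression_training" section
# describing, e.g., quantization or pruning settings.
ds_config = "ds_config_compression.json"

# 1) Wrap the targeted layers with compression-aware modules, then run a short
#    compression-aware fine-tuning pass (e.g., via deepspeed.initialize).
model = init_compression(model, ds_config)

# ... compression-aware fine-tuning loop goes here ...

# 2) Fold the learned compression decisions into a clean, smaller model.
model = redundancy_clean(model, ds_config)
```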
2 changes: 2 additions & 0 deletions docs/_pages/config-json.md
@@ -1,5 +1,7 @@
---
title: "DeepSpeed Configuration JSON"
toc: true
toc_label: "Contents"
---

### Batch Size Related Parameters
13 changes: 13 additions & 0 deletions docs/_pages/inference.md
@@ -0,0 +1,13 @@
---
title: "Inference Overview and Features"
layout: single
permalink: /inference/
toc: true
toc_label: "Contents"
---

DeepSpeed-Inference introduces several features to efficiently serve transformer-based PyTorch models. It supports model parallelism (MP) to fit large models that would otherwise not fit in GPU memory. Even for smaller models, MP can be used to reduce latency for inference. To further reduce latency and cost, we introduce inference-customized kernels. Finally, we propose a novel approach to quantize models, called MoQ, to both shrink the model and reduce the inference cost in production. For more details on the inference-related optimizations in DeepSpeed, please refer to our [blog post](https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/).

DeepSpeed provides a seamless inference mode for compatible transformer-based models trained using DeepSpeed, Megatron, and HuggingFace, meaning that we don't require any change on the modeling side, such as exporting the model or creating a different checkpoint from your trained checkpoints. To run multi-GPU inference on compatible models, provide the model parallelism degree and either the checkpoint information or a model that has already been loaded from a checkpoint, and DeepSpeed will do the rest: it will automatically partition the model as necessary, inject compatible high-performance kernels into your model, and manage the inter-GPU communication. For a list of compatible models, please see [here](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/module_inject/replace_policy.py).

To get started with DeepSpeed-Inference, please check out our [tutorial](https://www.deepspeed.ai/tutorials/inference-tutorial/).
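
As a quick illustration of that workflow, the sketch below wraps a HuggingFace model with `deepspeed.init_inference`. The model name, dtype, and parallelism degree are placeholder choices, and the keyword arguments follow the inference tutorial.

```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any supported HuggingFace transformer works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# init_inference injects the optimized inference kernels and, when mp_size > 1,
# partitions the model across GPUs (launch with: deepspeed --num_gpus <N> infer.py).
ds_engine = deepspeed.init_inference(
    model,
    mp_size=1,                        # model parallelism degree
    dtype=torch.half,                 # run inference in fp16
    replace_with_kernel_inject=True,  # use the inference-customized kernels
)

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(ds_engine.module.device)
outputs = ds_engine.module.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```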
177 changes: 177 additions & 0 deletions docs/_pages/features.md → docs/_pages/training.md
100755 → 100644
@@ -1,3 +1,180 @@
---
title: "Training Overview and Features"
layout: single
permalink: /training/
toc: true
toc_label: "Contents"
---

# Overview
Training advanced deep learning models is challenging. Beyond model design,
model scientists also need to set up state-of-the-art training techniques
such as distributed training, mixed precision, gradient accumulation, and
checkpointing. Even then, they may not achieve the desired system
performance and convergence rate. Large model sizes are even more challenging:
a large model easily runs out of memory with pure data parallelism and it is
difficult to use model parallelism. DeepSpeed addresses these challenges to
accelerate model development *and* training.

## Distributed, Effective, and Efficient Training with Ease
The DeepSpeed API is a lightweight wrapper on [PyTorch](https://pytorch.org/). This
means that you can use everything you love in PyTorch without having to learn a new
platform. In addition, DeepSpeed manages all of the boilerplate state-of-the-art
training techniques, such as distributed training, mixed precision, gradient
accumulation, and checkpointing, so that you can focus on your model development. Most
importantly, you can leverage the distinctive efficiency and effectiveness benefits of
DeepSpeed to boost speed and scale with just a few lines of code changes to your PyTorch
models, as the sketch below illustrates.
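
The typical pattern, shown here as a minimal sketch with a synthetic dataset and a placeholder `ds_config.json`, is to wrap the model with `deepspeed.initialize` and replace `loss.backward()`/`optimizer.step()` with the engine's methods; everything else stays plain PyTorch.

```python
import torch
import deepspeed

# Placeholder model and synthetic data; only the DeepSpeed calls matter here.
net = torch.nn.Sequential(torch.nn.Linear(1024, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))
dataset = torch.utils.data.TensorDataset(torch.randn(512, 1024), torch.randint(0, 10, (512,)))
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

# deepspeed.initialize returns an "engine" that layers distributed training,
# mixed precision, ZeRO, etc. on top of the model, driven by the (placeholder)
# JSON config. Launch with the deepspeed launcher, e.g.: deepspeed train.py
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=net,
    model_parameters=net.parameters(),
    config="ds_config.json",  # placeholder DeepSpeed config file
)

for inputs, labels in loader:
    inputs = inputs.to(model_engine.device)
    labels = labels.to(model_engine.device)
    loss = torch.nn.functional.cross_entropy(model_engine(inputs), labels)
    model_engine.backward(loss)  # replaces loss.backward()
    model_engine.step()          # replaces optimizer.step() (and zero_grad())
```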

## Speed
DeepSpeed achieves high performance and fast convergence through a combination of
efficiency optimizations on compute/communication/memory/IO and effectiveness
optimizations on advanced hyperparameter tuning and optimizers. For example:

* <span style="color:dodgerblue">DeepSpeed trains BERT-large to parity in 44
mins using 1024 V100 GPUs (64 DGX-2 boxes) and in 2.4 hours using 256 GPUs
(16 DGX-2 boxes).</span>

**BERT-large Training Times**

| Devices | Source | Training Time |
| -------------- | --------- | ---------------------:|
| 1024 V100 GPUs | DeepSpeed | **44** min|
| 256 V100 GPUs | DeepSpeed | **2.4** hr|
| 64 V100 GPUs | DeepSpeed | **8.68** hr|
| 16 V100 GPUs | DeepSpeed | **33.22** hr|

*BERT code and tutorials will be available soon.*

* DeepSpeed trains GPT-2 (1.5 billion parameters) 3.75x faster than the state-of-the-art
NVIDIA Megatron on Azure GPUs.

*Read more*: [GPT tutorial](/tutorials/megatron/)



## Memory efficiency
DeepSpeed provides memory-efficient data parallelism and enables training models without
model parallelism. For example, DeepSpeed can train models with up to 13 billion parameters on
a single GPU. In comparison, existing frameworks (e.g.,
PyTorch's Distributed Data Parallel) run out of memory with 1.4 billion parameter models.

DeepSpeed reduces the training memory footprint through a novel solution called Zero
Redundancy Optimizer (ZeRO). Unlike basic data parallelism where memory states are
replicated across data-parallel processes, ZeRO partitions model states and gradients to save
significant memory. Furthermore, it also reduces activation memory and fragmented memory.
The current implementation (ZeRO-2) reduces memory by up to
8x relative to the state of the art. You can read more about ZeRO in our [paper](https://arxiv.org/abs/1910.02054), and
in our blog posts related to
[ZeRO-1](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/) and [ZeRO-2](https://www.microsoft.com/en-us/research/blog/zero-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/).

With this impressive memory reduction, early adopters of DeepSpeed have already
produced a language model (LM) with over 17B parameters called
<a href="https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft">
<span style="color:dodgerblue">Turing-NLG</span></a>,
establishing a new SOTA in the LM category.

For model scientists with limited GPU resources, ZeRO-Offload leverages both CPU and GPU memory for training large models. Using a machine with **a single GPU**, our users can run **models of up to 13 billion parameters** without running out of memory, 10x bigger than the existing approaches, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the window for many deep learning practitioners to explore bigger and better models.
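
For reference, below is a minimal configuration sketch, written as a Python dict equivalent to the DeepSpeed JSON config, that enables ZeRO stage 2 with optimizer offloading to CPU memory (ZeRO-Offload). The batch size and fp16 choices are placeholders.

```python
# Placeholder values; the keys mirror the "zero_optimization" section of the
# DeepSpeed configuration JSON documented on the config page.
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                 # partition optimizer states and gradients
        "offload_optimizer": {      # ZeRO-Offload: keep optimizer states in CPU RAM
            "device": "cpu",
            "pin_memory": True,
        },
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}
# The dict can be passed directly, e.g. deepspeed.initialize(..., config=ds_config),
# or saved as ds_config.json and referenced from the launcher.
```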

## Scalability
DeepSpeed supports efficient data parallelism, model parallelism, pipeline parallelism and their
combinations, which we call 3D parallelism.
* <span style="color:dodgerblue">3D parallelism of DeepSpeed provides system support to run models with trillions of parameters, read more in our [press-release]({{ site.press_release_v3 }}) and [tutorial](/tutorials/pipeline).</span>
* <span style="color:dodgerblue">DeepSpeed can run large models more efficiently, up to 10x
faster for models with
various sizes spanning 1.5B to hundred billion.</span> More specifically, the data parallelism powered by ZeRO
is complementary and can be combined with different types of model parallelism. It allows
DeepSpeed to fit models using lower degree of model parallelism and higher batch size, offering
significant performance gains compared to using model parallelism alone.

*Read more*: [ZeRO paper](https://arxiv.org/abs/1910.02054),
and [GPT tutorial](/tutorials/megatron).

![DeepSpeed Speedup](/assets/images/deepspeed-speedup.png)
<p align="center">
<em>The figure depicts system throughput improvements of DeepSpeed (combining ZeRO-powered data parallelism with model parallelism of NVIDIA Megatron-LM) over using Megatron-LM alone.</em>
</p>
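
To give a feel for the pipeline dimension of the 3D combination described above, here is a minimal sketch using `deepspeed.pipe.PipelineModule` with a toy layer stack; the layer sizes, stage count, and config file are placeholders, and a real run is launched with the `deepspeed` launcher on at least as many GPUs as pipeline stages.

```python
import torch
import deepspeed
from deepspeed.pipe import PipelineModule

# Express the network as an ordered list of layers so DeepSpeed can split it
# into pipeline stages; ZeRO-powered data parallelism (and, optionally, tensor
# model parallelism) can be layered on top to form the 3D combination.
layers = [torch.nn.Linear(1024, 1024) for _ in range(8)]
model = PipelineModule(layers=layers, num_stages=2)  # 2 pipeline stages (placeholder)

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",  # placeholder config (batch sizes, ZeRO, etc.)
)
# Training then proceeds with the pipeline engine's batch API,
# e.g. engine.train_batch(data_iter), as shown in the pipeline tutorial.
```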

## Communication efficiency
DeepSpeed's pipeline parallelism reduces communication volume during distributed training, which allows users to train multi-billion-parameter models 2–7x faster on clusters with limited network bandwidth.
![Low-bandwidth GPT-2 Performance](/assets/images/pp-lowbw-gpt2.png)

1-bit Adam, 0/1 Adam, and 1-bit LAMB reduce communication volume by up to 26x while achieving convergence efficiency similar to Adam, allowing scaling to different types of GPU clusters and networks. *Read more*: [1-bit Adam blog post](https://www.deepspeed.ai/2020/09/08/onebit-adam-blog-post.html), [1-bit Adam tutorial](https://www.deepspeed.ai/tutorials/onebit-adam/), [0/1 Adam tutorial](https://www.deepspeed.ai/tutorials/zero-one-adam/), [1-bit LAMB tutorial](https://www.deepspeed.ai/tutorials/onebit-lamb/).
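
As an illustration, 1-bit Adam is enabled entirely through the optimizer section of the DeepSpeed config. The sketch below (Python-dict form of the JSON) uses placeholder hyperparameter values, with parameter names taken from the 1-bit Adam tutorial.

```python
# Placeholder hyperparameters; see the 1-bit Adam tutorial for tuned settings.
ds_config = {
    "train_micro_batch_size_per_gpu": 16,
    "fp16": {"enabled": True},
    "optimizer": {
        "type": "OneBitAdam",
        "params": {
            "lr": 2e-4,
            "freeze_step": 400,           # uncompressed warmup steps before 1-bit communication starts
            "cuda_aware": False,          # set True only with a CUDA-aware MPI stack
            "comm_backend_name": "nccl",  # backend used for the compressed allreduce
        },
    },
}
```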

## Supporting long sequence length
DeepSpeed offers sparse attention kernels—an instrumental technology to support long sequences of model inputs, whether for text, image, or sound. Compared with classic dense Transformers, it powers **an order-of-magnitude longer input sequence** and obtains up to 6x faster execution with comparable accuracy. It also outperforms state-of-the-art sparse implementations with 1.5–3x faster execution. Furthermore, our sparse kernels support efficient execution of flexible sparse formats and empower users to innovate on their custom sparse structures. [Read more here](https://www.deepspeed.ai/2020/09/08/sparse-attention.html).


## Fast convergence for effectiveness
DeepSpeed supports advanced hyperparameter tuning and large batch size
optimizers such as [LAMB](https://arxiv.org/abs/1904.00962). These improve the
effectiveness of model training and reduce the number of samples required to
converge to the desired accuracy.

*Read more*: [Tuning tutorial](/tutorials/one-cycle).


## Good Usability
Only a few lines of code changes are needed to enable a PyTorch model to use DeepSpeed and ZeRO. Compared to current model parallelism libraries, DeepSpeed does not require a code redesign or model refactoring. It also does not put limitations on model dimensions (such as number of attention heads, hidden sizes, and others), batch size, or any other training parameters. For models of up to 13 billion parameters, you can use ZeRO-powered data parallelism conveniently without requiring model parallelism, while in contrast, standard data parallelism will run out of memory for models with more than 1.4 billion parameters. In addition, DeepSpeed conveniently supports flexible combination of ZeRO-powered data parallelism with custom model parallelisms, such as tensor slicing of NVIDIA's Megatron-LM.


## Features

Below we provide a brief feature list; see our detailed [feature overview](https://www.deepspeed.ai/features/) for descriptions and usage.

* [Distributed Training with Mixed Precision](https://www.deepspeed.ai/features/#distributed-training-with-mixed-precision)
* 16-bit mixed precision
* Single-GPU/Multi-GPU/Multi-Node
* [Model Parallelism](https://www.deepspeed.ai/features/#model-parallelism)
* Support for Custom Model Parallelism
* Integration with Megatron-LM
* [Pipeline Parallelism](https://www.deepspeed.ai/tutorials/pipeline/)
* 3D Parallelism
* [The Zero Redundancy Optimizer](https://www.deepspeed.ai/tutorials/zero/)
* Optimizer State and Gradient Partitioning
* Activation Partitioning
* Constant Buffer Optimization
* Contiguous Memory Optimization
* [ZeRO-Offload](https://www.deepspeed.ai/tutorials/zero-offload/)
* Leverage both CPU/GPU memory for model training
* Support 10B model training on a single GPU
* [Ultra-fast dense transformer kernels](https://www.deepspeed.ai/2020/05/18/bert-record.html)
* [Sparse attention](https://www.deepspeed.ai/2020/09/08/sparse-attention-news.html)
* Memory- and compute-efficient sparse kernels
* Support 10x longer sequences than dense
* Flexible support for different sparse structures
* [1-bit Adam](https://www.deepspeed.ai/2020/09/08/onebit-adam-blog-post.html), [0/1 Adam](https://www.deepspeed.ai/tutorials/zero-one-adam/) and [1-bit LAMB](https://www.deepspeed.ai/tutorials/onebit-lamb/)
* Custom communication collective
* Up to 26x communication volume saving
* [Additional Memory and Bandwidth Optimizations](https://www.deepspeed.ai/features/#additional-memory-and-bandwidth-optimizations)
* Smart Gradient Accumulation
* Communication/Computation Overlap
* [Training Features](https://www.deepspeed.ai/features/#training-features)
* Simplified training API
* Gradient Clipping
* Automatic loss scaling with mixed precision
* [Training Optimizers](https://www.deepspeed.ai/features/#training-optimizers)
* Fused Adam optimizer and arbitrary `torch.optim.Optimizer`
* Memory bandwidth optimized FP16 Optimizer
* Large Batch Training with LAMB Optimizer
* Memory efficient Training with ZeRO Optimizer
* CPU-Adam
* [Training Agnostic Checkpointing](https://www.deepspeed.ai/features/#training-agnostic-checkpointing)
* [Advanced Parameter Search](https://www.deepspeed.ai/features/#advanced-parameter-search)
* Learning Rate Range Test
* 1Cycle Learning Rate Schedule
* [Simplified Data Loader](https://www.deepspeed.ai/features/#simplified-data-loader)
* [Curriculum Learning](https://www.deepspeed.ai/tutorials/curriculum-learning/)
* A curriculum learning-based data pipeline that presents easier or simpler examples earlier during training
* Stable and 3.3x faster GPT-2 pre-training with 8x/4x larger batch size/learning rate while maintaining token-wise convergence speed
* Complementary to many other DeepSpeed features
* [Progressive Layer Dropping](https://www.deepspeed.ai/2020/10/28/progressive-layer-dropping-news.html)
* Efficient and robust compressed training
* Up to 2.5x convergence speedup for pre-training
* [Performance Analysis and Debugging](https://www.deepspeed.ai/features/#performance-analysis-and-debugging)
* [Mixture of Experts (MoE)](https://www.deepspeed.ai/tutorials/mixture-of-experts/)


---
title: "Feature Overview"
layout: single
Binary file added docs/assets/images/3pillars.png
Binary file added docs/assets/images/accelerate-dark.png
Binary file added docs/assets/images/accelerate-light.png
Binary file added docs/assets/images/accelerate.png
Binary file added docs/assets/images/hf-logo.png
Binary file added docs/assets/images/hf-transformers.png
Binary file added docs/assets/images/lightning-dark.png