update documentation for 3x API #1923

Merged: 17 commits, Jul 16, 2024
52 changes: 28 additions & 24 deletions README.md
@@ -116,54 +116,58 @@ quantized_model = fit(model=float_model, conf=static_quant_conf, calib_dataloade
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="./docs/source/design.md#architecture">Architecture</a></td>
<td colspan="2" align="center"><a href="./docs/source/design.md#workflow">Workflow</a></td>
<td colspan="1" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
<td colspan="1" align="center"><a href="./docs/source/llm_recipes.md">LLMs Recipes</a></td>
<td colspan="2" align="center"><a href="examples/README.md">Examples</a></td>
<td colspan="2" align="center"><a href="./docs/3x/design.md#architecture">Architecture</a></td>
<td colspan="2" align="center"><a href="./docs/3x/design.md#workflow">Workflow</a></td>
<td colspan="2" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
<td colspan="1" align="center"><a href="./docs/3x/llm_recipes.md">LLMs Recipes</a></td>
<td colspan="1" align="center">Examples</td>
</tr>
</tbody>
<thead>
<tr>
<th colspan="8">Python-based APIs</th>
<th colspan="8">PyTorch Extension APIs</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="./docs/source/quantization.md">Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/source/mixed_precision.md">Advanced Mixed Precision</a></td>
<td colspan="2" align="center"><a href="./docs/source/pruning.md">Pruning (Sparsity)</a></td>
<td colspan="2" align="center"><a href="./docs/source/distillation.md">Distillation</a></td>
<td colspan="2" align="center"><a href="./docs/3x/PyTorch.md">Overview</a></td>
<td colspan="2" align="center"><a href="./docs/3x/PT_StaticQuant.md">Static Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/3x/PT_DynamicQuant.md">Dynamic Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/3x/PT_SmoothQuant.md">Smooth Quantization</a></td>
</tr>
<tr>
<td colspan="2" align="center"><a href="./docs/source/orchestration.md">Orchestration</a></td>
<td colspan="2" align="center"><a href="./docs/source/benchmark.md">Benchmarking</a></td>
<td colspan="2" align="center"><a href="./docs/source/distributed.md">Distributed Compression</a></td>
<td colspan="2" align="center"><a href="./docs/source/export.md">Model Export</a></td>
<td colspan="4" align="center"><a href="./docs/3x/PT_WeightOnlyQuant.md">Weight-Only Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/3x/PT_MXQuant.md">MX Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/3x/PT_MixedPrecision.md">Mixed Precision</a></td>
</tr>
</tbody>
<thead>
<tr>
<th colspan="8">Advanced Topics</th>
<th colspan="8">Tensorflow Extension APIs</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="./docs/source/adaptor.md">Adaptor</a></td>
<td colspan="2" align="center"><a href="./docs/source/tuning_strategies.md">Strategy</a></td>
<td colspan="2" align="center"><a href="./docs/source/distillation_quantization.md">Distillation for Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/source/smooth_quant.md">SmoothQuant</td>
<td colspan="3" align="center"><a href="./docs/3x/TensorFlow.md">Overview</a></td>
<td colspan="3" align="center"><a href="./docs/3x/TF_Quant.md">Static Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/3x/TF_SQ.md">Smooth Quantization</a></td>
</tr>
</tbody>
<thead>
<tr>
<td colspan="4" align="center"><a href="./docs/source/quantization_weight_only.md">Weight-Only Quantization (INT8/INT4/FP4/NF4) </td>
<td colspan="2" align="center"><a href="https://github.com/intel/neural-compressor/blob/fp8_adaptor/docs/source/fp8.md">FP8 Quantization </td>
<td colspan="2" align="center"><a href="./docs/source/quantization_layer_wise.md">Layer-Wise Quantization </td>
<th colspan="8">Other Modules</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="4" align="center"><a href="./docs/3x/autotune.md">Auto Tune</a></td>
<td colspan="4" align="center"><a href="./docs/3x/benchmark.md">Benchmark</a></td>
</tr>
</tbody>
</table>

-> **Note**:
-> Further documentations can be found at [User Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/user_guide.md).
+> **Note**:
+> Starting from the 3.0 release, we recommend using the 3.X API. Training-time compression techniques such as QAT, pruning, and distillation are currently available only in the [2.X API](https://github.com/intel/neural-compressor/blob/master/docs/source/2x_user_guide.md).

## Selected Publications/Events
* Blog by Intel: [Neural Compressor: Boosting AI Model Efficiency](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Neural-Compressor-Boosting-AI-Model-Efficiency/post/1604740) (June 2024)
File renamed without changes.
15 changes: 15 additions & 0 deletions docs/3x/PyTorch.md
@@ -194,6 +194,21 @@ def load(output_dir="./saved_results", model=None):
<td class="tg-9wq8">&#10004</td>
<td class="tg-9wq8"><a href="PT_DynamicQuant.md">link</a></td>
</tr>
+<tr>
+<td class="tg-9wq8">MX Quantization</td>
+<td class="tg-9wq8"><a href=https://arxiv.org/pdf/2310.10537>Microscaling Data Formats for Deep Learning</a></td>
+<td class="tg-9wq8">PyTorch eager mode</td>
+<td class="tg-9wq8">&#10004</td>
+<td class="tg-9wq8"><a href="PT_MXQuant.md">link</a></td>
+</tr>
+<tr>
+<td class="tg-9wq8">Mixed Precision</td>
+<td class="tg-9wq8"><a href=https://arxiv.org/abs/1710.03740>Mixed precision</a></td>
+<td class="tg-9wq8">PyTorch eager mode</td>
+<td class="tg-9wq8">&#10004</td>
+<td class="tg-9wq8"><a href="PT_MixedPrecision.md">link</a></td>
+</tr>
<tr>
<td class="tg-9wq8">Quantization Aware Training</td>
<td class="tg-9wq8"><a href=https://pytorch.org/docs/master/quantization.html#quantization-aware-training-for-static-quantization>Quantization Aware Training</a></td>
16 changes: 16 additions & 0 deletions docs/3x/design.md
@@ -0,0 +1,16 @@
Design
=====

## Architecture

<a target="_blank" href="imgs/architecture.png">
<img src="imgs/architecture.png" alt="Architecture">
</a>

## Workflow

Intel® Neural Compressor provides two workflows: Quantization and Auto-tune.

<a target="_blank" href="imgs/workflow.png">
<img src="imgs/workflow.png" alt="Workflow">
</a>
88 changes: 88 additions & 0 deletions docs/3x/get_started.md
@@ -0,0 +1,88 @@
# Getting Started

1. [Quick Samples](#quick-samples)

2. [Feature Matrix](#feature-matrix)

## Quick Samples

```shell
# Install Intel Neural Compressor
pip install neural-compressor-pt
```
```python
from transformers import AutoModelForCausalLM
from neural_compressor.torch.quantization import RTNConfig, prepare, convert

user_model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
quant_config = RTNConfig()
prepared_model = prepare(model=user_model, quant_config=quant_config)
quantized_model = convert(model=prepared_model)
```
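
The quantized model returned by `convert` is still a standard PyTorch module, so it can be used for inference as usual. Below is a minimal sketch; the tokenizer, prompt, and generation arguments are illustrative assumptions, not part of this PR:

```python
from transformers import AutoTokenizer

# Continue from the snippet above: run the RTN-quantized model.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = quantized_model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```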

## Feature Matrix
Intel Neural Compressor 3.X extends the PyTorch and TensorFlow APIs to support compression techniques.
The table below provides a quick overview of the APIs available in Intel Neural Compressor 3.X.
Intel Neural Compressor 3.X mainly focuses on quantization-related features, especially algorithms that benefit LLM accuracy and inference.
It also provides common modules shared across frameworks: for example, Auto-tune supports accuracy-driven quantization and mixed precision, and Benchmark measures the performance of a quantized model across multiple instances.
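
To make the Auto-tune module concrete, here is a minimal sketch of accuracy-driven tuning. The `eval_fn` body and the candidate `bits` values are illustrative assumptions, and `evaluate_accuracy` is a placeholder for a user-defined metric, not an Intel Neural Compressor API:

```python
from neural_compressor.torch.quantization import RTNConfig, TuningConfig, autotune

# User-supplied metric: returns a scalar score, higher is better.
def eval_fn(model) -> float:
    return evaluate_accuracy(model)  # placeholder for your own evaluation logic

# Search over 4-bit and 8-bit RTN candidates; autotune quantizes with each
# candidate in turn and returns the first model that meets the tuning criterion.
tune_config = TuningConfig(config_set=RTNConfig(bits=[4, 8]))
best_model = autotune(model=user_model, tune_config=tune_config, eval_fn=eval_fn)
```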

<table class="docutils">
<thead>
<tr>
<th colspan="8">Overview</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="design.md#architecture">Architecture</a></td>
<td colspan="2" align="center"><a href="design.md#workflow">Workflow</a></td>
<td colspan="2" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
<td colspan="1" align="center"><a href="llm_recipes.md">LLMs Recipes</a></td>
<td colspan="1" align="center">Examples</td>
</tr>
</tbody>
<thead>
<tr>
<th colspan="8">PyTorch Extension APIs</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="PyTorch.md">Overview</a></td>
<td colspan="2" align="center"><a href="PT_StaticQuant.md">Static Quantization</a></td>
<td colspan="2" align="center"><a href="PT_DynamicQuant.md">Dynamic Quantization</a></td>
<td colspan="2" align="center"><a href="PT_SmoothQuant.md">Smooth Quantization</a></td>
</tr>
<tr>
<td colspan="3" align="center"><a href="PT_WeightOnlyQuant.md">Weight-Only Quantization</a></td>
<td colspan="3" align="center"><a href="PT_MXQuant.md">MX Quantization</a></td>
<td colspan="2" align="center"><a href="PT_MixedPrecision.md">Mixed Precision</a></td>
</tr>
</tbody>
<thead>
<tr>
<th colspan="8">Tensorflow Extension APIs</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="3" align="center"><a href="TensorFlow.md">Overview</a></td>
<td colspan="3" align="center"><a href="TF_Quant.md">Static Quantization</a></td>
<td colspan="2" align="center"><a href="TF_SQ.md">Smooth Quantization</a></td>
</tr>
</tbody>
<thead>
<tr>
<th colspan="8">Other Modules</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="4" align="center"><a href="autotune.md">Auto Tune</a></td>
<td colspan="4" align="center"><a href="benchmark.md">Benchmark</a></td>
</tr>
</tbody>
</table>

> **Note**:
> Starting from the 3.0 release, we recommend using the 3.X API. Training-time compression techniques such as QAT, pruning, and distillation are currently available only in the [2.X API](https://github.com/intel/neural-compressor/blob/master/docs/source/2x_user_guide.md).
File renamed without changes
Empty file added docs/3x/llm_recipes.md
6 changes: 3 additions & 3 deletions docs/source/user_guide.md → docs/source/2x_user_guide.md
@@ -1,10 +1,10 @@
-User Guide
+2.X API User Guide
===========================

Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search to help users optimize their models. The documents below can help you get familiar with the concepts and modules in Intel® Neural Compressor and learn how to use its APIs to conduct quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks.

## Overview
-This part helps user to get a quick understand about design structure and workflow of Intel® Neural Compressor. We provided broad examples to help users get started.
+This part helps users quickly understand the design structure and workflow of the Intel® Neural Compressor 2.X API. We provide broad examples to help users get started.
<table class="docutils">
<tbody>
<tr>
@@ -53,7 +53,7 @@ In 2.X API, it's very important to create the `DataLoader` and `Metrics` for you
</table>

## Advanced Topics
-This part provides the advanced topics that help user dive deep into Intel® Neural Compressor.
+This part provides advanced topics that help users dive deep into the Intel® Neural Compressor 2.X API.
<table class="docutils">
<tbody>
<tr>
86 changes: 0 additions & 86 deletions docs/source/NAS.md

This file was deleted.

Binary file removed docs/source/imgs/dynas.png
Binary file removed docs/source/imgs/release_data.png
Binary file removed docs/source/imgs/tensorboard_tune_1_v0_cg_conv0.png
Binary file removed docs/source/imgs/terminal-ops.jpg
Binary file removed docs/source/imgs/terminal-profiling.jpg
Binary file removed docs/source/imgs/terminal-weights.jpg
Binary file removed docs/source/imgs/tutorial.png
Binary file removed docs/source/imgs/workflow.jpg
13 changes: 0 additions & 13 deletions docs/source/infrastructure.md
@@ -182,19 +182,6 @@ Intel® Neural Compressor has unified interfaces which dispatch tasks to differe
</table>


-</br>
-</br>
-
-[Neural architecture search](NAS.md):
-|Approach |Framework |
-|------------------------------------------------|:-----------:|
-|Basic |PyTorch |
-|DyNas |PyTorch |
-
-</br>
-</br>
-
-
[Mixed precision](mixed_precision.md):
|Framework | |
|--------------|:-----------:|
4 changes: 2 additions & 2 deletions docs/source/installation_guide.md
@@ -59,8 +59,8 @@ The following prerequisites and requirements must be satisfied for a successful
cd neural-compressor
pip install -r requirements.txt
python setup.py install
-[optional] pip install requirements_pt.txt # for PyTorch framework extension API
-[optional] pip install requirements_tf.txt # for TensorFlow framework extension API
+[optional] pip install -r requirements_pt.txt # for PyTorch framework extension API
+[optional] pip install -r requirements_tf.txt # for TensorFlow framework extension API
```

### Install from AI Kit