README.md (6 additions, 6 deletions)
@@ -31,7 +31,7 @@ In particular, the tool provides the key features, typical examples, and open co
 ```Shell
 pip install neural-compressor
 ```
-> More installation methods can be found at [Installation Guide](./docs/source/installation_guide.md). Please check out our [FAQ](./docs/source/faq.md) for more details.
+> More installation methods can be found at [Installation Guide](/docs/source/installation_guide.md). Please check out our [FAQ](/docs/source/faq.md) for more details.
 
 ## Getting Started
 ### Quantization with Python API
@@ -137,7 +137,7 @@ q_model = fit(
 </tbody>
 </table>
 
-> More documentations can be found at [User Guide](./docs/source/user_guide.md).
+> More documentations can be found at [User Guide](/docs/source/user_guide.md).
 
 ## Selected Publications/Events
 * Blog by Intel: [Effective Weight-Only Quantization for Large Language Models with Intel® Neural Compressor](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Effective-Weight-Only-Quantization-for-Large-Language-Models/post/1529552) (Oct 2023)
@@ -147,13 +147,13 @@ q_model = fit(
 * NeurIPS'2022: [Fast Distilbert on CPUs](https://arxiv.org/abs/2211.07715) (Oct 2022)
 * NeurIPS'2022: [QuaLA-MiniLM: a Quantized Length Adaptive MiniLM](https://arxiv.org/abs/2210.17114) (Oct 2022)
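The hunk headers above surface the tail of the README's `q_model = fit(` example under "Quantization with Python API". For context, a minimal sketch of that post-training quantization flow, assuming the neural-compressor 2.x Python API and using an illustrative dummy dataset and model path (neither taken from this diff), could look like:

```python
# Minimal sketch of the "Quantization with Python API" flow referenced by the
# hunk headers above; assumes the neural-compressor 2.x API. The dummy dataset
# and the MobileNet frozen-graph path are illustrative placeholders.
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Synthetic calibration data standing in for a real dataloader.
dataset = Datasets("tensorflow")["dummy"](shape=(1, 224, 224, 3))
dataloader = DataLoader(framework="tensorflow", dataset=dataset)

q_model = fit(
    model="./mobilenet_v1_1.0_224_frozen.pb",  # hypothetical model path
    conf=PostTrainingQuantConfig(),
    calib_dataloader=dataloader,
)
```

If this matches the installed 2.x release, `fit` returns the quantized model object that the README's later sections build on.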