For example, you can run the command `rm -rf /tmp/torch_extensions/quant_cuda /tmp/torch_extensions/quant_cpu` if
you are using the default directory for PyTorch extensions.

# Overview
QPyTorch is a low-precision arithmetic simulation package in
PyTorch. It is designed to support research on low-precision machine
learning, especially research in low-precision training.
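
To give a concrete sense of what the simulation looks like in practice, here is a minimal sketch that quantizes a single-precision tensor into two example low-precision formats. It assumes the `fixed_point_quantize` and `float_quantize` helpers from `qtorch.quant` with the keyword arguments shown; the exact signatures may differ across versions, so check the installed release.

```python
import torch
from qtorch.quant import fixed_point_quantize, float_quantize  # assumed qtorch API

# a full-precision (float32) tensor whose values we want to simulate in low precision
x = torch.rand(4, 4)

# fixed point: 8-bit word length, 4 fractional bits, deterministic nearest rounding
x_fixed = fixed_point_quantize(x, wl=8, fl=4, rounding="nearest")

# low-precision float: 5 exponent bits, 2 mantissa bits, stochastic rounding
x_lowfp = float_quantize(x, exp=5, man=2, rounding="stochastic")

# the results are still float32 tensors; only their values are restricted
# to what the target formats can represent
print(x_fixed.dtype, x_lowfp.dtype)
```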

We provide an example replication of [WAGE](https://arxiv.org/abs/1802.04680) in a downstream
repo [WAGE](https://github.com/Tiiiger/QPyTorch/blob/master/examples/WAGE). We also provide a list
of working examples under [Examples](#examples).

A more comprehensive write-up can be found [here](https://arxiv.org/abs/1910.04540).

*Note*: QPyTorch relies on PyTorch functions for the underlying computation,
such as matrix multiplication. This means that the actual computation is done in
single precision. Therefore, QPyTorch is not intended to be used to study the
numerical behavior of different **accumulation** strategies.

*Note*: QPyTorch's rounding of half-way values can differ from PyTorch's;
PyTorch does round-to-nearest-even. This will create a discrepancy between the PyTorch half-precision tensor
and QPyTorch's simulation of half-precision numbers.
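
For instance, one way to observe such a discrepancy is sketched below (assuming `float_quantize` with `exp=5, man=10` approximates IEEE half precision; whether and where the outputs differ depends on the rounding rule of the installed version):

```python
import torch
from qtorch.quant import float_quantize  # assumed qtorch API

x = torch.rand(10000)

# PyTorch's native half-precision cast (round-to-nearest-even), lifted back to float32
native_half = x.half().float()

# QPyTorch's simulated half precision: 5 exponent bits, 10 mantissa bits
simulated_half = float_quantize(x, exp=5, man=10, rounding="nearest")

# any nonzero difference comes from values that the two rounding rules treat differently
print((native_half - simulated_half).abs().max())
```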

If you find this repo useful, please cite:
```
@misc{zhang2019qpytorch,
    title={QPyTorch: A Low-Precision Arithmetic Simulation Framework},
    author={Tianyi Zhang and Zhiqiu Lin and Guandao Yang and Christopher De Sa},
    year={2019},
    eprint={1910.04540},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

## Installation
requirements: