* Updated documentation for Donut model
* Update docs/source/en/model_doc/donut.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/donut.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/donut.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/donut.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Updated code suggestions
* Update docs/source/en/model_doc/donut.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Updated code suggestion to Align with the AutoModel example
* Update docs/source/en/model_doc/donut.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Updated notes section included code examples
* close hfoption block and indent
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
@@ -13,180 +13,191 @@ rendered properly in your Markdown viewer.
specific language governing permissions and limitations under the License. -->

-# Donut
-
-## Overview
-
-The Donut model was proposed in [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by
-Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
-Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding
-tasks such as document image classification, form understanding and visual question answering.
-
-The abstract from the paper is the following:
-
-*Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains.*
-
-<small> Donut high-level overview. Taken from the <a href="https://arxiv.org/abs/2111.15664">original paper</a>. </small>
-
-This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found
-[here](https://github.com/clovaai/donut).

+# Donut

-## Usage tips
+[Donut (Document Understanding Transformer)](https://huggingface.co/papers/2111.15664) is a visual document understanding model that doesn't require an Optical Character Recognition (OCR) engine. Unlike traditional approaches that extract text using OCR before processing, Donut employs an end-to-end Transformer-based architecture to directly analyze document images. This eliminates OCR-related inefficiencies, making it more accurate and adaptable to diverse languages and formats.

-- The quickest way to get started with Donut is by checking the [tutorial
-notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Donut), which show how to use the model
-at inference time as well as fine-tuning on custom data.
-- Donut is always used within the [VisionEncoderDecoder](vision-encoder-decoder) framework.
+Donut features a vision encoder ([Swin](./swin)) and a text decoder ([BART](./bart)). Swin converts document images into embeddings and BART processes them into meaningful text sequences.
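
As a quick illustration of that split (a sketch, not part of this commit; the checkpoint name is only an example), you can load a Donut checkpoint and inspect its two halves:

```py
# Sketch: inspect the encoder/decoder composition of a Donut checkpoint.
# "naver-clova-ix/donut-base" is an example checkpoint, not taken from this diff.
from transformers import VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
print(type(model.encoder).__name__)  # Donut's Swin-style image encoder
print(type(model.decoder).__name__)  # BART-family autoregressive text decoder
```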

-## Inference examples
+You can find all the original Donut checkpoints under the [Naver Clova Information Extraction](https://huggingface.co/naver-clova-ix) organization.

-Donut's [`VisionEncoderDecoder`] model accepts images as input and makes use of
-[`~generation.GenerationMixin.generate`] to autoregressively generate text given the input image.
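
A minimal sketch of that flow (not part of this commit; the checkpoint, image URL, and task prompt are example values):

```py
# Sketch: image in, autoregressively generated text out, via generate().
from PIL import Image
import requests
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

url = "https://huggingface.co/datasets/hf-internal-testing/example-documents/resolve/main/jpeg_images/2.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task prompt seeds the decoder; generation continues from it token by token.
prompt = "<s_docvqa><s_question>When is the coffee break?</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```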

+> [!TIP]
+> Click on the Donut models in the right sidebar for more examples of how to apply Donut to different language and vision tasks.

-The [`DonutImageProcessor`] class is responsible for preprocessing the input image and
-[`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`] decodes the generated target tokens to the target string. The
-[`DonutProcessor`] wraps [`DonutImageProcessor`] and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`]
-into a single instance to both extract the input features and decode the predicted token ids.
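
A small sketch of how that wrapping looks in practice (not from this commit; the checkpoint and the decoded sequence are example values):

```py
# Sketch: DonutProcessor bundles the image processor and the XLM-RoBERTa tokenizer.
from transformers import DonutProcessor

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
print(type(processor.image_processor).__name__)  # handles images -> pixel_values
print(type(processor.tokenizer).__name__)        # handles token ids <-> strings

# Donut's task output is a tag sequence; token2json turns it into a dict.
sequence = "<s_question>When is the coffee break?</s_question><s_answer>11-14 to 11:39 a.m.</s_answer>"
print(processor.token2json(sequence))  # {'question': ..., 'answer': ...}
```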

+The examples below demonstrate how to perform document understanding tasks using Donut with [`Pipeline`] and [`AutoModel`].
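
For illustration only (a sketch rather than the commit's own example; the checkpoint and image URL are assumptions), the [`Pipeline`] route for document question answering looks roughly like this:

```py
# Sketch: document question answering with the high-level pipeline API.
import torch
from transformers import pipeline

doc_qa = pipeline(
    task="document-question-answering",
    model="naver-clova-ix/donut-base-finetuned-docvqa",
    torch_dtype=torch.float16,
)
doc_qa(
    image="https://huggingface.co/datasets/hf-internal-testing/example-documents/resolve/main/jpeg_images/2.jpg",
    question="When is the coffee break?",
)
```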
{'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'}
-```
+Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

-See the [model hub](https://huggingface.co/models?filter=donut) to look for Donut checkpoints.
+The example below uses [torchao](../quantization/torchao) to only quantize the weights to int4.

-## Training
+```py
+# pip install datasets torchao
+import torch
+from datasets import load_dataset
+from transformers import TorchAoConfig, AutoProcessor, AutoModelForVision2Seq
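
The diff is cut off after these imports. A minimal sketch of how an int4 weight-only load typically continues (the checkpoint name, `group_size`, and dtype are assumptions, not part of the commit):

```py
# Sketch only: plausible continuation of the truncated example above.
import torch
from transformers import TorchAoConfig, AutoProcessor, AutoModelForVision2Seq

quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
processor = AutoProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = AutoModelForVision2Seq.from_pretrained(
    "naver-clova-ix/donut-base-finetuned-docvqa",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
)
```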