The Gemma 3 model was proposed in the [Gemma 3 Technical Report](https://goo.gle/Gemma3Report) by Google. It is a vision-language model composed of a [SigLIP](siglip) vision encoder and a [Gemma 2](gemma_2) language decoder, linked by a multimodal linear projection. It encodes an image into a fixed number of tokens, in the same way as SigLIP, as long as the image does not exceed a certain aspect ratio. For images that exceed this aspect ratio, it crops the image into multiple smaller patches and concatenates them with the base image embedding. One particularity is that the model uses bidirectional attention on all the image tokens. In addition, the model interleaves sliding window local attention with full causal attention in the language backbone, where every sixth layer is a full causal attention layer.
[Gemma 3](https://goo.gle/Gemma3Report) is a multimodal model with pretrained and instruction-tuned variants, available in 1B, 4B, 12B, and 27B parameters. The architecture is mostly the same as the previous Gemma versions. The key differences are alternating 5 local sliding window self-attention layers for every global self-attention layer, support for a longer context length of 128K tokens, and a [SigLIP](./siglip) encoder that can "pan & scan" images to prevent information from being lost in high resolution images or images with non-square aspect ratios.
This model was contributed by [Ryan Mullins](https://huggingface.co/RyanMullins), [Raushan Turganbay](https://huggingface.co/RaushanTurganbay), [Arthur Zucker](https://huggingface.co/ArthurZ), and [Pedro Cuenca](https://huggingface.co/pcuenq).
The instruction-tuned variant was post-trained with knowledge distillation and reinforcement learning.
You can find all the original Gemma 3 checkpoints under the [Gemma 3](https://huggingface.co/collections/meta-llama/llama-2-family-661da1f90a9d678b6f55773b) release.
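
These architectural choices show up in each checkpoint's configuration. As a minimal sketch (using `google/gemma-3-4b-it` purely as an example checkpoint), you can print the vision and text sub-configurations, which contain settings such as the sliding window size and the maximum context length.

```py
from transformers import AutoConfig

# example multimodal checkpoint; any Gemma 3 vision-language checkpoint works
config = AutoConfig.from_pretrained("google/gemma-3-4b-it")

print(config.vision_config)  # SigLIP vision encoder settings
print(config.text_config)    # Gemma 3 language decoder settings (sliding window size, context length, ...)
```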
> [!TIP]
> Click on the Gemma 3 models in the right sidebar for more examples of how to apply Gemma to different vision and language tasks.
The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class.
<hfoptions id="usage">
<hfoptionid="Pipeline">

```py
import torch
from transformers import pipeline

# "google/gemma-3-4b-it" is used here as an example checkpoint
pipeline = pipeline("image-text-to-text", model="google/gemma-3-4b-it", device=0, torch_dtype=torch.bfloat16)
pipeline(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
    text="<start_of_image> What is shown in this image?"
)
```

</hfoption>
<hfoptionid="AutoModel">

```py
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

# "google/gemma-3-4b-it" is used here as an example checkpoint
model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a helpful assistant."}
        ]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": url},
            {"type": "text", "text": "What is shown in this image?"},
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```

</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
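
For example, the sketch below loads the model in 4-bit with bitsandbytes; the checkpoint name is only an example, and quantization is most useful for the larger checkpoints.

```py
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Gemma3ForConditionalGeneration

# 4-bit NF4 quantization with the bitsandbytes backend
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",  # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")
```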
Use the [AttentionMaskVisualizer](https://github.com/huggingface/transformers/blob/beb9b5b02246b9b7ee81ddf938f93f44cfeaad19/src/transformers/utils/attention_visualizer.py#L139) to better understand what tokens the model can and cannot attend to.

```py
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

# example checkpoint; any Gemma 3 checkpoint works
visualizer = AttentionMaskVisualizer("google/gemma-3-1b-pt")
visualizer("Plants create energy through a process known as")
```

## Notes
- Use [`Gemma3ForConditionalGeneration`] for image-and-text and image-only inputs.
- Gemma 3 supports multiple input images, but make sure the images are correctly batched before passing them to the processor. Each batch should be a list of one or more images.
{"type": "text", "text": "You are a helpful assistant."}
181
+
]
182
+
},
183
+
{
184
+
"role": "user",
185
+
"content": [
186
+
{"type": "image", "url": url_cow},
187
+
{"type": "image", "url": url_cat},
188
+
{"type": "text", "text": "Which image is cuter?"},
189
+
]
190
+
},
191
+
]
192
+
```
- Text passed to the processor should have a `<start_of_image>` token wherever an image should be inserted.
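
    As a minimal sketch (the checkpoint name and image URL below are only examples), the token can be placed directly in the text passed to the processor:

    ```py
    import requests
    from PIL import Image
    from transformers import AutoProcessor

    processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")  # example checkpoint

    url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
    image = Image.open(requests.get(url, stream=True).raw)

    # one <start_of_image> token per image in the prompt
    prompt = "<start_of_image> What is shown in this image?"
    inputs = processor(images=image, text=prompt, return_tensors="pt")
    ```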
- The processor has its own [`~ProcessorMixin.apply_chat_template`] method to convert chat messages to model inputs.
- By default, images aren't cropped and only the base image is forwarded to the model. In high resolution images or images with non-square aspect ratios, artifacts can result because the vision encoder uses a fixed resolution of 896x896. To prevent these artifacts and improve performance during inference, set `do_pan_and_scan=True` to crop the image into multiple smaller patches and concatenate them with the base image embedding. Pan and scan is an inference-time optimization that is especially helpful for tasks that need higher-resolution inputs, such as document understanding, infographics, and OCR; leave it disabled for faster inference.

    ```diff
    inputs = processor.apply_chat_template(
        messages,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
        add_generation_prompt=True,
    +   do_pan_and_scan=True,
    ).to("cuda")
    ```
- For text-only inputs, use [`AutoModelForCausalLM`] instead to skip loading the vision components and save resources. The multimodal [`Gemma3ForConditionalGeneration`] can also generate from text-only prompts if you simply omit the images.

    ```py
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(
        "google/gemma-3-1b-pt",
    )
    model = AutoModelForCausalLM.from_pretrained(
        "google/gemma-3-1b-pt",
        torch_dtype=torch.bfloat16,
        device_map="auto",
        attn_implementation="sdpa"
    )
    input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to("cuda")

    # generate and decode the completion
    output = model.generate(**input_ids, max_new_tokens=50)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```