
Qwen3 thinking flag is flipped #38871

Closed

Description

@rasbt

System Info

import transformers
transformers.__version__
'4.52.4'

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Initialize the Qwen3 tokenizer and apply the chat template with and without thinking enabled, as shown below.

Expected behavior

It looks like enable_thinking=True removes the <think></think> tokens, but shouldn't it be the other way around?

I.e.,

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt},
]

token_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    enable_thinking=True,
)
print(tokenizer.decode(token_ids))

does not have any think tokens:

[Screenshot: decoded output contains no <think></think> tokens]

However, enable_thinking=False adds thinking tokens:

[Screenshot: decoded output includes <think></think> tokens]
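
For comparison, here is the same call with the flag flipped. This is a minimal sketch mirroring the snippet above; only enable_thinking differs:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."},
]

# Same call as above, but with enable_thinking=False; decoding these
# token ids is what yields the <think></think> tokens in the output.
token_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    enable_thinking=False,
)
print(tokenizer.decode(token_ids))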
