Conversation

@ochafik ochafik commented Dec 24, 2025

TL;DR: it's a lot, but there's a lot more testing than before.

Building on the PEG parser infrastructure introduced in #17136 by @aldehir, this is an experimental effort to migrate all chat template formats to the unified PEG approach.

Why migrate? The current monolithic common/chat.cpp has grown to ~25 ad-hoc parser implementations that are difficult to maintain. Lots of parsing bugs are hard to reproduce and diagnose (esp. if the user wasn't in --verbose mode).

The PEG infrastructure offers a cleaner path forward, w/ strong guarantees (modulo bugs) that whatever the grammar allows the model to generate is also parseable.

How to Test

# Add the branch as a remote
git remote add ochafik https://github.com/ochafik/llama.cpp.git
git fetch ochafik peg-migration-complete
git checkout ochafik/peg-migration-complete

# Build
cmake -B build && cmake --build build -j

# Run llama-server w/ and w/o experimental new parsers
./build/bin/llama-server --experimental-new-parsers --jinja -m your-model.gguf

# Run parser tests (new & legacy)
./build/bin/test-chat

Changes:

  • common/chat-parsers/*.cpp - 28 modular parser implementations
  • Feature flag --experimental-new-parsers - defaults to off, nothing changes by default
    • Kept old codepaths more or less intact to avoid regressions
  • PEG Infrastructure Changes to make it easier to write many PEG parsers (cc/ @aldehir - happy to revert if you disagree with any of these)
    • Enum-based tags - Replaced string AST tags with integer tag IDs for type safety and faster dispatch
    • Lambda mappers - Mapper functions defined as lambdas to reduce boilerplate
    • Shared tag enums - Many parsers now share common tags (CONTENT, REASONING, TOOL_CALL, etc.); a rough sketch of the tag/mapper pattern follows this list
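
For illustration only, a minimal self-contained sketch of the enum-tag / lambda-mapper idea; this is not the actual common_peg_parser_builder API, and every name below is hypothetical:

#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch, not llama.cpp code: shared integer tags instead of string AST tags.
enum chat_tag : int { CONTENT = 1, REASONING, TOOL_CALL };

struct ast_node { int tag; std::string text; };

// Mappers defined as lambdas: each one moves a tagged node into the output message.
using mapper_fn = std::function<void(const ast_node &, std::map<std::string, std::string> &)>;

int main() {
    std::map<int, mapper_fn> mappers;
    mappers[REASONING] = [](const ast_node & n, std::map<std::string, std::string> & msg) {
        msg["reasoning_content"] += n.text;
    };
    mappers[CONTENT] = [](const ast_node & n, std::map<std::string, std::string> & msg) {
        msg["content"] += n.text;
    };

    // Pretend the PEG parser produced this tagged AST.
    std::vector<ast_node> ast = { { REASONING, "thinking..." }, { CONTENT, "hello" } };

    std::map<std::string, std::string> msg;
    for (const auto & n : ast) {
        auto it = mappers.find(n.tag);  // integer dispatch, no string comparisons
        if (it != mappers.end()) {
            it->second(n, msg);
        }
    }
    return 0;
}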

New "Needle" Streaming Tests

Existing streaming tests (tools/server/tests/unit/test_tool_call.py) required loading real models and cover only a subset of formats. This PR adds systematic coverage for all 21 formats without the model-loading overhead.

This migration was designed to be safe through systematic test constraints:

21 formats x 6+ scenarios = up to 126 regression tests (some scenarios filtered based on format capabilities)

Each format tests:

  • Content-only streaming
  • Reasoning/thinking tags
  • Tool calls (single and parallel)
  • Tool choices (none, auto, required)
  • Thinking enabled/disabled

How Needle Tests Work

The "needle" technique injects unique marker pairs into each semantic field. For example, in Hermes 2 Pro format with thinking and a tool call:

<think>Thinking $N1R$ deeply $N2R$ done</think>
Before $N1C$ middle $N2C$ after
<tool_call>{"name":"$N1TN$_0$N2TN$_0","arguments":{"$N1AK$_0$N2AK$_0":"$N1AV$_0$N2AV$_0"}}</tool_call>

The test parses this message at every character boundary (simulating streaming), and verifies:

Check                       What it catches
Both needles present        Content not lost or truncated
N1 before N2 (each pair)    Out-of-order emission, lack of streaming, buffering bugs
Tool names atomic           Function name never split mid-stream (both tool-name needles land together, or neither)
Arguments never regress     Tool args never get shorter during streaming
Keys complete in order      Key N finishes before key N+1 starts

This aims to prove parsers are truly incremental: partial input produces partial output, fields stream in proper order, and nothing is buffered unnecessarily.
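
As a rough, standalone illustration of those invariants (this is not the code in test-chat.cpp; parse_content below is a trivial stand-in for a real streaming parser, and it only covers the content-needle checks):

#include <cassert>
#include <string>

// Hypothetical stand-in for a streaming chat parser: emits content up to the first '<',
// i.e. it never emits bytes that might belong to an unfinished <tool_call> block.
static std::string parse_content(const std::string & input) {
    size_t lt = input.find('<');
    return input.substr(0, lt == std::string::npos ? input.size() : lt);
}

int main() {
    const std::string msg = "Before $N1C$ middle $N2C$ after<tool_call>{}</tool_call>";

    std::string prev;
    for (size_t cut = 0; cut <= msg.size(); cut++) {
        // Re-parse the message truncated at every character boundary (simulated streaming).
        std::string cur = parse_content(msg.substr(0, cut));
        // Never regress: everything emitted earlier must still be a prefix of later output.
        assert(cur.compare(0, prev.size(), prev) == 0);
        prev = cur;
    }

    // Both needles present, and N1 before N2, in the final output.
    size_t n1 = prev.find("$N1C$");
    size_t n2 = prev.find("$N2C$");
    assert(n1 != std::string::npos && n2 != std::string::npos && n1 < n2);
    return 0;
}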

Known Limitations

The PEG implementation has gaps vs legacy (TBC):

  1. JSON schema edge cases: allOf/anyOf/$ref patterns not fully handled
  2. maxLength grammar growth: Large constraints can explode grammar size (WIP: added until_max w/ weird implementation, maybe we just drop maxLength on xml formats)
  3. Streaming edge cases: Malformed input handled differently
  4. Ambiguous grammars: PEG requires unambiguous grammars

Proposed Migration Plan

  1. Merge with safe default (legacy parsers active)
  2. Gather feedback from users opting in via --experimental-new-parsers
  3. Debug issues as they're reported
  4. Flip default once stable
  5. Drop legacy code entirely:
    • common/chat-parser.cpp: ~28 legacy parser functions (~900 lines)
    • common/chat.cpp: ~19 legacy init functions (~600 lines)
    • common/chat-peg-parser.cpp/.h: class-based builders/mappers (~220 lines)
    • common/chat-parser-xml-toolcall.cpp/.h: XML grammar builder (~900 lines) - new PEG parsers generate grammars directly from their parser definitions

Follow up work

  • Move new capability detections to Minja (minja#20) to simplify the test configuration:
    • supports_tool_call_id - Whether tool calls include IDs
    • reasoning_requires_tools - Whether thinking mode only works with tools
    • tools_emit_content_with_calls - Whether tool calls can include content

ochafik and others added 6 commits December 24, 2025 16:28
Core PEG parser infrastructure improvements:
- Replace string AST tags with integer tag IDs for type safety and faster dispatch
- Mapper functions defined as lambdas to reduce boilerplate
- Add token_tag and literal_tag helper methods for token-aware parsing
- Improve GBNF grammar generation for optional repetitions and lazy mode
- Make SPACE_RULE optional in JSON schema grammar generation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Migrate 28 chat template formats to the unified PEG parser approach:
- Each format gets its own module in common/chat-parsers/
- Shared internal header for common patterns and utilities
- XML toolcall infrastructure for formats using XML-style tool calls
- Updated routing in chat.cpp and chat-parser.cpp

Formats migrated: Apertus, Apriel 1.5, Command R7B, DeepSeek R1/V3.1,
Firefunction V2, Functionary V3.1/V3.2, Generic, GLM 4.5, GPT-OSS,
Granite, Hermes 2 Pro, Kimi K2, LFM2, Llama 3.x, Magistral, MiniMax M2,
Ministral 3, Mistral Nemo, Nemotron V2/V3, Qwen3 Coder XML, Seed OSS,
Xiaomi MIMO.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add systematic streaming test coverage for all 28 parser formats:
- Needle streaming tests inject unique markers into each semantic field
- Tests verify incremental parsing at every character boundary
- Cross-check declared capabilities against minja's runtime detection
- 21 formats x 6+ scenarios = 126+ regression tests

The needle technique catches:
- Content loss or truncation (both needles must be present)
- Out-of-order emission and buffering bugs (N1 before N2 per pair)
- Function name splitting during streaming (atomic tool names)
- Tool argument regression (args never get shorter)
- Key ordering violations (key N finishes before key N+1)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add feature flag infrastructure for safe migration:
- --experimental-new-parsers CLI flag (defaults to off)
- LLAMA_USE_NEW_PARSERS env var for testing
- Dual-path testing: legacy parsers active by default
- Server integration for runtime parser selection

Also includes:
- Nemotron Nano template fix
- Granite template rename for consistency
- Test model fetching updates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When tool_choice is "required", the model MUST output tool calls without any
preceding content. This change ensures that the following parsers enforce this
constraint:

- apertus.cpp
- glm-4-5.cpp
- lfm2.cpp
- minimax-m2.cpp
- nemotron-v2.cpp
- nemotron-v3.cpp
- qwen3-coder-xml.cpp

Previously, these parsers would accept content before tool calls even in required
mode, allowing the model to bypass the tool call requirement. Now, when
require_tools is true, the parser only matches tool calls (with optional
reasoning blocks where supported).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
aldehir commented Dec 24, 2025

Holy...

happy to revert if you disagree with any of these

I'll have to look more closely, but nothing particular stands out as disagreeable.

JSON schema edge cases: allOf/anyOf/$ref patterns not fully handled

Is this in reference to the raw string parameters that circumvent json-schema-to-grammar? I had an implementation that used it to generate raw string patterns but unfortunately found many edge cases from lazy schema generators. It's also a challenge to constrain an arbitrary pattern from matching beyond the delimiter.

I intend to add a schema optimization step that should help a bit. Unfortunately, I'm starting to think we might have to loosen the grammar constraining and rely more on the agent feedback loop if model creators no longer want to emit JSON.

github-actions bot added labels: documentation, script, testing, examples, python, server (Dec 24, 2025)
pwilkin commented Dec 24, 2025

@ochafik actually I've been working on an alternative approach - an automatic peg parser generator based on differential template output analysis that will handle like 90% of all templates out of the box - then we only have to handle the really hard/weird/atypical ones.

ochafik and others added 2 commits December 24, 2025 22:14
Consolidates the common pattern in XML-based chat parsers for handling
schema-based parameter values:
- For string schemas: use until[_max](delimiter, maxLength?)
- For non-string schemas: use schema() with optional space wrapping

This reduces boilerplate across glm-4-5, minimax-m2, qwen3-coder-xml,
and seed-oss parsers (removes ~99 lines, adds ~94 lines with docs).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
These templates' parsers don't allow content in tool_choice=required mode,
but the test was generating content because tool_required_allows_content
defaulted to true.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
ochafik commented Dec 24, 2025

JSON schema edge cases: allOf/anyOf/$ref patterns not fully handled

Is this in reference to the raw string parameters that circumvent json-schema-to-grammar? I had an implementation that used it to generate raw string patterns but unfortunately found many edge cases from lazy schema generators. It's also a challenge to constrain an arbitrary pattern from matching beyond the delimiter.

@aldehir Sorry no, I meant the handling of anyOf and some other schema refinements (minItems / maxItems, minimum/maximum, etc.) which don't seem handled by common_peg_parser_builder::schema (although not $ref - that was before I realised you did resolve them)... and in particular of maxLength for string arguments, which feeds into the larger problem of raw string arguments, indeed.

Without a clean way to escape xml arg end blocks in string arguments (and corresponding fine-tuning) all the xml formats seem inherently fragile, unless we accept they contain a JSON string (which I'd done in my first Qwen3 parser). XML-esque escaping of < as &lt; would be enough, but doesn't seem to be a thing.

Alternatively we could introduce token-based parsing (e.g. until_token(...), which resolves the token id from the vocab; tried in this branch), but many models don't have dedicated tokens for these special boundaries, and when they do, it's likely that when a model samples the arg string "</end_arg>" it does so w/ individual chars, as opposed to a single special token.

Re/ string args, I've added a schema_or_raw_string_until to do the same treatment across the afflicted parsers.

automatic peg parser generator based on differential template output analysis that will handle like 90% of all templates out of the box

@pwilkin this sounds super exciting, I wanted to do this in Minja at some point but never finished my prototype / got desperate as i realized how diverse the practices were in Jinja templates 😓

But more importantly... lots of small models output tool calls with an alarmingly high creativity, not at all guessable by just analyzing templates (Qwen 2.5 comes to mind: I had to sample many tool calls to see what it comes up with in practice, hence the horribly long and accommodating list of triggers in the Hermes 2 grammar & parser). Not to mention outright broken templates.

(could you integrate alternative syntaxes by having multiple templates feed into the same diff engine?)

Anyway, would love to see what you come up with! (oh and btw, feel free to host this in https://github.com/ochafik/minja if it needs too much testing infra for llama.cpp: it already has all the templates, could be a good fit / playground)

aldehir commented Dec 24, 2025

Sorry no, I meant for the handling of anyOf and some other schema refinements (minItem / maxItem, minimum/maximum, etc) which don't seem handled by common_peg_parser_builder::schema

Oh, interesting. I thought this would be handled by delegating to builder.add_schema(). It does require resolving refs outside. I must have missed something.

I came to the same conclusions with the XML formats. The vLLM tool call parsing seems affected by the same problems. I'm not sure there's much more we can do.

unless we accept they contain a JSON string

I worry this will impact model performance. I've noticed that with Devstral (not an XML format), heavy constraining is detrimental to its performance.

I'd rather accept that it will fail when </parameter> is in the argument value and get the best performance for the remaining 95% of use cases.

// GBNF doesn't have token-awareness, so same as text-based until

It does now (#17816). I was slowly working my way toward making the parser token-aware, but for the same reason (the argument delimiter is not a unique token) it became a low priority. I mainly added it to handle grammar-enforced reasoning budgets.

- Remove unused skip_content_before/skip_content_after in nemotron-v3.cpp
- Fix trailing whitespace in test-chat.cpp

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@ochafik force-pushed the peg-migration-squashed branch 2 times, most recently from 2c68c82 to e8aa015 on December 25, 2025 01:24
- Update SPACE_RULE from `| " " | "\n"...` to `( " " | "\n"... )?` in
  Python and JavaScript implementations to match C++ changes
- Update test-json-schema-to-grammar.cpp expected outputs
- Update test-grammar-parser.cpp expectations for empty group parsing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@ochafik force-pushed the peg-migration-squashed branch from e8aa015 to 427afe6 on December 25, 2025 01:39
The template file had Windows-style CRLF line endings which were
causing test failures on Windows CI when parsing reasoning content.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
aldehir commented Dec 25, 2025

There's a desire to refactor the common chat API (ref #18215). I was working on this, but given this PR and @pwilkin's effort, I'll wait to see what direction we go. Otherwise merging will be a challenge.

This PR accomplishes part of what I had in mind: splitting the parsers into separate files. I'd also like to maintain a stateful object during an entire generation request. That would let us remove the serialization mechanics in the PEG parser and implement a variant of incremental Packrat parsing (https://ohmjs.org/pubs/sle2017/incremental-packrat-parsing.pdf).

ochafik commented Dec 26, 2025

I came to the same conclusions with the XML formats. The vLLM tool call parsing seems affected by the same problems. I'm not sure there's much more we can do.

@aldehir We could experiment w/ a bit of jinja hackery: ideally we'd want models to think of escaping </end_arg> to something like &lt;/end_arg>, and & to &amp; (that's it, I guess?). We could then (in minja) detect when strings are unescaped (after a teeny bit of that diff analysis @pwilkin mentioned to infer their end arg tag), and have an opt-in polyfill that would auto-escape said arg, along with an extra system prompt that explains the escape scheme. Not a fan of prompt engineering in a low-level layer like minja, but as an opt-in, w/ said extra system prompt configurable by the library users, it might be acceptable. Then, to see if it helps, we could use the tool call benchmark stuff from #12034 on some coding task involving XML tags similar to the model's syntax.
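
To make the idea concrete, a minimal sketch of the escape scheme (illustrative only, not minja code; the function names are made up):

#include <iostream>
#include <string>

// Escape '&' and '<' so a literal "</end_arg>" inside a string argument can no longer
// terminate an XML-style argument block. Illustrative only; not part of minja.
static std::string escape_xml_arg(const std::string & s) {
    std::string out;
    for (char c : s) {
        if      (c == '&') out += "&amp;";
        else if (c == '<') out += "&lt;";
        else               out += c;
    }
    return out;
}

// Single left-to-right pass so already-escaped text round-trips without double-unescaping.
static std::string unescape_xml_arg(const std::string & s) {
    std::string out;
    for (size_t i = 0; i < s.size(); ) {
        if      (s.compare(i, 4, "&lt;")  == 0) { out += '<'; i += 4; }
        else if (s.compare(i, 5, "&amp;") == 0) { out += '&'; i += 5; }
        else                                    { out += s[i]; i += 1; }
    }
    return out;
}

int main() {
    const std::string arg = "grep for </end_arg> & friends";
    const std::string esc = escape_xml_arg(arg);
    std::cout << esc << "\n";                                   // grep for &lt;/end_arg> &amp; friends
    std::cout << (unescape_xml_arg(esc) == arg ? "ok" : "bad") << "\n";
    return 0;
}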

Longer term we should probably start a page about best practices for template writers / finetuning folks (how to pick your tool call syntax and make it work reliably by using the right special tokens, how to escape things properly, etc).

I'd also like to maintain a stateful object during an entire generation request. That would let us remove the serialization mechanics in the PEG parser

Sounds fab, just note that in the OG tool calls support PR it was pointed out that sharing objects between the two sides could be a bit iffy wrt/ threading.

Parser fixes:
- Fix 6 parsers to reject content in tool_choice=required mode
  (llama-3-x, functionary-v3-1/v3-2, hermes-2-pro, gpt-oss, mistral-nemo)
- Required mode now only allows tool calls (and optionally thinking)

Test improvements:
- Add test_required_tool_rejects_content using init_delta for proper
  template rendering instead of manual tag construction
- Create shared get_template_capabilities() function to avoid duplication
- Convert template_capabilities bools to named scoped enums for type safety:
  ThinkingSupport, ToolSupport, Skip, ReasoningRequiresTools,
  ToolsEmitContentWithCalls, InjectReasoningAfterFormat,
  SupportsDisableThinking, SupportsReasoningOnly, ToolCallsHaveIds
- Remove tool_required_allows_content field (was documenting buggy behavior)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
ochafik and others added 5 commits December 26, 2025 03:18
Adds test_format_detection_with_tools() to verify that when
experimental_new_parsers is enabled with tools, templates correctly
detect their expected format (not CONTENT_ONLY) and generate
non-empty grammar and parser.

This catches "Pattern 1" failures where templates are incorrectly
detected as Content-only when they should have a proper tool-calling
format.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
README.md was accidentally included in the list of jinja templates
to test, causing test failures.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- removed old grammars that had slopped into new peg parsers
- fence triggers by inputs.tool_choice != COMMON_CHAT_TOOL_CHOICE_REQUIRED checks
- derive laziness from presence of triggers
- foreach_function gives resolved parameters to everyone!