
Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#18399

Adds a pre-tokenizer hash entry for MiniMaxAI/MiniMax-M2.1.

The model uses the same architecture (MiniMaxM2ForCausalLM) and tokenizer type as MiniMax-M2, but its tokenizer produces different output on the pre-tokenizer check string, so it needs its own hash entry.

Without this fix, conversion fails with:

```
WARNING: The BPE pre-tokenizer was not recognized!
NotImplementedError: BPE pre-tokenizer was not recognized - update get_vocab_base_pre()
```
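
For context, convert_hf_to_gguf.py recognizes a BPE pre-tokenizer by encoding a fixed check string with the model's tokenizer, hashing the resulting token IDs, and matching the digest against a table of known hashes in get_vocab_base_pre(). The sketch below illustrates that pattern in a simplified, standalone form; the check string, the hash values, and the "minimax-m2" result name are placeholders, not the actual values from the script.

```python
# Minimal sketch of the pre-tokenizer recognition pattern used by
# convert_hf_to_gguf.py. The check string and digests below are
# illustrative placeholders, not the real values from llama.cpp.
from hashlib import sha256

from transformers import AutoTokenizer


def get_vocab_base_pre(model_dir: str) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_dir)

    # Encode a fixed check string; any difference in pre-tokenization
    # changes the token IDs and therefore the digest.
    chktxt = "Hello World! \n\n 3.14 ..."  # placeholder check string
    chktok = tokenizer.encode(chktxt)
    chkhsh = sha256(str(chktok).encode()).hexdigest()

    res = None
    if chkhsh == "aaaa...":  # placeholder: existing MiniMax-M2 digest
        res = "minimax-m2"
    if chkhsh == "bbbb...":  # placeholder: new MiniMax-M2.1 digest added by this PR
        res = "minimax-m2"   # same pre-tokenizer behavior name, new hash entry

    if res is None:
        raise NotImplementedError(
            "BPE pre-tokenizer was not recognized - update get_vocab_base_pre()"
        )
    return res
```

Because MiniMax-M2.1's tokenizer yields different token IDs on the check string, its digest matches no existing entry, and conversion aborts with the NotImplementedError shown above until the new hash is added.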

Tested conversion successfully: https://huggingface.co/AaryanK/MiniMax-M2.1-GGUF

@loci-agentic-ai

Explore the complete analysis inside the Version Insights

I've generated a summary report for your project. Here are the key findings:

Summary Report for llama.cpp PR #715

Performance Analysis Results:

No significant performance changes detected: no modified function shows a change greater than 2% in response time or throughput.

Conclusion:
This pull request maintains performance stability and appears safe to merge from a performance perspective; the changes introduce no measurable regressions.

@loci-dev force-pushed the main branch 15 times, most recently from f2e8c7f to b3f45e1 on December 29, 2025.
