Integrate Tokenizers for Grok, Gemini, and Claude Platforms #7

@ivelin-web

Description

Story Title:
Integrate Tokenizers for Grok, Gemini, and Claude Platforms


Story Description:
We need to extend Tokenflow beyond ChatGPT by adding support for three additional AI chat platforms: Grok (xAI), Gemini (Google), and Claude (Anthropic). For each platform, we will integrate its official (or best‐available) tokenizer library, configure model‐to‐tokenizer mappings, and ensure that Tokenflow dynamically loads the correct tokenizer and context limits when a user switches to that platform or model. This is an overarching story; once complete, users will see accurate token‐usage metrics not only in ChatGPT but also in Grok, Gemini, and Claude.

Context & Resources:
Below are the links to the tokenizer packages or repositories we will use for each platform. Please include them as references in the implementation:

Goals:

  1. Dynamic Tokenizer Loading: Based on the detected platform (hostname) and selected model, Tokenflow must import the correct tokenizer module at runtime (see the registry sketch after this list).
  2. Accurate Token Counts: Each platform’s tokenizer should produce counts within ±3% of the official server counts for that model.
  3. Configurable Context Limits: For each supported model (e.g., Claude 4 Opus, Gemini 2.5 Pro, Grok 3), Tokenflow must know and display the correct maximum context window (e.g., 200K tokens for Claude 4 Opus).
  4. Fallback Handling: If a tokenizer fails to load, we must fall back to our heuristic (e.g., Math.ceil(text.length / 4)) and display an approximation indicator (see the fallback sketch below).
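
As a rough illustration of goals 1 and 3, the sketch below shows one possible shape for the platform/model → tokenizer mapping with dynamic loading. All module paths, model IDs, and helper names (`TOKENIZER_REGISTRY`, `detectPlatform`) are illustrative placeholders, not the packages linked in this issue, and the context-window numbers should be verified against each provider’s documentation before implementation.

```ts
// Hypothetical sketch -- module paths, model IDs, and context limits are
// illustrative placeholders, not the actual packages referenced in this issue.
type Platform = 'chatgpt' | 'grok' | 'gemini' | 'claude';

interface TokenizerEntry {
  // Dynamic import keeps each tokenizer bundle out of the initial page load.
  load: () => Promise<{ countTokens: (text: string) => number }>;
  contextWindow: number; // maximum context size in tokens for this model
}

const TOKENIZER_REGISTRY: Record<Platform, Record<string, TokenizerEntry>> = {
  chatgpt: {
    'gpt-4o': { load: () => import('./tokenizers/openai'), contextWindow: 128_000 },
  },
  claude: {
    'claude-4-opus': { load: () => import('./tokenizers/anthropic'), contextWindow: 200_000 },
  },
  gemini: {
    'gemini-2.5-pro': { load: () => import('./tokenizers/gemini'), contextWindow: 1_048_576 },
  },
  grok: {
    'grok-3': { load: () => import('./tokenizers/grok'), contextWindow: 131_072 },
  },
};

// Map the current hostname to a platform so the right registry branch is used.
function detectPlatform(hostname: string): Platform | null {
  if (hostname.endsWith('chatgpt.com')) return 'chatgpt';
  if (hostname.endsWith('grok.com')) return 'grok';
  if (hostname.endsWith('gemini.google.com')) return 'gemini';
  if (hostname.endsWith('claude.ai')) return 'claude';
  return null;
}
```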
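For goal 4, a minimal fallback wrapper, assuming the hypothetical registry above; the `approximate` flag is what the UI would use to show the approximation indicator:

```ts
interface TokenCount {
  count: number;
  approximate: boolean; // true when the character-count heuristic was used
}

// Try the platform tokenizer first; if the module fails to load or throws,
// fall back to the ~4-characters-per-token heuristic from goal 4.
async function countTokens(platform: Platform, model: string, text: string): Promise<TokenCount> {
  const entry = TOKENIZER_REGISTRY[platform]?.[model];
  if (entry) {
    try {
      const tokenizer = await entry.load();
      return { count: tokenizer.countTokens(text), approximate: false };
    } catch (err) {
      console.warn(`Tokenizer for ${platform}/${model} failed to load; using heuristic`, err);
    }
  }
  return { count: Math.ceil(text.length / 4), approximate: true };
}
```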

Notes:
This story and its subtasks will lay the foundation for Tokenflow v0.2.0, enabling multi‐platform support and preparing us to release a truly universal token‐meter for AI chat services.

Metadata

Labels

enhancement (New feature or request)
