The MCP Chain of Draft (CoD) Prompt Tool is a powerful Model Context Protocol tool that enhances LLM reasoning by transforming standard prompts into either Chain of Draft (CoD) or Chain of Thought (CoT) format. Here's how it works:
- Input Transformation: Your regular prompt is automatically transformed into a CoD/CoT format
- LLM Processing: The transformed prompt is passed to your chosen LLM (Claude, GPT, Ollama, or local models)
- Enhanced Reasoning: The LLM processes the request using structured reasoning steps
- Result Transformation: The response is transformed back into a clear, concise format
This approach significantly improves reasoning quality while reducing token usage and maintaining high accuracy.
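For illustration only, here is a minimal sketch of what the input-transformation step could look like; the wrapper text and the function name `toChainOfDraftPrompt` are assumptions for this example, not the tool's actual implementation:

```typescript
// Hypothetical sketch of transforming a plain prompt into CoD format.
// The exact instructions the tool injects may differ.
function toChainOfDraftPrompt(problem: string, maxWordsPerStep = 5): string {
  return [
    `Think step by step, but keep each reasoning step to at most ${maxWordsPerStep} words.`,
    'Return the final answer after "####".',
    '',
    `Problem: ${problem}`,
  ].join('\n');
}

// A plain prompt becomes a CoD-formatted prompt before being sent to the LLM.
console.log(toChainOfDraftPrompt('Solve: 247 + 394 = ?'));
```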
This tool supports a "Bring Your Own LLM" approach, allowing you to use any language model of your choice:
- Cloud Services
  - Anthropic Claude
  - OpenAI GPT models
  - Mistral AI
- Local Models
  - Ollama (all models)
  - Local Llama variants
  - Any model supporting a chat completion API
- Cloud Services

  ```bash
  # For Anthropic Claude
  export ANTHROPIC_API_KEY=your_key_here

  # For OpenAI
  export OPENAI_API_KEY=your_key_here

  # For Mistral AI
  export MISTRAL_API_KEY=your_key_here
  ```
- Local Models with Ollama

  ```bash
  # First install Ollama
  curl https://ollama.ai/install.sh | sh

  # Pull your preferred model
  ollama pull llama2
  # or
  ollama pull mistral
  # or any other model

  # Configure the tool to use Ollama
  export MCP_LLM_PROVIDER=ollama
  export MCP_OLLAMA_MODEL=llama2 # or your chosen model
  ```
- Custom Local Models

  ```bash
  # Point to your local model API
  export MCP_LLM_PROVIDER=custom
  export MCP_CUSTOM_LLM_ENDPOINT=http://localhost:your_port
  ```
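As a rough illustration of how these settings might drive provider selection inside the tool (the environment variable names come from the examples above; the function, defaults, and config shape below are assumptions for this sketch):

```typescript
// Hypothetical sketch: resolve the LLM provider from the environment variables above.
interface LlmConfig {
  provider: string;   // "anthropic" | "openai" | "mistral" | "ollama" | "custom"
  model?: string;
  endpoint?: string;
  apiKey?: string;
}

function resolveLlmConfig(env = process.env): LlmConfig {
  const provider = env.MCP_LLM_PROVIDER ?? 'anthropic'; // assumed default
  switch (provider) {
    case 'ollama':
      return { provider, model: env.MCP_OLLAMA_MODEL ?? 'llama2', endpoint: 'http://localhost:11434' };
    case 'custom':
      return { provider, endpoint: env.MCP_CUSTOM_LLM_ENDPOINT };
    case 'openai':
      return { provider, apiKey: env.OPENAI_API_KEY };
    case 'mistral':
      return { provider, apiKey: env.MISTRAL_API_KEY };
    default:
      return { provider: 'anthropic', apiKey: env.ANTHROPIC_API_KEY };
  }
}
```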
This project implements the Chain of Draft (CoD) reasoning approach as a Model Context Protocol (MCP) prompt tool for Claude. The core Chain of Draft implementation is based on the work by stat-guy. We extend our gratitude for their pioneering work in developing this efficient reasoning approach.
Original Repository: https://github.com/stat-guy/chain-of-draft
- Efficiency: Significantly reduced token usage (as little as 7.6% of standard CoT)
- Speed: Faster responses due to shorter generation time
- Cost Savings: Lower API costs for LLM calls
- Maintained Accuracy: Similar or even improved accuracy compared to CoT
- Flexibility: Applicable across various reasoning tasks and domains
- Core Chain of Draft Implementation
  - Concise reasoning steps (typically five words or fewer)
  - Format enforcement
  - Answer extraction
- Performance Analytics
  - Token usage tracking
  - Solution accuracy monitoring
  - Execution time measurement
  - Domain-specific performance metrics
- Adaptive Word Limits
  - Automatic complexity estimation
  - Dynamic adjustment of word limits
  - Domain-specific calibration
- Comprehensive Example Database
  - CoT-to-CoD transformation
  - Domain-specific examples (math, code, biology, physics, chemistry, puzzle)
  - Example retrieval based on problem similarity
- Format Enforcement (see the sketch after this list)
  - Post-processing to ensure adherence to word limits
  - Step structure preservation
  - Adherence analytics
- Hybrid Reasoning Approaches
  - Automatic selection between CoD and CoT
  - Domain-specific optimization
  - Historical performance-based selection
- OpenAI API Compatibility
  - Drop-in replacement for standard OpenAI clients
  - Support for both completions and chat interfaces
  - Easy integration into existing workflows
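To make the format-enforcement idea concrete, here is a minimal sketch of a post-processing pass that trims each reasoning step to a word limit; the function name and splitting rules are illustrative assumptions, not the tool's actual code:

```typescript
// Hypothetical sketch of word-limit enforcement on reasoning steps.
// Real implementation details (names, splitting rules) may differ.
function enforceWordLimit(steps: string[], maxWordsPerStep = 5): string[] {
  return steps.map((step) => {
    const words = step.trim().split(/\s+/);
    // Keep the step as-is if it already fits; otherwise truncate it.
    return words.length <= maxWordsPerStep
      ? step.trim()
      : words.slice(0, maxWordsPerStep).join(' ');
  });
}

// Example
console.log(enforceWordLimit([
  'Add the units digits first',        // 5 words -> kept
  'Carry the one to the tens column',  // 8 words -> truncated to 5
]));
```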
- Python 3.10+ (for Python implementation)
- Node.js 22+ (for JavaScript implementation)
- Nx (for building Single Executable Applications)
- Clone the repository
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure API keys in a `.env` file:

  ```bash
  ANTHROPIC_API_KEY=your_api_key_here
  ```

- Run the server:

  ```bash
  python server.py
  ```
- Clone the repository
- Install dependencies:

  ```bash
  npm install
  ```

- Configure API keys in a `.env` file:

  ```bash
  ANTHROPIC_API_KEY=your_api_key_here
  ```

- Build and run the server:

  ```bash
  # Build TypeScript files using Nx
  npm run nx build

  # Start the server
  npm start

  # For development with auto-reload:
  npm run dev
  ```
Available scripts:

- `npm run nx build`: Compiles TypeScript to JavaScript using the Nx build system
- `npm run build:sea`: Creates Single Executable Applications for all platforms
- `npm start`: Runs the compiled server from `dist`
- `npm test`: Runs the test query against the server
- `npm run dev`: Runs the TypeScript server directly using ts-node (useful for development)
The project uses Nx as its build system, providing:
- Efficient caching and incremental builds
- Cross-platform build support
- Integrated SEA generation
- Dependency graph visualization
- Consistent build process across environments
This project supports building Single Executable Applications (SEA) using Node.js 22+ and the @getlarge/nx-node-sea plugin. This allows you to create standalone executables that don't require Node.js to be installed on the target system.
The project includes several scripts for building SEA executables:
```bash
# Build for all platforms
npm run build:sea

# Build for specific platforms
npm run build:macos   # macOS
npm run build:linux   # Linux
npm run build:windows # Windows
```
The project uses Nx for managing the build process. The SEA configuration is handled through the nx-node-sea plugin, which provides a streamlined way to create Node.js single executable applications.
Key features of the SEA build process:
- Cross-platform support (macOS, Linux, Windows)
- Automatic dependency bundling
- Optimized binary size
- No runtime dependencies required
Once built, the SEA executables can be found in the `dist` directory. These executables:
- Are completely standalone
- Don't require Node.js installation
- Can be distributed and run directly
- Maintain all functionality of the original application
For Claude Desktop integration with SEA executables, update your configuration to use the executable path:
```json
{
  "mcpServers": {
    "chain-of-draft-prompt-tool": {
      "command": "/path/to/mcp-chain-of-draft-prompt-tool",
      "env": {
        "ANTHROPIC_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
To integrate with Claude Desktop:
- Install Claude Desktop from claude.ai/download
- Create or edit the Claude Desktop config file:

  ```
  ~/Library/Application Support/Claude/claude_desktop_config.json
  ```

- Add the tool configuration (Python version):

  ```json
  {
    "mcpServers": {
      "chain-of-draft-prompt-tool": {
        "command": "python3",
        "args": ["/absolute/path/to/cod/server.py"],
        "env": {
          "ANTHROPIC_API_KEY": "your_api_key_here"
        }
      }
    }
  }
  ```

  Or for the JavaScript version:

  ```json
  {
    "mcpServers": {
      "chain-of-draft-prompt-tool": {
        "command": "node",
        "args": ["/absolute/path/to/cod/index.js"],
        "env": {
          "ANTHROPIC_API_KEY": "your_api_key_here"
        }
      }
    }
  }
  ```

- Restart Claude Desktop
You can also use the Claude CLI to add the tool:
```bash
# For Python implementation
claude mcp add chain-of-draft-prompt-tool -e ANTHROPIC_API_KEY="your_api_key_here" "python3 /absolute/path/to/cod/server.py"

# For JavaScript implementation
claude mcp add chain-of-draft-prompt-tool -e ANTHROPIC_API_KEY="your_api_key_here" "node /absolute/path/to/cod/index.js"
```
Dive is an excellent open-source MCP Host Desktop Application that provides a user-friendly GUI for interacting with MCP tools like this one. It supports multiple LLMs including ChatGPT, Anthropic Claude, Ollama, and other OpenAI-compatible models.
- Download and install Dive from their releases page
- Configure the Chain of Draft tool in Dive's MCP settings:
```json
{
  "mcpServers": {
    "chain-of-draft-prompt-tool": {
      "command": "/path/to/mcp-chain-of-draft-prompt-tool",
      "enabled": true,
      "env": {
        "ANTHROPIC_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
If you're using the non-SEA version:
```json
{
  "mcpServers": {
    "chain-of-draft-prompt-tool": {
      "command": "node",
      "args": ["/path/to/dist/index.js"],
      "enabled": true,
      "env": {
        "ANTHROPIC_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
Key features of Dive include:

- Universal LLM support with multiple API key management
- Cross-platform availability (Windows, macOS, Linux)
- Seamless MCP integration in both stdio and SSE modes
- Multi-language interface
- Custom instructions and system prompts
- Automatic updates
Using Dive provides a convenient way to interact with the Chain of Draft tool through a modern, feature-rich interface while maintaining all the benefits of the MCP protocol.
The project includes integration with the MCP Inspector tool, which provides a visual interface for testing and debugging MCP tools. This is especially useful during development or when you want to inspect the tool's behavior.
You can start the MCP Inspector using the provided npm script:
```bash
# Start the MCP Inspector with the tool
npm run test-inspector

# Or run it manually
npx @modelcontextprotocol/inspector -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY -- node dist/index.js
```
This will:
- Start the MCP server in the background
- Launch the MCP Inspector interface in your default browser
- Connect to the running server for testing
The MCP Inspector provides:
- Real-time visualization of tool calls and responses
- Interactive testing of MCP functions
- Request/response history
- Debug information for each interaction
- Performance metrics and timing data
This makes it an invaluable tool for:
- Development and debugging
- Understanding tool behavior
- Testing different inputs and scenarios
- Verifying MCP compliance
- Performance optimization
The Inspector will be available at http://localhost:5173 by default.
The Chain of Draft server provides the following tools:
| Tool | Description |
|------|-------------|
| `chain_of_draft_solve` | Solve a problem using Chain of Draft reasoning |
| `math_solve` | Solve a math problem with CoD |
| `code_solve` | Solve a coding problem with CoD |
| `logic_solve` | Solve a logic problem with CoD |
| `get_performance_stats` | Get performance stats for CoD vs CoT |
| `get_token_reduction` | Get token reduction statistics |
| `analyze_problem_complexity` | Analyze problem complexity |
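For illustration, any MCP client can call these tools. The sketch below uses the official TypeScript MCP SDK to launch the server over stdio and invoke `chain_of_draft_solve`; the argument names (`problem`, `domain`, `max_words_per_step`) mirror the client examples later in this document, but the exact input schema should be confirmed from the tool listing at runtime.

```typescript
// Sketch: calling the Chain of Draft server from an MCP client (TypeScript SDK).
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

async function main() {
  // Launch the server as a subprocess; adjust the path to your build output.
  const transport = new StdioClientTransport({
    command: 'node',
    args: ['/path/to/dist/index.js'],
    env: { ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY ?? '' },
  });

  const client = new Client({ name: 'cod-demo', version: '1.0.0' }, { capabilities: {} });
  await client.connect(transport);

  // Inspect the tools the server actually exposes.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Invoke one of them; argument names here are assumed from the client examples below.
  const result = await client.callTool({
    name: 'chain_of_draft_solve',
    arguments: { problem: 'Solve: 247 + 394 = ?', domain: 'math', max_words_per_step: 5 },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```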
If you want to use the Chain of Draft client directly in your Python code:
```python
from client import ChainOfDraftClient

# Create client with specific LLM provider
cod_client = ChainOfDraftClient(
    llm_provider="ollama",  # or "anthropic", "openai", "mistral", "custom"
    model_name="llama2"     # specify your model
)

# Use directly
result = await cod_client.solve_with_reasoning(
    problem="Solve: 247 + 394 = ?",
    domain="math"
)

print(f"Answer: {result['final_answer']}")
print(f"Reasoning: {result['reasoning_steps']}")
print(f"Tokens used: {result['token_count']}")
```
For TypeScript/Node.js applications:
```typescript
import { ChainOfDraftClient } from './lib/chain-of-draft-client';

// Create client with your preferred LLM
const client = new ChainOfDraftClient({
  provider: 'ollama',                 // or 'anthropic', 'openai', 'mistral', 'custom'
  model: 'llama2',                    // your chosen model
  endpoint: 'http://localhost:11434'  // for custom endpoints
});

// Use the client
async function solveMathProblem() {
  const result = await client.solveWithReasoning({
    problem: "Solve: 247 + 394 = ?",
    domain: "math",
    max_words_per_step: 5
  });

  console.log(`Answer: ${result.final_answer}`);
  console.log(`Reasoning: ${result.reasoning_steps}`);
  console.log(`Tokens used: ${result.token_count}`);
}

solveMathProblem();
```
The server is available in both Python and JavaScript implementations; each consists of several integrated components.

Python implementation:

- AnalyticsService: Tracks performance metrics across different problem domains and reasoning approaches
- ComplexityEstimator: Analyzes problems to determine appropriate word limits
- ExampleDatabase: Manages and retrieves examples, transforming CoT examples into CoD format
- FormatEnforcer: Ensures reasoning steps adhere to word limits
- ReasoningSelector: Intelligently chooses between CoD and CoT based on problem characteristics

JavaScript implementation:

- analyticsDb: In-memory database for tracking performance metrics
- complexityEstimator: Analyzes problems to determine complexity and appropriate word limits
- formatEnforcer: Ensures reasoning steps adhere to word limits
- reasoningSelector: Automatically chooses between CoD and CoT based on problem characteristics and historical performance
Both implementations follow the same core principles and provide identical MCP tools, making them interchangeable for most use cases.
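As a rough sketch of how these components might fit together in the JavaScript implementation (the interfaces, method names, and solve flow below are inferred from the descriptions above and are not the actual source):

```typescript
// Hypothetical component interfaces inferred from the descriptions above.
interface ComplexityEstimator {
  estimate(problem: string, domain: string): { complexity: number; maxWordsPerStep: number };
}

interface ReasoningSelector {
  // Chooses "CoD" or "CoT" from problem characteristics and historical performance.
  choose(problem: string, domain: string, complexity: number): 'CoD' | 'CoT';
}

interface FormatEnforcer {
  enforce(steps: string[], maxWordsPerStep: number): string[];
}

interface AnalyticsDb {
  record(entry: { domain: string; approach: 'CoD' | 'CoT'; tokens: number; ms: number }): void;
}

// One plausible solve flow wiring the components together.
async function solve(
  problem: string,
  domain: string,
  deps: { estimator: ComplexityEstimator; selector: ReasoningSelector; enforcer: FormatEnforcer; analytics: AnalyticsDb },
  callLlm: (prompt: string) => Promise<{ steps: string[]; answer: string; tokens: number }>,
) {
  const { complexity, maxWordsPerStep } = deps.estimator.estimate(problem, domain);
  const approach = deps.selector.choose(problem, domain, complexity);
  const started = Date.now();
  const response = await callLlm(`[${approach}] ${problem}`);
  const steps = approach === 'CoD' ? deps.enforcer.enforce(response.steps, maxWordsPerStep) : response.steps;
  deps.analytics.record({ domain, approach, tokens: response.tokens, ms: Date.now() - started });
  return { reasoning_steps: steps, final_answer: response.answer };
}
```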
This project is open-source and available under the MIT license.