TetherAI is a standalone-first TypeScript platform for integrating different AI providers.
Each package is completely self-contained with no external dependencies: all types, utilities, and middleware are built in.
Think of it as "Express for AI providers" with everything included.
- Standalone Packages: Each provider is completely independent
- Enhanced Configuration: Timeouts, custom endpoints, organization support
- Advanced Middleware: Retry with exponential backoff, multi-provider fallback
- Rich Error Handling: Provider-specific error classes with HTTP status codes
- Edge Runtime: Works everywhere from Node.js to Cloudflare Workers
- SSE Utilities: Built-in Server-Sent Events parsing
- Multiple Providers: OpenAI, Anthropic, Mistral AI, Grok AI, and Local LLM support
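The exponential-backoff retry mentioned above can be sketched as follows. This is an illustrative stand-in using the same option names the middleware accepts (`retries`, `baseMs`, `factor`, `jitter`), not the package's actual implementation:

```typescript
// Sketch: exponential backoff with optional jitter.
// Mirrors the { retries, baseMs, factor, jitter } options shown later
// in this README; illustrative only, not the library's internal code.
function backoffDelays(
  retries: number,
  baseMs: number,
  factor: number,
  jitter: boolean
): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < retries; attempt++) {
    // Delay grows geometrically: baseMs, baseMs*factor, baseMs*factor^2, ...
    let delay = baseMs * Math.pow(factor, attempt);
    if (jitter) {
      // Full jitter: pick a random point in [0, delay) so that many
      // clients retrying at once do not hit the API in lockstep
      delay = Math.random() * delay;
    }
    delays.push(delay);
  }
  return delays;
}
```

With `retries: 3, baseMs: 300, factor: 2` and no jitter, the waits between attempts would be 300 ms, 600 ms, and 1200 ms.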
 
- `packages/provider/` – standalone provider packages (no external deps)
  - `@tetherai/openai` – OpenAI provider
  - `@tetherai/anthropic` – Anthropic provider
  - `@tetherai/mistral` – Mistral AI provider
  - `@tetherai/grok` – Grok AI (xAI) provider
  - `@tetherai/local` – Local LLM provider (Ollama, LM Studio, etc.)
- `packages/shared/` – internal development tooling (not published)
- `examples/` – demo applications (Next.js, Node.js, etc.)
- Install any provider package (everything included):

  ```bash
  npm install @tetherai/openai
  # or
  npm install @tetherai/anthropic
  # or
  npm install @tetherai/mistral
  # or
  npm install @tetherai/grok
  # or
  npm install @tetherai/local
  ```
- Run an example locally:

  a. Next.js example:

  ```bash
  cd examples/nextjs
  npm install
  export OPENAI_API_KEY=sk-...
  npm run dev
  ```

  b. Node.js example:

  ```bash
  cd examples/node
  npm install
  export OPENAI_API_KEY=sk-...
  npm run dev
  ```
- Try it out:

  a. Next.js: Open http://localhost:3000

  b. Node.js: POST to http://localhost:8787/chat:

  ```bash
  curl -N -X POST http://localhost:8787/chat \
    -H "Content-Type: application/json" \
    -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello!"}]}'
  ```
 
Basic provider setup:

```ts
import { openAI } from "@tetherai/openai";

const provider = openAI({
  apiKey: process.env.OPENAI_API_KEY!,
  timeout: 30000,        // 30 second timeout
  organization: process.env.OPENAI_ORG_ID  // Organization support
});
```

Resilient multi-provider setup with retry and fallback:

```ts
import { openAI, withRetry, withFallback } from "@tetherai/openai";
import { anthropic } from "@tetherai/anthropic";
import { mistral } from "@tetherai/mistral";

const resilientProvider = withFallback([
  withRetry(openAI({ apiKey: process.env.OPENAI_API_KEY! }), {
    retries: 3,
    baseMs: 300,
    factor: 2,
    jitter: true
  }),
  withRetry(anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! }), {
    retries: 2
  }),
  withRetry(mistral({ apiKey: process.env.MISTRAL_API_KEY! }), {
    retries: 2
  })
], {
  onFallback: (error, providerIndex) => {
    console.log(`Provider ${providerIndex} failed, trying next...`);
  }
});
```

Streaming a chat request:

```ts
import type { ChatRequest } from "@tetherai/openai";

const req: ChatRequest = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Tell me a joke." }],
};

for await (const chunk of resilientProvider.streamChat(req)) {
  if (chunk.done) break;
  process.stdout.write(chunk.delta);
}
```

- Streaming-First: Token stream via AsyncIterable with SSE support
- Retry Middleware: Automatic retry with exponential backoff on transient errors (429, 5xx)
- Fallback Middleware: Multi-provider failover with configurable callbacks
- Edge Compatible: Built on fetch and ReadableStream; works in all modern runtimes
- Strict TypeScript: 100% typed, zero `any` types
- Rich Error Handling: Provider-specific error classes with HTTP status codes
- Highly Configurable: Timeouts, custom endpoints, organization support
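The SSE support listed above refers to the standard Server-Sent Events wire format. As a rough, self-contained sketch of what parsing that format involves (`parseSSE` is a hypothetical helper for illustration, not the packages' actual utility):

```typescript
// Sketch: minimal Server-Sent Events parsing. An SSE stream is a series
// of "data: <payload>" lines separated by blank lines; OpenAI-style
// streams conventionally end with a "[DONE]" sentinel.
// Hypothetical helper, not the published SSE utilities.
function parseSSE(raw: string): string[] {
  const events: string[] = [];
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data:")) continue;  // skip comments and other fields
    const payload = line.slice(5).trim();
    if (payload === "[DONE]") break;          // end-of-stream sentinel
    events.push(payload);
  }
  return events;
}
```

In the real packages this parsing is built in, so you consume tokens through the `streamChat` AsyncIterable rather than handling SSE frames yourself.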
 
Standalone OpenAI provider - Everything you need in one package!
- Zero Dependencies: Everything included, no external packages needed
- Production Ready: Built-in retry, fallback, and error handling
- Highly Configurable: Timeouts, custom endpoints, organization support
- Edge Compatible: Works everywhere from Node.js to Cloudflare Workers
 
Standalone Anthropic provider - Everything you need in one package!
- Zero Dependencies: Everything included, no external packages needed
- Production Ready: Built-in retry, fallback, and error handling
- Highly Configurable: Timeouts, custom endpoints, API version control
- Edge Compatible: Works everywhere from Node.js to Cloudflare Workers
 
Standalone Mistral provider - Everything you need in one package!
- Zero Dependencies: Everything included, no external packages needed
- Production Ready: Built-in retry, fallback, and error handling
- Highly Configurable: Timeouts, custom endpoints, API version control
- Edge Compatible: Works everywhere from Node.js to Cloudflare Workers
 
Standalone Grok AI (xAI) provider - Everything you need in one package!
- Zero Dependencies: Everything included, no external packages needed
- Production Ready: Built-in retry, fallback, and error handling
- Highly Configurable: Timeouts, custom endpoints, API version control
- Edge Compatible: Works everywhere from Node.js to Cloudflare Workers
- xAI Integration: Native support for Grok models (grok-beta, grok-beta-vision, etc.)
 
Standalone Local LLM provider - Everything you need in one package!
- Zero Dependencies: Everything included, no external packages needed
- Production Ready: Built-in retry, fallback, and error handling
- Highly Configurable: Timeouts, custom endpoints, API version control
- Edge Compatible: Works everywhere from Node.js to Cloudflare Workers
- Local Endpoint Support: Ollama, LM Studio, and custom OpenAI-compatible APIs
 
- Zero Dependencies: Each package is completely standalone
- Production Ready: Built-in retry, fallback, and error handling
- Highly Configurable: Timeouts, custom endpoints, organization support
- Edge Compatible: Works everywhere from Node.js to Cloudflare Workers
- Streaming First: Real-time token streaming with AsyncIterable
- Enterprise Ready: Organization support, custom fetch, comprehensive error handling
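The multi-provider failover idea behind `withFallback` can be sketched in a few lines. This is a simplified stand-in (hypothetical `firstSuccessful` helper), not the published implementation, but it shows the shape of the `onFallback(error, providerIndex)` hook used earlier in this README:

```typescript
// Sketch: try each provider in order, invoking a callback when one
// fails, and surface the last error only if every provider fails.
type Provider<T> = () => Promise<T>;

async function firstSuccessful<T>(
  providers: Provider<T>[],
  onFallback?: (error: unknown, providerIndex: number) => void
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < providers.length; i++) {
    try {
      return await providers[i]();          // first success wins
    } catch (error) {
      lastError = error;
      onFallback?.(error, i);               // mirrors onFallback(error, providerIndex)
    }
  }
  throw lastError;                          // every provider failed
}
```

The real middleware additionally handles streaming and provider-specific error classes, but the ordering and callback semantics are the same idea.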
 
See examples/ for ready-to-run demos:
- Next.js Chat – Full Edge runtime chat UI with streaming and retry/fallback middleware
- Node.js Server – Minimal backend HTTP server exposing a `/chat` endpoint with SSE streaming
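A `/chat` endpoint like the Node.js example's streams its response as SSE frames. As a self-contained sketch of how streamed chunks can be framed before being written to the response (`formatSSE` is a hypothetical helper, not part of the published API):

```typescript
// Sketch: frame a streamed chunk as a Server-Sent Events message.
// Each event is a "data:" line followed by a blank line; the final
// chunk emits the conventional [DONE] sentinel.
// Hypothetical helper for illustration only.
function formatSSE(chunk: { delta: string; done: boolean }): string {
  return chunk.done
    ? "data: [DONE]\n\n"
    : `data: ${JSON.stringify({ delta: chunk.delta })}\n\n`;
}
```

A server would call this for every chunk yielded by `streamChat` and write the result to the HTTP response, which is what makes `curl -N` print tokens as they arrive.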
```bash
# Build all providers
npm run build:providers

# Build individual providers
npm run build:openai
npm run build:anthropic

# Copy shared files to providers
npm run copy-shared

# Test standalone providers
node test-standalone.js
```