A lightweight, portable Node.js/TypeScript library providing a unified interface for interacting with multiple Generative AI providers—both cloud-based (OpenAI, Anthropic, Google Gemini, Mistral) and local (llama.cpp, stable-diffusion.cpp). Supports both LLM chat and AI image generation.
- 🔌 Unified API - Single interface for multiple AI providers
- 🏠 Local & Cloud Models - Run models locally with llama.cpp or use cloud APIs
- 🖼️ Image Generation - First-class support for AI image generation (OpenAI, local diffusion)
- 🔐 Flexible API Key Management - Bring your own key storage solution
- 📦 Zero Electron Dependencies - Works in any Node.js environment
- 🎯 TypeScript First - Full type safety and IntelliSense support
- ⚡ Lightweight - Minimal dependencies, focused functionality
- 🛡️ Provider Normalization - Consistent responses across different AI APIs
- 🎨 Configurable Model Presets - Built-in presets with full customization options
- 🎭 Template Engine - Sophisticated templating with conditionals and variable substitution
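To make the last bullet concrete, here is a toy sketch of what conditionals and variable substitution mean in a prompt template. It is a self-contained illustration of the idea only, not genai-lite's actual template API or syntax; see the Prompting Utilities documentation for those.

```typescript
// Toy template renderer for illustration -- NOT genai-lite's engine.
function renderToy(template: string, vars: Record<string, string | undefined>): string {
  return template
    // {{#if name}}...{{/if}}: keep the body only when the variable is set
    .replace(/\{\{#if (\w+)\}\}([\s\S]*?)\{\{\/if\}\}/g, (_, name, body) =>
      vars[name] ? body : ''
    )
    // {{name}}: substitute the variable's value
    .replace(/\{\{(\w+)\}\}/g, (_, name) => vars[name] ?? '');
}

const prompt = renderToy(
  'You are a {{role}} assistant.{{#if project}} The user is working on {{project}}.{{/if}}',
  { role: 'coding', project: 'a TypeScript CLI' }
);
// => "You are a coding assistant. The user is working on a TypeScript CLI."
```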
Install via npm:

```bash
npm install genai-lite
```

Set API keys as environment variables:

```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=AIza...
```

Then send a chat request:

```typescript
import { LLMService, fromEnvironment } from 'genai-lite';

const llmService = new LLMService(fromEnvironment);

const response = await llmService.sendMessage({
  providerId: 'openai',
  modelId: 'gpt-4.1-mini',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello, how are you?' }
  ]
});

if (response.object === 'chat.completion') {
  console.log(response.choices[0].message.content);
}
```
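In real code you will also want to handle the failure branch. A minimal sketch, assuming that a response whose `object` field is anything other than `'chat.completion'` describes an error (Core Concepts documents the exact shape):

```typescript
if (response.object === 'chat.completion') {
  console.log(response.choices[0].message.content);
} else {
  // Assumption: non-completion responses carry error details. Logging the
  // whole object is a safe default until you know the exact error shape.
  console.error('LLM request failed:', response);
}
```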
To run against a local llama.cpp server instead, use the `llamacpp` provider; no API key is required:

```typescript
import { LLMService } from 'genai-lite';

// Start the llama.cpp server first:
//   llama-server -m /path/to/model.gguf --port 8080
const llmService = new LLMService(async () => 'not-needed');

const response = await llmService.sendMessage({
  providerId: 'llamacpp',
  modelId: 'llamacpp', // generic ID for whatever model is loaded
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing briefly.' }
  ]
});

if (response.object === 'chat.completion') {
  console.log(response.choices[0].message.content);
}
```

Image generation follows the same pattern:

```typescript
import { ImageService, fromEnvironment } from 'genai-lite';
import { writeFileSync } from 'node:fs';
const imageService = new ImageService(fromEnvironment);

const result = await imageService.generateImage({
  providerId: 'openai-images',
  modelId: 'gpt-image-1-mini',
  prompt: 'A serene mountain lake at sunrise, photorealistic',
  settings: {
    width: 1024,
    height: 1024,
    quality: 'high'
  }
});

if (result.object === 'image.result') {
  writeFileSync('output.png', result.data[0].data);
}
```

Comprehensive documentation is available in the genai-lite-docs folder.
- Documentation Hub - Navigation and overview
- Core Concepts - API keys, presets, settings, errors
- LLM Service - Text generation and chat
- Image Service - Image generation (cloud and local)
- llama.cpp Integration - Local LLM inference
- Prompting Utilities - Template engine, token counting, content parsing
- TypeScript Reference - Type definitions
- Providers & Models - Supported providers and models
- Example: Chat Demo - Reference implementation for chat applications
- Example: Image Demo - Reference implementation for image generation applications
- Troubleshooting - Common issues and solutions
LLM providers:

- OpenAI - GPT-4.1, o4-mini
- Anthropic - Claude 4, Claude 3.7, Claude 3.5
- Google Gemini - Gemini 2.5, Gemini 2.0
- Mistral - Codestral, Devstral
- llama.cpp - Run any GGUF model locally (no API keys required)
Image providers:

- OpenAI Images - gpt-image-1, dall-e-3, dall-e-2
- genai-electron - Local Stable Diffusion models
See Providers & Models for complete model listings and capabilities.
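Because the interface is unified, switching providers is just a matter of swapping identifiers in the same `sendMessage` call. The Anthropic IDs below are illustrative guesses, not confirmed values; Providers & Models lists the exact strings:

```typescript
// Same call shape as the Quick Start; only the IDs change.
// 'anthropic' and 'claude-3-5-sonnet' are illustrative -- check Providers & Models.
const response = await llmService.sendMessage({
  providerId: 'anthropic',
  modelId: 'claude-3-5-sonnet',
  messages: [{ role: 'user', content: 'Summarize the MIT license in one sentence.' }]
});
```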
genai-lite uses a flexible API key provider pattern. Use the built-in `fromEnvironment` provider or create your own:

```typescript
import { ApiKeyProvider, LLMService } from 'genai-lite';

// mySecureStorage stands in for your own storage layer (OS keychain, vault, ...)
const myKeyProvider: ApiKeyProvider = async (providerId: string) => {
  const key = await mySecureStorage.getKey(providerId);
  return key || null;
};

const llmService = new LLMService(myKeyProvider);
```

See Core Concepts for detailed examples, including Electron integration.
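For example, a self-contained provider backed by an in-memory map (the keys below are placeholders) follows the same pattern; swapping the map for an OS keychain or Electron's safeStorage changes nothing about the interface:

```typescript
import { ApiKeyProvider, LLMService } from 'genai-lite';

// Placeholder keys for illustration; load real keys from secure storage.
const keys: Record<string, string> = {
  openai: 'sk-...',
  anthropic: 'sk-ant-...'
};

const mapKeyProvider: ApiKeyProvider = async (providerId) => keys[providerId] ?? null;

const llmService = new LLMService(mapKeyProvider);
```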
The library includes two complete demo applications showcasing all features:
- chat-demo - Interactive chat application with all LLM providers, template rendering, and advanced features
- image-gen-demo - Interactive image generation UI with OpenAI and local diffusion support
Both demos are production-ready React + Express applications that serve as reference implementations and testing environments. See Example: Chat Demo and Example: Image Demo for detailed documentation.
Contributions are welcome! Feel free to submit a Pull Request; for major changes, please open an issue first to discuss what you would like to change.
To set up a development environment:

```bash
npm install
npm run build
npm test
```

See Troubleshooting for information about E2E tests and development workflows.
This project is licensed under the MIT License - see the LICENSE file for details.
Originally developed as part of the Athanor project, genai-lite has been extracted and made standalone to benefit the wider developer community.