Transform VRM 3D avatars into interactive AI characters with expressions, animations, and voice.
Build immersive AI experiences with realistic 3D avatars that can talk, express emotions, and respond intelligently to users.
- 🎤 Real-time Voice Chat - OpenAI Realtime API with WebRTC (no backend needed)
- 👄 Automatic Lip Sync - MFCC-based phoneme detection syncs with AI speech
- 💬 Talking Animations - Auto-plays gestures during AI conversations
- 🎨 Facial Expressions - Control 30+ VRM expressions with smooth transitions
- 🏃 Body Animations - Load and play Mixamo animations via simple URLs
- 🔍 RAG Support - Built-in vector search with Qdrant for knowledge bases
- 👁️ Natural Blinking - Randomized blinking for lifelike avatars
- 🛠️ Function Calling - OpenAI tools for custom functions and RAG
- 🎯 Simple API - URL-based animations, no complex setup
- 📦 Provider System - Plug-and-play OpenAI Realtime, Mock, or custom providers
- 🔄 Auto-Remapping - Mixamo animations work out of the box
- 💪 TypeScript - Full type safety and IntelliSense support
- ⚡ React Three Fiber - Built on the industry-standard 3D React framework
First, install the core 3D rendering libraries:
npm install three @react-three/fiber @react-three/drei
# or
pnpm add three @react-three/fiber @react-three/drei
# or
yarn add three @react-three/fiber @react-three/drei

Next, install the SDK packages:

npm install @khaveeai/react @khaveeai/core
# or
pnpm add @khaveeai/react @khaveeai/core
# or
yarn add @khaveeai/react @khaveeai/core

Then add any optional provider packages you need:

# For OpenAI Realtime API (voice chat + lip sync)
npm install @khaveeai/providers-openai-realtime
# For RAG (Retrieval-Augmented Generation)
npm install @khaveeai/providers-rag
# For development/testing (no API keys needed)
npm install @khaveeai/providers-mock

The SDK requires these peer dependencies (most React projects already have them):
{
"react": "^18.0.0 || ^19.0.0",
"react-dom": "^18.0.0 || ^19.0.0",
"three": "^0.160.0"
}

With the packages installed, render your first avatar:

import { Canvas } from '@react-three/fiber';
import { KhaveeProvider, VRMAvatar } from '@khaveeai/react';
export default function App() {
return (
<KhaveeProvider>
<Canvas>
<ambientLight intensity={0.5} />
<directionalLight position={[10, 10, 5]} />
<VRMAvatar
src="/models/character.vrm"
position={[0, -1, 0]}
/>
</Canvas>
</KhaveeProvider>
);
}

To add body animations, map animation names to FBX URLs:

const animations = {
idle: '/animations/idle.fbx', // Auto-plays on load
walk: '/animations/walk.fbx',
dance: '/animations/dance.fbx',
talking: '/animations/talking.fbx', // Played during AI speech
gesture1: '/animations/gesture.fbx' // Also played during speech
};
function App() {
return (
<KhaveeProvider>
<Canvas>
<VRMAvatar
src="/models/character.vrm"
animations={animations}
enableBlinking={true} // Natural blinking
enableTalkingAnimations={true} // Gestures during speech
/>
</Canvas>
</KhaveeProvider>
);
}

Note: Animations with 'talk', 'gesture', or 'speak' in their name are played automatically, chosen at random, while the AI is speaking.
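For example, a config like the following gives the avatar several clips to pick from during speech (file names here are hypothetical; any key containing 'talk', 'gesture', or 'speak' qualifies):

```tsx
// Hypothetical file names - every key containing 'talk', 'gesture', or 'speak'
// is treated as a speech animation and chosen at random while the AI talks.
const animations = {
  idle: '/animations/idle.fbx',
  talking1: '/animations/talking-casual.fbx',
  talking2: '/animations/talking-excited.fbx',
  gestureWave: '/animations/gesture-wave.fbx',
};
```

To give the avatar a voice, wire up the OpenAI Realtime provider: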
"use client";
import { KhaveeProvider, VRMAvatar, useRealtime } from '@khaveeai/react';
import { OpenAIRealtimeProvider } from '@khaveeai/providers-openai-realtime';
import { Canvas } from '@react-three/fiber';
import { useMemo } from 'react';
function ChatInterface() {
const {
isConnected,
connect,
disconnect,
sendMessage,
conversation,
chatStatus
} = useRealtime();
return (
<div>
{!isConnected ? (
<button onClick={connect}>🎤 Start Voice Chat</button>
) : (
<div>
<div>Status: {chatStatus}</div>
<button onClick={() => sendMessage('Hello!')}>Say Hello</button>
<button onClick={disconnect}>Disconnect</button>
{/* Conversation history */}
{conversation.map((msg, i) => (
<div key={i}>{msg.role}: {msg.text}</div>
))}
</div>
)}
</div>
);
}
export default function App() {
// Memoize provider to prevent recreation
const realtime = useMemo(() =>
new OpenAIRealtimeProvider({
apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY!,
voice: 'coral',
instructions: 'You are a helpful AI assistant.',
}), []
);
return (
<KhaveeProvider config={{ realtime }}>
<Canvas>
{/* Lip sync happens automatically! */}
<VRMAvatar src="/models/character.vrm" />
</Canvas>
<ChatInterface />
</KhaveeProvider>
);
}

✨ Automatic Features:
- Lip sync with MFCC phoneme detection
- Talking animations during speech
- Natural blinking
- WebRTC connection (no backend needed)
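Blinking and the talking gestures can be switched off through VRMAvatar props if you prefer to drive them yourself (a minimal sketch using the props shown earlier):

```tsx
import { VRMAvatar } from '@khaveeai/react';

// Opt out of the built-in behaviors when you want manual control.
function ManualAvatar() {
  return (
    <VRMAvatar
      src="/models/character.vrm"
      enableBlinking={false}            // no automatic blinking
      enableTalkingAnimations={false}   // no random gestures during speech
    />
  );
}
```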
import { useVRMExpressions } from '@khaveeai/react';
function ExpressionControls() {
const { setExpression, resetExpressions, setMultipleExpressions } = useVRMExpressions();
return (
<div>
{/* Single expression */}
<button onClick={() => setExpression('happy', 1)}>
😊 Happy
</button>
{/* Partial intensity */}
<button onClick={() => setExpression('happy', 0.5)}>
🙂 Slightly Happy
</button>
{/* Multiple expressions */}
<button onClick={() => setMultipleExpressions({
happy: 0.8,
surprised: 0.4
})}>
😲 Excited
</button>
{/* Reset all */}
<button onClick={() => resetExpressions()}>
😐 Neutral
</button>
</div>
);
}

import { useVRMAnimations } from '@khaveeai/react';
function AnimationControls() {
const { animate, stopAnimation, currentAnimation } = useVRMAnimations();
return (
<div>
<button onClick={() => animate('walk')}>
🚶 Walk
</button>
<button onClick={() => animate('dance')}>
💃 Dance
</button>
<button onClick={() => animate('idle')}>
🧍 Idle
</button>
<button onClick={() => stopAnimation()}>
⏹️ Stop
</button>
<p>Current: {currentAnimation || 'none'}</p>
</div>
);
}

your-project/
├── public/
│   ├── models/
│   │   └── character.vrm        # Your VRM model
│   └── animations/
│       ├── idle.fbx             # Mixamo animations
│       ├── walk.fbx
│       └── dance.fbx
├── src/
│   ├── app/
│   │   └── page.tsx             # Your main component
│   └── ...
└── package.json
Root provider that manages VRM state and optional LLM/TTS configuration.
<KhaveeProvider config={config}>
{children}
</KhaveeProvider>

Props:

- `config?` - Optional LLM/TTS provider configuration
- `children` - React children
Renders a VRM 3D character with animations and expressions.
<VRMAvatar
src="/models/character.vrm"
animations={animations}
position={[0, -1, 0]}
rotation={[0, Math.PI, 0]}
scale={[1, 1, 1]}
enableBlinking={true}
enableTalkingAnimations={true}
/>

Props:

- `src` - URL to the VRM model file (required)
- `animations?` - Animation configuration (URLs to FBX files)
- `position?` - 3D position `[x, y, z]` (default: `[0, 0, 0]`)
- `rotation?` - 3D rotation `[x, y, z]` (default: `[0, Math.PI, 0]`)
- `scale?` - 3D scale `[x, y, z]` (default: `[1, 1, 1]`)
- `enableBlinking?` - Enable natural blinking (default: `true`)
- `enableTalkingAnimations?` - Enable gestures during AI speech (default: `true`)
Control facial expressions with smooth transitions.
const {
expressions, // Current expression values
setExpression, // Set single expression
resetExpressions, // Reset all to neutral
setMultipleExpressions // Set multiple at once
} = useVRMExpressions();

Example:
setExpression('happy', 1); // Full happiness
setExpression('happy', 0.5); // Partial
setMultipleExpressions({ // Multiple
happy: 0.8,
surprised: 0.3
});
resetExpressions();           // Reset all

Play and control body animations.
const {
animate, // Play animation by name
stopAnimation, // Stop all animations
currentAnimation // Currently playing animation name
} = useVRMAnimations();

Example:
animate('walk'); // Play walk animation
animate('dance'); // Play dance animation
stopAnimation();    // Stop all

Real-time voice chat with OpenAI Realtime API.
const {
isConnected,
connect,
disconnect,
sendMessage,
conversation,
chatStatus,
currentPhoneme, // Current phoneme for lip sync
interrupt // Interrupt AI speech
} = useRealtime();
// Usage
await connect(); // Start voice chat
await sendMessage('Hello!'); // Send text message
interrupt(); // Stop AI from speaking
await disconnect();           // End session

Chat Status Values:

- `stopped` - Not connected
- `ready` - Connected, waiting
- `listening` - User is speaking
- `thinking` - AI is processing
- `speaking` - AI is responding
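These statuses drive most avatar UI, so a small mapping to user-facing labels is often handy (a minimal sketch; the labels are just an example):

```tsx
import { useRealtime } from '@khaveeai/react';

// Example labels - map each chat status to a short user-facing string.
const STATUS_LABELS: Record<string, string> = {
  stopped: 'Offline',
  ready: 'Ready',
  listening: 'Listening…',
  thinking: 'Thinking…',
  speaking: 'Speaking…',
};

function StatusBadge() {
  const { chatStatus } = useRealtime();
  return <span>{STATUS_LABELS[chatStatus] ?? chatStatus}</span>;
}
```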
Analyze audio files for lip sync (separate from realtime).
const {
analyzeLipSync,
stopLipSync,
isAnalyzing,
currentPhoneme
} = useAudioLipSync();
// Usage
await analyzeLipSync('/audio/speech.wav', {
sensitivity: 0.8,
intensityMultiplier: 3.0
});

Access the raw VRM instance for advanced use cases.
const vrm = useVRM();
if (vrm) {
console.log('VRM loaded:', vrm.meta.name);
}

Access all SDK functionality at once.
const {
vrm,
setExpression,
animate,
// ... all functions
} = useKhavee();

Example - drive expressions from the chat status:

import { useEffect } from 'react';
import { useRealtime, useVRMExpressions } from '@khaveeai/react';

function VoiceChat() {
const { isConnected, connect, chatStatus } = useRealtime();
const { setExpression } = useVRMExpressions();
// Set expressions based on chat status
useEffect(() => {
if (chatStatus === 'listening') {
setExpression('surprised', 0.3);
} else if (chatStatus === 'thinking') {
setExpression('neutral', 1);
} else if (chatStatus === 'speaking') {
setExpression('happy', 0.7);
}
}, [chatStatus, setExpression]);
return (
<button onClick={connect} disabled={isConnected}>
🎤 Start Voice Chat
</button>
);
}

Example - combine an animation with an expression:

import { useVRMAnimations, useVRMExpressions } from '@khaveeai/react';

function DanceWithJoy() {
const { animate } = useVRMAnimations();
const { setExpression } = useVRMExpressions();
const danceHappily = () => {
animate('dance');
setExpression('happy', 1);
};
return <button onClick={danceHappily}>Dance!</button>;
}

Example - text chat over the realtime session:

import { useState } from 'react';
import { useRealtime } from '@khaveeai/react';

function TextChat() {
const { sendMessage, conversation, chatStatus } = useRealtime();
const [input, setInput] = useState('');
const handleSend = async () => {
if (!input.trim()) return;
await sendMessage(input);
setInput('');
};
return (
<div>
<div className="messages">
{conversation.map((msg, i) => (
<div key={i}>
<strong>{msg.role}:</strong> {msg.text}
</div>
))}
</div>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && handleSend()}
disabled={chatStatus === 'speaking'}
/>
<button onClick={handleSend}>Send</button>
</div>
);
}

Real-time voice chat with automatic lip sync:
import { OpenAIRealtimeProvider } from '@khaveeai/providers-openai-realtime';
import { useMemo } from 'react';
function App() {
const realtime = useMemo(() =>
new OpenAIRealtimeProvider({
apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY!,
voice: 'coral', // or: alloy, echo, sage, shimmer
instructions: 'You are a helpful AI assistant.',
temperature: 0.8,
tools: [] // Optional: Add RAG or custom functions
}), []
);
return (
<KhaveeProvider config={{ realtime }}>
{/* Your app */}
</KhaveeProvider>
);
}

Add knowledge base search to your AI:
// app/lib/rag.ts (server-side)
"use server";
import { RAGProvider } from '@khaveeai/providers-rag';
export async function searchKnowledgeBase(query: string) {
const rag = new RAGProvider({
qdrantUrl: process.env.QDRANT_URL!,
qdrantApiKey: process.env.QDRANT_API_KEY,
collectionName: process.env.QDRANT_COLLECTION!,
openaiApiKey: process.env.OPENAI_API_KEY!,
});
return await rag.search(query);
}
// app/page.tsx (client-side)
"use client";
const realtime = new OpenAIRealtimeProvider({
apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY!,
tools: [
{
name: 'search_knowledge_base',
description: 'Search the knowledge base',
parameters: {
query: { type: 'string', description: 'Search query', required: true }
},
execute: async (args) => await searchKnowledgeBase(args.query)
}
]
});

Perfect for testing without API keys:
import { MockLLM, MockTTS } from '@khaveeai/providers-mock';
const config = {
llm: new MockLLM(),
tts: new MockTTS(),
};
<KhaveeProvider config={config}>
{/* Test your UI without API costs */}
</KhaveeProvider>

Where to find VRM models:

- VRoid Hub - Free VRM characters
- VRoid Studio - Create your own
- Booth.pm - Buy premium models
To get animations from Mixamo:

- Go to Mixamo
- Select any animation
- Download in FBX format
- Choose "Without Skin" (no skeleton, just the animation)
- Use the URL in your `animations` config (see the config sketch after the recommended animations below)
Recommended Animations:
- Idle → Breathing Idle
- Walk → Walking
- Dance → Hip Hop Dancing, Swing Dancing
- Talk → Talking with Hands
- Wave → Waving
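As a sketch, the downloaded clips map into the `animations` config like this (file names are hypothetical; the comments show the Mixamo clip each one came from):

```tsx
// Hypothetical file names - each key becomes the name you pass to animate().
const animations = {
  idle: '/animations/breathing-idle.fbx',         // "Breathing Idle"
  walk: '/animations/walking.fbx',                // "Walking"
  dance: '/animations/hip-hop-dancing.fbx',       // "Hip Hop Dancing"
  talking: '/animations/talking-with-hands.fbx',  // "Talking with Hands"
  wave: '/animations/waving.fbx',                 // "Waving"
};
```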
If the avatar doesn't load, check these:

- ✅ VRM file is valid (test it in VRoid Hub)
- ✅ Wrapped in `<Canvas>` from `@react-three/fiber`
- ✅ Wrapped in `<KhaveeProvider>`
- ✅ Lights added to the scene (`<ambientLight>`, `<directionalLight>`)
If animations don't play, check these:

- ✅ FBX files are from Mixamo
- ✅ Downloaded as FBX (not BVH)
- ✅ "Without Skin" option selected
- ✅ URLs are correct and accessible
- ✅ Animation name matches the config key (see the sketch below)
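The string you pass to animate() has to be a key from your `animations` config (a tiny sketch; the exact-match behavior is an assumption based on the examples above):

```tsx
import { useVRMAnimations } from '@khaveeai/react';

const animations = { dance: '/animations/dance.fbx' }; // key is 'dance'

function PlayDance() {
  const { animate } = useVRMAnimations();
  // animate('dance') matches the key above; animate('Dance') would not,
  // so keep config keys and the names you pass to animate() identical.
  return <button onClick={() => animate('dance')}>Dance</button>;
}
```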
If expressions don't work, check these:

- ✅ VRM model has expression support
- ✅ Expression names are correct (check the VRM in VRoid Hub)
- ✅ Values are between 0 and 1
- ✅ Called inside a component wrapped by `<KhaveeProvider>`
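If you are unsure which expression names a model actually supports, one way to inspect them is through the raw instance from useVRM. This assumes the instance is a @pixiv/three-vrm VRM with an expressionManager, which these docs do not guarantee:

```tsx
import { useEffect } from 'react';
import { useVRM } from '@khaveeai/react';

function LogExpressionNames() {
  const vrm = useVRM();
  useEffect(() => {
    if (!vrm) return;
    // Assumption: the raw VRM exposes three-vrm's expressionManager.expressionMap.
    const names = Object.keys(vrm.expressionManager?.expressionMap ?? {});
    console.log('Available expressions:', names);
  }, [vrm]);
  return null;
}
```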
If voice chat doesn't connect, check these:

- ✅ Provider configured in `<KhaveeProvider config={...}>`
- ✅ API keys are valid
- API Reference - Complete API docs
- Examples - Working examples
- Function Documentation - All functions documented
- IntelliSense Guide - IDE integration guide
Contributions are welcome! Please read our Contributing Guide for details.
MIT © Khavee AI
Check out our example app to see:
- ✅ Expression controls
- ✅ Animation panel
- ✅ LLM chat integration
- ✅ Voice synthesis
- ✅ Combined interactions
- 📧 Email: support@khaveeai.com
- 💬 Discord: Join our community
- 🐛 Issues: GitHub Issues
Built with ❤️ by the Khavee AI Team