React components for streaming-safe Markdown and AI chat interfaces.
Prefer Chinese docs? See README.zh-CN.md.
- **Streaming-safe rendering:** `useSmoothStream` queues graphemes so partially streamed Markdown never breaks code fences or inline structures.
- **Shiki-powered code blocks:** `useShikiHighlight` lazy-loads themes and languages, falling back gracefully while syntax highlighting boots.
- **Message-aware primitives:** `MessageItem`, `MessageBlockRenderer`, and `MessageBlockStore` model complex assistant replies (thinking, tool calls, media, etc.).
- **Highly customizable:** Extend `react-markdown` via the `components` prop, swap the default `CodeBlock`, or plug in your own themes and callbacks.
- **Tiny API surface:** Stream text, toggle `status`, and receive `onComplete` when everything has flushed. No heavy state machines required.
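The first feature above can be made concrete with a small sketch. This is not `useSmoothStream`'s actual implementation, just the core idea: while a stream is mid-code-block, temporarily close the open fence so the renderer never sees broken Markdown.

```typescript
// Illustrative sketch only -- not the library's internal code.
// An odd number of ``` markers means a code block is still open,
// so we append a temporary closing fence before rendering.
const FENCE = '`'.repeat(3); // "```"

function closeOpenFences(partial: string): string {
  const count = (partial.match(new RegExp('^' + FENCE, 'gm')) ?? []).length;
  return count % 2 === 1 ? `${partial}\n${FENCE}` : partial;
}

const partial = `${FENCE}js\nconst x = 1;`;
console.log(closeOpenFences(partial).endsWith(`\n${FENCE}`)); // true
```

The real hook combines this kind of repair with grapheme-level queueing so the fix is invisible once the closing fence actually arrives.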
```sh
pnpm add streaming-markdown-react
# or
npm install streaming-markdown-react
# or
yarn add streaming-markdown-react
```

```tsx
import { StreamingMarkdown, StreamingStatus } from 'streaming-markdown-react';

export function MessageBubble({
  text,
  status,
}: {
  text: string;
  status: StreamingStatus;
}) {
  return (
    <StreamingMarkdown
      status={status}
      className="prose prose-neutral max-w-none"
      onComplete={() => console.log('stream finished')}
    >
      {text}
    </StreamingMarkdown>
  );
}
```

Pass the latest chunked Markdown through `children`, keep `status="streaming"` until the LLM closes the stream, and use `onComplete` for follow-up UI work once every queued token is painted.
```tsx
import { useState, useEffect } from 'react';
import { StreamingMarkdown, StreamingStatus } from 'streaming-markdown-react';

export function LiveAssistantMessage({ stream }: { stream: ReadableStream<string> }) {
  const [text, setText] = useState('');
  const [status, setStatus] = useState<StreamingStatus>('streaming');

  useEffect(() => {
    const reader = stream.getReader();
    let cancelled = false;

    async function read() {
      while (!cancelled) {
        const { value, done } = await reader.read();
        if (done) {
          setStatus('success');
          break;
        }
        setText((prev) => prev + (value ?? ''));
      }
    }

    read().catch(() => setStatus('error'));

    return () => {
      cancelled = true;
      // cancel() resolves any pending read; releaseLock() would throw
      // while a read is still in flight.
      reader.cancel().catch(() => {});
    };
  }, [stream]);

  return (
    <StreamingMarkdown
      status={status}
      minDelay={12}
      onComplete={() => console.log('assistant block done')}
    >
      {text}
    </StreamingMarkdown>
  );
}
```

`minDelay` throttles animation frames for high-throughput streams, while `status` flips to `'success'` the moment upstream tokenization ends.
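The `minDelay` throttling can be sketched in a few lines (illustrative only, not the component's internals): skip a flush unless at least `minDelay` milliseconds have passed since the previous one.

```typescript
// Sketch of minDelay-style frame throttling (assumed behavior,
// not the component's source).
function createThrottle(minDelay: number) {
  let last = -Infinity; // timestamp of the last accepted flush

  return (now: number): boolean => {
    if (now - last < minDelay) return false; // too soon, skip this frame
    last = now;
    return true;
  };
}

const shouldFlush = createThrottle(10);
shouldFlush(0);  // true  -> first flush always runs
shouldFlush(5);  // false -> only 5ms elapsed
shouldFlush(12); // true  -> 12ms since last accepted flush
```

A larger `minDelay` trades smoothness for fewer re-renders, which matters when tokens arrive faster than the display can usefully update.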
| Export | Description |
|---|---|
| `StreamingMarkdown` | Streaming-safe Markdown renderer with GFM and overridable components. |
| `StreamingStatus` | `'idle' \| 'streaming' \| 'success' \| 'error'` helper union for UI state. |
| `MessageItem` | Splits assistant responses into typed blocks backed by `MessageBlockStore`. |
| `MessageBlockRenderer` | Default renderer for text, thinking, tool, media, and error blocks. |
| `MessageBlockStore` | Lightweight in-memory store for diffing and hydrating message blocks. |
| `useSmoothStream` | Grapheme-level streaming queue powered by `Intl.Segmenter`. |
| `useShikiHighlight` | Lazy-loaded Shiki highlighter with light/dark themes. |
| `CodeBlock` | Default code block component; wrap or replace it for custom UI. |
| Prop | Type | Description |
|---|---|---|
| `children` | `ReactNode` | Markdown (partial or complete) to render. |
| `className` | `string` | Utility classes for the container. |
| `components` | `Partial<Components>` | Extend/override `react-markdown` element renderers. |
| `status` | `StreamingStatus` | Controls the internal streaming lifecycle. |
| `onComplete` | `() => void` | Fires once the queue drains after the stream finishes. |
| `minDelay` | `number` | Minimum milliseconds between animation frames (default 10). |
| `blockId` | `string` | Reserved for coordinating multi-block updates. |
- **Override Markdown elements:** provide a `components` map to inject callouts, alerts, or custom typography.

  ```tsx
  <StreamingMarkdown
    components={{
      blockquote: (props) => (
        <div className="rounded-lg border-l-4 border-amber-500 bg-amber-50 p-3 text-sm">
          {props.children}
        </div>
      ),
    }}
  >
    {text}
  </StreamingMarkdown>
  ```
- **Theme-aware code blocks:** use the exported `CodeBlock` or compose `useShikiHighlight` with your own chrome.

  ```tsx
  import { CodeBlock, useShikiHighlight } from 'streaming-markdown-react';
  ```
- **Message-first UIs:** `MessageItem` and `MessageBlockRenderer` coordinate per-block rendering so chat transcripts stay in sync during streaming diffs.
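The diff-and-hydrate idea behind `MessageBlockStore` can be sketched as follows. The real store's API may differ; the `Block` shape and `TinyBlockStore` class here are illustrative assumptions, not the package's exports.

```typescript
// Minimal sketch of a diffing block store (shapes are assumptions).
type Block = { id: string; type: string; content: string };

class TinyBlockStore {
  private blocks = new Map<string, Block>();

  // Upsert a block; returns true only when content actually changed,
  // so callers can skip re-rendering untouched blocks.
  upsert(block: Block): boolean {
    const prev = this.blocks.get(block.id);
    if (prev && prev.content === block.content) return false;
    this.blocks.set(block.id, block);
    return true;
  }

  get(id: string): Block | undefined {
    return this.blocks.get(id);
  }
}

const store = new TinyBlockStore();
store.upsert({ id: 'b1', type: 'main_text', content: 'Hel' });   // true
store.upsert({ id: 'b1', type: 'main_text', content: 'Hel' });   // false (no change)
store.upsert({ id: 'b1', type: 'main_text', content: 'Hello' }); // true
```

Returning a changed/unchanged signal per block is what keeps a long transcript cheap to update while only the newest block is streaming.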
All message-related types (`Message`, `MessageBlock`, `MessageMetadata`, etc.) are exported so your AI pipeline and UI can share a single contract.
```ts
// MessageBlockType is used as a value below, so it cannot come from a
// type-only import.
import { MessageBlockType, type Message } from 'streaming-markdown-react';

const assistant: Message = {
  id: 'msg-1',
  role: 'assistant',
  blocks: [
    {
      id: 'block-1',
      type: MessageBlockType.MAIN_TEXT,
      content: 'Here is your SQL query...',
    },
  ],
};
```

This repository also serves as a development playground for `streaming-markdown-react`. The root project is a full-featured Next.js AI Chatbot that demonstrates the package in action.
- Next.js App Router with React Server Components
- AI SDK integration with xAI (Grok) models via Vercel AI Gateway
- shadcn/ui components styled with Tailwind CSS
- Neon Serverless Postgres for chat history
- Vercel Blob for file storage
- Auth.js authentication
- Install dependencies:

  ```sh
  pnpm install
  ```

- Set up environment variables (see `.env.example`):

  ```sh
  # For Vercel users:
  vercel env pull

  # Or manually create .env.local with:
  # - POSTGRES_URL
  # - AUTH_SECRET
  # - AI_GATEWAY_API_KEY (for non-Vercel deployments)
  ```

- Run database migrations:

  ```sh
  pnpm db:migrate
  ```

- Start the development server:

  ```sh
  pnpm dev
  ```

The playground will run on localhost:4000.
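For manual setup, a `.env.local` might look like the sketch below. All values are placeholders; substitute your own secrets.

```sh
# .env.local -- placeholder values, do not commit real secrets
POSTGRES_URL=postgres://user:password@host:5432/dbname
AUTH_SECRET=generate-with-openssl-rand-base64-32
AI_GATEWAY_API_KEY=your-gateway-key
```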
```sh
pnpm dev          # Start dev server
pnpm build        # Build for production
pnpm lint         # Check code with Ultracite
pnpm format       # Auto-fix code

# Database (Drizzle ORM)
pnpm db:migrate   # Apply migrations
pnpm db:generate  # Generate new migrations
pnpm db:studio    # Open Drizzle Studio GUI

# Testing
pnpm test         # Run Playwright e2e tests
```

For detailed development instructions, see packages/streaming-markdown/README.md.
MIT © 2024-present. Feel free to use it in production or open-source projects.