
# Streaming Interface

xmllm provides several ways to process AI responses, from simple one-shot requests to complex streaming scenarios.

## Quick Results with `simple()`

For when you just want the final result and don't need the intermediate streaming updates:

```js
import { simple } from 'xmllm';

// Get a clean, complete result
const result = await simple('Analyze this text: ' + TEXT, {
  schema: {
    sentiment: String,
    score: Number
  }
});

console.log(result);
// { sentiment: 'positive', score: 0.92 }
```

## Streaming with `stream()`

For when you need to process updates as they arrive:

### 1. Raw XML Streaming

```js
import { stream } from 'xmllm';

const thoughts = stream('Share some thoughts')
  .select('thought')  // Find <thought> elements
  .text();            // Get text content

for await (const thought of thoughts) {
  console.log(thought);
}
```

### 2. Schema-Based Streaming

```js
import { stream } from 'xmllm';

const analysis = stream('Analyze this text', {
  schema: {
    sentiment: String,
    score: Number
  }
});

for await (const update of analysis) {
  console.log(update);
}
```
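Schema-based streaming yields a sequence of progressively more complete objects rather than a single final value. As a rough illustration of that shape — using a mock async generator, not xmllm's actual update sequence — the loop above might observe something like:

```javascript
// Illustration only: mimic the kind of partial updates a
// schema-based stream yields as the response arrives.
async function* mockAnalysis() {
  yield { sentiment: 'pos' };                    // partial text so far
  yield { sentiment: 'positive' };               // element completed
  yield { sentiment: 'positive', score: 0.92 };  // full result
}

const updates = [];
for await (const update of mockAnalysis()) {
  updates.push(update);
}

console.log(updates.at(-1)); // { sentiment: 'positive', score: 0.92 }
```

The last update is the complete result; earlier ones let you render progress as the model responds.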

## Configuration

### `stream(promptOrConfig, options)`

#### `promptOrConfig`

Either a string prompt or a configuration object:

```ts
{
  prompt: string,              // The prompt to send
  model?: string | string[],   // Model selection
  strategy?: string,           // Prompt strategy (see strategies.md)
  schema?: SchemaType,         // Enable schema processing
  hints?: HintType,
  temperature?: number,        // 0-2, default 0.72
  maxTokens?: number,          // Max response length
  cache?: boolean,             // Enable caching

  // Schema-specific options:
  system?: string,             // System prompt
  mode?: 'state_open' | 'state_closed' | 'root_open' | 'root_closed'
}
```
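As a concrete (hypothetical) example, a configuration object combining several of these options might look like this — the prompt, system text, and values are placeholders, not recommendations:

```javascript
// Hypothetical configuration; all values here are illustrative.
const config = {
  prompt: 'Analyze this text: great product, fast shipping',
  schema: { sentiment: String, score: Number },
  temperature: 0.5,   // steadier than the 0.72 default
  maxTokens: 200,
  system: 'You are a concise sentiment analyst.'
};
// const analysis = stream(config);  // with xmllm installed
```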

#### `options`

Additional options that override promptOrConfig:

```ts
{
  llmStream?: StreamFunction,      // Custom stream provider
  keys?: Record<string, string>,   // Provider API keys
  clientProvider?: ClientProvider  // For browser usage
}
```

## Chainable Methods

### Selection & Extraction

```ts
.select(selector: string)  // CSS selector for elements
.text()                    // Extract text content
.closedOnly()              // Only complete elements
```
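While a response is still arriving, a selected element may be incomplete, and `.closedOnly()` drops those partial matches. A rough sketch of the distinction, using mock parse events rather than a live stream (the event shape here is illustrative, not xmllm's internal representation):

```javascript
// Mock parse events: each carries its text and whether the element's
// closing tag has been seen yet.
const events = [
  { text: 'The quick br', closed: false },        // still streaming
  { text: 'The quick brown fox', closed: true },
  { text: 'Another th', closed: false },
  { text: 'Another thought', closed: true }
];

// closedOnly-style filtering: keep only complete elements.
const complete = events.filter(e => e.closed).map(e => e.text);
console.log(complete); // ['The quick brown fox', 'Another thought']
```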

### Transformation

```ts
.map(fn: (value: T) => U)                         // Transform values
.filter(fn: (value: T) => boolean)                // Filter values
.reduce(fn: (acc: U, value: T) => U, initial: U)  // Reduce values
```
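These transforms apply to each value as it streams through. A minimal sketch of the chained filter-then-map semantics, with a plain async generator standing in for a live model stream:

```javascript
// Mock stream of numeric scores in place of a model response.
async function* scores() {
  yield 0.2; yield 0.8; yield 0.95;
}

// Equivalent of .filter(s => s > 0.5).map(s => Math.round(s * 100))
const kept = [];
for await (const s of scores()) {
  if (s > 0.5) kept.push(Math.round(s * 100));
}
console.log(kept); // [80, 95]
```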

### Collection

```ts
.first()           // Get first result
.last(n?: number)  // Get last n results (default 1)
.all()             // Get all results as array
.merge()           // Deep merge all results
```
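Collection methods resolve the stream into a value. A sketch of the `first`/`last`/`all` semantics over a finished stream, again mocked with a plain async generator:

```javascript
// Mock stream of results.
async function* words() {
  yield 'alpha'; yield 'beta'; yield 'gamma';
}

const all = [];                // .all()  -> every result
for await (const w of words()) all.push(w);

const first = all[0];          // .first() -> 'alpha'
const last2 = all.slice(-2);   // .last(2) -> ['beta', 'gamma']
console.log(first, last2);
```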

### Pagination

```ts
.take(n: number)  // Take first n results
.skip(n: number)  // Skip first n results
```
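In terms of the values that come out, `take` and `skip` behave like slicing the sequence of results:

```javascript
// Array standing in for the sequence of streamed results.
const results = ['r1', 'r2', 'r3', 'r4', 'r5'];

const taken = results.slice(0, 2);  // .take(2) -> ['r1', 'r2']
const rest  = results.slice(2);     // .skip(2) -> ['r3', 'r4', 'r5']
console.log(taken, rest);
```

On a live stream, taking the first n results also lets the consumer stop reading early, which plain array slicing doesn't capture.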

### Debug

```ts
.debug(label?: string)  // Log debug information
.raw()                  // Get raw response chunks
```

## Browser Usage

For browser environments, see the Provider Setup Guide:

```js
import { stream, ClientProvider } from 'xmllm/client';

const client = new ClientProvider('http://localhost:3124/api/stream');

const result = await stream('Query', {
  clientProvider: client
}).last();
```

## Error Handling

```js
import { stream } from 'xmllm';

try {
  const result = await stream('Query')
    .select('answer')
    .first();
} catch (error) {
  if (error.message.includes('Failed to connect')) {
    // Handle network error
  }
}
```
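Network failures are often transient, so one common pattern is to wrap the call in a retry helper. A generic sketch — `withRetries` is not part of xmllm's API, just an illustration:

```javascript
// Generic retry helper (not part of xmllm): retries an async
// operation a few times before giving up.
async function withRetries(fn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Demo with a flaky operation that fails twice, then succeeds.
let calls = 0;
const result = await withRetries(async () => {
  calls++;
  if (calls < 3) throw new Error('Failed to connect');
  return 'answer';
});
console.log(result); // 'answer'
```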