Commit

Update readme
jaredpalmer committed May 24, 2023
1 parent a628b09 commit b10e483
Showing 1 changed file with 9 additions and 6 deletions.
README.md: 15 changes (9 additions & 6 deletions)
@@ -24,12 +24,11 @@ pnpm install @vercel/ai-utils
- [Wire up the UI](#wire-up-the-ui)
- [API Reference](#api-reference)
- [`OpenAIStream(res: Response, cb: AIStreamCallbacks): ReadableStream`](#openaistreamres-response-cb-aistreamcallbacks-readablestream)
-- [`HuggingFaceStream(iter: AsyncGenerator<any>, cb: AIStreamCallbacks): ReadableStream`](#huggingfacestreamiter-asyncgeneratorany-cb-aistreamcallbacks-readablestream)
+- [`HuggingFaceStream(iter: AsyncGenerator<any>, cb?: AIStreamCallbacks): ReadableStream`](#huggingfacestreamiter-asyncgeneratorany-cb-aistreamcallbacks-readablestream)
- [`StreamingTextResponse(res: ReadableStream, init?: ResponseInit)`](#streamingtextresponseres-readablestream-init-responseinit)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->


## Background

Creating UIs with contemporary AI providers is a daunting task. Ideally, language models/providers would be fast enough that developers could just fetch complete responses as JSON in a few hundred milliseconds, but the reality is starkly different. It's quite common for these LLMs to take 5-40s to whip up a response.
@@ -44,7 +43,6 @@ Many AI utility helpers so far in the JS ecosystem tend to overcomplicate things

This library is committed to working directly with each AI/Model Hosting Provider's SDK, an equivalent edge-compatible version, or a vanilla `fetch` function. Its job is simply to cut through the confusion and handle the intricacies of streaming text, leaving you to concentrate on building your next big thing instead of wasting another afternoon tweaking `TextEncoder` by trial and error.


## Usage

```tsx
@@ -103,7 +101,10 @@ Create a Next.js Route Handler that uses the Edge Runtime that we'll use to generate…
```tsx
// ./app/api/generate/route.ts
import { Configuration, OpenAIApi } from 'openai-edge';
-import { OpenAITextStream, StreamingTextResponse } from '@vercel/ai-utils';
+import {
+  OpenAITextStream,
+  StreamingTextResponse
+} from '@vercel/ai-utils';

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
@@ -124,13 +124,15 @@ export async function POST(req: Request) {
    stream: true,
    prompt,
  });
-  // Convert the response into a React-friendly text-stream
+  // Convert the response into a friendly text-stream
  const stream = OpenAITextStream(response);
  // Respond with the stream
  return new StreamingTextResponse(stream);
}
```

+Vercel AI Utils provides 2 utility helpers to make the above seamless: First, we pass the streaming `response` we receive from OpenAI to `OpenAITextStream`. This method decodes/extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to `StreamingTextResponse`. This is another utility class that extends the normal Node/Edge Runtime `Response` class with the default headers you probably want (hint: `'Content-Type': 'text/plain; charset=utf-8'` is already set for you).

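Since the signature above accepts an optional standard `ResponseInit` as its second argument, you can presumably layer your own headers on top of those defaults. A minimal, hypothetical sketch, assuming custom headers are merged with the built-in ones:

```tsx
// Hypothetical variation on the handler above: pass a standard
// ResponseInit as the second argument so extra headers ride along
// with the streamed text (assumed to be merged with the defaults).
const stream = OpenAITextStream(response);
return new StreamingTextResponse(stream, {
  headers: { 'Cache-Control': 'no-store' },
});
```
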
### Wire up the UI

Create a Client component with a form that we'll use to gather the prompt from the user and then stream back the completion.
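
The component itself is collapsed in this commit view, so the following is only an illustrative sketch, not the README's actual code; the component name, route, and streaming-read loop are all assumptions:

```tsx
'use client';

import { FormEvent, useState } from 'react';

// Hypothetical client component: submits a prompt to the route handler
// above and appends the streamed plain-text completion as it arrives.
export default function PromptForm() {
  const [prompt, setPrompt] = useState('');
  const [completion, setCompletion] = useState('');

  async function onSubmit(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    setCompletion('');
    const res = await fetch('/api/generate', {
      method: 'POST',
      body: JSON.stringify({ prompt }),
    });
    // Read the text stream chunk by chunk as it arrives
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { value, done } = await reader.read();
      if (done) break;
      const chunk = decoder.decode(value, { stream: true });
      setCompletion((prev) => prev + chunk);
    }
  }

  return (
    <form onSubmit={onSubmit}>
      <input value={prompt} onChange={(e) => setPrompt(e.target.value)} />
      <button type="submit">Generate</button>
      <p>{completion}</p>
    </form>
  );
}
```
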
@@ -203,7 +206,7 @@ export async function POST() {
}
```

-### `HuggingFaceStream(iter: AsyncGenerator<any>, cb: AIStreamCallbacks): ReadableStream`
+### `HuggingFaceStream(iter: AsyncGenerator<any>, cb?: AIStreamCallbacks): ReadableStream`

A transform that extracts the text from _most_ chat and completion HuggingFace models and returns it as a `ReadableStream`.

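The README's own example is collapsed below this point. Purely as an illustrative sketch, assuming the `@huggingface/inference` client (whose `textGenerationStream` returns an async generator this transform can consume) and a hypothetical model choice, usage might look like:

```tsx
// Hypothetical route handler built on HuggingFaceStream. The client
// library, model, and env var below are assumptions for illustration.
import { HfInference } from '@huggingface/inference';
import { HuggingFaceStream, StreamingTextResponse } from '@vercel/ai-utils';

const hf = new HfInference(process.env.HUGGINGFACE_API_KEY);

export async function POST(req: Request) {
  const { prompt } = await req.json();
  // textGenerationStream yields tokens as an AsyncGenerator
  const iter = hf.textGenerationStream({
    model: 'OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5',
    inputs: prompt,
  });
  // Extract the text and re-encode it as a ReadableStream
  const stream = HuggingFaceStream(iter);
  return new StreamingTextResponse(stream);
}
```
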
