Commit

Moar docs (#9)
* more docs

* Fix use-chat
jaredpalmer authored May 25, 2023
1 parent 0f50deb commit 8965e2e
Showing 5 changed files with 22 additions and 22 deletions.
README.md (18 changes: 9 additions & 9 deletions)
@@ -47,7 +47,7 @@ The goal of this library lies in its commitment to work directly with each AI/Mo
```tsx
// app/api/generate/route.ts
import { Configuration, OpenAIApi } from "openai-edge";
-import { OpenAITextStream, StreamingTextResponse } from "@vercel/ai-utils";
+import { OpenAIStream, StreamingTextResponse } from "@vercel/ai-utils";

const config = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
@@ -62,7 +62,7 @@ export async function POST() {
stream: true,
messages: [{ role: "user", content: "What is love?" }],
});
-const stream = OpenAITextStream(response);
+const stream = OpenAIStream(response);
return new StreamingTextResponse(stream);
}
```
@@ -100,7 +100,7 @@ Create a Next.js Route Handler that uses the Edge Runtime that we'll use to gene
```tsx
// ./app/api/generate/route.ts
import { Configuration, OpenAIApi } from "openai-edge";
-import { OpenAITextStream, StreamingTextResponse } from "@vercel/ai-utils";
+import { OpenAIStream, StreamingTextResponse } from "@vercel/ai-utils";

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
@@ -122,13 +122,13 @@ export async function POST(req: Request) {
prompt,
});
// Convert the response into a friendly text-stream
-const stream = OpenAITextStream(response);
+const stream = OpenAIStream(response);
// Respond with the stream
return new StreamingTextResponse(stream);
}
```

-Vercel AI Utils provides 2 utility helpers to make the above seamless: First, we pass the streaming `response` we receive from OpenAI to `OpenAITextStream`. This method decodes/extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to `StreamingTextResponse`. This is another utility class that extends the normal Node/Edge Runtime `Response` class with the default headers you probably want (hint: `'Content-Type': 'text/plain; charset=utf-8'` is already set for you).
+Vercel AI Utils provides 2 utility helpers to make the above seamless: First, we pass the streaming `response` we receive from OpenAI to `OpenAIStream`. This method decodes/extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to `StreamingTextResponse`. This is another utility class that extends the normal Node/Edge Runtime `Response` class with the default headers you probably want (hint: `'Content-Type': 'text/plain; charset=utf-8'` is already set for you).

### Wire up the UI

@@ -171,7 +171,7 @@ A transform that will extract the text from all chat and completion OpenAI model
```tsx
// app/api/generate/route.ts
import { Configuration, OpenAIApi } from 'openai-edge';
-import { OpenAITextStream, StreamingTextResponse } from '@vercel/ai-utils';
+import { OpenAIStream, StreamingTextResponse } from '@vercel/ai-utils';

const config = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
@@ -186,7 +186,7 @@ export async function POST() {
stream: true,
messages: [{ role: 'user', content: 'What is love?' }],
});
-const stream = OpenAITextStream(response, {
+const stream = OpenAIStream(response, {
async onStart() {
console.log('streamin yo')
},
@@ -239,7 +239,7 @@ This is a tiny wrapper around `Response` class that makes returning `ReadableStr

```tsx
// app/api/generate/route.ts
-import { OpenAITextStream, StreamingTextResponse } from "@vercel/ai-utils";
+import { OpenAIStream, StreamingTextResponse } from "@vercel/ai-utils";

export const runtime = "edge";

@@ -249,7 +249,7 @@ export async function POST() {
stream: true,
messages: { role: "user", content: "What is love?" },
});
-const stream = OpenAITextStream(response);
+const stream = OpenAIStream(response);
return new StreamingTextResponse(stream, {
"X-RATE-LIMIT": "lol",
}); // => new Response(stream, { status: 200, headers: { 'Content-Type': 'text/plain; charset=utf-8', 'X-RATE-LIMIT': 'lol' }})
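The helpers shown in the README diff above return a plain `text/plain; charset=utf-8` stream, so any `fetch` caller can consume the tokens incrementally. A minimal sketch of that (illustrative only, not part of this commit; the `streamCompletion` helper and its callback are assumptions, not library API):

```tsx
// Hypothetical client for the /api/generate route above.
async function streamCompletion(onToken: (text: string) => void) {
  const res = await fetch("/api/generate", { method: "POST" });
  if (!res.body) throw new Error("Response has no body to stream");
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk is a UTF-8 text token re-encoded by OpenAIStream.
    onToken(decoder.decode(value, { stream: true }));
  }
}

// Usage: append tokens to the page as they arrive.
// streamCompletion((token) => document.body.append(token));
```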
apps/docs/pages/docs/api.mdx (8 changes: 4 additions & 4 deletions)
@@ -11,7 +11,7 @@ A transform that will extract the text from all chat and completion OpenAI model
```tsx
// app/api/generate/route.ts
import { Configuration, OpenAIApi } from 'openai-edge';
-import { OpenAITextStream, StreamingTextResponse } from '@vercel/ai-utils';
+import { OpenAIStream, StreamingTextResponse } from '@vercel/ai-utils';

const config = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
@@ -26,7 +26,7 @@ export async function POST() {
stream: true,
messages: [{ role: 'user', content: 'What is love?' }],
});
-const stream = OpenAITextStream(response, {
+const stream = OpenAIStream(response, {
async onStart() {
console.log('streamin yo')
},
@@ -79,7 +79,7 @@ This is a tiny wrapper around `Response` class that makes returning `ReadableStr

```tsx
// app/api/generate/route.ts
-import { OpenAITextStream, StreamingTextResponse } from '@vercel/ai-utils';
+import { OpenAIStream, StreamingTextResponse } from '@vercel/ai-utils';

export const runtime = 'edge';

@@ -89,7 +89,7 @@ export async function POST() {
stream: true,
messages: { role: 'user', content: 'What is love?' },
});
-const stream = OpenAITextStream(response);
+const stream = OpenAIStream(response);
return new StreamingTextResponse(stream, {
'X-RATE-LIMIT': 'lol',
}); // => new Response(stream, { status: 200, headers: { 'Content-Type': 'text/plain; charset=utf-8', 'X-RATE-LIMIT': 'lol' }})
apps/docs/pages/docs/getting-started.mdx (10 changes: 5 additions & 5 deletions)
@@ -36,7 +36,7 @@ yarn add @vercel/ai-utils
```tsx {3,18-19}
// app/api/generate/route.ts
import { Configuration, OpenAIApi } from 'openai-edge';
-import { OpenAITextStream, StreamingTextResponse } from '@vercel/ai-utils';
+import { OpenAIStream, StreamingTextResponse } from '@vercel/ai-utils';

const config = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
@@ -51,7 +51,7 @@ export async function POST() {
stream: true,
messages: [{ role: 'user', content: 'What is love?' }],
});
-const stream = OpenAITextStream(response);
+const stream = OpenAIStream(response);
return new StreamingTextResponse(stream);
}
```
@@ -89,7 +89,7 @@ Create a Next.js Route Handler that uses the Edge Runtime that we'll use to gene
```tsx
// ./app/api/generate/route.ts
import { Configuration, OpenAIApi } from 'openai-edge';
-import { OpenAITextStream, StreamingTextResponse } from '@vercel/ai-utils';
+import { OpenAIStream, StreamingTextResponse } from '@vercel/ai-utils';

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
@@ -111,13 +111,13 @@ export async function POST(req: Request) {
prompt,
});
// Convert the response into a friendly text-stream
-const stream = OpenAITextStream(response);
+const stream = OpenAIStream(response);
// Respond with the stream
return new StreamingTextResponse(stream);
}
```

-Vercel AI Utils provides 2 utility helpers to make the above seamless: First, we pass the streaming `response` we receive from OpenAI to `OpenAITextStream`. This method decodes/extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to `StreamingTextResponse`. This is another utility class that extends the normal Node/Edge Runtime `Response` class with the default headers you probably want (hint: `'Content-Type': 'text/plain; charset=utf-8'` is already set for you).
+Vercel AI Utils provides 2 utility helpers to make the above seamless: First, we pass the streaming `response` we receive from OpenAI to `OpenAIStream`. This method decodes/extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to `StreamingTextResponse`. This is another utility class that extends the normal Node/Edge Runtime `Response` class with the default headers you probably want (hint: `'Content-Type': 'text/plain; charset=utf-8'` is already set for you).

### Wire up the UI

apps/docs/pages/docs/index.mdx (6 changes: 3 additions & 3 deletions)
@@ -48,7 +48,7 @@ Another core tenet of this library lies in its commitment to work directly with
```tsx {3,18-19}
// app/api/generate/route.ts
import { Configuration, OpenAIApi } from 'openai-edge';
-import { OpenAITextStream, StreamingTextResponse } from '@vercel/ai-utils';
+import { OpenAIStream, StreamingTextResponse } from '@vercel/ai-utils';

const config = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
@@ -63,9 +63,9 @@ export async function POST() {
stream: true,
messages: [{ role: 'user', content: 'What is love?' }],
});
-const stream = OpenAITextStream(response);
+const stream = OpenAIStream(response);
return new StreamingTextResponse(stream);
}
```

-Vercel AI Utils provides 2 utility helpers to make the above seamless: First, we pass the streaming `response` we receive from OpenAI to `OpenAITextStream`. This method decodes/extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to `StreamingTextResponse`. This is another utility class that extends the normal Node/Edge Runtime `Response` class with the default headers you probably want (hint: `'Content-Type': 'text/plain; charset=utf-8'` is already set for you).
+Vercel AI Utils provides 2 utility helpers to make the above seamless: First, we pass the streaming `response` we receive from OpenAI to `OpenAIStream`. This method decodes/extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to `StreamingTextResponse`. This is another utility class that extends the normal Node/Edge Runtime `Response` class with the default headers you probably want (hint: `'Content-Type': 'text/plain; charset=utf-8'` is already set for you).
packages/core/src/use-chat.ts (2 changes: 1 addition & 1 deletion)
@@ -176,7 +176,7 @@ export function useChat({
return null;
} catch (err) {
// Ignore abort errors as they are expected.
-if (err.name === "AbortError") {
+if ((err as any).name === "AbortError") {
setAbortController(null);
return null;
}
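The use-chat.ts change is a TypeScript fix: under strict settings the `catch` binding is typed `unknown`, so `err.name` does not compile, and the commit resolves it by casting to `any`. An `instanceof` narrowing would achieve the same without giving up type safety; a minimal sketch of that alternative (illustrative only, not what this commit does):

```tsx
// Sketch: type-safe AbortError handling, an alternative to the `as any` cast.
async function safeFetch(url: string, signal: AbortSignal): Promise<Response | null> {
  try {
    return await fetch(url, { signal });
  } catch (err) {
    // `err` is `unknown` under strict TS; narrow it before reading `.name`.
    if (err instanceof Error && err.name === "AbortError") {
      return null; // Expected: the caller aborted the in-flight request.
    }
    throw err;
  }
}
```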

1 comment on commit 8965e2e

@vercel vercel bot commented on 8965e2e May 25, 2023

Successfully deployed to the following URLs:

ai-utils-docs – ./

ai-utils-docs.vercel.sh
ai-utils-docs-git-main.vercel.sh
