Conversation
- Updated POST handler in route.ts to dynamically fetch available agents and validate agentId.
- Improved error handling for missing or invalid messages and agentId.
- Refactored createAgentStreamResponse in client-stream-to-ai-sdk.ts to streamline response creation and added support for new options.
- Deprecated createMastraStreamResponse in favor of createAgentStreamResponse for better clarity and usage.
- Updated mastra-client.ts to export new types and functions for better integration.
Reviewer's Guide
Refactors the chat API route to use a new generic agent streaming helper that integrates Mastra agents with the AI SDK, adds stronger request validation and dynamic agent lookup, and exposes the helper from the Mastra client while deprecating the old client-stream function.
Sequence diagram for POST chat API using createAgentStreamResponse
sequenceDiagram
actor Client
participant ChatRoute as ApiChatRoute
participant Mastra as MastraInstance
participant Agent as MastraAgent
participant Helper as createAgentStreamResponse
Client->>ChatRoute: POST /api/chat
ChatRoute->>ChatRoute: Parse body to ChatRequestBody
ChatRoute->>Mastra: getAgents()
Mastra-->>ChatRoute: agentsMap
ChatRoute->>ChatRoute: Derive agentId and validate
ChatRoute->>ChatRoute: Validate messages
alt Invalid agentId or messages
ChatRoute-->>Client: 400 Response json(error)
else Valid request
ChatRoute->>Helper: createAgentStreamResponse(mastra, agentId, messages, options)
Helper->>Mastra: getAgent(agentId)
Mastra-->>Helper: agent
Helper->>Agent: stream(messages, streamOptions)
Agent-->>Helper: stream
alt Agent stream supports toUIMessageStreamResponse
Helper-->>ChatRoute: Response from toUIMessageStreamResponse
else Fallback to manual AISDK format
Helper->>Helper: toAISdkFormat(stream, from agent)
Helper->>Helper: createUIMessageStream
Helper-->>ChatRoute: createUIMessageStreamResponse
end
ChatRoute-->>Client: 200 streaming Response
end
Note over ChatRoute,Helper: Errors in streaming are caught and returned as 500 JSON
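For orientation, a minimal sketch of the handler flow the diagram describes is shown below. The import paths and exact option names are assumptions, and this is illustrative rather than the PR's literal code.

```typescript
// Sketch of the flow above (assumed import paths; not the PR's exact implementation).
import { mastra } from "@/src/mastra";
import { createAgentStreamResponse } from "@/lib/client-stream-to-ai-sdk";

export async function POST(req: Request): Promise<Response> {
  const body = await req.json();

  // Dynamic agent lookup: fall back to the first registered agent.
  const available = Object.keys(await mastra.getAgents());
  const agentId = body.agentId ?? available[0];
  if (!agentId || !available.includes(agentId)) {
    return Response.json(
      { error: `Invalid or missing agentId. Available: ${available.join(", ")}` },
      { status: 400 },
    );
  }
  if (!Array.isArray(body.messages) || body.messages.length === 0) {
    return Response.json({ error: "messages required" }, { status: 400 });
  }

  try {
    // The helper resolves the agent, streams, and converts the result into an AI SDK UI response.
    return await createAgentStreamResponse(mastra, agentId, body.messages, {
      threadId: body.threadId,
      resourceId: body.resourceId,
    });
  } catch (error) {
    return Response.json(
      { error: error instanceof Error ? error.message : "Stream failed" },
      { status: 500 },
    );
  }
}
```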
Class diagram for new agent streaming helper and chat route types
classDiagram
class StreamToAISdkOptions {
+string agentId
+string messages
+string threadId
+string resourceId
}
class AgentStreamOptions {
+string format
+string threadId
+string resourceId
+memory memory
+number maxSteps
}
class AgentStreamOptions_memory {
+thread thread
+string resource
+memory_options options
}
class AgentStreamOptions_memory_thread {
+string id
+string resourceId
}
class AgentStreamOptions_memory_options {
+number lastMessages
+boolean semanticRecall
+workingMemory workingMemory
}
class AgentStreamOptions_memory_options_workingMemory {
+boolean enabled
}
class UIMessage {
+string id
+string role
+string content
}
class ChatRequestBody {
+UIMessage[] messages
+string agentId
+string threadId
+string resourceId
+AgentStreamOptions_memory memory
+number maxSteps
}
class MastraModelOutput {
}
class MastraAgent {
+stream(messages, options) MastraModelOutput
}
class MastraInstance {
+getAgent(id) MastraAgent
+getAgents() Map
}
class createAgentStreamResponse {
+createAgentStreamResponse(mastra, agentId, messages, options) Response
}
class createMastraStreamResponse {
+createMastraStreamResponse(client, options) Response
}
MastraInstance --> MastraAgent : getAgent
MastraInstance --> MastraModelOutput : getAgents returns agents map
MastraAgent --> MastraModelOutput : stream
AgentStreamOptions --> AgentStreamOptions_memory : memory
AgentStreamOptions_memory --> AgentStreamOptions_memory_thread : thread
AgentStreamOptions_memory --> AgentStreamOptions_memory_options : options
AgentStreamOptions_memory_options --> AgentStreamOptions_memory_options_workingMemory : workingMemory
ChatRequestBody --> UIMessage : messages
ChatRequestBody --> AgentStreamOptions_memory : memory
createAgentStreamResponse --> MastraInstance : uses
createAgentStreamResponse --> MastraAgent : uses
createAgentStreamResponse --> MastraModelOutput : streams
createMastraStreamResponse <|.. createAgentStreamResponse : deprecated in favor of
🤖 Hi @ssdeanx, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.
Summary of Changes
Hello @ssdeanx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the chat API by introducing dynamic agent management and more robust streaming capabilities. It refactors the core logic for handling agent responses, making the API more flexible and developer-friendly. The changes aim to improve the overall reliability and functionality of agent-based interactions, providing better control over agent selection and stream processing.
Highlights
Summary by CodeRabbit
Walkthrough
The chat API undergoes a fundamental redesign, shifting from simple streaming to a Mastra agent-based architecture. The POST endpoint now dynamically selects agents, validates inputs, and delegates streaming to createAgentStreamResponse.
Changes
Sequence DiagramsequenceDiagram
participant Client
participant API as API Route
participant Mastra
participant Agent
participant Stream
Client->>API: POST /chat (messages, optional agentId)
API->>Mastra: getAgents()
Mastra-->>API: List of available agents
alt agentId provided
API->>API: Validate agentId
else agentId not provided
API->>API: Select first available agent
end
API->>API: Validate messages
API->>Mastra: createAgentStreamResponse(mastra, agentId, messages, options)
Mastra->>Agent: getAgent(agentId)
Agent-->>Mastra: MastraAgent instance
Mastra->>Agent: stream(messages, options)
Agent-->>Mastra: MastraModelOutput with stream
alt format === "aisdk"
Mastra->>Stream: Use agent's AI SDK format
else fallback
Mastra->>Stream: toAISdkFormat + toUIMessageStream
end
Stream-->>Mastra: Response stream
Mastra-->>API: Response
API-->>Client: Streamed response (200)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Hey there - I've reviewed your changes - here's some feedback:
- In `createAgentStreamResponse`, when `format === "aisdk"` and `toUIMessageStreamResponse` is missing, you call `agent.stream` a second time for the fallback path; consider reusing the first `stream` instance to avoid double invocation of the agent.
- The `mastra as Parameters<typeof createAgentStreamResponse>[0]` cast in the POST handler is unnecessary and makes the call harder to read; you can rely on the concrete `mastra` type or a simpler explicit interface instead.
- The exported `StreamToAISdkOptions` type is now only used in the deprecated `createMastraStreamResponse`; consider either aligning it with the new API or clearly marking it as legacy to avoid confusion for consumers.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `createAgentStreamResponse`, when `format === "aisdk"` and `toUIMessageStreamResponse` is missing, you call `agent.stream` a second time for the fallback path; consider reusing the first `stream` instance to avoid double invocation of the agent.
- The `mastra as Parameters<typeof createAgentStreamResponse>[0]` cast in the POST handler is unnecessary and makes the call harder to read; you can rely on the concrete `mastra` type or a simpler explicit interface instead.
- The exported `StreamToAISdkOptions` type is now only used in the deprecated `createMastraStreamResponse`; consider either aligning it with the new API or clearly marking it as legacy to avoid confusion for consumers.
## Individual Comments
### Comment 1
<location> `lib/client-stream-to-ai-sdk.ts:16-25` </location>
<code_context>
+}
+
+export interface AgentStreamOptions {
+ format?: "aisdk" | "mastra";
+ threadId?: string;
+ resourceId?: string;
+ memory?: {
+ thread?: string | { id: string; resourceId?: string };
+ resource?: string;
+ options?: {
+ lastMessages?: number;
+ semanticRecall?: boolean;
+ workingMemory?: { enabled?: boolean };
+ };
+ };
+ maxSteps?: number;
+}
+
+type MastraAgent = {
+ stream: (
+ messages: unknown,
+ options?: {
+ format?: string;
+ threadId?: string;
+ resourceId?: string;
+ memory?: AgentStreamOptions["memory"];
+ maxSteps?: number;
}
</code_context>
<issue_to_address>
**issue:** The `format` option currently doesn’t change behavior when set to `"mastra"`, which can be misleading.
Currently only `format === "aisdk"` changes behavior; all other values (including `"mastra"`) follow the same path and still run `toAISdkFormat`. A caller passing `format: "mastra"` gets identical behavior to omitting `format`. I’d suggest either making `"mastra"` a distinct mode (e.g., skip `toAISdkFormat` and return raw `MastraModelOutput`) or removing `"mastra"` from the type until it has dedicated semantics, so the API contract matches actual behavior.
</issue_to_address>
### Comment 2
<location> `lib/client-stream-to-ai-sdk.ts:89-91` </location>
<code_context>
+ };
+
+ // Preferred: Use built-in AI SDK format
+ if (streamOptions.format === "aisdk") {
+ const stream = await agent.stream(messages, streamOptions);
+ if (stream.toUIMessageStreamResponse) {
+ return stream.toUIMessageStreamResponse();
+ }
</code_context>
<issue_to_address>
**suggestion (performance):** Calling `agent.stream` twice for the same request can be wasteful and may have unintended side effects.
When `format === "aisdk"` but `toUIMessageStreamResponse` is missing, `agent.stream` is invoked once in the `if` block and again in the fallback, doubling LLM/tool work and any side effects (threads, logs, etc.). You can reuse the first result instead:
```ts
let stream = await agent.stream(messages, streamOptions);
if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
return stream.toUIMessageStreamResponse();
}
// reuse `stream` in the fallback instead of calling again
```
Alternatively, move the `agent.stream` call outside the conditional and adjust options as needed.
Suggested implementation:
```typescript
const streamOptions = {
format: options?.format ?? "aisdk",
threadId: options?.threadId,
resourceId: options?.resourceId,
memory: options?.memory,
maxSteps: options?.maxSteps,
};
// Call agent.stream once and reuse the result below
const stream = await agent.stream(messages, streamOptions);
// Preferred: Use built-in AI SDK format when available
if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
return stream.toUIMessageStreamResponse();
}
```
To fully avoid double-calling `agent.stream`, also:
1. Search in `lib/client-stream-to-ai-sdk.ts` (in this same function) for any other occurrences of `await agent.stream(messages, streamOptions)` and remove/replace them with the already-created `stream` variable.
2. Ensure the fallback logic that runs when `format !== "aisdk"` or `toUIMessageStreamResponse` is missing uses the existing `stream` object rather than creating a new one.
</issue_to_address>
### Comment 3
<location> `lib/client-stream-to-ai-sdk.ts:106-114` </location>
<code_context>
+
+ const uiMessageStream = createUIMessageStream({
+ execute: async ({ writer }) => {
+ const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
+ const reader = aiSdkStream.getReader();
+ try {
+ while (true) {
+ const { done, value } = await reader.read();
+ if (done) break;
+ writer.write(value);
+ }
+ } finally {
+ reader.releaseLock();
+ }
</code_context>
<issue_to_address>
**issue (bug_risk):** The fallback assumes `toAISdkFormat` produces a ReadableStream, which may be fragile if it returns an async iterable in some cases.
Previously this path supported both `AsyncIterable` and `ReadableStream` results from `toAISdkFormat`. The new code assumes `getReader()` always exists, so it will throw if an async iterable is returned.
If `toAISdkFormat` is contractually guaranteed to return a `ReadableStream` here, we should enforce that via types. Otherwise, consider keeping a small type-dispatch to handle both shapes so this fallback remains robust to changes in `toAISdkFormat`’s return type.
</issue_to_address>
### Comment 4
<location> `app/api/chat/route.ts:28-32` </location>
<code_context>
+ const body: ChatRequestBody = await req.json();
+
+ // Get available agents dynamically from mastra
+ const agentsMap = await mastra.getAgents();
+ const availableAgents = Object.keys(agentsMap);
+
+ // Use first available agent if none specified
+ const agentId = body.agentId || availableAgents[0];
+
+ if (!agentId || !availableAgents.includes(agentId)) {
</code_context>
<issue_to_address>
**suggestion:** The API doesn’t handle the case where `getAgents()` returns an empty map, which can lead to confusing error responses.
When `availableAgents` is empty, `availableAgents[0]` is `undefined`, so the error response becomes:
```json
{ "error": "Invalid or missing agentId. Available: " }
```
which gives no clear guidance.
Consider handling the empty case first, for example:
```ts
if (availableAgents.length === 0) {
return Response.json(
{ error: "No agents are configured on the server." },
{ status: 500 },
);
}
```
Then you can validate `agentId` assuming at least one agent exists.
</issue_to_address>
format?: "aisdk" | "mastra";
threadId?: string;
resourceId?: string;
memory?: {
  thread?: string | { id: string; resourceId?: string };
  resource?: string;
  options?: {
    lastMessages?: number;
    semanticRecall?: boolean;
    workingMemory?: { enabled?: boolean };
issue: The format option currently doesn’t change behavior when set to "mastra", which can be misleading.
Currently only format === "aisdk" changes behavior; all other values (including "mastra") follow the same path and still run toAISdkFormat. A caller passing format: "mastra" gets identical behavior to omitting format. I’d suggest either making "mastra" a distinct mode (e.g., skip toAISdkFormat and return raw MastraModelOutput) or removing "mastra" from the type until it has dedicated semantics, so the API contract matches actual behavior.
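One low-risk way to make the contract match the behavior, sketched below, is to narrow the type until a real "mastra" mode exists. The interface fields are taken from this PR; the narrowing itself is the suggestion, not existing code.

```typescript
// Sketch: drop "mastra" from the union until it has dedicated semantics.
export interface AgentStreamOptions {
  format?: "aisdk"; // re-add "mastra" once it actually changes the streaming path
  threadId?: string;
  resourceId?: string;
  memory?: {
    thread?: string | { id: string; resourceId?: string };
    resource?: string;
    options?: {
      lastMessages?: number;
      semanticRecall?: boolean;
      workingMemory?: { enabled?: boolean };
    };
  };
  maxSteps?: number;
}
```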
if (streamOptions.format === "aisdk") {
  const stream = await agent.stream(messages, streamOptions);
  if (stream.toUIMessageStreamResponse) {
suggestion (performance): Calling agent.stream twice for the same request can be wasteful and may have unintended side effects.
When format === "aisdk" but toUIMessageStreamResponse is missing, agent.stream is invoked once in the if block and again in the fallback, doubling LLM/tool work and any side effects (threads, logs, etc.). You can reuse the first result instead:
let stream = await agent.stream(messages, streamOptions);
if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
return stream.toUIMessageStreamResponse();
}
// reuse `stream` in the fallback instead of calling again
Alternatively, move the `agent.stream` call outside the conditional and adjust options as needed.
Suggested implementation:
const streamOptions = {
format: options?.format ?? "aisdk",
threadId: options?.threadId,
resourceId: options?.resourceId,
memory: options?.memory,
maxSteps: options?.maxSteps,
};
// Call agent.stream once and reuse the result below
const stream = await agent.stream(messages, streamOptions);
// Preferred: Use built-in AI SDK format when available
if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
return stream.toUIMessageStreamResponse();
}To fully avoid double-calling agent.stream, also:
- Search in `lib/client-stream-to-ai-sdk.ts` (in this same function) for any other occurrences of `await agent.stream(messages, streamOptions)` and remove/replace them with the already-created `stream` variable.
- Ensure the fallback logic that runs when `format !== "aisdk"` or `toUIMessageStreamResponse` is missing uses the existing `stream` object rather than creating a new one.
const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
const reader = aiSdkStream.getReader();
try {
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    writer.write(value);
  }
} finally {
issue (bug_risk): The fallback assumes toAISdkFormat produces a ReadableStream, which may be fragile if it returns an async iterable in some cases.
Previously this path supported both AsyncIterable and ReadableStream results from toAISdkFormat. The new code assumes getReader() always exists, so it will throw if an async iterable is returned.
If toAISdkFormat is contractually guaranteed to return a ReadableStream here, we should enforce that via types. Otherwise, consider keeping a small type-dispatch to handle both shapes so this fallback remains robust to changes in toAISdkFormat’s return type.
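A small type-dispatch along those lines might look like the following sketch, which assumes `toAISdkFormat` can return either an async iterable or a `ReadableStream` (as the previous implementation did). It is meant to live inside the `execute` callback of `createUIMessageStream`.

```typescript
// Sketch: tolerate both AsyncIterable and ReadableStream results inside the execute callback.
const formatted = toAISdkFormat(stream, { from: "agent" });
if (formatted && Symbol.asyncIterator in (formatted as object)) {
  for await (const part of formatted as AsyncIterable<any>) {
    writer.write(part);
  }
} else {
  const reader = (formatted as ReadableStream<any>).getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      writer.write(value);
    }
  } finally {
    reader.releaseLock();
  }
}
```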
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];
suggestion: The API doesn’t handle the case where getAgents() returns an empty map, which can lead to confusing error responses.
When availableAgents is empty, availableAgents[0] is undefined, so the error response becomes:
{ "error": "Invalid or missing agentId. Available: " }which gives no clear guidance.
Consider handling the empty case first, for example:
if (availableAgents.length === 0) {
return Response.json(
{ error: "No agents are configured on the server." },
{ status: 500 },
);
}Then you can validate agentId assuming at least one agent exists.
🤖 I'm sorry @ssdeanx, but I was unable to process your request. Please see the logs for more details.
Code Review
This pull request enhances the chat API by introducing a new server-side streaming helper createAgentStreamResponse, deprecating the old one, and adding an endpoint to list available agents. The changes are well-structured and improve the API's flexibility and error handling.
My review focuses on improving code maintainability, correctness, and organization. I've identified a potential performance issue in lib/client-stream-to-ai-sdk.ts where an API call might be made twice unnecessarily. I've also suggested improvements to code organization in lib/mastra-client.ts to better separate client and server concerns, and a way to strengthen type safety in lib/client-stream-to-ai-sdk.ts. Finally, I've pointed out some duplicated code in app/api/chat/route.ts that could be refactored.
if (streamOptions.format === "aisdk") {
  const stream = await agent.stream(messages, streamOptions);
  if (stream.toUIMessageStreamResponse) {
    return stream.toUIMessageStreamResponse();
  }
}
If streamOptions.format is 'aisdk' but the returned stream does not have the toUIMessageStreamResponse method, the code falls through and calls agent.stream() a second time on line 97. This is inefficient and could lead to unexpected side effects. Consider refactoring to avoid this second call, for example by handling the fallback case within this if block and reusing the stream that has already been fetched.
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);
type MastraInstance = {
  getAgent: (id: string) => MastraAgent;
} & Record<string, any>;
The MastraInstance type uses & Record<string, any>, which weakens type safety by allowing any additional properties. To create a more robust contract, consider importing the Mastra class from @mastra/core/mastra and using it directly in the createAgentStreamResponse function signature (e.g., mastra: Mastra). This would improve type checking and could eliminate the need for the type assertion in app/api/chat/route.ts.
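A sketch of the stricter signature, assuming `Mastra` is exported from `@mastra/core/mastra` as the comment states and that `AgentStreamOptions` is exported as suggested elsewhere in this review; only the types change, the body stays as in the PR.

```typescript
// Sketch: type the helper against the concrete Mastra class instead of a structural Record type.
import type { Mastra } from "@mastra/core/mastra";
import type { AgentStreamOptions } from "./client-stream-to-ai-sdk";

export declare function createAgentStreamResponse(
  mastra: Mastra,
  agentId: string,
  messages: unknown,
  options?: AgentStreamOptions,
): Promise<Response>;
```

With this signature the route can call `createAgentStreamResponse(mastra, agentId, body.messages, …)` directly and drop the `Parameters<typeof createAgentStreamResponse>[0]` cast.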
export { createAgentStreamResponse } from "./client-stream-to-ai-sdk";
export type { StreamToAISdkOptions } from "./client-stream-to-ai-sdk";
This file defines the client-side mastraClient, but it now also exports createAgentStreamResponse, which is a server-side utility. Mixing client and server code in a file named mastra-client.ts can be confusing for other developers. It's a good practice to separate server-side and client-side concerns into different modules (e.g., moving server utilities to a lib/server directory) to improve code organization and clarity.
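A minimal sketch of that split; the file name and the optional `server-only` guard are hypothetical choices, not part of this PR.

```typescript
// lib/server/mastra-stream.ts (hypothetical): server-only streaming helpers.
import "server-only"; // optional Next.js guard so client bundles fail fast if they import this file

export { createAgentStreamResponse } from "../client-stream-to-ai-sdk";
export type { StreamToAISdkOptions } from "../client-stream-to-ai-sdk";
```

lib/mastra-client.ts would then keep only the browser-facing `mastraClient` exports.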
Greptile Overview
Greptile Summary
This PR refactors the chat API route to support dynamic agent selection and improves the streaming infrastructure for better integration with the AI SDK.
Key Changes
Issues Found
Confidence Score: 3/5
Important Files Changed
File Analysis
Sequence DiagramsequenceDiagram
participant Client
participant Route as POST /api/chat
participant Mastra
participant Agent
participant Stream as createAgentStreamResponse
participant AISDKStream as AI SDK Stream
Client->>Route: POST {messages, agentId?, threadId?, memory?}
Route->>Mastra: getAgents()
Mastra-->>Route: agentsMap
Route->>Route: Validate agentId (use first if not provided)
alt Invalid/Missing agentId
Route-->>Client: 400 {error: "Invalid or missing agentId"}
end
alt Empty/Invalid messages
Route-->>Client: 400 {error: "messages required"}
end
Route->>Stream: createAgentStreamResponse(mastra, agentId, messages, options)
Stream->>Mastra: getAgent(agentId)
Mastra-->>Stream: agent
Stream->>Agent: stream(messages, {format: "aisdk", threadId, memory, maxSteps})
Agent-->>Stream: MastraModelOutput
alt Built-in AI SDK format supported
Stream->>Agent: stream.toUIMessageStreamResponse()
Agent-->>Stream: Response
else Fallback: Manual transformation
Stream->>Stream: toAISdkFormat(stream)
Stream->>AISDKStream: createUIMessageStream()
AISDKStream->>AISDKStream: Read stream chunks
AISDKStream->>AISDKStream: Write to writer
Stream->>Stream: createUIMessageStreamResponse()
Stream-->>Route: Response
end
Route-->>Client: Streaming Response
alt Error occurred
Route-->>Client: 500 {error: message}
end
const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];
There was a problem hiding this comment.
logic: if no agents are configured, `availableAgents[0]` is `undefined`, causing the request to fail with a confusing error message instead of clear validation
// current
const agentId = body.agentId || availableAgents[0];
// suggested
const agentId = body.agentId || (availableAgents.length > 0 ? availableAgents[0] : undefined);
Prompt To Fix With AI
This is a comment left during a code review.
Path: app/api/chat/route.ts
Line: 32:32
Comment:
**logic:** if no agents are configured, `availableAgents[0]` is `undefined`, causing request to fail with error message instead of clear validation
```suggestion
const agentId = body.agentId || (availableAgents.length > 0 ? availableAgents[0] : undefined);
```
How can I resolve this? If you propose a fix, please make it concise.
Pull request overview
This PR refactors the chat API to support dynamic agent selection and improves the streaming response architecture. The changes move from a hardcoded single-agent approach to a flexible system that validates agent IDs against available agents and provides better error handling. A new utility function createAgentStreamResponse centralizes streaming logic with built-in fallback support, while the old createMastraStreamResponse is properly deprecated.
Key changes include:
- Dynamic agent fetching and validation in the POST handler with fallback to the first available agent
- Enhanced error handling with specific validation for missing/invalid agents and messages
- New GET endpoint to expose available agents for frontend consumption
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 15 comments.
| File | Description |
|---|---|
| app/api/chat/route.ts | Refactored POST handler to dynamically fetch and validate agents, added comprehensive error handling, and introduced a new GET endpoint for listing available agents |
| lib/client-stream-to-ai-sdk.ts | Created new createAgentStreamResponse function with dual-path streaming (built-in vs. fallback), added comprehensive type definitions, and deprecated createMastraStreamResponse with clear migration guidance |
| lib/mastra-client.ts | Exported new streaming utilities to make them available throughout the codebase |
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);
Fetching all agents on every POST request may impact performance, especially if there are many agents or if getAgents() is an expensive operation. Consider caching the agent list at the module level or using a memoization strategy, since the list of available agents likely doesn't change during runtime.
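If the agent registry really is static for the lifetime of the server (an assumption worth confirming), a module-level cache could look like this sketch:

```typescript
// Sketch: resolve the agent id list once per server instance instead of per request.
let cachedAgentIds: string[] | null = null;

async function getAvailableAgentIds(): Promise<string[]> {
  if (cachedAgentIds === null) {
    const agentsMap = await mastra.getAgents();
    cachedAgentIds = Object.keys(agentsMap);
  }
  return cachedAgentIds;
}
```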
export async function GET() {
  const agentsMap = await mastra.getAgents();
  const availableAgents = Object.keys(agentsMap);
  return Response.json({ agents: availableAgents, count: availableAgents.length });
}
The new GET endpoint should have test coverage to verify it correctly returns the list of available agents and the count. This is especially important since this endpoint may be used by the frontend to dynamically populate agent selection UI.
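A minimal response-shape test might look like the sketch below; it assumes a Vitest setup and that the route module can be imported directly in a Node test environment, neither of which is established by this PR.

```typescript
// Hypothetical Vitest sketch for the GET handler.
import { describe, expect, it } from "vitest";
import { GET } from "@/app/api/chat/route";

describe("GET /api/chat", () => {
  it("returns the available agent ids and a matching count", async () => {
    const res = await GET();
    expect(res.status).toBe(200);
    const body = await res.json();
    expect(Array.isArray(body.agents)).toBe(true);
    expect(body.count).toBe(body.agents.length);
  });
});
```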
threadId: body.threadId,
resourceId: body.resourceId,
memory: body.memory,
maxSteps: body.maxSteps ?? 50,
[nitpick] The default value of 50 for maxSteps is hardcoded here. Consider extracting this to a constant at the top of the file (e.g., const DEFAULT_MAX_STEPS = 50) to make it easier to maintain and update across the codebase.
export async function createAgentStreamResponse(
  mastra: MastraInstance,
  agentId: string,
  messages: unknown,
  options?: AgentStreamOptions
): Promise<Response> {
[nitpick] The JSDoc documentation is excellent. However, consider adding @param tags to document each parameter individually, which would make it easier for developers using IDEs to understand the function signature. Also consider adding @returns to document what the Response object contains.
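For illustration, the JSDoc could take roughly this shape (the wording is a sketch, not the PR's actual docs):

```typescript
/**
 * Streams a Mastra agent response as a web `Response` compatible with the AI SDK UI.
 *
 * @param mastra   Mastra instance used to resolve the agent.
 * @param agentId  Id of the agent to run; must be registered on the instance.
 * @param messages Chat messages forwarded to the agent.
 * @param options  Optional stream settings (format, threadId, resourceId, memory, maxSteps).
 * @returns A streaming `Response` of UI message chunks.
 */
// ...placed directly above the existing createAgentStreamResponse signature.
```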
  messages: unknown,
  options?: AgentStreamOptions
): Promise<Response> {
  const agent = mastra.getAgent(agentId);
The function doesn't handle the case where mastra.getAgent(agentId) might fail or return undefined/null if the agent doesn't exist. This could lead to runtime errors when trying to call agent.stream(). Consider adding validation after line 78: if (!agent) { throw new Error(`Agent '${agentId}' not found`) }.
// --- previous implementation (removed) ---
export async function POST(req: Request) {
  const { messages }: {
    messages: UIMessage[];
  } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages, { });
  const uiMessageStream = createUIMessageStream({
    execute: async ({ writer }) => {
      const formatted = toAISdkFormat(stream, { from: "agent" })!;
      const runtimeContext = new RuntimeContext();
      // If the returned object is an async iterable, use for-await
      if (Symbol.asyncIterator in formatted) {
        for await (const part of formatted as AsyncIterable<any>) {
          writer.write(part);
        }
      } else if (typeof (formatted as any).getReader === "function") {
        // If it's a ReadableStream (browser), read via getReader()
        const reader = (formatted as ReadableStream<any>).getReader();
        try {
          while (true) {
            const { done, value } = await reader.read();
            if (done) break;
            writer.write(value);
          }
        } finally {
          reader.releaseLock?.();
        }
      }
    },
  });
  return createUIMessageStreamResponse({
    stream: uiMessageStream,
  });

// --- new implementation (added) ---
  const body: ChatRequestBody = await req.json();

  // Get available agents dynamically from mastra
  const agentsMap = await mastra.getAgents();
  const availableAgents = Object.keys(agentsMap);

  // Use first available agent if none specified
  const agentId = body.agentId || availableAgents[0];

  if (!agentId || !availableAgents.includes(agentId)) {
    return Response.json(
      { error: `Invalid or missing agentId. Available: ${availableAgents.join(", ")}` },
      { status: 400 }
    );
  }

  if (!body.messages?.length) {
    return Response.json({ error: "messages required" }, { status: 400 });
  }

  try {
    return await createAgentStreamResponse(mastra as Parameters<typeof createAgentStreamResponse>[0], agentId, body.messages, {
      threadId: body.threadId,
      resourceId: body.resourceId,
      memory: body.memory,
      maxSteps: body.maxSteps ?? 50,
    });
  } catch (error) {
    return Response.json(
      { error: error instanceof Error ? error.message : "Stream failed" },
      { status: 500 }
    );
  }
}
The new POST handler includes important validation logic (agent validation, message validation) and error handling that should be covered by tests. Consider adding test cases for: 1) Missing agentId with available agents, 2) Invalid agentId, 3) Missing messages, 4) Empty messages array, 5) Stream creation failure, 6) No agents available scenario.
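A sketch of the validation cases, assuming Vitest and direct invocation of the exported `POST` handler (both assumptions; the running Mastra instance may need mocking in practice):

```typescript
// Hypothetical Vitest sketch covering the 400 paths of the POST handler.
import { describe, expect, it } from "vitest";
import { POST } from "@/app/api/chat/route";

const post = (body: unknown) =>
  POST(
    new Request("http://localhost/api/chat", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(body),
    }),
  );

describe("POST /api/chat validation", () => {
  it("rejects an unknown agentId", async () => {
    const res = await post({ agentId: "does-not-exist", messages: [{ id: "1", role: "user", content: "hi" }] });
    expect(res.status).toBe(400);
  });

  it("rejects an empty messages array", async () => {
    const res = await post({ messages: [] });
    expect(res.status).toBe(400);
  });
});
```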
export async function createAgentStreamResponse(
  mastra: MastraInstance,
  agentId: string,
  messages: unknown,
  options?: AgentStreamOptions
): Promise<Response> {
  const agent = mastra.getAgent(agentId);

  const streamOptions = {
    format: options?.format ?? "aisdk",
    threadId: options?.threadId,
    resourceId: options?.resourceId,
    memory: options?.memory,
    maxSteps: options?.maxSteps,
  };

  // Preferred: Use built-in AI SDK format
  if (streamOptions.format === "aisdk") {
    const stream = await agent.stream(messages, streamOptions);
    if (stream.toUIMessageStreamResponse) {
      return stream.toUIMessageStreamResponse();
    }
  }

  // Fallback: Manual transformation with toAISdkFormat
  const stream = await agent.stream(messages, {
    threadId: options?.threadId,
    resourceId: options?.resourceId,
    memory: options?.memory,
    maxSteps: options?.maxSteps,
  });

  const uiMessageStream = createUIMessageStream({
    execute: async ({ writer }) => {
      const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
      const reader = aiSdkStream.getReader();
      try {
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          writer.write(value);
        }
      } finally {
        reader.releaseLock();
      }
    },
  });

  return createUIMessageStreamResponse({ stream: uiMessageStream });
}
The new createAgentStreamResponse function includes complex logic with two different code paths (built-in AI SDK format vs. fallback transformation) that should be thoroughly tested. Consider adding tests for: 1) Built-in format with toUIMessageStreamResponse available, 2) Fallback path when toUIMessageStreamResponse is not available, 3) Invalid agent ID, 4) Stream errors, 5) Different option combinations (threadId, resourceId, memory, maxSteps).
type MastraInstance = {
  getAgent: (id: string) => MastraAgent;
} & Record<string, any>;
Unexpected any. Specify a different type.
// current
} & Record<string, any>;
// suggested
} & Record<string, unknown>;
| export async function createMastraStreamResponse( | ||
| _client: unknown, | ||
| _options: StreamToAISdkOptions | ||
| ): Promise<Response> { |
'_client' is defined but never used.
// current
export async function createMastraStreamResponse(
  _client: unknown,
  _options: StreamToAISdkOptions
): Promise<Response> {
// suggested
export async function createMastraStreamResponse(): Promise<Response> {
  _client: unknown,
  _options: StreamToAISdkOptions
'_options' is defined but never used.
// current
  _client: unknown,
  _options: StreamToAISdkOptions
// suggested
  _client: unknown
Actionable comments posted: 8
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- app/api/chat/route.ts (1 hunks)
- lib/client-stream-to-ai-sdk.ts (1 hunks)
- lib/mastra-client.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
app/api/chat/route.ts (2)
- src/mastra/index.ts (1): mastra (85-611)
- lib/client-stream-to-ai-sdk.ts (1): createAgentStreamResponse (72-121)
lib/client-stream-to-ai-sdk.ts (1)
- lib/mastra-client.ts (2): StreamToAISdkOptions (14-14), createAgentStreamResponse (13-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Agent
- GitHub Check: Sourcery review
- GitHub Check: Codacy Security Scan
🔇 Additional comments (2)
app/api/chat/route.ts (1)
60-63: GET handler for listing agents looks good. The GET handler cleanly returns the agent IDs and count and reuses `mastra.getAgents()` consistently with the POST handler. No issues here.
lib/client-stream-to-ai-sdk.ts (1)
123-135: No remaining usages of deprecated `createMastraStreamResponse` found. The search confirms that `createMastraStreamResponse` exists only in its own definition at lib/client-stream-to-ai-sdk.ts. No call sites or imports exist elsewhere in the codebase, so the error-throwing implementation will not cause runtime failures for existing code.
interface ChatRequestBody {
  messages: UIMessage[];
  agentId?: string;
  threadId?: string;
  resourceId?: string;
  memory?: {
    thread?: string | { id: string; resourceId?: string };
    resource?: string;
    options?: {
      lastMessages?: number;
      semanticRecall?: boolean;
      workingMemory?: { enabled?: boolean };
    };
  };
  maxSteps?: number;
}
🧹 Nitpick | 🔵 Trivial
Reuse AgentStreamOptions for memory/maxSteps to avoid type drift
ChatRequestBody redefines the same memory (and effectively maxSteps) shape already modeled in AgentStreamOptions in lib/client-stream-to-ai-sdk.ts. To keep these in sync, consider referencing the shared type instead of duplicating the structure, e.g.:
import type { AgentStreamOptions } from "@/lib/client-stream-to-ai-sdk";
interface ChatRequestBody {
messages: UIMessage[];
agentId?: string;
threadId?: string;
resourceId?: string;
memory?: AgentStreamOptions["memory"];
maxSteps?: AgentStreamOptions["maxSteps"];
}This reduces maintenance cost if the memory schema evolves.
🤖 Prompt for AI Agents
In app/api/chat/route.ts around lines 7 to 22, the ChatRequestBody duplicates
the memory and maxSteps shapes already defined in AgentStreamOptions; change
ChatRequestBody to reuse the shared types to avoid drift by importing the
AgentStreamOptions type from "@/lib/client-stream-to-ai-sdk" and replace the
memory and maxSteps field definitions with memory?: AgentStreamOptions["memory"]
and maxSteps?: AgentStreamOptions["maxSteps"] (ensure the import is a type-only
import).
const body: ChatRequestBody = await req.json();

// Get available agents dynamically from mastra
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];

if (!agentId || !availableAgents.includes(agentId)) {
  return Response.json(
    { error: `Invalid or missing agentId. Available: ${availableAgents.join(", ")}` },
    { status: 400 }
  );
}

if (!body.messages?.length) {
  return Response.json({ error: "messages required" }, { status: 400 });
}
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
rg "Invalid or missing agentId" --type ts --type tsx 2>/dev/nullRepository: ssdeanx/AgentStack
Length of output: 44
Add schema validation and JSON parse error handling to the API route
The current implementation lacks schema validation on the request body and doesn't handle JSON parse errors:
- Validate `ChatRequestBody` using `zod` or `yup` at the start of the handler to catch malformed requests with clear 400 responses instead of passing arbitrary data to the agent layer.
- Wrap `await req.json()` in a try-catch to return a 400 status for invalid JSON ("Invalid JSON body") instead of letting parse errors bubble as generic 500 errors.
This aligns with the requirement to validate server-side input using a validation library for API routes.
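A sketch of what that could look like with zod; the schema mirrors `ChatRequestBody` from this PR, but the exact constraints (and the omitted `memory` field) are assumptions.

```typescript
// Sketch: parse and validate the body before any agent work happens.
import { z } from "zod";

const chatRequestSchema = z.object({
  messages: z
    .array(z.object({ id: z.string(), role: z.string(), content: z.string() }))
    .min(1, "messages required"),
  agentId: z.string().optional(),
  threadId: z.string().optional(),
  resourceId: z.string().optional(),
  maxSteps: z.number().int().positive().optional(),
  // memory omitted here for brevity; add a nested schema if it should be validated too.
});

export async function parseChatRequest(req: Request) {
  let raw: unknown;
  try {
    raw = await req.json();
  } catch {
    return { ok: false as const, response: Response.json({ error: "Invalid JSON body" }, { status: 400 }) };
  }
  const parsed = chatRequestSchema.safeParse(raw);
  if (!parsed.success) {
    return { ok: false as const, response: Response.json({ error: parsed.error.flatten() }, { status: 400 }) };
  }
  return { ok: true as const, body: parsed.data };
}
```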
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];

if (!agentId || !availableAgents.includes(agentId)) {
  return Response.json(
    { error: `Invalid or missing agentId. Available: ${availableAgents.join(", ")}` },
    { status: 400 }
  );
Clarify behavior when no agents are configured
If mastra.getAgents() returns an empty object, availableAgents[0] is undefined, and the code returns a 400 "Invalid or missing agentId. Available: " even though the real problem is “no agents configured.” It would be clearer to:
- Special-case `availableAgents.length === 0` and return a 500 (or similar) indicating server misconfiguration.
- Reserve the current 400 path for truly invalid client-specified agent IDs.
🤖 Prompt for AI Agents
In app/api/chat/route.ts around lines 28 to 38, handle the case where
mastra.getAgents() returns no agents separately: if availableAgents.length === 0
return an error response (500 or 503) with a clear message like "No agents
configured" instead of falling through to the 400 path; only use
availableAgents[0] as the default agentId after confirming availableAgents is
non-empty, and keep the existing 400 response for cases where the client
supplied an agentId that is not in availableAgents.
try {
  return await createAgentStreamResponse(mastra as Parameters<typeof createAgentStreamResponse>[0], agentId, body.messages, {
    threadId: body.threadId,
    resourceId: body.resourceId,
    memory: body.memory,
    maxSteps: body.maxSteps ?? 50,
  });
} catch (error) {
  return Response.json(
    { error: error instanceof Error ? error.message : "Stream failed" },
    { status: 500 }
  );
}
🧹 Nitpick | 🔵 Trivial
Log streaming errors and avoid leaking internal error details
The catch block turns any error into a JSON { error: message } with status 500 but doesn’t log it. That makes production debugging hard and may expose internal messages to clients.
Consider:
- Logging the full error (with stack) on the server before responding.
- Returning a more generic public message (e.g. "Streaming failed") while keeping detailed info in logs only; a sketch follows below.
🤖 Prompt for AI Agents
In app/api/chat/route.ts around lines 45 to 57, the catch block currently
returns the raw error message to the client and does not log the error; change
it to log the full error (including stack) to the server log (e.g.,
processLogger.error or console.error) before returning a response, and return a
generic public-facing message like "Streaming failed" (status 500) to avoid
leaking internal details.
export interface AgentStreamOptions {
  format?: "aisdk" | "mastra";
  threadId?: string;
  resourceId?: string;
  memory?: {
    thread?: string | { id: string; resourceId?: string };
    resource?: string;
    options?: {
      lastMessages?: number;
      semanticRecall?: boolean;
      workingMemory?: { enabled?: boolean };
    };
  };
  maxSteps?: number;
}

type MastraAgent = {
  stream: (
    messages: unknown,
    options?: {
      format?: string;
      threadId?: string;
      resourceId?: string;
      memory?: AgentStreamOptions["memory"];
      maxSteps?: number;
    }
  },
  });
  ) => Promise<MastraModelOutput & { toUIMessageStreamResponse?: () => Response }>;
};

type MastraInstance = {
  getAgent: (id: string) => MastraAgent;
} & Record<string, any>;
🧹 Nitpick | 🔵 Trivial
Export and reuse core streaming option/instance types
AgentStreamOptions and MastraInstance are central to the streaming API but are currently only used internally here. Since other modules (e.g. app/api/chat/route.ts) need to mirror these shapes, consider explicitly exporting these types and consuming them at call sites to avoid duplicated schemas and keep everything in lockstep.
For example:
export type { AgentStreamOptions, MastraInstance };
Then reuse them in route definitions.
🤖 Prompt for AI Agents
In lib/client-stream-to-ai-sdk.ts lines 15 to 47, AgentStreamOptions and
MastraInstance types are declared but not exported. To enable reuse in other
modules and prevent duplication, explicitly export these two types by adding an
export statement such as 'export type { AgentStreamOptions, MastraInstance };'
at the bottom of the file. This allows other files like app/api/chat/route.ts to
import and use these types for consistency.
export async function createAgentStreamResponse(
  mastra: MastraInstance,
  agentId: string,
  messages: unknown,
  options?: AgentStreamOptions
): Promise<Response> {
  const agent = mastra.getAgent(agentId);

  const streamOptions = {
    format: options?.format ?? "aisdk",
    threadId: options?.threadId,
    resourceId: options?.resourceId,
    memory: options?.memory,
    maxSteps: options?.maxSteps,
  };
Guard against missing agents with clearer error messages
If mastra.getAgent(agentId) ever returns undefined instead of throwing, agent.stream will fail with a generic “cannot read properties of undefined” error. Adding an explicit guard here would produce a more actionable error:
const agent = mastra.getAgent(agentId);
if (!agent) {
throw new Error(`Agent not found: ${agentId}`);
}
This also keeps upstream route handlers from having to guess what went wrong.
🤖 Prompt for AI Agents
In lib/client-stream-to-ai-sdk.ts around lines 72 to 87, add an explicit guard
after calling mastra.getAgent(agentId) to check for a missing agent and throw a
clear error (e.g., "Agent not found: <agentId>") instead of allowing
agent.stream to fail with a generic undefined-property error; update the
function to validate agent is truthy and throw the descriptive Error so upstream
handlers receive an actionable message.
// Fallback: Manual transformation with toAISdkFormat
const stream = await agent.stream(messages, {
  threadId: options?.threadId,
  resourceId: options?.resourceId,
  memory: options?.memory,
  maxSteps: options?.maxSteps,
});

const uiMessageStream = createUIMessageStream({
  execute: async ({ writer }) => {
    const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
    const reader = aiSdkStream.getReader();
    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        writer.write(value);
      }
    } finally {
      reader.releaseLock();
    }
  },
});

return createUIMessageStreamResponse({ stream: uiMessageStream });
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# 1. Check if the file exists and examine its content
echo "=== Checking file existence and content ==="
if [ -f "lib/client-stream-to-ai-sdk.ts" ]; then
wc -l lib/client-stream-to-ai-sdk.ts
echo "File exists. Now showing lines 90-130:"
sed -n '90,130p' lib/client-stream-to-ai-sdk.ts
else
echo "File not found at lib/client-stream-to-ai-sdk.ts"
# Try to find it
find . -name "client-stream-to-ai-sdk.ts" 2>/dev/null
fiRepository: ssdeanx/AgentStack
Length of output: 1369
🏁 Script executed:
#!/bin/bash
echo "=== Checking imports at top of file ==="
head -40 lib/client-stream-to-ai-sdk.tsRepository: ssdeanx/AgentStack
Length of output: 1050
🏁 Script executed:
#!/bin/bash
echo "=== Searching for createUIMessageStream usage ==="
rg "createUIMessageStream" --type ts --type tsx -B 2 -A 10
echo ""
echo "=== Searching for writer.close() usage ==="
rg "writer\.close\(\)" --type ts --type tsx -B 2 -A 2
echo ""
echo "=== Searching for writer usage patterns ==="
rg "writer\." --type ts --type tsx -B 1 -A 1 | head -100Repository: ssdeanx/AgentStack
Length of output: 301
🏁 Script executed:
#!/bin/bash
echo "=== Searching for createUIMessageStream usage (ts only) ==="
rg "createUIMessageStream" -t ts -B 2 -A 10
echo ""
echo "=== Searching for writer usage in ts files ==="
rg "writer" -t ts | grep -E "(writer\.|execute.*writer)" | head -20
echo ""
echo "=== Checking package.json for ai/vercel dependency ==="
cat package.json | jq '.dependencies | select(. != null) | to_entries[] | select(.key | contains("ai"))'Repository: ssdeanx/AgentStack
Length of output: 5497
🌐 Web query:
Vercel AI SDK createUIMessageStream writer.close() stream closure
💡 Result:
Short answer: always call writer.close() (or writer.abort(err) on error) to terminate the UIMessageStream; otherwise the SSE/readable stream remains open and the client will hang or not receive the final "end" event. Put writer.close() in your normal completion path and also in a finally/abort handler so the stream always ends. Use the SDK helpers (createUIMessageStreamResponse / pipeUIMessageStreamToResponse) to ensure proper SSE headers/closing behavior.
Key pointers and references:
- createUIMessageStream docs: writer.close() signals end of stream. [1]
- Dev write-up / examples showing writer.close() in examples and recommended pattern. [2]
- GitHub/issue notes: some edge cases (aborts) can affect onFinish callbacks — ensure you call close in abort handling or use writer.abort(error) to emit an error and terminate. [3]
- Migration/discussion examples showing writer.close() before piping to response. [4]
If you want, paste your handler code and I’ll show exactly where to add writer.close()/abort and an abortSignal-safe finally block.
Sources:
[1] AI SDK docs — createUIMessageStream.
[2] Vercel AI SDK internals / streaming examples.
[3] GitHub issue: onFinish not called when aborted.
[4] Vercel AI discussions example (createUIMessageStream + pipe).
🏁 Script executed:
#!/bin/bash
echo "=== Showing complete function context ==="
sed -n '80,134p' lib/client-stream-to-ai-sdk.tsRepository: ssdeanx/AgentStack
Length of output: 1797
Close the UI writer after streaming fallback completes
In the fallback path, the loop reads from aiSdkStream and writes chunks to the writer, but never explicitly closes the writer. Without calling writer.close(), the UIMessageStream remains open on the client side and the stream never properly terminates, causing the client to hang or fail to receive the final "end" event.
Add writer.close() in the finally block to ensure the stream always terminates properly:
const uiMessageStream = createUIMessageStream({
execute: async ({ writer }) => {
const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
const reader = aiSdkStream.getReader();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
writer.write(value);
}
} finally {
reader.releaseLock();
+ writer.close();
}
},
🤖 Prompt for AI Agents
In lib/client-stream-to-ai-sdk.ts around lines 96 to 121, the fallback streaming
loop writes chunks to the UI writer but never closes it, leaving the client
stream open; update the finally block to ensure the writer is closed after
streaming (call writer.close()) before releasing the reader lock so the
UIMessageStream always terminates, and still release the reader lock in all
cases.
export { createAgentStreamResponse } from "./client-stream-to-ai-sdk";
export type { StreamToAISdkOptions } from "./client-stream-to-ai-sdk";
🧹 Nitpick | 🔵 Trivial
Separate server-only streaming helper from client SDK exports
This module now exposes both the browser-oriented mastraClient and the server-only createAgentStreamResponse. That can blur the boundary and make it easier for client code to accidentally import a server helper, potentially bloating bundles or causing environment confusion. Consider moving the server helper/type re-exports to a dedicated server module (e.g. lib/mastra-server) or clearly documenting this file as server-only.
🤖 Prompt for AI Agents
In lib/mastra-client.ts around lines 13 to 14, the file re-exports a server-only
helper (createAgentStreamResponse and its type) alongside browser-oriented
client APIs, which risks accidental client-side imports; move those server-only
exports into a new dedicated module (e.g., lib/mastra-server.ts) and update
import sites to use that module, or alternatively remove the server-only
re-exports from lib/mastra-client.ts and add a clear file-level comment stating
this file is browser-only and should not expose server helpers.
Summary by Sourcery
Enhance the chat API to use a reusable server-side agent streaming helper and surface available agents, while deprecating the old client streaming helper.
New Features:
Bug Fixes:
Enhancements: