
feat: enhance chat API and client-stream integration #4

Merged
ssdeanx merged 1 commit into main from develop
Dec 5, 2025

Conversation


@ssdeanx ssdeanx commented Dec 5, 2025

  • Updated POST handler in route.ts to dynamically fetch available agents and validate agentId.
  • Improved error handling for missing or invalid messages and agentId.
  • Refactored createAgentStreamResponse in client-stream-to-ai-sdk.ts to streamline response creation and added support for new options.
  • Deprecated createMastraStreamResponse in favor of createAgentStreamResponse for better clarity and usage.
  • Updated mastra-client.ts to export new types and functions for better integration.

Summary by Sourcery

Enhance the chat API to use a reusable server-side agent streaming helper and surface available agents, while deprecating the old client streaming helper.

New Features:

  • Add createAgentStreamResponse helper to stream responses from server-side Mastra agents in AI SDK format with support for threading, resources, memory, and maxSteps options.
  • Expose a GET /api/chat endpoint that returns the available agents and their count.

Bug Fixes:

  • Validate presence of messages and agentId in the chat POST handler and return clear 4xx errors for invalid input instead of failing implicitly.
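The validation flow above can be sketched as a small pure helper. The field names (messages, agentId) follow the PR description, but the helper name and the error shape are illustrative, not the actual code in route.ts:

```typescript
// Illustrative sketch of the request validation described above.
// `validateChatRequest` and the returned error shape are hypothetical
// names, not the actual implementation in route.ts.
interface UIMessage {
  id: string;
  role: string;
  content: string;
}

interface ChatRequestBody {
  messages?: UIMessage[];
  agentId?: string;
}

function validateChatRequest(
  body: ChatRequestBody,
  availableAgents: string[],
): { error: string; status: number } | null {
  // Reject missing or empty messages with a clear 400.
  if (!body.messages || body.messages.length === 0) {
    return { error: "Missing or empty messages array", status: 400 };
  }
  // Fall back to the first available agent when none is specified.
  const agentId = body.agentId ?? availableAgents[0];
  if (!agentId || !availableAgents.includes(agentId)) {
    return {
      error: `Invalid or missing agentId. Available: ${availableAgents.join(", ")}`,
      status: 400,
    };
  }
  return null; // request is valid
}
```

A null result means the handler can proceed to streaming; any non-null result maps directly to a 4xx JSON response.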

Enhancements:

  • Refactor the POST /api/chat handler to choose agents dynamically from mastra, adjust maxDuration, and delegate streaming to the new createAgentStreamResponse helper.
  • Deprecate createMastraStreamResponse in favor of createAgentStreamResponse and make the former throw a clear error directing users to the new API.
  • Update the mastra client module to re-export the new streaming helper and associated options type for easier integration.
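The deprecation stub described in the second bullet might look roughly like this; the exact error wording is assumed:

```typescript
// Hypothetical sketch of the deprecation stub: the old helper now
// throws with migration guidance instead of streaming. The exact
// error message is assumed, not quoted from the PR.
/** @deprecated Use createAgentStreamResponse instead. */
export function createMastraStreamResponse(): never {
  throw new Error(
    "createMastraStreamResponse is deprecated. " +
      "Use createAgentStreamResponse(mastra, agentId, messages, options) instead.",
  );
}
```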

Copilot AI review requested due to automatic review settings (December 5, 2025 17:32)

sourcery-ai bot commented Dec 5, 2025

Reviewer's Guide

Refactors the chat API route to use a new generic agent streaming helper that integrates Mastra agents with the AI SDK, adds stronger request validation and dynamic agent lookup, and exposes the helper from the Mastra client while deprecating the old client-stream function.

Sequence diagram for POST chat API using createAgentStreamResponse

sequenceDiagram
  actor Client
  participant ChatRoute as ApiChatRoute
  participant Mastra as MastraInstance
  participant Agent as MastraAgent
  participant Helper as createAgentStreamResponse

  Client->>ChatRoute: POST /api/chat
  ChatRoute->>ChatRoute: Parse body to ChatRequestBody
  ChatRoute->>Mastra: getAgents()
  Mastra-->>ChatRoute: agentsMap
  ChatRoute->>ChatRoute: Derive agentId and validate
  ChatRoute->>ChatRoute: Validate messages
  alt Invalid agentId or messages
    ChatRoute-->>Client: 400 Response json(error)
  else Valid request
    ChatRoute->>Helper: createAgentStreamResponse(mastra, agentId, messages, options)
    Helper->>Mastra: getAgent(agentId)
    Mastra-->>Helper: agent
    Helper->>Agent: stream(messages, streamOptions)
    Agent-->>Helper: stream
    alt Agent stream supports toUIMessageStreamResponse
      Helper-->>ChatRoute: Response from toUIMessageStreamResponse
    else Fallback to manual AISDK format
      Helper->>Helper: toAISdkFormat(stream, from agent)
      Helper->>Helper: createUIMessageStream
      Helper-->>ChatRoute: createUIMessageStreamResponse
    end
    ChatRoute-->>Client: 200 streaming Response
  end
  Note over ChatRoute,Helper: Errors in streaming are caught and returned as 500 JSON

Class diagram for new agent streaming helper and chat route types

classDiagram
  class StreamToAISdkOptions {
    +string agentId
    +string messages
    +string threadId
    +string resourceId
  }

  class AgentStreamOptions {
    +string format
    +string threadId
    +string resourceId
    +memory memory
    +number maxSteps
  }

  class AgentStreamOptions_memory {
    +thread thread
    +string resource
    +memory_options options
  }

  class AgentStreamOptions_memory_thread {
    +string id
    +string resourceId
  }

  class AgentStreamOptions_memory_options {
    +number lastMessages
    +boolean semanticRecall
    +workingMemory workingMemory
  }

  class AgentStreamOptions_memory_options_workingMemory {
    +boolean enabled
  }

  class UIMessage {
    +string id
    +string role
    +string content
  }

  class ChatRequestBody {
    +UIMessage[] messages
    +string agentId
    +string threadId
    +string resourceId
    +AgentStreamOptions_memory memory
    +number maxSteps
  }

  class MastraModelOutput {
  }

  class MastraAgent {
    +stream(messages, options) MastraModelOutput
  }

  class MastraInstance {
    +getAgent(id) MastraAgent
    +getAgents() Map
  }

  class createAgentStreamResponse {
    +createAgentStreamResponse(mastra, agentId, messages, options) Response
  }

  class createMastraStreamResponse {
    +createMastraStreamResponse(client, options) Response
  }

  MastraInstance --> MastraAgent : getAgent
  MastraInstance --> MastraModelOutput : getAgents returns agents map
  MastraAgent --> MastraModelOutput : stream

  AgentStreamOptions --> AgentStreamOptions_memory : memory
  AgentStreamOptions_memory --> AgentStreamOptions_memory_thread : thread
  AgentStreamOptions_memory --> AgentStreamOptions_memory_options : options
  AgentStreamOptions_memory_options --> AgentStreamOptions_memory_options_workingMemory : workingMemory

  ChatRequestBody --> UIMessage : messages
  ChatRequestBody --> AgentStreamOptions_memory : memory

  createAgentStreamResponse --> MastraInstance : uses
  createAgentStreamResponse --> MastraAgent : uses
  createAgentStreamResponse --> MastraModelOutput : streams

  createMastraStreamResponse <|.. createAgentStreamResponse : deprecated in favor of

File-Level Changes

Change Details Files
Introduce a reusable createAgentStreamResponse helper to stream Mastra agent responses via the AI SDK, with support for AISDK-native streaming and a manual fallback.
  • Add AgentStreamOptions and supporting MastraAgent/MastraInstance types for strongly-typed agent streaming options and memory configuration.
  • Implement createAgentStreamResponse to call mastra.getAgent and agent.stream, preferring the agent’s toUIMessageStreamResponse when available and falling back to transforming the stream with toAISdkFormat and createUIMessageStream.
  • Ensure the fallback path reads from the AI SDK stream via a reader loop and returns a Response using createUIMessageStreamResponse.
lib/client-stream-to-ai-sdk.ts
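The reader loop in that fallback path can be sketched with the Web Streams API. The `writer` shape here mirrors what the AI SDK's createUIMessageStream passes to its execute callback; that shape is an assumption, and `pump` is an illustrative name rather than the PR's code:

```typescript
// Sketch of the fallback reader loop described above: read chunks from
// the AI-SDK-formatted stream and forward them to a UI message writer.
// `pump` and the minimal `writer` interface are hypothetical.
async function pump<T>(
  source: ReadableStream<T>,
  writer: { write: (chunk: T) => void },
): Promise<void> {
  const reader = source.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      writer.write(value);
    }
  } finally {
    // Release the lock even if reading throws, so the stream can
    // still be cancelled upstream.
    reader.releaseLock();
  }
}
```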
Deprecate the previous createMastraStreamResponse API in favor of the new server-side agent helper.
  • Define StreamToAISdkOptions to describe the old client-side options shape for compatibility.
  • Replace the old implementation with a stub that always throws an Error directing users to createAgentStreamResponse and the documentation.
lib/client-stream-to-ai-sdk.ts
Refactor the chat API POST handler to dynamically select and validate agents, enforce required messages, and delegate streaming to createAgentStreamResponse.
  • Introduce ChatRequestBody interface capturing messages, agentId, threadId, resourceId, memory, and maxSteps.
  • Fetch agents dynamically via mastra.getAgents, derive available agent IDs, and pick a default agent when agentId is omitted.
  • Add validation for missing or invalid agentId and empty messages, returning 400 errors with descriptive messages.
  • Wrap the streaming call in a try/catch, using createAgentStreamResponse with thread/memory/maxSteps options and returning a 500 JSON error on failures.
  • Increase maxDuration from 30 to 60 and switch mastra import to the aliased path.
app/api/chat/route.ts
Add a GET /api/chat endpoint to expose available agents for clients.
  • Implement GET handler that calls mastra.getAgents, collects the keys, and returns a JSON payload with agents and count.
app/api/chat/route.ts
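A minimal sketch of such a GET handler, with the agents map stubbed in place of the real mastra.getAgents() call (the agent names below are stand-ins):

```typescript
// Sketch of the GET /api/chat handler described above. The agents map
// is stubbed; the real handler awaits mastra.getAgents().
function buildAgentListPayload(agentsMap: Record<string, unknown>) {
  const agents = Object.keys(agentsMap);
  return { agents, count: agents.length };
}

export async function GET(): Promise<Response> {
  const agentsMap = { weatherAgent: {}, researchAgent: {} }; // stand-in data
  return Response.json(buildAgentListPayload(agentsMap));
}
```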
Re-export the new server-side streaming helper and its options type from the Mastra client module for easier consumption.
  • Export createAgentStreamResponse from client-stream-to-ai-sdk via mastra-client.
  • Export the StreamToAISdkOptions type from client-stream-to-ai-sdk so downstream code can reference the options shape.
lib/mastra-client.ts



github-actions bot commented Dec 5, 2025

🤖 Hi @ssdeanx, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

@gemini-code-assist

Summary of Changes

Hello @ssdeanx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the chat API by introducing dynamic agent management and more robust streaming capabilities. It refactors the core logic for handling agent responses, making the API more flexible and developer-friendly. The changes aim to improve the overall reliability and functionality of agent-based interactions, providing better control over agent selection and stream processing.

Highlights

  • Enhanced Chat API: The chat API's POST handler in route.ts has been significantly updated to dynamically fetch available agents, validate the agentId provided in the request, and improve error handling for invalid inputs.
  • Streamlined Agent Response Creation: The createAgentStreamResponse function in client-stream-to-ai-sdk.ts has been refactored to streamline the creation of streaming responses from Mastra agents, now supporting additional options like threadId, resourceId, memory, and maxSteps.
  • Deprecation of createMastraStreamResponse: The createMastraStreamResponse function has been deprecated in favor of the more explicit and robust createAgentStreamResponse, guiding users towards the server-side Mastra instance for agent streaming.
  • New GET Endpoint for Agents: A new GET endpoint has been added to /api/chat to allow clients to easily retrieve a list of all available agents and their count.
  • Increased Max Duration: The maxDuration for the chat API route has been increased from 30 to 60 seconds, allowing for longer processing times for agent interactions.
  • Improved Client Integration: The mastra-client.ts file now exports the new createAgentStreamResponse function and StreamToAISdkOptions type, facilitating better integration with the updated streaming capabilities.


coderabbitai bot commented Dec 5, 2025

Summary by CodeRabbit

  • New Features

    • Agent-based chat system with dynamic agent selection
    • GET endpoint to list available agents and retrieve agent count
    • Support for configurable streaming options (thread context, resource IDs, memory settings, step limits)
  • Updates

    • Increased API timeout duration from 30 to 60 seconds
  • Deprecations

    • Deprecated legacy streaming function with migration guidance provided


Walkthrough

The chat API undergoes a fundamental redesign, shifting from simple streaming to a Mastra agent-based architecture. The POST endpoint now dynamically selects agents, validates inputs, and delegates streaming to createAgentStreamResponse. A new GET endpoint lists available agents. New type definitions and server-side streaming utilities support flexible agent options and format handling.

Changes

Cohort / File(s) Summary
Chat API Route Refactoring
app/api/chat/route.ts
POST endpoint refactored to fetch available agents from Mastra, select and validate agentId, validate messages, and delegate streaming to createAgentStreamResponse. GET endpoint added to return agent list and count. New ChatRequestBody type introduced. Imports updated to use path aliases, added createAgentStreamResponse and UIMessage. maxDuration increased from 30 to 60.
Agent Stream Response Library
lib/client-stream-to-ai-sdk.ts
New createAgentStreamResponse function added to handle server-side Mastra-based agent streaming with format selection (AI SDK or Mastra). Introduces StreamToAISdkOptions, AgentStreamOptions, MastraAgent, and MastraInstance types. Deprecated createMastraStreamResponse for backward compatibility with error guidance.
Client Re-exports
lib/mastra-client.ts
Re-exports createAgentStreamResponse function and StreamToAISdkOptions type from client-stream-to-ai-sdk.

Sequence Diagram

sequenceDiagram
    participant Client
    participant API as API Route
    participant Mastra
    participant Agent
    participant Stream

    Client->>API: POST /chat (messages, optional agentId)
    API->>Mastra: getAgents()
    Mastra-->>API: List of available agents
    alt agentId provided
        API->>API: Validate agentId
    else agentId not provided
        API->>API: Select first available agent
    end
    API->>API: Validate messages
    API->>Mastra: createAgentStreamResponse(mastra, agentId, messages, options)
    Mastra->>Agent: getAgent(agentId)
    Agent-->>Mastra: MastraAgent instance
    Mastra->>Agent: stream(messages, options)
    Agent-->>Mastra: MastraModelOutput with stream
    alt format === "aisdk"
        Mastra->>Stream: Use agent's AI SDK format
    else fallback
        Mastra->>Stream: toAISdkFormat + toUIMessageStream
    end
    Stream-->>Mastra: Response stream
    Mastra-->>API: Response
    API-->>Client: Streamed response (200)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Type system expansion: Five new exported interfaces/types (StreamToAISdkOptions, AgentStreamOptions, MastraAgent, MastraInstance, ChatRequestBody) require careful review for correctness and extensibility.
  • Agent selection and validation logic: Dynamic agent retrieval, fallback behavior, and validation in POST endpoint needs verification for edge cases and error handling.
  • Streaming format handling: Conditional logic for "aisdk" vs. "mastra" format with fallback transformation may have subtle implications for different stream outputs.
  • Mastra integration assumptions: Heavy reliance on Mastra framework methods (getAgents(), getAgent(), stream()) requires understanding of expected contracts and error scenarios.
  • Cross-file dependencies: Changes span three files with interdependencies (route → library → re-exports), requiring holistic understanding of data flow.


Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (2 passed)
  • Title check: The title accurately summarizes the main changes: enhancing the chat API and improving client-stream integration through agent-based streaming and refactoring.
  • Description check: The description is directly related to the changeset, detailing the specific updates to route.ts, error handling, createAgentStreamResponse refactoring, and deprecation of createMastraStreamResponse.


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • In createAgentStreamResponse, when format === "aisdk" and toUIMessageStreamResponse is missing, you call agent.stream a second time for the fallback path; consider reusing the first stream instance to avoid double invocation of the agent.
  • The mastra as Parameters<typeof createAgentStreamResponse>[0] cast in the POST handler is unnecessary and makes the call harder to read; you can rely on the concrete mastra type or a simpler explicit interface instead.
  • The exported StreamToAISdkOptions type is now only used in the deprecated createMastraStreamResponse; consider either aligning it with the new API or clearly marking it as legacy to avoid confusion for consumers.
Individual Comments

### Comment 1
<location> `lib/client-stream-to-ai-sdk.ts:16-25` </location>
<code_context>
+}
+
+export interface AgentStreamOptions {
+  format?: "aisdk" | "mastra";
+  threadId?: string;
+  resourceId?: string;
+  memory?: {
+    thread?: string | { id: string; resourceId?: string };
+    resource?: string;
+    options?: {
+      lastMessages?: number;
+      semanticRecall?: boolean;
+      workingMemory?: { enabled?: boolean };
+    };
+  };
+  maxSteps?: number;
+}
+
+type MastraAgent = {
+  stream: (
+    messages: unknown,
+    options?: {
+      format?: string;
+      threadId?: string;
+      resourceId?: string;
+      memory?: AgentStreamOptions["memory"];
+      maxSteps?: number;
     }
</code_context>

<issue_to_address>
**issue:** The `format` option currently doesn’t change behavior when set to `"mastra"`, which can be misleading.

Currently only `format === "aisdk"` changes behavior; all other values (including `"mastra"`) follow the same path and still run `toAISdkFormat`. A caller passing `format: "mastra"` gets identical behavior to omitting `format`. I’d suggest either making `"mastra"` a distinct mode (e.g., skip `toAISdkFormat` and return raw `MastraModelOutput`) or removing `"mastra"` from the type until it has dedicated semantics, so the API contract matches actual behavior.
</issue_to_address>

### Comment 2
<location> `lib/client-stream-to-ai-sdk.ts:89-91` </location>
<code_context>
+  };
+
+  // Preferred: Use built-in AI SDK format
+  if (streamOptions.format === "aisdk") {
+    const stream = await agent.stream(messages, streamOptions);
+    if (stream.toUIMessageStreamResponse) {
+      return stream.toUIMessageStreamResponse();
+    }
</code_context>

<issue_to_address>
**suggestion (performance):** Calling `agent.stream` twice for the same request can be wasteful and may have unintended side effects.

When `format === "aisdk"` but `toUIMessageStreamResponse` is missing, `agent.stream` is invoked once in the `if` block and again in the fallback, doubling LLM/tool work and any side effects (threads, logs, etc.). You can reuse the first result instead:

```ts
let stream = await agent.stream(messages, streamOptions);
if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
  return stream.toUIMessageStreamResponse();
}
// reuse `stream` in the fallback instead of calling again
```

Alternatively, move the `agent.stream` call outside the conditional and adjust options as needed.

Suggested implementation:

```typescript
  const streamOptions = {
    format: options?.format ?? "aisdk",
    threadId: options?.threadId,
    resourceId: options?.resourceId,
    memory: options?.memory,
    maxSteps: options?.maxSteps,
  };

  // Call agent.stream once and reuse the result below
  const stream = await agent.stream(messages, streamOptions);

  // Preferred: Use built-in AI SDK format when available
  if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
    return stream.toUIMessageStreamResponse();
  }

```

To fully avoid double-calling `agent.stream`, also:
1. Search in `lib/client-stream-to-ai-sdk.ts` (in this same function) for any other occurrences of `await agent.stream(messages, streamOptions)` and remove/replace them with the already-created `stream` variable.
2. Ensure the fallback logic that runs when `format !== "aisdk"` or `toUIMessageStreamResponse` is missing uses the existing `stream` object rather than creating a new one.
</issue_to_address>

### Comment 3
<location> `lib/client-stream-to-ai-sdk.ts:106-114` </location>
<code_context>
+  
+  const uiMessageStream = createUIMessageStream({
+    execute: async ({ writer }) => {
+      const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
+      const reader = aiSdkStream.getReader();
+      try {
+        while (true) {
+          const { done, value } = await reader.read();
+          if (done) break;
+          writer.write(value);
+        }
+      } finally {
+        reader.releaseLock();
+      }
</code_context>

<issue_to_address>
**issue (bug_risk):** The fallback assumes `toAISdkFormat` produces a ReadableStream, which may be fragile if it returns an async iterable in some cases.

Previously this path supported both `AsyncIterable` and `ReadableStream` results from `toAISdkFormat`. The new code assumes `getReader()` always exists, so it will throw if an async iterable is returned.

If `toAISdkFormat` is contractually guaranteed to return a `ReadableStream` here, we should enforce that via types. Otherwise, consider keeping a small type-dispatch to handle both shapes so this fallback remains robust to changes in `toAISdkFormat`’s return type.
</issue_to_address>
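One way to implement the suggested type dispatch is to normalize either shape into a single async iterator; this is a sketch of the reviewer's suggestion, not code from the PR:

```typescript
// Sketch of the suggested type dispatch: accept either a
// ReadableStream or an AsyncIterable and normalize to one async
// iterator, so the fallback no longer assumes getReader() exists.
async function* toAsyncIterable<T>(
  source: ReadableStream<T> | AsyncIterable<T>,
): AsyncGenerator<T> {
  if (typeof (source as ReadableStream<T>).getReader === "function") {
    const reader = (source as ReadableStream<T>).getReader();
    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        yield value;
      }
    } finally {
      reader.releaseLock();
    }
  } else {
    // Plain async iterables (e.g. async generators) pass through.
    yield* (source as AsyncIterable<T>);
  }
}
```

The writer loop in the fallback could then use `for await (const chunk of toAsyncIterable(aiSdkStream))` regardless of which shape toAISdkFormat returns.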

### Comment 4
<location> `app/api/chat/route.ts:28-32` </location>
<code_context>
+  const body: ChatRequestBody = await req.json();
+  
+  // Get available agents dynamically from mastra
+  const agentsMap = await mastra.getAgents();
+  const availableAgents = Object.keys(agentsMap);
+  
+  // Use first available agent if none specified
+  const agentId = body.agentId || availableAgents[0];
+  
+  if (!agentId || !availableAgents.includes(agentId)) {
</code_context>

<issue_to_address>
**suggestion:** The API doesn’t handle the case where `getAgents()` returns an empty map, which can lead to confusing error responses.

When `availableAgents` is empty, `availableAgents[0]` is `undefined`, so the error response becomes:
```json
{ "error": "Invalid or missing agentId. Available: " }
```
which gives no clear guidance.

Consider handling the empty case first, for example:
```ts
if (availableAgents.length === 0) {
  return Response.json(
    { error: "No agents are configured on the server." },
    { status: 500 },
  );
}
```
Then you can validate `agentId` assuming at least one agent exists.
</issue_to_address>


Comment on lines +16 to +25
format?: "aisdk" | "mastra";
threadId?: string;
resourceId?: string;
memory?: {
thread?: string | { id: string; resourceId?: string };
resource?: string;
options?: {
lastMessages?: number;
semanticRecall?: boolean;
workingMemory?: { enabled?: boolean };
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

issue: The format option currently doesn’t change behavior when set to "mastra", which can be misleading.

Currently only format === "aisdk" changes behavior; all other values (including "mastra") follow the same path and still run toAISdkFormat. A caller passing format: "mastra" gets identical behavior to omitting format. I’d suggest either making "mastra" a distinct mode (e.g., skip toAISdkFormat and return raw MastraModelOutput) or removing "mastra" from the type until it has dedicated semantics, so the API contract matches actual behavior.

Comment on lines +89 to +91
if (streamOptions.format === "aisdk") {
const stream = await agent.stream(messages, streamOptions);
if (stream.toUIMessageStreamResponse) {
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suggestion (performance): Calling agent.stream twice for the same request can be wasteful and may have unintended side effects.

When format === "aisdk" but toUIMessageStreamResponse is missing, agent.stream is invoked once in the if block and again in the fallback, doubling LLM/tool work and any side effects (threads, logs, etc.). You can reuse the first result instead:

let stream = await agent.stream(messages, streamOptions);
if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
  return stream.toUIMessageStreamResponse();
}
// reuse `stream` in the fallback instead of calling again

Alternatively, move the agent.stream call outside the conditional and adjust options as needed.

Suggested implementation:

  const streamOptions = {
    format: options?.format ?? "aisdk",
    threadId: options?.threadId,
    resourceId: options?.resourceId,
    memory: options?.memory,
    maxSteps: options?.maxSteps,
  };

  // Call agent.stream once and reuse the result below
  const stream = await agent.stream(messages, streamOptions);

  // Preferred: Use built-in AI SDK format when available
  if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
    return stream.toUIMessageStreamResponse();
  }

To fully avoid double-calling agent.stream, also:

  1. Search in lib/client-stream-to-ai-sdk.ts (in this same function) for any other occurrences of await agent.stream(messages, streamOptions) and remove/replace them with the already-created stream variable.
  2. Ensure the fallback logic that runs when format !== "aisdk" or toUIMessageStreamResponse is missing uses the existing stream object rather than creating a new one.

Comment on lines +106 to +114
const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
const reader = aiSdkStream.getReader();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
writer.write(value);
}
} finally {
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

issue (bug_risk): The fallback assumes toAISdkFormat produces a ReadableStream, which may be fragile if it returns an async iterable in some cases.

Previously this path supported both AsyncIterable and ReadableStream results from toAISdkFormat. The new code assumes getReader() always exists, so it will throw if an async iterable is returned.

If toAISdkFormat is contractually guaranteed to return a ReadableStream here, we should enforce that via types. Otherwise, consider keeping a small type-dispatch to handle both shapes so this fallback remains robust to changes in toAISdkFormat’s return type.

Comment on lines +28 to +32
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];

suggestion: The API doesn’t handle the case where getAgents() returns an empty map, which can lead to confusing error responses.

When availableAgents is empty, availableAgents[0] is undefined, so the error response becomes:

{ "error": "Invalid or missing agentId. Available: " }

which gives no clear guidance.

Consider handling the empty case first, for example:

if (availableAgents.length === 0) {
  return Response.json(
    { error: "No agents are configured on the server." },
    { status: 500 },
  );
}

Then you can validate agentId assuming at least one agent exists.

@github-actions

github-actions bot commented Dec 5, 2025

🤖 I'm sorry @ssdeanx, but I was unable to process your request. Please see the logs for more details.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request enhances the chat API by introducing a new server-side streaming helper createAgentStreamResponse, deprecating the old one, and adding an endpoint to list available agents. The changes are well-structured and improve the API's flexibility and error handling.

My review focuses on improving code maintainability, correctness, and organization. I've identified a potential performance issue in lib/client-stream-to-ai-sdk.ts where an API call might be made twice unnecessarily. I've also suggested improvements to code organization in lib/mastra-client.ts to better separate client and server concerns, and a way to strengthen type safety in lib/client-stream-to-ai-sdk.ts. Finally, I've pointed out some duplicated code in app/api/chat/route.ts that could be refactored.

Comment on lines +89 to +94
if (streamOptions.format === "aisdk") {
const stream = await agent.stream(messages, streamOptions);
if (stream.toUIMessageStreamResponse) {
return stream.toUIMessageStreamResponse();
}
}

high

If streamOptions.format is 'aisdk' but the returned stream does not have the toUIMessageStreamResponse method, the code falls through and calls agent.stream() a second time on line 97. This is inefficient and could lead to unexpected side effects. Consider refactoring to avoid this second call, for example by handling the fallback case within this if block and reusing the stream that has already been fetched.
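The single-call refactor the reviewers describe can be sketched like this (illustrative names only; the real `agent.stream` and `toUIMessageStreamResponse` shapes come from this PR's `lib/client-stream-to-ai-sdk.ts`):

```typescript
// The stream may or may not be able to render itself as an AI SDK response.
type StreamResult<R> = { toUIMessageStreamResponse?: () => R };

async function streamOnce<R>(
  agent: {
    stream: (messages: unknown, options: { format: string }) => Promise<StreamResult<R>>;
  },
  messages: unknown,
  streamOptions: { format: string }
): Promise<R | StreamResult<R>> {
  // Call agent.stream exactly once.
  const stream = await agent.stream(messages, streamOptions);
  if (streamOptions.format === "aisdk" && stream.toUIMessageStreamResponse) {
    // Preferred: the stream renders itself in AI SDK format.
    return stream.toUIMessageStreamResponse();
  }
  // Fallback: hand the SAME stream object to the manual transformation path
  // instead of invoking agent.stream a second time.
  return stream;
}
```

The caller then runs the manual toAISdkFormat transformation only when it receives the raw stream back, so the agent is never invoked twice for one request.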

Comment on lines +61 to +62
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

medium

This logic to fetch available agents is duplicated from the POST handler on lines 28-29. To follow the DRY (Don't Repeat Yourself) principle and improve maintainability, consider extracting this logic into a shared helper function.
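One way to apply DRY here is a tiny shared helper both the GET and POST handlers call (a sketch; only `getAgents()` returning a keyed map is assumed from the Mastra instance):

```typescript
// Minimal slice of the Mastra instance this helper needs.
type AgentsSource = { getAgents: () => Promise<Record<string, unknown>> };

// Single source of truth for "which agent IDs exist right now".
async function listAvailableAgents(mastra: AgentsSource): Promise<string[]> {
  const agentsMap = await mastra.getAgents();
  return Object.keys(agentsMap);
}
```

Both handlers then stay in sync automatically if the lookup logic ever changes.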

Comment on lines +44 to +46
type MastraInstance = {
getAgent: (id: string) => MastraAgent;
} & Record<string, any>;

medium

The MastraInstance type uses & Record<string, any>, which weakens type safety by allowing any additional properties. To create a more robust contract, consider importing the Mastra class from @mastra/core/mastra and using it directly in the createAgentStreamResponse function signature (e.g., mastra: Mastra). This would improve type checking and could eliminate the need for the type assertion in app/api/chat/route.ts.

Comment on lines +13 to +14
export { createAgentStreamResponse } from "./client-stream-to-ai-sdk";
export type { StreamToAISdkOptions } from "./client-stream-to-ai-sdk";

medium

This file defines the client-side mastraClient, but it now also exports createAgentStreamResponse, which is a server-side utility. Mixing client and server code in a file named mastra-client.ts can be confusing for other developers. It's a good practice to separate server-side and client-side concerns into different modules (e.g., moving server utilities to a lib/server directory) to improve code organization and clarity.

@greptile-apps

greptile-apps bot commented Dec 5, 2025

Greptile Overview

Greptile Summary

This PR refactors the chat API route to support dynamic agent selection and improves the streaming infrastructure for better integration with the AI SDK.

Key Changes

  • Dynamic Agent Discovery: route.ts now fetches available agents dynamically from the Mastra instance instead of hardcoding weatherAgent, enabling flexible agent selection via the agentId parameter
  • Enhanced Error Handling: Added input validation for messages and agentId with descriptive error responses
  • Streaming Refactor: Consolidated streaming logic into createAgentStreamResponse() helper with dual-path support (native AI SDK format + fallback transformation)
  • New GET Endpoint: Added GET /api/chat to list available agents for discovery
  • Increased Timeout: Updated maxDuration from 30 to 60 seconds to accommodate longer-running agent operations
  • Better TypeScript Types: Added comprehensive interfaces (ChatRequestBody, AgentStreamOptions) for better type safety
  • Backward Compatibility: Deprecated createMastraStreamResponse with clear migration guidance

Issues Found

  • Logic Error (app/api/chat/route.ts:32): The fallback logic availableAgents[0] assumes at least one agent exists. If getAgents() returns an empty map, this will be undefined and cause the API to respond with a confusing error message rather than clear validation failure.

Confidence Score: 3/5

  • This PR can be merged with one logic fix for the empty agents edge case
  • The refactoring is well-structured with improved error handling and clear documentation. However, there's a critical logic bug (app/api/chat/route.ts:32) where an empty agent list causes undefined behavior. The streaming implementation is solid with proper fallback handling, and the deprecation pattern is appropriate. Once the edge case bug is fixed, this will be production-ready.
  • app/api/chat/route.ts requires attention for the empty agents array edge case on line 32

Important Files Changed

File Analysis

Filename | Score | Overview
app/api/chat/route.ts | 3/5 | Refactored to use dynamic agent fetching and improved error handling; has edge case bug with empty agent list
lib/client-stream-to-ai-sdk.ts | 4/5 | Well-structured refactor with clear documentation, improved streaming logic, and proper backward compatibility handling
lib/mastra-client.ts | 5/5 | Simple addition of exports, no issues found

Sequence Diagram

sequenceDiagram
    participant Client
    participant Route as POST /api/chat
    participant Mastra
    participant Agent
    participant Stream as createAgentStreamResponse
    participant AISDKStream as AI SDK Stream

    Client->>Route: POST {messages, agentId?, threadId?, memory?}
    Route->>Mastra: getAgents()
    Mastra-->>Route: agentsMap
    Route->>Route: Validate agentId (use first if not provided)
    alt Invalid/Missing agentId
        Route-->>Client: 400 {error: "Invalid or missing agentId"}
    end
    alt Empty/Invalid messages
        Route-->>Client: 400 {error: "messages required"}
    end
    Route->>Stream: createAgentStreamResponse(mastra, agentId, messages, options)
    Stream->>Mastra: getAgent(agentId)
    Mastra-->>Stream: agent
    Stream->>Agent: stream(messages, {format: "aisdk", threadId, memory, maxSteps})
    Agent-->>Stream: MastraModelOutput
    alt Built-in AI SDK format supported
        Stream->>Agent: stream.toUIMessageStreamResponse()
        Agent-->>Stream: Response
    else Fallback: Manual transformation
        Stream->>Stream: toAISdkFormat(stream)
        Stream->>AISDKStream: createUIMessageStream()
        AISDKStream->>AISDKStream: Read stream chunks
        AISDKStream->>AISDKStream: Write to writer
        Stream->>Stream: createUIMessageStreamResponse()
        Stream-->>Route: Response
    end
    Route-->>Client: Streaming Response
    alt Error occurred
        Route-->>Client: 500 {error: message}
    end


@greptile-apps greptile-apps bot left a comment


3 files reviewed, 1 comment


const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];

logic: if no agents are configured, availableAgents[0] is undefined, so the request fails with a confusing error message instead of a clear validation error

Suggested change
const agentId = body.agentId || availableAgents[0];
const agentId = body.agentId || (availableAgents.length > 0 ? availableAgents[0] : undefined);

@ssdeanx ssdeanx merged commit d729737 into main Dec 5, 2025
94 of 96 checks passed

Copilot AI left a comment


Pull request overview

This PR refactors the chat API to support dynamic agent selection and improves the streaming response architecture. The changes move from a hardcoded single-agent approach to a flexible system that validates agent IDs against available agents and provides better error handling. A new utility function createAgentStreamResponse centralizes streaming logic with built-in fallback support, while the old createMastraStreamResponse is properly deprecated.

Key changes include:

  • Dynamic agent fetching and validation in the POST handler with fallback to the first available agent
  • Enhanced error handling with specific validation for missing/invalid agents and messages
  • New GET endpoint to expose available agents for frontend consumption

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 15 comments.

File | Description
app/api/chat/route.ts | Refactored POST handler to dynamically fetch and validate agents, added comprehensive error handling, and introduced a new GET endpoint for listing available agents
lib/client-stream-to-ai-sdk.ts | Created new createAgentStreamResponse function with dual-path streaming (built-in vs. fallback), added comprehensive type definitions, and deprecated createMastraStreamResponse with clear migration guidance
lib/mastra-client.ts | Exported new streaming utilities to make them available throughout the codebase

Comment on lines +28 to +29
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

Copilot AI Dec 5, 2025


Fetching all agents on every POST request may impact performance, especially if there are many agents or if getAgents() is an expensive operation. Consider caching the agent list at the module level or using a memoization strategy, since the list of available agents likely doesn't change during runtime.

Copilot uses AI. Check for mistakes.
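A module-level memoization along the lines Copilot suggests could look like this (a sketch; it assumes the registered agents do not change while the server is running — if they can, add invalidation or a TTL):

```typescript
// Minimal slice of the Mastra instance this helper needs.
type AgentsSource = { getAgents: () => Promise<Record<string, unknown>> };

let cachedAgentIds: string[] | null = null;

async function getAgentIdsCached(mastra: AgentsSource): Promise<string[]> {
  if (cachedAgentIds === null) {
    // First request pays the lookup cost; later requests reuse the list.
    cachedAgentIds = Object.keys(await mastra.getAgents());
  }
  return cachedAgentIds;
}
```

In a Next.js route module, the module-scope variable survives across requests within the same server process, which is exactly the lifetime this cache wants.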
Comment on lines +60 to 64
export async function GET() {
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);
return Response.json({ agents: availableAgents, count: availableAgents.length });
}

The new GET endpoint should have test coverage to verify it correctly returns the list of available agents and the count. This is especially important since this endpoint may be used by the frontend to dynamically populate agent selection UI.

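A minimal, dependency-free sketch of what such a test could assert (the real handler lives in app/api/chat/route.ts; here the same payload logic is exercised against a stubbed Mastra instance, so the function name and stub are illustrative):

```typescript
// Minimal slice of the Mastra instance the GET handler touches.
type AgentsSource = { getAgents: () => Promise<Record<string, unknown>> };

// Mirrors the GET handler's response body: { agents, count }.
async function buildAgentsPayload(
  mastra: AgentsSource
): Promise<{ agents: string[]; count: number }> {
  const agentsMap = await mastra.getAgents();
  const availableAgents = Object.keys(agentsMap);
  return { agents: availableAgents, count: availableAgents.length };
}
```

A test would then stub getAgents(), call the handler, and assert the agents array and count stay consistent with each other.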
threadId: body.threadId,
resourceId: body.resourceId,
memory: body.memory,
maxSteps: body.maxSteps ?? 50,

[nitpick] The default value of 50 for maxSteps is hardcoded here. Consider extracting this to a constant at the top of the file (e.g., const DEFAULT_MAX_STEPS = 50) to make it easier to maintain and update across the codebase.

Comment on lines +72 to +77
export async function createAgentStreamResponse(
mastra: MastraInstance,
agentId: string,
messages: unknown,
options?: AgentStreamOptions
): Promise<Response> {

[nitpick] The JSDoc documentation is excellent. However, consider adding @param tags to document each parameter individually, which would make it easier for developers using IDEs to understand the function signature. Also consider adding @returns to document what the Response object contains.

messages: unknown,
options?: AgentStreamOptions
): Promise<Response> {
const agent = mastra.getAgent(agentId);

The function doesn't handle the case where mastra.getAgent(agentId) might fail or return undefined/null if the agent doesn't exist. This could lead to runtime errors when trying to call agent.stream(). Consider adding validation after line 78: if (!agent) { throw new Error(`Agent '${agentId}' not found`); }

Comment on lines 24 to +58
export async function POST(req: Request) {
const { messages }: {
messages: UIMessage[];
} = await req.json();
const myAgent = mastra.getAgent("weatherAgent");
const stream = await myAgent.stream(messages, { });
const uiMessageStream = createUIMessageStream({

execute: async ({ writer }) => {
const formatted = toAISdkFormat(stream, { from: "agent" })!;
const runtimeContext = new RuntimeContext();
// If the returned object is an async iterable, use for-await
if (Symbol.asyncIterator in formatted) {
for await (const part of formatted as AsyncIterable<any>) {
writer.write(part);
}
} else if (typeof (formatted as any).getReader === "function") {
// If it's a ReadableStream (browser), read via getReader()
const reader = (formatted as ReadableStream<any>).getReader();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
writer.write(value);
}
} finally {
reader.releaseLock?.();
}
}
},
});
return createUIMessageStreamResponse({
stream: uiMessageStream,
});
const body: ChatRequestBody = await req.json();

// Get available agents dynamically from mastra
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];

if (!agentId || !availableAgents.includes(agentId)) {
return Response.json(
{ error: `Invalid or missing agentId. Available: ${availableAgents.join(", ")}` },
{ status: 400 }
);
}

if (!body.messages?.length) {
return Response.json({ error: "messages required" }, { status: 400 });
}

try {
return await createAgentStreamResponse(mastra as Parameters<typeof createAgentStreamResponse>[0], agentId, body.messages, {
threadId: body.threadId,
resourceId: body.resourceId,
memory: body.memory,
maxSteps: body.maxSteps ?? 50,
});
} catch (error) {
return Response.json(
{ error: error instanceof Error ? error.message : "Stream failed" },
{ status: 500 }
);
}
}

The new POST handler includes important validation logic (agent validation, message validation) and error handling that should be covered by tests. Consider adding test cases for: 1) Missing agentId with available agents, 2) Invalid agentId, 3) Missing messages, 4) Empty messages array, 5) Stream creation failure, 6) No agents available scenario.

Comment on lines +72 to +121
export async function createAgentStreamResponse(
mastra: MastraInstance,
agentId: string,
messages: unknown,
options?: AgentStreamOptions
): Promise<Response> {
const agent = mastra.getAgent(agentId);

const streamOptions = {
format: options?.format ?? "aisdk",
threadId: options?.threadId,
resourceId: options?.resourceId,
memory: options?.memory,
maxSteps: options?.maxSteps,
};

// Preferred: Use built-in AI SDK format
if (streamOptions.format === "aisdk") {
const stream = await agent.stream(messages, streamOptions);
if (stream.toUIMessageStreamResponse) {
return stream.toUIMessageStreamResponse();
}
}

// Fallback: Manual transformation with toAISdkFormat
const stream = await agent.stream(messages, {
threadId: options?.threadId,
resourceId: options?.resourceId,
memory: options?.memory,
maxSteps: options?.maxSteps,
});

const uiMessageStream = createUIMessageStream({
execute: async ({ writer }) => {
const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
const reader = aiSdkStream.getReader();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
writer.write(value);
}
} finally {
reader.releaseLock();
}
},
});

return createUIMessageStreamResponse({ stream: uiMessageStream });
}

The new createAgentStreamResponse function includes complex logic with two different code paths (built-in AI SDK format vs. fallback transformation) that should be thoroughly tested. Consider adding tests for: 1) Built-in format with toUIMessageStreamResponse available, 2) Fallback path when toUIMessageStreamResponse is not available, 3) Invalid agent ID, 4) Stream errors, 5) Different option combinations (threadId, resourceId, memory, maxSteps).


type MastraInstance = {
getAgent: (id: string) => MastraAgent;
} & Record<string, any>;

Unexpected any. Specify a different type.

Suggested change
} & Record<string, any>;
} & Record<string, unknown>;

Comment on lines +127 to +130
export async function createMastraStreamResponse(
_client: unknown,
_options: StreamToAISdkOptions
): Promise<Response> {

'_client' is defined but never used.

Suggested change
export async function createMastraStreamResponse(
_client: unknown,
_options: StreamToAISdkOptions
): Promise<Response> {
export async function createMastraStreamResponse(): Promise<Response> {

Comment on lines +128 to +129
_client: unknown,
_options: StreamToAISdkOptions

'_options' is defined but never used.

Suggested change
_client: unknown,
_options: StreamToAISdkOptions
_client: unknown


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ffa3c17 and ed3672b.

📒 Files selected for processing (3)
  • app/api/chat/route.ts (1 hunks)
  • lib/client-stream-to-ai-sdk.ts (1 hunks)
  • lib/mastra-client.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
**/app/**

📄 CodeRabbit inference engine (.github/instructions/next-js.instructions.md)

Use the app/ directory structure for route handlers, server components, and client components (Next.js 13+). Prefer this over the pages/ directory for new projects.

Files:

  • app/api/chat/route.ts
**/app/api/**/route.{js,ts}

📄 CodeRabbit inference engine (.github/instructions/next-js.instructions.md)

**/app/api/**/route.{js,ts}: Define server-side route handlers using route.ts or route.js files within the app/api directory.
Use Next.js API routes for serverless functions.
Always validate user input on the server-side using a validation library like zod or yup.
Use authentication middleware to protect API endpoints.
Store API keys and sensitive data in environment variables and access them on the server-side only.
Configure caching with appropriate Cache-Control headers to improve performance and reduce server load.
Enable Gzip or Brotli compression on your server to reduce the size of transferred files.
Implement rate limiting to prevent abuse of your API.
Properly encode API responses to prevent injection attacks.

Files:

  • app/api/chat/route.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.github/instructions/next-js.instructions.md)

**/*.{js,jsx,ts,tsx}: Use next/dynamic for dynamic imports to load components only when needed, improving initial load time.
Use next/image component for automatic image optimization, including lazy loading and responsive images.
Use React.memo to prevent unnecessary re-renders of components.
Use the <Link prefetch> tag to prefetch pages that are likely to be visited.
Use getServerSideProps, getStaticProps, or server components for fetching data on the server-side.
Use SWR or React Query for client-side data fetching and caching.
Use CSS Modules, Styled Components, or Tailwind CSS for component-level styling. Prefer Tailwind CSS for rapid development.
Use React Context, Zustand, Jotai, or Recoil for managing global state. Avoid Redux unless necessary.
Use react-hook-form for managing forms and validation.
Only fetch the data that is needed by the component to avoid over-fetching.
Avoid long-running synchronous operations in the main thread to prevent blocking.
Always use setState or hooks to update state instead of mutating state directly.
Include a complete dependency array in useEffect hooks to prevent unexpected behavior.
Avoid writing server-side code in client components to prevent exposing secrets or causing unexpected behavior.
Use try...catch blocks for handling errors in asynchronous operations.
Implement error boundary components using getDerivedStateFromError or componentDidCatch lifecycle methods.
Sanitize user input to prevent Cross-Site Scripting (XSS) attacks. Be especially careful when rendering HTML directly from user input.
Store authentication tokens in HTTP-only cookies or local storage securely.
Implement role-based access control to restrict access to sensitive resources.
Clean up event listeners and timers in useEffect hooks to avoid memory leaks.
Only update state when necessary to reduce the number of re-renders and improve performance.
Use immutable data structures and avoid mutating data directly to prevent unexpected...

Files:

  • app/api/chat/route.ts
  • lib/mastra-client.ts
  • lib/client-stream-to-ai-sdk.ts
**/*.{js,ts}

📄 CodeRabbit inference engine (.github/instructions/next-js.instructions.md)

Use parameterized queries or an ORM to prevent SQL injection attacks.

Files:

  • app/api/chat/route.ts
  • lib/mastra-client.ts
  • lib/client-stream-to-ai-sdk.ts
{app,src/components}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use Next.js 16 App Router with React 19 for frontend development

Files:

  • app/api/chat/route.ts
app/**/*.{tsx,ts}

📄 CodeRabbit inference engine (app/AGENTS.md)

app/**/*.{tsx,ts}: Use Tailwind CSS 4 with oklch color variables for styling in Next.js App Router pages and layouts
Use React 19 latest features in component implementations within the app directory

Files:

  • app/api/chat/route.ts
🧬 Code graph analysis (2)
app/api/chat/route.ts (2)
src/mastra/index.ts (1)
  • mastra (85-611)
lib/client-stream-to-ai-sdk.ts (1)
  • createAgentStreamResponse (72-121)
lib/client-stream-to-ai-sdk.ts (1)
lib/mastra-client.ts (2)
  • StreamToAISdkOptions (14-14)
  • createAgentStreamResponse (13-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Agent
  • GitHub Check: Sourcery review
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (2)
app/api/chat/route.ts (1)

60-63: GET handler for listing agents looks good

The GET handler cleanly returns the agent IDs and count and reuses mastra.getAgents() consistently with the POST handler. No issues here.

lib/client-stream-to-ai-sdk.ts (1)

123-135: No remaining usages of deprecated createMastraStreamResponse found

The search confirms that createMastraStreamResponse exists only in its own definition at lib/client-stream-to-ai-sdk.ts. No call sites or imports exist elsewhere in the codebase, so the error-throwing implementation will not cause runtime failures for existing code.

Comment on lines +7 to +22
interface ChatRequestBody {
messages: UIMessage[];
agentId?: string;
threadId?: string;
resourceId?: string;
memory?: {
thread?: string | { id: string; resourceId?: string };
resource?: string;
options?: {
lastMessages?: number;
semanticRecall?: boolean;
workingMemory?: { enabled?: boolean };
};
};
maxSteps?: number;
}

🧹 Nitpick | 🔵 Trivial

Reuse AgentStreamOptions for memory/maxSteps to avoid type drift

ChatRequestBody redefines the same memory (and effectively maxSteps) shape already modeled in AgentStreamOptions in lib/client-stream-to-ai-sdk.ts. To keep these in sync, consider referencing the shared type instead of duplicating the structure, e.g.:

import type { AgentStreamOptions } from "@/lib/client-stream-to-ai-sdk";

interface ChatRequestBody {
  messages: UIMessage[];
  agentId?: string;
  threadId?: string;
  resourceId?: string;
  memory?: AgentStreamOptions["memory"];
  maxSteps?: AgentStreamOptions["maxSteps"];
}

This reduces maintenance cost if the memory schema evolves.

🤖 Prompt for AI Agents
In app/api/chat/route.ts around lines 7 to 22, the ChatRequestBody duplicates
the memory and maxSteps shapes already defined in AgentStreamOptions; change
ChatRequestBody to reuse the shared types to avoid drift by importing the
AgentStreamOptions type from "@/lib/client-stream-to-ai-sdk" and replace the
memory and maxSteps field definitions with memory?: AgentStreamOptions["memory"]
and maxSteps?: AgentStreamOptions["maxSteps"] (ensure the import is a type-only
import).

Comment on lines +25 to +43
const body: ChatRequestBody = await req.json();

// Get available agents dynamically from mastra
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];

if (!agentId || !availableAgents.includes(agentId)) {
return Response.json(
{ error: `Invalid or missing agentId. Available: ${availableAgents.join(", ")}` },
{ status: 400 }
);
}

if (!body.messages?.length) {
return Response.json({ error: "messages required" }, { status: 400 });
}

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

rg "Invalid or missing agentId" --type ts --type tsx 2>/dev/null

Repository: ssdeanx/AgentStack

Length of output: 44


Add schema validation and JSON parse error handling to the API route

The current implementation lacks schema validation on the request body and doesn't handle JSON parse errors:

  • Validate ChatRequestBody using zod or yup at the start of the handler to catch malformed requests with clear 400 responses instead of passing arbitrary data to the agent layer
  • Wrap await req.json() in a try-catch to return a 400 status for invalid JSON ("Invalid JSON body") instead of letting parse errors bubble as generic 500 errors

This aligns with the requirement to validate server-side input using a validation library for API routes.

Comment on lines +28 to +38
const agentsMap = await mastra.getAgents();
const availableAgents = Object.keys(agentsMap);

// Use first available agent if none specified
const agentId = body.agentId || availableAgents[0];

if (!agentId || !availableAgents.includes(agentId)) {
return Response.json(
{ error: `Invalid or missing agentId. Available: ${availableAgents.join(", ")}` },
{ status: 400 }
);

⚠️ Potential issue | 🟡 Minor

Clarify behavior when no agents are configured

If mastra.getAgents() returns an empty object, availableAgents[0] is undefined, and the code returns a 400 "Invalid or missing agentId. Available: " even though the real problem is “no agents configured.” It would be clearer to:

  • Special-case availableAgents.length === 0 and return a 500 (or similar) indicating server misconfiguration.
  • Reserve the current 400 path for truly invalid client-specified agent IDs.
🤖 Prompt for AI Agents
In app/api/chat/route.ts around lines 28 to 38, handle the case where
mastra.getAgents() returns no agents separately: if availableAgents.length === 0
return an error response (500 or 503) with a clear message like "No agents
configured" instead of falling through to the 400 path; only use
availableAgents[0] as the default agentId after confirming availableAgents is
non-empty, and keep the existing 400 response for cases where the client
supplied an agentId that is not in availableAgents.

Comment on lines +45 to +57
try {
return await createAgentStreamResponse(mastra as Parameters<typeof createAgentStreamResponse>[0], agentId, body.messages, {
threadId: body.threadId,
resourceId: body.resourceId,
memory: body.memory,
maxSteps: body.maxSteps ?? 50,
});
} catch (error) {
return Response.json(
{ error: error instanceof Error ? error.message : "Stream failed" },
{ status: 500 }
);
}

🧹 Nitpick | 🔵 Trivial

Log streaming errors and avoid leaking internal error details

The catch block turns any error into a JSON { error: message } with status 500 but doesn’t log it. That makes production debugging hard and may expose internal messages to clients.

Consider:

  • Logging the full error (with stack) on the server before responding.
  • Returning a more generic public message (e.g. "Streaming failed") while keeping detailed info in logs only.
🤖 Prompt for AI Agents
In app/api/chat/route.ts around lines 45 to 57, the catch block currently
returns the raw error message to the client and does not log the error; change
it to log the full error (including stack) to the server log (e.g.,
processLogger.error or console.error) before returning a response, and return a
generic public-facing message like "Streaming failed" (status 500) to avoid
leaking internal details.
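The log-then-sanitize pattern can be factored into a helper like the sketch below. Names here are illustrative (the PR does not define a `toErrorResponse` helper or a `logger` object); the point is the split between what goes to server logs and what goes over the wire.

```typescript
// Hedged sketch: log the full error (message + stack) server-side,
// return only a generic message to the client.
function toErrorResponse(
  error: unknown,
  logger: { error: (msg: string, meta?: unknown) => void } = console
): { status: number; body: { error: string } } {
  // Full detail stays in server logs for debugging
  logger.error(
    "Streaming failed",
    error instanceof Error ? { message: error.message, stack: error.stack } : { error }
  );
  // Public-facing response never echoes internal error messages
  return { status: 500, body: { error: "Streaming failed" } };
}
```

In the route handler, the catch block would become `const { status, body } = toErrorResponse(error); return Response.json(body, { status });`.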

Comment on lines +15 to +47
export interface AgentStreamOptions {
  format?: "aisdk" | "mastra";
  threadId?: string;
  resourceId?: string;
  memory?: {
    thread?: string | { id: string; resourceId?: string };
    resource?: string;
    options?: {
      lastMessages?: number;
      semanticRecall?: boolean;
      workingMemory?: { enabled?: boolean };
    };
  };
  maxSteps?: number;
}

type MastraAgent = {
  stream: (
    messages: unknown,
    options?: {
      format?: string;
      threadId?: string;
      resourceId?: string;
      memory?: AgentStreamOptions["memory"];
      maxSteps?: number;
    }
  ) => Promise<MastraModelOutput & { toUIMessageStreamResponse?: () => Response }>;
};

type MastraInstance = {
  getAgent: (id: string) => MastraAgent;
} & Record<string, any>;

🧹 Nitpick | 🔵 Trivial

Export and reuse core streaming option/instance types

AgentStreamOptions and MastraInstance are central to the streaming API but are currently only used internally here. Since other modules (e.g. app/api/chat/route.ts) need to mirror these shapes, consider explicitly exporting these types and consuming them at call sites to avoid duplicated schemas and keep everything in lockstep.

For example:

export type { AgentStreamOptions, MastraInstance };

then reuse them in route definitions.

🤖 Prompt for AI Agents
In lib/client-stream-to-ai-sdk.ts lines 15 to 47, AgentStreamOptions and
MastraInstance types are declared but not exported. To enable reuse in other
modules and prevent duplication, explicitly export these two types by adding an
export statement such as 'export type { AgentStreamOptions, MastraInstance };'
at the bottom of the file. This allows other files like app/api/chat/route.ts to
import and use these types for consistency.

Comment on lines +72 to +87
export async function createAgentStreamResponse(
  mastra: MastraInstance,
  agentId: string,
  messages: unknown,
  options?: AgentStreamOptions
): Promise<Response> {
  const agent = mastra.getAgent(agentId);

  const streamOptions = {
    format: options?.format ?? "aisdk",
    threadId: options?.threadId,
    resourceId: options?.resourceId,
    memory: options?.memory,
    maxSteps: options?.maxSteps,
  };

⚠️ Potential issue | 🟡 Minor

Guard against missing agents with clearer error messages

If mastra.getAgent(agentId) ever returns undefined instead of throwing, agent.stream will fail with a generic “cannot read properties of undefined” error. Adding an explicit guard here would produce a more actionable error:

const agent = mastra.getAgent(agentId);
if (!agent) {
  throw new Error(`Agent not found: ${agentId}`);
}

This also keeps upstream route handlers from having to guess what went wrong.

🤖 Prompt for AI Agents
In lib/client-stream-to-ai-sdk.ts around lines 72 to 87, add an explicit guard
after calling mastra.getAgent(agentId) to check for a missing agent and throw a
clear error (e.g., "Agent not found: <agentId>") instead of allowing
agent.stream to fail with a generic undefined-property error; update the
function to validate agent is truthy and throw the descriptive Error so upstream
handlers receive an actionable message.

Comment on lines +96 to +121
// Fallback: Manual transformation with toAISdkFormat
const stream = await agent.stream(messages, {
  threadId: options?.threadId,
  resourceId: options?.resourceId,
  memory: options?.memory,
  maxSteps: options?.maxSteps,
});

const uiMessageStream = createUIMessageStream({
  execute: async ({ writer }) => {
    const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
    const reader = aiSdkStream.getReader();
    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        writer.write(value);
      }
    } finally {
      reader.releaseLock();
    }
  },
});

return createUIMessageStreamResponse({ stream: uiMessageStream });
}
⚠️ Potential issue | 🟠 Major

🌐 Web query:

Vercel AI SDK createUIMessageStream writer.close() stream closure

💡 Result:

Short answer: always call writer.close() (or writer.abort(err) on error) to terminate the UIMessageStream; otherwise the SSE/readable stream remains open and the client will hang or not receive the final "end" event. Put writer.close() in your normal completion path and also in a finally/abort handler so the stream always ends. Use the SDK helpers (createUIMessageStreamResponse / pipeUIMessageStreamToResponse) to ensure proper SSE headers/closing behavior.

Key pointers and references:

  • createUIMessageStream docs: writer.close() signals end of stream. [1]
  • Dev write-up / examples showing writer.close() in examples and recommended pattern. [2]
  • GitHub/issue notes: some edge cases (aborts) can affect onFinish callbacks — ensure you call close in abort handling or use writer.abort(error) to emit an error and terminate. [3]
  • Migration/discussion examples showing writer.close() before piping to response. [4]

If you want, paste your handler code and I’ll show exactly where to add writer.close()/abort and an abortSignal-safe finally block.

Sources:
[1] AI SDK docs — createUIMessageStream.
[2] Vercel AI SDK internals / streaming examples.
[3] GitHub issue: onFinish not called when aborted.
[4] Vercel AI discussions example (createUIMessageStream + pipe).



Close the UI writer after streaming fallback completes

In the fallback path, the loop reads from aiSdkStream and writes chunks to the writer, but never explicitly closes the writer. Without calling writer.close(), the UIMessageStream remains open on the client side and the stream never properly terminates, causing the client to hang or fail to receive the final "end" event.

Add writer.close() in the finally block to ensure the stream always terminates properly:

   const uiMessageStream = createUIMessageStream({
     execute: async ({ writer }) => {
       const aiSdkStream = toAISdkFormat(stream, { from: "agent" });
       const reader = aiSdkStream.getReader();
       try {
         while (true) {
           const { done, value } = await reader.read();
           if (done) break;
           writer.write(value);
         }
       } finally {
         reader.releaseLock();
+        writer.close();
       }
     },
   });
🤖 Prompt for AI Agents
In lib/client-stream-to-ai-sdk.ts around lines 96 to 121, the fallback streaming
loop writes chunks to the UI writer but never closes it, leaving the client
stream open; update the finally block to ensure the writer is closed after
streaming (call writer.close()) before releasing the reader lock so the
UIMessageStream always terminates, and still release the reader lock in all
cases.

Comment on lines +13 to +14
export { createAgentStreamResponse } from "./client-stream-to-ai-sdk";
export type { StreamToAISdkOptions } from "./client-stream-to-ai-sdk";
🧹 Nitpick | 🔵 Trivial

Separate server-only streaming helper from client SDK exports

This module now exposes both the browser-oriented mastraClient and the server-only createAgentStreamResponse. That can blur the boundary and make it easier for client code to accidentally import a server helper, potentially bloating bundles or causing environment confusion. Consider moving the server helper/type re-exports to a dedicated server module (e.g. lib/mastra-server) or clearly documenting this file as server-only.

🤖 Prompt for AI Agents
In lib/mastra-client.ts around lines 13 to 14, the file re-exports a server-only
helper (createAgentStreamResponse and its type) alongside browser-oriented
client APIs, which risks accidental client-side imports; move those server-only
exports into a new dedicated module (e.g., lib/mastra-server.ts) and update
import sites to use that module, or alternatively remove the server-only
re-exports from lib/mastra-client.ts and add a clear file-level comment stating
this file is browser-only and should not expose server helpers.
