
feat: add runtime memory envelope for per-call memory overrides#1136

Merged
omeraplak merged 6 commits into main from feat/runtime-memory-envelope
Mar 6, 2026
Conversation

@omeraplak omeraplak (Member) commented Mar 6, 2026

PR Checklist

Please check if your PR fulfills the following requirements:

Bugs / Features

What is the current behavior?

Memory identity and runtime memory behavior are passed through legacy top-level fields (options.conversationId, options.userId, options.contextLimit, options.semanticMemory, options.conversationPersistence).

This makes per-call memory overrides less structured and causes inconsistencies across layers (core runtime, server request schema, and resumable stream key resolution).

What is the new behavior?

Introduces a unified runtime memory envelope under options.memory while preserving backward compatibility with legacy top-level fields.

  • Preferred per-call identity:
    • options.memory.conversationId (thread)
    • options.memory.userId (resource)
  • Preferred per-call memory behavior overrides:
    • options.memory.options.contextLimit
    • options.memory.options.semanticMemory
    • options.memory.options.conversationPersistence
  • Legacy top-level fields remain supported but are marked deprecated in types/docs.
  • When both are provided, options.memory values take precedence.
  • @voltagent/server-core schemas and handlers now accept/read options.memory.
  • @voltagent/resumable-streams default resolvers now read options.memory.{conversationId,userId} first.
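Concretely, the bullets above translate into a per-call options shape like the following. This is a sketch of the envelope only; the nested field set comes from this PR's docs, but treat the exact call site as illustrative rather than the definitive SDK signature.

```typescript
// Illustrative per-call options shape for the runtime memory envelope.
const callOptions = {
  memory: {
    conversationId: "conv-1", // thread identity
    userId: "user-1", // resource identity
    options: {
      contextLimit: 12,
      semanticMemory: { enabled: true, semanticLimit: 4 },
      conversationPersistence: { mode: "step", debounceMs: 150 },
    },
  },
  // Legacy top-level fields are still accepted but deprecated; when both
  // shapes are present, options.memory wins.
  conversationId: "conv-legacy",
  userId: "user-legacy",
};

console.log(callOptions.memory.conversationId); // "conv-1"
```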

fixes (issue): N/A

Notes for reviewers

  • Added coverage for memory envelope precedence in @voltagent/core agent tests.
  • Added server-core tests for schema parsing and resumable stream identity resolution from options.memory.
  • Validation run:
    • pnpm --filter @voltagent/core test
    • pnpm --filter @voltagent/server-core test
    • pnpm --filter @voltagent/server-core typecheck
    • pnpm --filter @voltagent/resumable-streams typecheck

Summary by cubic

Adds a per-call runtime memory envelope under options.memory and resolves it across the agent, server, and resumable streams. The resolved memory is stored on the operation context, enables user-scoped working memory when no conversationId is set, and falls back safely to legacy fields.

  • New Features

    • Added resolveMemoryRuntimeOptions and stored resolvedMemory on the operation context; envelope values take precedence with parent context fallback.
    • Agent uses resolved memory for contextLimit, semantic search, working memory tools (fallback to user scope when no conversationId), conversation persistence, delegate tool identity, and passes it to agent-as-tool calls.
    • Server schemas accept options.memory with nested overrides; handlers, agent runtime resolution, and resumable streams prefer memory.userId/conversationId, ignore blank values, then fall back to legacy fields.
    • Expanded tests for precedence, parent fallback, schema validation, stream identity resolution, user-scoped working memory, and blank-id handling.
  • Migration

    • Prefer options.memory.* for SDK calls and API requests; legacy top-level fields remain supported but are deprecated.
    • For resumable streams, provide userId and conversationId via options.memory.
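The precedence and blank-value rules summarized above can be sketched as a small resolver. This is illustrative only: it mirrors the behavior attributed to resolveMemoryRuntimeOptions (envelope first, blank values ignored, legacy fields as fallback) but is not the library's actual implementation, and the type names here are hypothetical.

```typescript
interface MemoryEnvelope {
  userId?: string;
  conversationId?: string;
  options?: { contextLimit?: number };
}

interface CallOptions {
  memory?: MemoryEnvelope;
  /** @deprecated use memory.userId */
  userId?: string;
  /** @deprecated use memory.conversationId */
  conversationId?: string;
  /** @deprecated use memory.options.contextLimit */
  contextLimit?: number;
}

// Blank or whitespace-only envelope values must not shadow a valid legacy
// value, per the blank-id handling noted in the summary.
const nonBlank = (value?: string): string | undefined =>
  typeof value === "string" && value.trim().length > 0 ? value : undefined;

function resolveMemoryOptions(options: CallOptions) {
  return {
    userId: nonBlank(options.memory?.userId) ?? nonBlank(options.userId),
    conversationId:
      nonBlank(options.memory?.conversationId) ?? nonBlank(options.conversationId),
    contextLimit: options.memory?.options?.contextLimit ?? options.contextLimit,
  };
}
```

Given `{ userId: "legacy-user", memory: { userId: "env-user", conversationId: "   " }, conversationId: "legacy-conv" }`, this resolves userId from the envelope but falls back to the legacy conversationId because the envelope value is blank.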

Written for commit f4fe8a6. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • Added a per-call memory envelope (options.memory) with userId, conversationId and nested memory.options for contextLimit, semanticMemory, and conversationPersistence; envelope takes precedence when provided.
  • Deprecations

    • Legacy top-level fields (userId, conversationId, contextLimit, semanticMemory, conversationPersistence) marked deprecated but kept as fallbacks.
  • Tests

    • Added tests covering schema parsing, precedence resolution, resumable-stream keys, and memory-envelope behavior in generation/tool flows.
  • Documentation

    • Updated docs, examples, and API reference to use the memory envelope and nested settings.
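For HTTP callers, the envelope rides inside the request body's options field. The input/options wrapper below follows the handler tests in this PR; the endpoint path in the comment is a placeholder, not a documented route.

```typescript
// Sketch of a request body using the memory envelope.
const requestBody = JSON.stringify({
  input: "Summarize our last conversation",
  options: {
    memory: {
      userId: "user-1",
      conversationId: "conv-1",
      options: { contextLimit: 10 },
    },
  },
});

// e.g. POST to an agent endpoint (path is a placeholder):
// fetch("/agents/agent-1/chat", { method: "POST", body: requestBody, ... })
console.log(JSON.parse(requestBody).options.memory.userId); // "user-1"
```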

@changeset-bot changeset-bot bot commented Mar 6, 2026

🦋 Changeset detected

Latest commit: f4fe8a6

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 3 packages:

| Name | Type |
| --- | --- |
| @voltagent/core | Patch |
| @voltagent/server-core | Patch |
| @voltagent/resumable-streams | Patch |



@coderabbitai coderabbitai bot (Contributor) commented Mar 6, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a per-call runtime memory envelope (options.memory) that carries conversationId, userId, and memory behavior overrides (memory.options such as contextLimit, semanticMemory, conversationPersistence). Legacy top-level fields remain supported but are deprecated and used as fallbacks.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Types & Public API<br>`packages/core/src/agent/types.ts`, `packages/core/src/agent/agent.ts` | Add runtime memory types (`RuntimeMemoryEnvelope`, `RuntimeMemoryBehaviorOptions`, `SemanticMemoryOptions`, resolved memory types). Add `memory?: RuntimeMemoryEnvelope` to public options and mark legacy top-level memory fields deprecated. |
| Agent Core & Tests<br>`packages/core/src/agent/agent.ts`, `packages/core/src/agent/agent.spec.ts`, `packages/core/src/agent/agent.spec-d.ts` | Introduce `resolveMemoryRuntimeOptions()` to produce `resolvedMemory`; replace direct top-level memory field usage across generation, tooling, memory reads/writes, observability, and operation context. Add tests validating envelope precedence, propagation, fallbacks, and tool identity resolution. |
| Resumable Streams<br>`packages/resumable-streams/src/chat-handlers.ts` | `defaultResolveConversationId` and `defaultResolveUserId` now prefer `options.memory` values, falling back to legacy fields. |
| Server-Core Schemas, Handlers & Utils<br>`packages/server-core/src/schemas/agent.schemas.ts`, `packages/server-core/src/schemas/agent.schemas.spec.ts`, `packages/server-core/src/handlers/agent.handlers.ts`, `packages/server-core/src/handlers/agent.handlers.spec.ts`, `packages/server-core/src/utils/options.ts` | Add `memory` to `GenerateOptionsSchema` and related processed options. Mark top-level `userId`/`conversationId`/`contextLimit`/`semanticMemory`/`conversationPersistence` as deprecated; update parsing/validation and handler resolution to prefer the envelope, with tests for resumable streams. |
| Docs & Examples<br>`.changeset/runtime-memory-envelope.md`, `website/docs/agents/*.md`, `website/docs/agents/memory/*.md`, `website/docs/api/*.md`, `website/docs/ui/*.md` | Update examples, API docs, UI examples, and guides to use `options.memory` and `memory.options.*` instead of top-level `userId`, `conversationId`, and `contextLimit`. Document deprecations and precedence rules. |
| Server Handlers Tests<br>`packages/server-core/src/handlers/agent.handlers.spec.ts` | Add tests ensuring `handleChatStream` resolves IDs from `options.memory` for resumable streams and falls back to legacy fields when blank. |
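The resumable-streams resolver change can be sketched as follows. `defaultResolveUserId` is named in this PR, but its exact signature and the `ResolverOptions` type here are assumptions for illustration.

```typescript
interface ResolverOptions {
  memory?: { userId?: string; conversationId?: string };
  /** @deprecated use memory.userId */
  userId?: string;
  /** @deprecated use memory.conversationId */
  conversationId?: string;
}

// Prefer the envelope value; ignore blank strings so they cannot override a
// valid legacy field, then fall back to the deprecated top-level field.
function defaultResolveUserId(options: ResolverOptions): string | undefined {
  const envelopeValue = options.memory?.userId;
  if (typeof envelopeValue === "string" && envelopeValue.trim().length > 0) {
    return envelopeValue;
  }
  return options.userId;
}

console.log(defaultResolveUserId({ memory: { userId: "user-1" }, userId: "legacy" })); // "user-1"
```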

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller as Caller
    participant Agent as Agent
    participant Resolver as MemoryResolver
    participant Store as MemoryService

    Caller->>Agent: generateText(input, options)
    Note over Caller,Agent: options may include both memory envelope<br/>and legacy top-level fields

    Agent->>Resolver: resolveMemoryRuntimeOptions(options)
    Note over Resolver: read options.memory first,<br/>then fall back to top-level fields

    Resolver-->>Agent: resolved { userId, conversationId, contextLimit, semanticMemory, conversationPersistence }

    Agent->>Store: getMessages({ userId: resolved.userId, conversationId: resolved.conversationId, limit: resolved.contextLimit })
    Store-->>Agent: messages
    Agent-->>Caller: response
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇 I wrapped IDs in a tiny sleeve so new,
Each call finds its pocket, tidy and true.
Old keys still jingle if you keep them near,
New envelopes whisper which path to steer.
Hop, hop — memory snug, every call clear.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 25.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The PR title "feat: add runtime memory envelope for per-call memory overrides" clearly and concisely describes the main change: introducing a new memory envelope structure for per-call overrides. |
| Description check | ✅ Passed | PR description comprehensively covers current behavior, new behavior with detailed feature breakdown, and thorough testing/validation notes. |



@cubic-dev-ai cubic-dev-ai bot (Contributor) left a comment


2 issues found across 22 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="website/docs/ui/ai-sdk-integration.md">

<violation number="1" location="website/docs/ui/ai-sdk-integration.md:363">
P2: The deprecated-options table is incomplete: it omits legacy top-level `conversationPersistence` and `semanticMemory`, so users may incorrectly assume those backward-compatible fields are unsupported instead of deprecated.</violation>
</file>

<file name="packages/server-core/src/handlers/agent.handlers.ts">

<violation number="1" location="packages/server-core/src/handlers/agent.handlers.ts:251">
P2: `memory.conversationId` precedence accepts empty strings, so an empty memory value can override a valid legacy `conversationId` and incorrectly break resumable stream requests.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@cloudflare-workers-and-pages cloudflare-workers-and-pages bot commented Mar 6, 2026

Deploying voltagent with Cloudflare Pages

Latest commit: f4fe8a6
Status: ✅  Deploy successful!
Preview URL: https://680f03d4.voltagent.pages.dev
Branch Preview URL: https://feat-runtime-memory-envelope.voltagent.pages.dev


@coderabbitai coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
website/docs/api/endpoints/agents.md (1)

184-211: ⚠️ Potential issue | 🟡 Minor

Document the remaining deprecated alias and the precedence rule.

This table covers deprecated top-level userId, conversationId, contextLimit, and conversationPersistence.*, but it still omits top-level semanticMemory even though the runtime types still accept it. It also never states that options.memory wins when both shapes are sent, which is the behavior implemented in resolveMemoryRuntimeOptions().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/api/endpoints/agents.md` around lines 184 - 211, Add the missing
deprecated top-level semanticMemory alias to the table (document name
`semanticMemory` as deprecated and point to using
`memory.options.semanticMemory` instead) and add a clear precedence note that
when both top-level options and the `memory` envelope are provided, the `memory`
envelope (resolved by resolveMemoryRuntimeOptions()) takes precedence; update
the table rows for the deprecated `semanticMemory` alias and add one-line text
referencing resolveMemoryRuntimeOptions() to state that `memory` wins over
top-level fields.
packages/core/src/agent/agent.ts (2)

4551-4647: ⚠️ Potential issue | 🟠 Major

Resolve working-memory state after the conversation ID is finalized.

This code reads working-memory instructions/content from resolvedMemory.conversationId, but prepareConversationContext() can still replace oc.conversationId on Line 4414. If the memory layer canonicalizes or creates a new thread ID, the system prompt is built from one conversation while the rest of the request reads/writes another.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.ts` around lines 4551 - 4647, The
working-memory reads use resolvedMemory.conversationId before the conversation
ID is finalized by prepareConversationContext(), causing a mismatch; move the
working-memory logic so it runs after prepareConversationContext() (or
re-resolve memory using this.resolveMemoryRuntimeOptions/oc after
prepareConversationContext()), and then call memoryManager.getMemory(),
memory.getWorkingMemoryInstructions(...) and memory.getWorkingMemory(...) using
the finalized conversationId (ensure checks in hasWorkingMemorySupport() and use
the same resolvedMemory variable or a newly-resolved one to avoid divergent
IDs).

3516-3626: ⚠️ Potential issue | 🟠 Major

Persist the full resolved memory envelope on the operation context.

Only userId and conversationId survive context creation here. Later paths like prepareTools() and toTool() therefore have no way to forward contextLimit, semanticMemory, or conversationPersistence, so tool/sub-agent calls silently ignore the new per-call behavior overrides.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.ts` around lines 3516 - 3626, The operation
context being returned from the setup in agent.ts currently only attaches userId
and conversationId from resolveMemoryRuntimeOptions; persist the entire resolved
memory envelope by adding a resolvedMemory (or memoryOptions) field to the
returned operation context (use the existing resolvedMemory variable from
resolveMemoryRuntimeOptions) so downstream functions like prepareTools() and
toTool() can read context.operationContext.resolvedMemory and honor
contextLimit, semanticMemory, and conversationPersistence overrides; update any
places that expect memory values on options.parentOperationContext to read this
new resolvedMemory property.
🧹 Nitpick comments (2)
packages/server-core/src/handlers/agent.handlers.spec.ts (1)

61-115: Exercise precedence here, not just envelope support.

This test only passes options.memory, so it would still pass if the handler accidentally preferred the legacy top-level IDs whenever both shapes are present. Add conflicting legacy conversationId/userId values and keep the existing expectation so this becomes a real precedence regression test.

Suggested test hardening
     const res = await handleChatStream(
       "agent-1",
       {
         input: "hello",
         options: {
           resumableStream: true,
+          conversationId: "legacy-conv",
+          userId: "legacy-user",
           memory: {
             conversationId: "conv-1",
             userId: "user-1",
           },
         },
       },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server-core/src/handlers/agent.handlers.spec.ts` around lines 61 -
115, The test currently only passes options.memory so it won't catch regressions
where legacy top-level conversationId/userId are incorrectly preferred; update
the test invoking handleChatStream to also include top-level conversationId and
userId with different values (e.g. conversationId: "legacy-conv", userId:
"legacy-user") while keeping options.memory as
{"conversationId":"conv-1","userId":"user-1"}, then assert that streamText and
deps.resumableStream.clearActiveStream are called with the IDs from
options.memory (ensuring handleChatStream prefers options.memory over legacy
top-level IDs); references: handleChatStream, options.memory, streamText,
deps.resumableStream.clearActiveStream.
packages/server-core/src/schemas/agent.schemas.spec.ts (1)

50-91: These tests don't prove the nested envelope is actually validated.

Because GenerateOptionsSchema is .passthrough(), a happy-path parse() here can still succeed if memory stops being part of the schema and is merely tolerated as an unknown property. Add at least one negative case with invalid nested memory.options types/enum values and assert that parsing fails.

Suggested coverage addition
 describe("GenerateOptionsSchema", () => {
@@
   it("keeps legacy top-level memory fields for backward compatibility", () => {
@@
     expect(() => GenerateOptionsSchema.parse(payload)).not.toThrow();
   });
+
+  it("rejects invalid nested memory overrides", () => {
+    expect(() =>
+      GenerateOptionsSchema.parse({
+        memory: {
+          userId: "user-1",
+          conversationId: "conv-1",
+          options: {
+            contextLimit: "12",
+            conversationPersistence: { mode: "bogus" },
+          },
+        },
+      }),
+    ).toThrow();
+  });
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server-core/src/schemas/agent.schemas.spec.ts` around lines 50 - 91,
Add a negative test to ensure nested memory envelope validation actually runs:
update the suite around GenerateOptionsSchema to include a test that passes an
invalid payload with memory.options containing wrong types/invalid enum values
(e.g., contextLimit as a string or semanticMemory.mergeStrategy set to an
invalid string) and assert that GenerateOptionsSchema.parse(payload) throws;
reference the existing test block names and the GenerateOptionsSchema symbol so
the new test lives alongside the current "accepts options.memory envelope..."
and "keeps legacy..." cases to prove the nested validation rejects malformed
memory.options.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/core/src/agent/types.ts`:
- Around line 965-989: The public options type CommonGenerateOptions must
re-declare the deprecated top-level overrides that the runtime still reads: add
back deprecated properties semanticMemory and conversationPersistence to
CommonGenerateOptions (with the same types the runtime expects), mark them with
deprecation JSDoc/comments, and keep them optional so legacy callers compile;
this restores the fallback path used in the agent resolution code (see agent.ts
handling of options.semanticMemory and options.conversationPersistence) without
changing runtime logic.

In `@packages/server-core/src/handlers/agent.handlers.ts`:
- Around line 249-264: The code accepts whitespace-only memory.conversationId
but trims memory.userId; update the memoryConversationId determination so it
rejects blank/whitespace-only strings like memoryUserId does: change the check
that sets memoryConversationId (based on memory and memory.conversationId) to
verify typeof memory.conversationId === "string" &&
memory.conversationId.trim().length > 0, leaving memoryUserId, conversationId,
and userId logic unchanged so the conversationId fallback still uses
memoryConversationId when valid.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bcd64c8f-36d8-4050-ba6f-c243b04ddc73

📥 Commits

Reviewing files that changed from the base of the PR and between 549fc45 and 458763b.

📒 Files selected for processing (22)
  • .changeset/runtime-memory-envelope.md
  • packages/core/src/agent/agent.spec-d.ts
  • packages/core/src/agent/agent.spec.ts
  • packages/core/src/agent/agent.ts
  • packages/core/src/agent/types.ts
  • packages/resumable-streams/src/chat-handlers.ts
  • packages/server-core/src/handlers/agent.handlers.spec.ts
  • packages/server-core/src/handlers/agent.handlers.ts
  • packages/server-core/src/schemas/agent.schemas.spec.ts
  • packages/server-core/src/schemas/agent.schemas.ts
  • packages/server-core/src/utils/options.ts
  • website/docs/agents/memory.md
  • website/docs/agents/memory/in-memory.md
  • website/docs/agents/memory/overview.md
  • website/docs/agents/memory/semantic-search.md
  • website/docs/agents/memory/working-memory.md
  • website/docs/agents/overview.md
  • website/docs/agents/voltagent-instance.md
  • website/docs/api/api-reference.md
  • website/docs/api/endpoints/agents.md
  • website/docs/ui/ai-sdk-integration.md
  • website/docs/ui/assistant-ui.md

@coderabbitai coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.changeset/runtime-memory-envelope.md:
- Around line 2-4: The changeset currently marks three packages
("@voltagent/core", "@voltagent/server-core", "@voltagent/resumable-streams") as
"patch" but it introduces a new public configuration option (options.memory), so
update the changeset to use "minor" instead of "patch" for those three package
entries; open the .changeset/runtime-memory-envelope.md content and replace each
"patch" value with "minor" so the semver bump correctly reflects a
backward-compatible feature addition.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 95794a4d-dbc3-430f-a215-5220e330fb6a

📥 Commits

Reviewing files that changed from the base of the PR and between 458763b and f4f6082.

📒 Files selected for processing (1)
  • .changeset/runtime-memory-envelope.md

Comment on lines +2 to +4:

```yaml
"@voltagent/core": patch
"@voltagent/server-core": patch
"@voltagent/resumable-streams": patch
```

⚠️ Potential issue | 🟡 Minor

Use a minor bump for the new options.memory API.

This changeset adds a new supported public configuration shape across all three packages. That is backward-compatible feature work, so shipping it as patch understates the semver impact.

Suggested change
-"@voltagent/core": patch
-"@voltagent/server-core": patch
-"@voltagent/resumable-streams": patch
+"@voltagent/core": minor
+"@voltagent/server-core": minor
+"@voltagent/resumable-streams": minor

@coderabbitai coderabbitai bot (Contributor) left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
website/docs/api/endpoints/agents.md (1)

193-216: ⚠️ Potential issue | 🟡 Minor

Document the conversationPersistence object itself, not just its subfields.

The table lists memory.options.conversationPersistence.mode|debounceMs|flushOnToolResult and the deprecated top-level subfields, but it never documents the parent objects memory.options.conversationPersistence and deprecated conversationPersistence. That leaves the request shape incomplete compared with how semanticMemory is documented and with the API described in the PR.

📘 Suggested table rows
 | `memory.options.semanticMemory.mergeStrategy` | string | `"append"` | `"prepend"` or `"append"` or `"interleave"` |
+| `memory.options.conversationPersistence` | object | - | Per-call conversation persistence overrides |
 | `memory.options.conversationPersistence.mode` | string | `"step"` | Persistence strategy: `"step"` or `"finish"` |
 | `memory.options.conversationPersistence.debounceMs` | number | `200` | Debounce interval for step checkpoint persistence |
 | `memory.options.conversationPersistence.flushOnToolResult` | boolean | `true` | Flush immediately on `tool-result`/`tool-error` in step mode |
 | `userId` | string | - | Deprecated: use `memory.userId` |
 | `conversationId` | string | - | Deprecated: use `memory.conversationId` |
 | `contextLimit` | number | 10 | Deprecated: use `memory.options.contextLimit` |
 | `semanticMemory` | object | - | Deprecated: use `memory.options.semanticMemory` |
 | `semanticMemory.enabled` | boolean | auto | Deprecated: use `memory.options.semanticMemory.enabled` |
 | `semanticMemory.semanticLimit` | number | 5 | Deprecated: use `memory.options.semanticMemory.semanticLimit` |
 | `semanticMemory.semanticThreshold` | number | 0.7 | Deprecated: use `memory.options.semanticMemory.semanticThreshold` |
 | `semanticMemory.mergeStrategy` | string | `"append"` | Deprecated: use `memory.options.semanticMemory.mergeStrategy` |
+| `conversationPersistence` | object | - | Deprecated: use `memory.options.conversationPersistence` |
 | `conversationPersistence.mode` | string | `"step"` | Deprecated: use `memory.options.conversationPersistence.mode` |
 | `conversationPersistence.debounceMs` | number | `200` | Deprecated: use `memory.options.conversationPersistence.debounceMs` |
 | `conversationPersistence.flushOnToolResult` | boolean | `true` | Deprecated: use `memory.options.conversationPersistence.flushOnToolResult` |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/api/endpoints/agents.md` around lines 193 - 216, Add explicit
table rows documenting the parent objects memory.options.conversationPersistence
and the deprecated top-level conversationPersistence (in addition to the
existing subfield rows). For each parent object include the type (object),
default/placeholder, and a short description that it groups the subfields mode
(string), debounceMs (number), and flushOnToolResult (boolean); also mark the
top-level conversationPersistence as deprecated and point readers to
memory.options.conversationPersistence. Update any surrounding wording to mirror
how semanticMemory’s parent object is documented so the request shape is
complete.
🧹 Nitpick comments (2)
packages/server-core/src/schemas/agent.schemas.spec.ts (1)

51-74: Consider asserting parsed output values.

The test verifies the schema accepts valid input but doesn't assert the parsed structure. For consistency with the existing WorkflowExecutionRequestSchema tests (lines 19-24, 41-46), consider capturing the parsed result and asserting expected values.

✨ Optional improvement
   it("accepts options.memory envelope with nested runtime overrides", () => {
     const payload = {
       memory: {
         userId: "user-1",
         conversationId: "conv-1",
         options: {
           contextLimit: 12,
           semanticMemory: {
             enabled: true,
             semanticLimit: 4,
             semanticThreshold: 0.8,
             mergeStrategy: "append",
           },
           conversationPersistence: {
             mode: "step",
             debounceMs: 150,
             flushOnToolResult: true,
           },
         },
       },
     };

-    expect(() => GenerateOptionsSchema.parse(payload)).not.toThrow();
+    const parsed = GenerateOptionsSchema.parse(payload);
+
+    expect(parsed.memory?.userId).toBe("user-1");
+    expect(parsed.memory?.conversationId).toBe("conv-1");
+    expect(parsed.memory?.options?.contextLimit).toBe(12);
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server-core/src/schemas/agent.schemas.spec.ts` around lines 51 - 74,
Test currently only checks that GenerateOptionsSchema.parse(payload) does not
throw; update the spec to capture the parsed result and add assertions on key
fields to mirror existing tests: call const result =
GenerateOptionsSchema.parse(payload) and assert result.memory.userId ===
"user-1", result.memory.conversationId === "conv-1",
result.memory.options.contextLimit === 12,
result.memory.options.semanticMemory.enabled === true (and optionally
semanticLimit/semanticThreshold/mergeStrategy and
conversationPersistence.mode/debounceMs/flushOnToolResult) so the test verifies
the parsed structure as well as acceptance.
packages/core/src/agent/agent.ts (1)

675-692: Prefer re-exporting the shared memory-envelope types from types.ts.

These shapes already exist in packages/core/src/agent/types.ts, and duplicating them here creates a second contract that has to stay in sync with server-core schemas and CommonResolvedRuntimeMemoryOptions. Re-exporting aliases from the shared source would remove that drift risk.

As per coding guidelines, "Maintain type safety in TypeScript-first codebase".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.ts` around lines 675 - 692, These duplicated
interfaces (SemanticMemoryOptions, RuntimeMemoryBehaviorOptions,
RuntimeMemoryEnvelope) should be removed here and re-exported from the shared
types module instead: import and re-export the corresponding types from
packages/core/src/agent/types.ts (or the shared barrel) so the file no longer
declares these shapes locally; update any local references in this file (e.g.,
usages in functions/classes that currently refer to SemanticMemoryOptions,
RuntimeMemoryBehaviorOptions, RuntimeMemoryEnvelope) to use the re-exported
types to keep a single source-of-truth and preserve type compatibility with
CommonResolvedRuntimeMemoryOptions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/core/src/agent/agent.spec.ts`:
- Around line 1977-2025: The test only asserts operationContext.resolvedMemory
but must also assert that the top-level identity fields were updated; after
calling (agent as any).createOperationContext(...) add expectations that
operationContext.userId and operationContext.conversationId equal "memory-user"
and "memory-conv" respectively (i.e., the values from the memory envelope), so
hooks/logs/eval payloads will see the same resolved identity as resolvedMemory;
locate the assertion block in the "should store resolved memory envelope on
operation context" spec around the createOperationContext call and add these two
expects.
- Around line 1922-1975: The test currently only verifies a matching memory
lookup but allows extra calls; update the assertion to enforce precedence by
checking the spy call count and forbidding legacy lookups: after calling
Agent.generateText (the Agent constructor using memory and the
Memory.getMessages spy via getMessagesSpy), assert
getMessagesSpy.mock.calls.length === 1 (or the exact expected number) and also
assert that no call in getMessagesSpy.mock.calls has userId === "legacy-user" &&
conversationId === "legacy-conv", ensuring only the envelope IDs
("memory-user","memory-conv") were used.

In `@packages/core/src/agent/agent.ts`:
- Around line 4284-4294: The current short-circuit uses canIUseMemory
(resolvedMemory.userId && resolvedMemory.conversationId) which prevents loading
memory when only userId exists; change the gating so memory hydration can run
when resolvedMemory.userId is present (do not require conversationId) and keep
the existing logic that determines useSemanticSearch
(resolvedMemory.semanticMemory?.enabled ?? this.hasSemanticSearchSupport()) and
populates memoryContextMessages; ensure downstream code that relies on
result.conversationId and the semantic-search fallback still runs and can
set/create the conversationId after loading memory (update any branches using
canIUseMemory to instead check for userId or for a post-load conversationId as
appropriate).
- Around line 3471-3495: resolveMemoryRuntimeOptions currently only reads from
the initial BaseGenerationOptions and parent context, so it returns
pre-hydration ids; change resolveMemoryRuntimeOptions to accept the live
operation context (oc) or an optional resolved operationContext param and prefer
oc.resolvedMemory / oc.conversationId (falling back to options where needed) so
the function returns the post-hydration userId/conversationId/context values;
update call sites (e.g., where prepareMessages is invoked) to pass the live oc
after prepareMessages completes and ensure callers like prepareTools and
createWorkingMemoryTools use the updated signature so subagent delegation and
working-memory bindings use the hydrated ids.

In `@website/docs/ui/ai-sdk-integration.md`:
- Around line 264-267: The current implementation leaves memory.conversationId
stable so calling setMessages([]) only clears client state but the server will
continue the same persisted thread; update the chat-clear logic to
rotate/regenerate memory.conversationId (e.g., create a new UUID) when the user
hits “Clear” (where you call setMessages([])) so subsequent sends start a new
conversation, or alternatively add a clear comment in the docs stating that
Clear is UI-only; modify the Clear handler that calls setMessages([]) to also
update the conversationId used in the memory object sent with messages (the
memory.conversationId value).

---

Outside diff comments:
In `@website/docs/api/endpoints/agents.md`:
- Around line 193-216: Add explicit table rows documenting the parent objects
memory.options.conversationPersistence and the deprecated top-level
conversationPersistence (in addition to the existing subfield rows). For each
parent object include the type (object), default/placeholder, and a short
description that it groups the subfields mode (string), debounceMs (number), and
flushOnToolResult (boolean); also mark the top-level conversationPersistence as
deprecated and point readers to memory.options.conversationPersistence. Update
any surrounding wording to mirror how semanticMemory’s parent object is
documented so the request shape is complete.

---

Nitpick comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 675-692: These duplicated interfaces (SemanticMemoryOptions,
RuntimeMemoryBehaviorOptions, RuntimeMemoryEnvelope) should be removed here and
re-exported from the shared types module instead: import and re-export the
corresponding types from packages/core/src/agent/types.ts (or the shared barrel)
so the file no longer declares these shapes locally; update any local references
in this file (e.g., usages in functions/classes that currently refer to
SemanticMemoryOptions, RuntimeMemoryBehaviorOptions, RuntimeMemoryEnvelope) to
use the re-exported types to keep a single source-of-truth and preserve type
compatibility with CommonResolvedRuntimeMemoryOptions.

In `@packages/server-core/src/schemas/agent.schemas.spec.ts`:
- Around line 51-74: Test currently only checks that
GenerateOptionsSchema.parse(payload) does not throw; update the spec to capture
the parsed result and add assertions on key fields to mirror existing tests:
call const result = GenerateOptionsSchema.parse(payload) and assert
result.memory.userId === "user-1", result.memory.conversationId === "conv-1",
result.memory.options.contextLimit === 12,
result.memory.options.semanticMemory.enabled === true (and optionally
semanticLimit/semanticThreshold/mergeStrategy and
conversationPersistence.mode/debounceMs/flushOnToolResult) so the test verifies
the parsed structure as well as acceptance.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c9ad780d-60dc-4d35-a1bc-0c929a2645ac

📥 Commits

Reviewing files that changed from the base of the PR and between f4f6082 and d73138e.

📒 Files selected for processing (9)
  • packages/core/src/agent/agent.spec-d.ts
  • packages/core/src/agent/agent.spec.ts
  • packages/core/src/agent/agent.ts
  • packages/core/src/agent/types.ts
  • packages/server-core/src/handlers/agent.handlers.spec.ts
  • packages/server-core/src/handlers/agent.handlers.ts
  • packages/server-core/src/schemas/agent.schemas.spec.ts
  • website/docs/api/endpoints/agents.md
  • website/docs/ui/ai-sdk-integration.md
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/server-core/src/handlers/agent.handlers.ts
  • packages/core/src/agent/agent.spec-d.ts
  • packages/server-core/src/handlers/agent.handlers.spec.ts

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/core/src/agent/agent.ts (1)

7928-8038: ⚠️ Potential issue | 🟠 Major

Don't expose working-memory tools without any resolved identity.

createWorkingMemoryTools() adds read/write tools even when both conversationId and userId are absent. Any tool call from that state forwards two undefined identifiers into the memory adapter, which makes the target scope ambiguous.

Suggested fix
   private createWorkingMemoryTools(
     options?: BaseGenerationOptions,
     operationContext?: OperationContext,
   ): Tool<any, any>[] {
     if (!this.hasWorkingMemorySupport()) {
       return [];
     }
     const resolvedMemory = this.resolveMemoryRuntimeOptions(options, operationContext);
+    if (!resolvedMemory.conversationId && !resolvedMemory.userId) {
+      return [];
+    }

     const memoryManager = this.memoryManager as unknown as MemoryManager;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.ts` around lines 7928 - 8038, The
createWorkingMemoryTools function currently creates get/update/clear tools even
when resolveMemoryRuntimeOptions returns no identity; change it to early-return
an empty array when neither resolvedMemory.conversationId nor
resolvedMemory.userId is present so no tool can be invoked with ambiguous
undefined identifiers; locate createWorkingMemoryTools and the resolvedMemory
variable (from resolveMemoryRuntimeOptions) and add a guard that checks
resolvedMemory.conversationId || resolvedMemory.userId and returns [] if both
are falsy before any createTool calls.
♻️ Duplicate comments (1)
packages/core/src/agent/agent.spec.ts (1)

3343-3348: ⚠️ Potential issue | 🟡 Minor

Assert delegate-tool precedence, not just a matching call.

This still passes if createDelegateTool is invoked once with legacy IDs and once with envelope IDs. Add a call-count check, or explicitly reject "legacy-user" / "legacy-conv", so the test enforces precedence instead of just one matching invocation.

Suggested update
-      expect(mockCreateDelegateTool).toHaveBeenCalledWith(
+      expect(mockCreateDelegateTool).toHaveBeenCalledTimes(1);
+      expect(mockCreateDelegateTool).toHaveBeenCalledWith(
         expect.objectContaining({
           conversationId: "memory-conv",
           userId: "memory-user",
         }),
       );
+      expect(
+        mockCreateDelegateTool.mock.calls.some(
+          ([args]) => args?.conversationId === "legacy-conv" || args?.userId === "legacy-user",
+        ),
+      ).toBe(false);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.spec.ts` around lines 3343 - 3348, The test
currently only asserts that mockCreateDelegateTool was called with envelope IDs
but doesn't prevent a prior call with legacy IDs; update the assertion to
enforce precedence by either (a) checking the exact number of calls to
mockCreateDelegateTool (e.g.,
expect(mockCreateDelegateTool).toHaveBeenCalledTimes(1)) after the operation
that should use envelope IDs, or (b) add an explicit negative assertion that
mockCreateDelegateTool was not called with legacy IDs (e.g.,
expect(mockCreateDelegateTool).not.toHaveBeenCalledWith(expect.objectContaining({
conversationId: "legacy-conv", userId: "legacy-user" }))); apply this change
around the existing expect(mockCreateDelegateTool).toHaveBeenCalledWith(...) in
the test to ensure createDelegateTool precedence is enforced.
🧹 Nitpick comments (1)
website/docs/api/endpoints/agents.md (1)

184-218: Consider adding a migration example.

While the deprecation notes are clear, users would benefit from seeing a concrete before/after example showing how to migrate from legacy fields to the new memory envelope. This could be a small code snippet showing both approaches side-by-side.

📝 Suggested migration example

Consider adding a section like this after line 220:

**Migration Example:**

```json
// ❌ Legacy (deprecated but still supported)
{
  "input": "Hello",
  "options": {
    "userId": "user-123",
    "conversationId": "conv-456",
    "contextLimit": 10
  }
}

// ✅ Preferred (new memory envelope)
{
  "input": "Hello",
  "options": {
    "memory": {
      "userId": "user-123",
      "conversationId": "conv-456",
      "options": {
        "contextLimit": 10
      }
    }
  }
}

```

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @website/docs/api/endpoints/agents.md around lines 184 - 218, Add a short
"Migration Example" section showing a before/after conversion from the
deprecated fields (userId, conversationId, contextLimit, semanticMemory,
conversationPersistence, etc.) to the new memory envelope (memory and
memory.options.*); include a concise side-by-side JSON example demonstrating the
legacy request using userId/conversationId/contextLimit and the preferred
request using memory.userId, memory.conversationId, and
memory.options.contextLimit (place this example immediately after the table that
documents the memory fields so readers see it right after the memory/options
rows).


</details>


<details>
<summary>🤖 Prompt for all review comments with AI agents</summary>

Verify each finding against the current code and only fix it if needed.

Inline comments:
In @packages/core/src/agent/agent.ts:

  • Around line 4573-4575: The current logic only looks up working memory when a
    conversationId exists, causing user-scoped memory to be ignored on the first
    turn when only oc.userId is provided; update getSystemMessage (and the similar
    block around the 4649-4671 region) to use the already-computed
    workingMemoryConversationId / workingMemoryUserId values: if
    workingMemoryConversationId is missing but workingMemoryUserId exists, perform a
    user-scoped memory lookup (or merge user-scoped instructions) using
    workingMemoryUserId; ensure resolveMemoryRuntimeOptions,
    workingMemoryConversationId, and workingMemoryUserId are used as the sources of
    truth so the first response can include user-scoped working-memory when
    conversationId is absent.

In @website/docs/api/endpoints/agents.md:

  • Line 189: Update the docs for memory.options.semanticMemory.enabled (and the
    deprecated equivalent) to reflect the actual schema type and default: change the
    Type column to "boolean" and the Default column to "-" or "undefined", and add a
    short note like "Default: undefined (auto-enables if vectors are available)" to
    explain the internal auto behavior when the field is omitted; ensure both
    occurrences (the main field and the deprecated field) are updated so the docs
    match z.boolean().optional() rather than showing "auto" as a literal default.

In @website/docs/ui/ai-sdk-integration.md:

  • Around line 301-304: The resetConversation function currently clears state but
    doesn't stop any in-flight stream; modify resetConversation to call the stream
    stop function (stop()) before resetting state so any active response is halted
    prior to setMessages([]) and setConversationId(crypto.randomUUID()); ensure you
    reference the same stop() used by the streaming logic so the in-flight append
    cannot occur after the clear.

Outside diff comments:
In @packages/core/src/agent/agent.ts:

  • Around line 7928-8038: The createWorkingMemoryTools function currently creates
    get/update/clear tools even when resolveMemoryRuntimeOptions returns no
    identity; change it to early-return an empty array when neither
    resolvedMemory.conversationId nor resolvedMemory.userId is present so no tool
    can be invoked with ambiguous undefined identifiers; locate
    createWorkingMemoryTools and the resolvedMemory variable (from
    resolveMemoryRuntimeOptions) and add a guard that checks
    resolvedMemory.conversationId || resolvedMemory.userId and returns [] if both
    are falsy before any createTool calls.

Duplicate comments:
In @packages/core/src/agent/agent.spec.ts:

  • Around line 3343-3348: The test currently only asserts that
    mockCreateDelegateTool was called with envelope IDs but doesn't prevent a prior
    call with legacy IDs; update the assertion to enforce precedence by either (a)
    checking the exact number of calls to mockCreateDelegateTool (e.g.,
    expect(mockCreateDelegateTool).toHaveBeenCalledTimes(1)) after the operation
    that should use envelope IDs, or (b) add an explicit negative assertion that
    mockCreateDelegateTool was not called with legacy IDs (e.g.,
    expect(mockCreateDelegateTool).not.toHaveBeenCalledWith(expect.objectContaining({
    conversationId: "legacy-conv", userId: "legacy-user" }))); apply this change
    around the existing expect(mockCreateDelegateTool).toHaveBeenCalledWith(...) in
    the test to ensure createDelegateTool precedence is enforced.

Nitpick comments:
In @website/docs/api/endpoints/agents.md:

  • Around line 184-218: Add a short "Migration Example" section showing a
    before/after conversion from the deprecated fields (userId, conversationId,
    contextLimit, semanticMemory, conversationPersistence, etc.) to the new memory
    envelope (memory and memory.options.*); include a concise side-by-side JSON
    example demonstrating the legacy request using
    userId/conversationId/contextLimit and the preferred request using
    memory.userId, memory.conversationId, and memory.options.contextLimit (place
    this example immediately after the table that documents the memory fields so
    readers see it right after the memory/options rows).

</details>

---

<details>
<summary>ℹ️ Review info</summary>

<details>
<summary>⚙️ Run configuration</summary>

**Configuration used**: defaults

**Review profile**: CHILL

**Plan**: Pro

**Run ID**: `d427719c-c458-484e-96b5-d96dc93ab180`

</details>

<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between d73138edb830b9fa1e081749278932eb599e2f93 and c394d0c96f905fa60a4c626e04ad43d10e05cf7e.

</details>

<details>
<summary>📒 Files selected for processing (6)</summary>

* `packages/core/src/agent/agent.spec.ts`
* `packages/core/src/agent/agent.ts`
* `packages/core/src/agent/types.ts`
* `packages/server-core/src/schemas/agent.schemas.spec.ts`
* `website/docs/api/endpoints/agents.md`
* `website/docs/ui/ai-sdk-integration.md`

</details>

<details>
<summary>🚧 Files skipped from review as they are similar to previous changes (1)</summary>

* packages/server-core/src/schemas/agent.schemas.spec.ts

</details>

</details>

<!-- This is an auto-generated comment by CodeRabbit for review status -->

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
website/docs/ui/ai-sdk-integration.md (1)

231-233: ⚠️ Potential issue | 🟡 Minor

Guideline violation: Use of JSON.stringify in TypeScript code example.

The code example uses JSON.stringify on Lines 231 and 233. Per coding guidelines, TypeScript code should use the safeStringify function from @voltagent/internal instead. Consider updating the example to either use safeStringify or simplify the output display to avoid potential circular reference issues.

As per coding guidelines: "Never use JSON.stringify; use the safeStringify function instead, imported from @voltagent/internal".
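To illustrate the failure mode this guideline guards against, here is a minimal circular-safe serializer. `safeStringifySketch` is a hypothetical stand-in, not the actual `@voltagent/internal` implementation:

```typescript
// Naive circular-safe stringify: breaks cycles with a "[Circular]" marker
// instead of letting JSON.stringify throw a TypeError. Note this sketch is
// intentionally simple: any repeated object reference (even a non-cyclic
// shared one) is also replaced with "[Circular]".
function safeStringifySketch(value: unknown, indent?: number): string {
  const seen = new WeakSet<object>();
  return JSON.stringify(
    value,
    (_key, val) => {
      if (typeof val === "object" && val !== null) {
        if (seen.has(val)) return "[Circular]";
        seen.add(val);
      }
      return val;
    },
    indent,
  );
}

const node: { name: string; self?: unknown } = { name: "a" };
node.self = node; // plain JSON.stringify(node) would throw here
```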

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/ui/ai-sdk-integration.md` around lines 231 - 233, Replace direct
JSON.stringify usage in the example with the project-safe serializer: import
safeStringify from '@voltagent/internal' and use safeStringify(invocation.args,
null, 2) for the pre block and safeStringify(invocation.result) for the Result
line (or simplify to a compact representation using safeStringify) so the
example uses safeStringify instead of JSON.stringify for invocation.args and
invocation.result.
🧹 Nitpick comments (1)
website/docs/api/endpoints/agents.md (1)

522-525: Clarify that this useChat snippet relies on server-side memory for continuity.

This example only sends lastMessage on Line 520, so prior turns are preserved only because the request also includes stable memory.userId / memory.conversationId. Please either send messages instead, or add a short note calling out that this abbreviated payload depends on runtime memory being enabled.
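A minimal sketch of the abbreviated payload this note describes: only the newest message travels over the wire, while stable memory identity lets the server rebuild prior context. Field names follow the memory envelope from this PR; the `ChatMessage` shape and `buildRequestBody` helper are illustrative assumptions:

```typescript
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Hypothetical payload builder: prior turns are NOT sent; server-side
// memory supplies them, keyed by stable userId/conversationId values.
function buildRequestBody(lastMessage: ChatMessage, userId: string, conversationId: string) {
  return {
    input: [lastMessage],
    options: {
      memory: { userId, conversationId }, // must stay stable across turns
    },
  };
}

const body = buildRequestBody({ role: "user", content: "And my name?" }, "user-123", "conv-456");
```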

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/api/endpoints/agents.md` around lines 522 - 525, The snippet for
useChat currently only sends lastMessage while also including memory.userId and
memory.conversationId, which relies on server-side memory for conversation
continuity; either modify the example to send the full messages array
(useChat(..., messages: [...] ) including previous turns) or add a short note
next to the useChat/lastMessage example explicitly stating that continuity
depends on runtime memory being enabled and stable memory.userId /
memory.conversationId values. Reference the useChat call, lastMessage, messages,
and memory.userId / memory.conversationId when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 3472-3483: The current nullish-coalescing chain for userId and
conversationId (contextResolvedMemory?.userId ?? operationContext?.userId ??
memory?.userId ?? options?.userId ?? parentUserId and similarly for
conversationId) lets empty string values win; change this to ignore
blank/empty-string envelope fields when resolving memory by selecting the first
non-blank value instead. Implement a small helper (e.g., isNonBlank or
firstNonBlank) and replace the ?? chain for userId and conversationId to test
each source like contextResolvedMemory?.userId, operationContext?.userId,
memory?.userId, options?.userId, parentUserId (and the conversationId
equivalents) and only accept values that are not null/undefined and not an empty
string so legacy/parent ids are used as fallbacks when envelopes are blank.

---

Outside diff comments:
In `@website/docs/ui/ai-sdk-integration.md`:
- Around line 231-233: Replace direct JSON.stringify usage in the example with
the project-safe serializer: import safeStringify from '@voltagent/internal' and
use safeStringify(invocation.args, null, 2) for the pre block and
safeStringify(invocation.result) for the Result line (or simplify to a compact
representation using safeStringify) so the example uses safeStringify instead of
JSON.stringify for invocation.args and invocation.result.

---

Nitpick comments:
In `@website/docs/api/endpoints/agents.md`:
- Around line 522-525: The snippet for useChat currently only sends lastMessage
while also including memory.userId and memory.conversationId, which relies on
server-side memory for conversation continuity; either modify the example to
send the full messages array (useChat(..., messages: [...] ) including previous
turns) or add a short note next to the useChat/lastMessage example explicitly
stating that continuity depends on runtime memory being enabled and stable
memory.userId / memory.conversationId values. Reference the useChat call,
lastMessage, messages, and memory.userId / memory.conversationId when making the
change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1b00ec7d-f450-42bc-9c39-eb06dd473398

📥 Commits

Reviewing files that changed from the base of the PR and between c394d0c and a214a9d.

📒 Files selected for processing (4)
  • packages/core/src/agent/agent.spec.ts
  • packages/core/src/agent/agent.ts
  • website/docs/api/endpoints/agents.md
  • website/docs/ui/ai-sdk-integration.md

@omeraplak omeraplak merged commit 0d8b1b9 into main Mar 6, 2026
24 checks passed
@omeraplak omeraplak deleted the feat/runtime-memory-envelope branch March 6, 2026 04:45