feat: add runtime memory envelope for per-call memory overrides #1136
Conversation
🦋 Changeset detected — Latest commit: f4fe8a6. The changes in this PR will be included in the next version bump. This PR includes changesets to release 3 packages.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review; this behavior is configurable.
📝 Walkthrough: Adds a per-call runtime memory envelope (`options.memory`) for per-call memory overrides.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller as Caller
    participant Agent as Agent
    participant Resolver as MemoryResolver
    participant Store as MemoryService
    Caller->>Agent: generateText(input, options)
    Note over Caller,Agent: options may include both memory envelope<br/>and legacy top-level fields
    Agent->>Resolver: resolveMemoryRuntimeOptions(options)
    Note over Resolver: read `options.memory` first,<br/>then fall back to top-level fields
    Resolver-->>Agent: resolved { userId, conversationId, contextLimit, semanticMemory, conversationPersistence }
    Agent->>Store: getMessages({ userId: resolved.userId, conversationId: resolved.conversationId, limit: resolved.contextLimit })
    Store-->>Agent: messages
    Agent-->>Caller: response
```
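The fallback order in the diagram can be sketched as a tiny standalone resolver. This is a hypothetical sketch with assumed field shapes, not the actual `resolveMemoryRuntimeOptions()` implementation in `packages/core/src/agent/agent.ts`:

```typescript
// Hypothetical sketch of the precedence rule in the diagram: the
// `options.memory` envelope wins; legacy top-level fields are a fallback.
interface RuntimeMemoryEnvelope {
  userId?: string;
  conversationId?: string;
  options?: {
    contextLimit?: number;
    semanticMemory?: { enabled?: boolean };
    conversationPersistence?: { mode?: "step" | "finish" };
  };
}

interface GenerateOptions {
  memory?: RuntimeMemoryEnvelope;
  // Deprecated top-level aliases, kept for backward compatibility.
  userId?: string;
  conversationId?: string;
  contextLimit?: number;
}

function resolveMemoryRuntimeOptions(options: GenerateOptions = {}) {
  const envelope = options.memory ?? {};
  return {
    userId: envelope.userId ?? options.userId,
    conversationId: envelope.conversationId ?? options.conversationId,
    contextLimit: envelope.options?.contextLimit ?? options.contextLimit,
    semanticMemory: envelope.options?.semanticMemory,
    conversationPersistence: envelope.options?.conversationPersistence,
  };
}
```

When both shapes are sent, the envelope value is preferred per field, which is the behavior several review comments below test for.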
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

✅ Passed checks (2 passed)
2 issues found across 22 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="website/docs/ui/ai-sdk-integration.md">
<violation number="1" location="website/docs/ui/ai-sdk-integration.md:363">
P2: The deprecated-options table is incomplete: it omits legacy top-level `conversationPersistence` and `semanticMemory`, so users may incorrectly assume those backward-compatible fields are unsupported instead of deprecated.</violation>
</file>
<file name="packages/server-core/src/handlers/agent.handlers.ts">
<violation number="1" location="packages/server-core/src/handlers/agent.handlers.ts:251">
P2: `memory.conversationId` precedence accepts empty strings, so an empty memory value can override a valid legacy `conversationId` and incorrectly break resumable stream requests.</violation>
</file>
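The second finding calls for a guard that treats blank envelope values as absent. A minimal sketch with hypothetical helper names (not the handler's actual code):

```typescript
// Hypothetical sketch: treat blank/whitespace-only envelope values as absent
// so they cannot override a valid legacy `conversationId`.
function isNonBlank(value: unknown): value is string {
  return typeof value === "string" && value.trim().length > 0;
}

function pickConversationId(
  memory: { conversationId?: string } | undefined,
  legacyConversationId?: string,
): string | undefined {
  // Only a non-blank envelope value may take precedence.
  const fromEnvelope =
    memory && isNonBlank(memory.conversationId) ? memory.conversationId : undefined;
  return fromEnvelope ?? legacyConversationId;
}
```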
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Deploying voltagent with Cloudflare Pages

| | |
|---|---|
| Latest commit: | f4fe8a6 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://680f03d4.voltagent.pages.dev |
| Branch Preview URL: | https://feat-runtime-memory-envelope.voltagent.pages.dev |
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
website/docs/api/endpoints/agents.md (1)
184-211: ⚠️ Potential issue | 🟡 Minor — Document the remaining deprecated alias and the precedence rule.

This table covers deprecated top-level `userId`, `conversationId`, `contextLimit`, and `conversationPersistence.*`, but it still omits top-level `semanticMemory` even though the runtime types still accept it. It also never states that `options.memory` wins when both shapes are sent, which is the behavior implemented in `resolveMemoryRuntimeOptions()`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `website/docs/api/endpoints/agents.md` around lines 184-211, add the missing deprecated top-level `semanticMemory` alias to the table (document `semanticMemory` as deprecated and point to `memory.options.semanticMemory` instead) and add a clear precedence note that when both top-level options and the `memory` envelope are provided, the `memory` envelope (resolved by `resolveMemoryRuntimeOptions()`) takes precedence; update the table rows for the deprecated `semanticMemory` alias and add a one-line note referencing `resolveMemoryRuntimeOptions()` to state that `memory` wins over top-level fields.

packages/core/src/agent/agent.ts (2)
4551-4647: ⚠️ Potential issue | 🟠 Major — Resolve working-memory state after the conversation ID is finalized.

This code reads working-memory instructions/content from `resolvedMemory.conversationId`, but `prepareConversationContext()` can still replace `oc.conversationId` on line 4414. If the memory layer canonicalizes or creates a new thread ID, the system prompt is built from one conversation while the rest of the request reads/writes another.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/core/src/agent/agent.ts` around lines 4551-4647, the working-memory reads use `resolvedMemory.conversationId` before the conversation ID is finalized by `prepareConversationContext()`, causing a mismatch; move the working-memory logic so it runs after `prepareConversationContext()` (or re-resolve memory using `this.resolveMemoryRuntimeOptions`/`oc` after `prepareConversationContext()`), and then call `memoryManager.getMemory()`, `memory.getWorkingMemoryInstructions(...)`, and `memory.getWorkingMemory(...)` using the finalized conversation ID (keep the checks in `hasWorkingMemorySupport()` and use the same `resolvedMemory` variable or a newly-resolved one to avoid divergent IDs).
3516-3626: ⚠️ Potential issue | 🟠 Major — Persist the full resolved memory envelope on the operation context.

Only `userId` and `conversationId` survive context creation here. Later paths like `prepareTools()` and `toTool()` therefore have no way to forward `contextLimit`, `semanticMemory`, or `conversationPersistence`, so tool/sub-agent calls silently ignore the new per-call behavior overrides.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/core/src/agent/agent.ts` around lines 3516-3626, the operation context returned from the setup currently only attaches `userId` and `conversationId` from `resolveMemoryRuntimeOptions`; persist the entire resolved memory envelope by adding a `resolvedMemory` (or `memoryOptions`) field to the returned operation context (use the existing `resolvedMemory` variable) so downstream functions like `prepareTools()` and `toTool()` can read `context.operationContext.resolvedMemory` and honor `contextLimit`, `semanticMemory`, and `conversationPersistence` overrides; update any places that expect memory values on `options.parentOperationContext` to read this new `resolvedMemory` property.
🧹 Nitpick comments (2)
packages/server-core/src/handlers/agent.handlers.spec.ts (1)
61-115: Exercise precedence here, not just envelope support.

This test only passes `options.memory`, so it would still pass if the handler accidentally preferred the legacy top-level IDs whenever both shapes are present. Add conflicting legacy `conversationId`/`userId` values and keep the existing expectation so this becomes a real precedence regression test.

Suggested test hardening

```diff
 const res = await handleChatStream(
   "agent-1",
   {
     input: "hello",
     options: {
       resumableStream: true,
+      conversationId: "legacy-conv",
+      userId: "legacy-user",
       memory: {
         conversationId: "conv-1",
         userId: "user-1",
       },
     },
   },
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/server-core/src/handlers/agent.handlers.spec.ts` around lines 61-115, the test currently only passes `options.memory`, so it won't catch regressions where legacy top-level `conversationId`/`userId` are incorrectly preferred; update the test invoking `handleChatStream` to also include top-level `conversationId` and `userId` with different values (e.g. `conversationId: "legacy-conv"`, `userId: "legacy-user"`) while keeping `options.memory` as `{ "conversationId": "conv-1", "userId": "user-1" }`, then assert that `streamText` and `deps.resumableStream.clearActiveStream` are called with the IDs from `options.memory`, ensuring `handleChatStream` prefers `options.memory` over legacy top-level IDs.

packages/server-core/src/schemas/agent.schemas.spec.ts (1)
50-91: These tests don't prove the nested envelope is actually validated.

Because `GenerateOptionsSchema` is `.passthrough()`, a happy-path `parse()` here can still succeed if `memory` stops being part of the schema and is merely tolerated as an unknown property. Add at least one negative case with invalid nested `memory.options` types/enum values and assert that parsing fails.

Suggested coverage addition

```diff
 describe("GenerateOptionsSchema", () => {
 @@
   it("keeps legacy top-level memory fields for backward compatibility", () => {
 @@
     expect(() => GenerateOptionsSchema.parse(payload)).not.toThrow();
   });
+
+  it("rejects invalid nested memory overrides", () => {
+    expect(() =>
+      GenerateOptionsSchema.parse({
+        memory: {
+          userId: "user-1",
+          conversationId: "conv-1",
+          options: {
+            contextLimit: "12",
+            conversationPersistence: { mode: "bogus" },
+          },
+        },
+      }),
+    ).toThrow();
+  });
 });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/server-core/src/schemas/agent.schemas.spec.ts` around lines 50-91, add a negative test to ensure nested memory envelope validation actually runs: pass an invalid payload with `memory.options` containing wrong types/invalid enum values (e.g., `contextLimit` as a string or `semanticMemory.mergeStrategy` set to an invalid string) and assert that `GenerateOptionsSchema.parse(payload)` throws; place the new test alongside the current "accepts options.memory envelope..." and "keeps legacy..." cases to prove the nested validation rejects malformed `memory.options`.
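The `.passthrough()` concern above can be illustrated with a hand-rolled stand-in (this is not the actual zod schema, just a sketch of why a happy-path parse alone proves little):

```typescript
// Sketch: a "passthrough"-style parser tolerates unknown keys, so a valid
// payload parses whether or not `memory` is actually checked. Only an invalid
// nested value distinguishes "validated" from "merely tolerated".
interface ParsedOptions {
  [key: string]: unknown;
}

function parseGenerateOptions(input: Record<string, unknown>): ParsedOptions {
  const memory = input.memory as
    | { options?: { contextLimit?: unknown } }
    | undefined;
  const limit = memory?.options?.contextLimit;
  if (limit !== undefined && typeof limit !== "number") {
    throw new Error("memory.options.contextLimit must be a number");
  }
  return { ...input }; // passthrough: unknown keys survive
}
```

A happy-path call succeeds either way; only the throwing case proves the nested check runs.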
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/agent/types.ts`:
- Around line 965-989: The public options type CommonGenerateOptions must
re-declare the deprecated top-level overrides that the runtime still reads: add
back deprecated properties semanticMemory and conversationPersistence to
CommonGenerateOptions (with the same types the runtime expects), mark them with
deprecation JSDoc/comments, and keep them optional so legacy callers compile;
this restores the fallback path used in the agent resolution code (see agent.ts
handling of options.semanticMemory and options.conversationPersistence) without
changing runtime logic.
In `@packages/server-core/src/handlers/agent.handlers.ts`:
- Around line 249-264: The code accepts whitespace-only memory.conversationId
but trims memory.userId; update the memoryConversationId determination so it
rejects blank/whitespace-only strings like memoryUserId does: change the check
that sets memoryConversationId (based on memory and memory.conversationId) to
verify typeof memory.conversationId === "string" &&
memory.conversationId.trim().length > 0, leaving memoryUserId, conversationId,
and userId logic unchanged so the conversationId fallback still uses
memoryConversationId when valid.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bcd64c8f-36d8-4050-ba6f-c243b04ddc73
📒 Files selected for processing (22)
- .changeset/runtime-memory-envelope.md
- packages/core/src/agent/agent.spec-d.ts
- packages/core/src/agent/agent.spec.ts
- packages/core/src/agent/agent.ts
- packages/core/src/agent/types.ts
- packages/resumable-streams/src/chat-handlers.ts
- packages/server-core/src/handlers/agent.handlers.spec.ts
- packages/server-core/src/handlers/agent.handlers.ts
- packages/server-core/src/schemas/agent.schemas.spec.ts
- packages/server-core/src/schemas/agent.schemas.ts
- packages/server-core/src/utils/options.ts
- website/docs/agents/memory.md
- website/docs/agents/memory/in-memory.md
- website/docs/agents/memory/overview.md
- website/docs/agents/memory/semantic-search.md
- website/docs/agents/memory/working-memory.md
- website/docs/agents/overview.md
- website/docs/agents/voltagent-instance.md
- website/docs/api/api-reference.md
- website/docs/api/endpoints/agents.md
- website/docs/ui/ai-sdk-integration.md
- website/docs/ui/assistant-ui.md
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.changeset/runtime-memory-envelope.md:
- Around line 2-4: The changeset currently marks three packages
("@voltagent/core", "@voltagent/server-core", "@voltagent/resumable-streams") as
"patch" but it introduces a new public configuration option (options.memory), so
update the changeset to use "minor" instead of "patch" for those three package
entries; open the .changeset/runtime-memory-envelope.md content and replace each
"patch" value with "minor" so the semver bump correctly reflects a
backward-compatible feature addition.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 95794a4d-dbc3-430f-a215-5220e330fb6a
📒 Files selected for processing (1)
.changeset/runtime-memory-envelope.md
```
"@voltagent/core": patch
"@voltagent/server-core": patch
"@voltagent/resumable-streams": patch
```
Use a minor bump for the new `options.memory` API.
This changeset adds a new supported public configuration shape across all three packages. That is backward-compatible feature work, so shipping it as patch understates the semver impact.
Suggested change

```diff
-"@voltagent/core": patch
-"@voltagent/server-core": patch
-"@voltagent/resumable-streams": patch
+"@voltagent/core": minor
+"@voltagent/server-core": minor
+"@voltagent/resumable-streams": minor
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
"@voltagent/core": minor
"@voltagent/server-core": minor
"@voltagent/resumable-streams": minor
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.changeset/runtime-memory-envelope.md around lines 2 - 4, The changeset
currently marks three packages ("@voltagent/core", "@voltagent/server-core",
"@voltagent/resumable-streams") as "patch" but it introduces a new public
configuration option (options.memory), so update the changeset to use "minor"
instead of "patch" for those three package entries; open the
.changeset/runtime-memory-envelope.md content and replace each "patch" value
with "minor" so the semver bump correctly reflects a backward-compatible feature
addition.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
website/docs/api/endpoints/agents.md (1)
193-216: ⚠️ Potential issue | 🟡 Minor — Document the `conversationPersistence` object itself, not just its subfields.

The table lists `memory.options.conversationPersistence.mode|debounceMs|flushOnToolResult` and the deprecated top-level subfields, but it never documents the parent objects `memory.options.conversationPersistence` and deprecated `conversationPersistence`. That leaves the request shape incomplete compared with how `semanticMemory` is documented and with the API described in the PR.

📘 Suggested table rows

```diff
 | `memory.options.semanticMemory.mergeStrategy` | string | `"append"` | `"prepend"` or `"append"` or `"interleave"` |
+| `memory.options.conversationPersistence` | object | - | Per-call conversation persistence overrides |
 | `memory.options.conversationPersistence.mode` | string | `"step"` | Persistence strategy: `"step"` or `"finish"` |
 | `memory.options.conversationPersistence.debounceMs` | number | `200` | Debounce interval for step checkpoint persistence |
 | `memory.options.conversationPersistence.flushOnToolResult` | boolean | `true` | Flush immediately on `tool-result`/`tool-error` in step mode |
 | `userId` | string | - | Deprecated: use `memory.userId` |
 | `conversationId` | string | - | Deprecated: use `memory.conversationId` |
 | `contextLimit` | number | 10 | Deprecated: use `memory.options.contextLimit` |
 | `semanticMemory` | object | - | Deprecated: use `memory.options.semanticMemory` |
 | `semanticMemory.enabled` | boolean | auto | Deprecated: use `memory.options.semanticMemory.enabled` |
 | `semanticMemory.semanticLimit` | number | 5 | Deprecated: use `memory.options.semanticMemory.semanticLimit` |
 | `semanticMemory.semanticThreshold` | number | 0.7 | Deprecated: use `memory.options.semanticMemory.semanticThreshold` |
 | `semanticMemory.mergeStrategy` | string | `"append"` | Deprecated: use `memory.options.semanticMemory.mergeStrategy` |
+| `conversationPersistence` | object | - | Deprecated: use `memory.options.conversationPersistence` |
 | `conversationPersistence.mode` | string | `"step"` | Deprecated: use `memory.options.conversationPersistence.mode` |
 | `conversationPersistence.debounceMs` | number | `200` | Deprecated: use `memory.options.conversationPersistence.debounceMs` |
 | `conversationPersistence.flushOnToolResult` | boolean | `true` | Deprecated: use `memory.options.conversationPersistence.flushOnToolResult` |
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `website/docs/api/endpoints/agents.md` around lines 193-216, add explicit table rows documenting the parent objects `memory.options.conversationPersistence` and the deprecated top-level `conversationPersistence` (in addition to the existing subfield rows). For each parent object include the type (object), default/placeholder, and a short description that it groups the subfields `mode` (string), `debounceMs` (number), and `flushOnToolResult` (boolean); also mark the top-level `conversationPersistence` as deprecated and point readers to `memory.options.conversationPersistence`, mirroring how `semanticMemory`'s parent object is documented so the request shape is complete.
🧹 Nitpick comments (2)
packages/server-core/src/schemas/agent.schemas.spec.ts (1)
51-74: Consider asserting parsed output values.

The test verifies the schema accepts valid input but doesn't assert the parsed structure. For consistency with the existing `WorkflowExecutionRequestSchema` tests (lines 19-24, 41-46), consider capturing the parsed result and asserting expected values.

✨ Optional improvement

```diff
 it("accepts options.memory envelope with nested runtime overrides", () => {
   const payload = {
     memory: {
       userId: "user-1",
       conversationId: "conv-1",
       options: {
         contextLimit: 12,
         semanticMemory: {
           enabled: true,
           semanticLimit: 4,
           semanticThreshold: 0.8,
           mergeStrategy: "append",
         },
         conversationPersistence: {
           mode: "step",
           debounceMs: 150,
           flushOnToolResult: true,
         },
       },
     },
   };

-  expect(() => GenerateOptionsSchema.parse(payload)).not.toThrow();
+  const parsed = GenerateOptionsSchema.parse(payload);
+
+  expect(parsed.memory?.userId).toBe("user-1");
+  expect(parsed.memory?.conversationId).toBe("conv-1");
+  expect(parsed.memory?.options?.contextLimit).toBe(12);
 });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/server-core/src/schemas/agent.schemas.spec.ts` around lines 51-74, the test currently only checks that `GenerateOptionsSchema.parse(payload)` does not throw; update the spec to capture the parsed result and add assertions on key fields to mirror the existing tests: call `const result = GenerateOptionsSchema.parse(payload)` and assert `result.memory.userId === "user-1"`, `result.memory.conversationId === "conv-1"`, `result.memory.options.contextLimit === 12`, and `result.memory.options.semanticMemory.enabled === true` (and optionally the remaining semantic-memory and conversation-persistence fields) so the test verifies the parsed structure as well as acceptance.

packages/core/src/agent/agent.ts (1)
675-692: Prefer re-exporting the shared memory-envelope types from `types.ts`.

These shapes already exist in `packages/core/src/agent/types.ts`, and duplicating them here creates a second contract that has to stay in sync with server-core schemas and `CommonResolvedRuntimeMemoryOptions`. Re-exporting aliases from the shared source would remove that drift risk.

As per coding guidelines, "Maintain type safety in TypeScript-first codebase".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/core/src/agent/agent.ts` around lines 675-692, the duplicated interfaces (`SemanticMemoryOptions`, `RuntimeMemoryBehaviorOptions`, `RuntimeMemoryEnvelope`) should be removed here and re-exported from the shared types module instead: import and re-export the corresponding types from `packages/core/src/agent/types.ts` (or the shared barrel) and update any local references in this file to use the re-exported types, keeping a single source of truth and preserving type compatibility with `CommonResolvedRuntimeMemoryOptions`.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/agent/agent.spec.ts`:
- Around line 1977-2025: The test only asserts operationContext.resolvedMemory
but must also assert that the top-level identity fields were updated; after
calling (agent as any).createOperationContext(...) add expectations that
operationContext.userId and operationContext.conversationId equal "memory-user"
and "memory-conv" respectively (i.e., the values from the memory envelope), so
hooks/logs/eval payloads will see the same resolved identity as resolvedMemory;
locate the assertion block in the "should store resolved memory envelope on
operation context" spec around the createOperationContext call and add these two
expects.
- Around line 1922-1975: The test currently only verifies a matching memory
lookup but allows extra calls; update the assertion to enforce precedence by
checking the spy call count and forbidding legacy lookups: after calling
Agent.generateText (the Agent constructor using memory and the
Memory.getMessages spy via getMessagesSpy), assert
getMessagesSpy.mock.calls.length === 1 (or the exact expected number) and also
assert that no call in getMessagesSpy.mock.calls has userId === "legacy-user" &&
conversationId === "legacy-conv", ensuring only the envelope IDs
("memory-user","memory-conv") were used.
In `@packages/core/src/agent/agent.ts`:
- Around line 4284-4294: The current short-circuit uses canIUseMemory
(resolvedMemory.userId && resolvedMemory.conversationId) which prevents loading
memory when only userId exists; change the gating so memory hydation can run
when resolvedMemory.userId is present (do not require conversationId) and keep
the existing logic that determines useSemanticSearch
(resolvedMemory.semanticMemory?.enabled ?? this.hasSemanticSearchSupport()) and
populates memoryContextMessages; ensure downstream code that relies on
result.conversationId and the semantic-search fallback still runs and can
set/create the conversationId after loading memory (update any branches using
canIUseMemory to instead check for userId or for a post-load conversationId as
appropriate).
- Around line 3471-3495: resolveMemoryRuntimeOptions currently only reads from
the initial BaseGenerationOptions and parent context, so it returns
pre-hydration ids; change resolveMemoryRuntimeOptions to accept the live
operation context (oc) or an optional resolved operationContext param and prefer
oc.resolvedMemory / oc.conversationId (falling back to options where needed) so
the function returns the post-hydration userId/conversationId/context values;
update call sites (e.g., where prepareMessages is invoked) to pass the live oc
after prepareMessages completes and ensure callers like prepareTools and
createWorkingMemoryTools use the updated signature so subagent delegation and
working-memory bindings use the hydrated ids.
In `@website/docs/ui/ai-sdk-integration.md`:
- Around line 264-267: The current implementation leaves memory.conversationId
stable so calling setMessages([]) only clears client state but the server will
continue the same persisted thread; update the chat-clear logic to
rotate/regenerate memory.conversationId (e.g., create a new UUID) when the user
hits “Clear” (where you call setMessages([])) so subsequent sends start a new
conversation, or alternatively add a clear comment in the docs stating that
Clear is UI-only; modify the Clear handler that calls setMessages([]) to also
update the conversationId used in the memory object sent with messages (the
memory.conversationId value).
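The Clear behavior flagged above can be handled client-side by rotating the conversation ID. A hypothetical sketch of that state shape (the docs' actual example may differ):

```typescript
// Hypothetical sketch: rotate the conversation ID when the user clears the
// chat, so subsequent sends start a fresh persisted thread on the server.
import { randomUUID } from "node:crypto";

function createChatState(initialConversationId: string = randomUUID()) {
  let conversationId = initialConversationId;
  let messages: string[] = [];
  return {
    get conversationId() {
      return conversationId;
    },
    get messages() {
      return [...messages];
    },
    send(text: string) {
      messages.push(text);
    },
    clear() {
      messages = []; // clears UI state (the setMessages([]) part)
      conversationId = randomUUID(); // new thread for the next send
    },
  };
}
```

Without the rotation, Clear is UI-only and the server keeps appending to the same persisted conversation.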
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c9ad780d-60dc-4d35-a1bc-0c929a2645ac
📒 Files selected for processing (9)
- packages/core/src/agent/agent.spec-d.ts
- packages/core/src/agent/agent.spec.ts
- packages/core/src/agent/agent.ts
- packages/core/src/agent/types.ts
- packages/server-core/src/handlers/agent.handlers.spec.ts
- packages/server-core/src/handlers/agent.handlers.ts
- packages/server-core/src/schemas/agent.schemas.spec.ts
- website/docs/api/endpoints/agents.md
- website/docs/ui/ai-sdk-integration.md
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/server-core/src/handlers/agent.handlers.ts
- packages/core/src/agent/agent.spec-d.ts
- packages/server-core/src/handlers/agent.handlers.spec.ts
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/core/src/agent/agent.ts (1)
7928-8038: ⚠️ Potential issue | 🟠 Major
Don't expose working-memory tools without any resolved identity.
`createWorkingMemoryTools()` adds read/write tools even when both `conversationId` and `userId` are absent. Any tool call from that state forwards two `undefined` identifiers into the memory adapter, which makes the target scope ambiguous.
Suggested fix

```diff
 private createWorkingMemoryTools(
   options?: BaseGenerationOptions,
   operationContext?: OperationContext,
 ): Tool<any, any>[] {
   if (!this.hasWorkingMemorySupport()) {
     return [];
   }
   const resolvedMemory = this.resolveMemoryRuntimeOptions(options, operationContext);
+  if (!resolvedMemory.conversationId && !resolvedMemory.userId) {
+    return [];
+  }
   const memoryManager = this.memoryManager as unknown as MemoryManager;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/agent/agent.ts` around lines 7928 - 8038, The createWorkingMemoryTools function currently creates get/update/clear tools even when resolveMemoryRuntimeOptions returns no identity; change it to early-return an empty array when neither resolvedMemory.conversationId nor resolvedMemory.userId is present so no tool can be invoked with ambiguous undefined identifiers; locate createWorkingMemoryTools and the resolvedMemory variable (from resolveMemoryRuntimeOptions) and add a guard that checks resolvedMemory.conversationId || resolvedMemory.userId and returns [] if both are falsy before any createTool calls.
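Independently of the repo's actual types, the guard this prompt describes can be sketched as below. `ResolvedMemory` and the returned tool names are simplified stand-ins, not the actual implementation:

```typescript
interface ResolvedMemory {
  conversationId?: string;
  userId?: string;
}

// Early-return when neither identifier resolved, so no tool can ever be
// invoked with two undefined identifiers and an ambiguous target scope.
function createWorkingMemoryTools(resolvedMemory: ResolvedMemory): string[] {
  if (!resolvedMemory.conversationId && !resolvedMemory.userId) {
    return [];
  }
  return ["get_working_memory", "update_working_memory", "clear_working_memory"];
}

console.log(createWorkingMemoryTools({}).length); // 0
console.log(createWorkingMemoryTools({ userId: "user-1" }).length); // 3
```

Placing the guard before any tool construction means the ambiguous state is unrepresentable rather than merely handled inside each tool.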
♻️ Duplicate comments (1)
packages/core/src/agent/agent.spec.ts (1)
3343-3348: ⚠️ Potential issue | 🟡 Minor
Assert delegate-tool precedence, not just a matching call.
This still passes if `createDelegateTool` is invoked once with legacy IDs and once with envelope IDs. Add a call-count check, or explicitly reject `"legacy-user"`/`"legacy-conv"`, so the test enforces precedence instead of just one matching invocation.
Suggested update

```diff
-    expect(mockCreateDelegateTool).toHaveBeenCalledWith(
+    expect(mockCreateDelegateTool).toHaveBeenCalledTimes(1);
+    expect(mockCreateDelegateTool).toHaveBeenCalledWith(
       expect.objectContaining({
         conversationId: "memory-conv",
         userId: "memory-user",
       }),
     );
+    expect(
+      mockCreateDelegateTool.mock.calls.some(
+        ([args]) => args?.conversationId === "legacy-conv" || args?.userId === "legacy-user",
+      ),
+    ).toBe(false);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/agent/agent.spec.ts` around lines 3343 - 3348, The test currently only asserts that mockCreateDelegateTool was called with envelope IDs but doesn't prevent a prior call with legacy IDs; update the assertion to enforce precedence by either (a) checking the exact number of calls to mockCreateDelegateTool (e.g., expect(mockCreateDelegateTool).toHaveBeenCalledTimes(1)) after the operation that should use envelope IDs, or (b) add an explicit negative assertion that mockCreateDelegateTool was not called with legacy IDs (e.g., expect(mockCreateDelegateTool).not.toHaveBeenCalledWith(expect.objectContaining({ conversationId: "legacy-conv", userId: "legacy-user" }))); apply this change around the existing expect(mockCreateDelegateTool).toHaveBeenCalledWith(...) in the test to ensure createDelegateTool precedence is enforced.
🧹 Nitpick comments (1)
website/docs/api/endpoints/agents.md (1)
184-218: Consider adding a migration example.
While the deprecation notes are clear, users would benefit from seeing a concrete before/after example showing how to migrate from legacy fields to the new memory envelope. This could be a small code snippet showing both approaches side-by-side.
📝 Suggested migration example
Consider adding a section like this after line 220:
**Migration Example:**

```json
// ❌ Legacy (deprecated but still supported)
{
  "input": "Hello",
  "options": {
    "userId": "user-123",
    "conversationId": "conv-456",
    "contextLimit": 10
  }
}

// ✅ Preferred (new memory envelope)
{
  "input": "Hello",
  "options": {
    "memory": {
      "userId": "user-123",
      "conversationId": "conv-456",
      "options": {
        "contextLimit": 10
      }
    }
  }
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@website/docs/api/endpoints/agents.md` around lines 184 - 218, Add a short
"Migration Example" section showing a before/after conversion from the
deprecated fields (userId, conversationId, contextLimit, semanticMemory,
conversationPersistence, etc.) to the new memory envelope (memory and
memory.options.*); include a concise side-by-side JSON example demonstrating the
legacy request using userId/conversationId/contextLimit and the preferred
request using memory.userId, memory.conversationId, and
memory.options.contextLimit (place this example immediately after the table that
documents the memory fields so readers see it right after the memory/options
rows).

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 4573-4575: The current logic only looks up working memory when a
conversationId exists, causing user-scoped memory to be ignored on the first
turn when only oc.userId is provided; update getSystemMessage (and the similar
block around the 4649-4671 region) to use the already-computed
workingMemoryConversationId / workingMemoryUserId values: if
workingMemoryConversationId is missing but workingMemoryUserId exists, perform a
user-scoped memory lookup (or merge user-scoped instructions) using
workingMemoryUserId; ensure resolveMemoryRuntimeOptions,
workingMemoryConversationId, and workingMemoryUserId are used as the sources of
truth so the first response can include user-scoped working-memory when
conversationId is absent.

In `@website/docs/api/endpoints/agents.md`:
- Line 189: Update the docs for memory.options.semanticMemory.enabled (and the
deprecated equivalent) to reflect the actual schema type and default: change the
Type column to "boolean" and the Default column to "-" or "undefined", and add a
short note like "Default: undefined (auto-enables if vectors are available)" to
explain the internal auto behavior when the field is omitted; ensure both
occurrences (the main field and the deprecated field) are updated so the docs
match z.boolean().optional() rather than showing "auto" as a literal default.

In `@website/docs/ui/ai-sdk-integration.md`:
- Around line 301-304: The resetConversation function currently clears state but
doesn't stop any in-flight stream; modify resetConversation to call the stream
stop function (stop()) before resetting state so any active response is halted
prior to setMessages([]) and setConversationId(crypto.randomUUID()); ensure you
reference the same stop() used by the streaming logic so the in-flight append
cannot occur after the clear.
Outside diff comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 7928-8038: The createWorkingMemoryTools function currently creates
get/update/clear tools even when resolveMemoryRuntimeOptions returns no
identity; change it to early-return an empty array when neither
resolvedMemory.conversationId nor resolvedMemory.userId is present so no tool
can be invoked with ambiguous undefined identifiers; locate
createWorkingMemoryTools and the resolvedMemory variable (from
resolveMemoryRuntimeOptions) and add a guard that checks
resolvedMemory.conversationId || resolvedMemory.userId and returns [] if both
are falsy before any createTool calls.
Duplicate comments:
In `@packages/core/src/agent/agent.spec.ts`:
- Around line 3343-3348: The test currently only asserts that
mockCreateDelegateTool was called with envelope IDs but doesn't prevent a prior
call with legacy IDs; update the assertion to enforce precedence by either (a)
checking the exact number of calls to mockCreateDelegateTool (e.g.,
expect(mockCreateDelegateTool).toHaveBeenCalledTimes(1)) after the operation
that should use envelope IDs, or (b) add an explicit negative assertion that
mockCreateDelegateTool was not called with legacy IDs (e.g.,
expect(mockCreateDelegateTool).not.toHaveBeenCalledWith(expect.objectContaining({
conversationId: "legacy-conv", userId: "legacy-user" }))); apply this change
around the existing expect(mockCreateDelegateTool).toHaveBeenCalledWith(...) in
the test to ensure createDelegateTool precedence is enforced.
Nitpick comments:
In `@website/docs/api/endpoints/agents.md`:
- Around line 184-218: Add a short "Migration Example" section showing a
before/after conversion from the deprecated fields (userId, conversationId,
contextLimit, semanticMemory, conversationPersistence, etc.) to the new memory
envelope (memory and memory.options.*); include a concise side-by-side JSON
example demonstrating the legacy request using
userId/conversationId/contextLimit and the preferred request using
memory.userId, memory.conversationId, and memory.options.contextLimit (place
this example immediately after the table that documents the memory fields so
readers see it right after the memory/options rows).

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d427719c-c458-484e-96b5-d96dc93ab180
📥 Commits
Reviewing files that changed from the base of the PR and between d73138edb830b9fa1e081749278932eb599e2f93 and c394d0c96f905fa60a4c626e04ad43d10e05cf7e.
📒 Files selected for processing (6)
- packages/core/src/agent/agent.spec.ts
- packages/core/src/agent/agent.ts
- packages/core/src/agent/types.ts
- packages/server-core/src/schemas/agent.schemas.spec.ts
- website/docs/api/endpoints/agents.md
- website/docs/ui/ai-sdk-integration.md
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/server-core/src/schemas/agent.schemas.spec.ts
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
website/docs/ui/ai-sdk-integration.md (1)
231-233: ⚠️ Potential issue | 🟡 Minor
Guideline violation: use of `JSON.stringify` in a TypeScript code example.
The code example uses `JSON.stringify` on Lines 231 and 233. Per coding guidelines, TypeScript code should use the `safeStringify` function from `@voltagent/internal` instead. Consider updating the example to either use `safeStringify` or simplify the output display to avoid potential circular reference issues.
As per coding guidelines: "Never use JSON.stringify; use the `safeStringify` function instead, imported from `@voltagent/internal`".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/docs/ui/ai-sdk-integration.md` around lines 231 - 233, Replace direct JSON.stringify usage in the example with the project-safe serializer: import safeStringify from '@voltagent/internal' and use safeStringify(invocation.args, null, 2) for the pre block and safeStringify(invocation.result) for the Result line (or simplify to a compact representation using safeStringify) so the example uses safeStringify instead of JSON.stringify for invocation.args and invocation.result.
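For illustration, a circular-safe serializer of the kind the guideline points to can be sketched as below. This is not the actual `safeStringify` from `@voltagent/internal`, whose behavior may differ; it only shows the failure mode the guideline guards against:

```typescript
// Illustrative stand-in: JSON.stringify with a replacer that breaks cycles,
// where plain JSON.stringify would throw "Converting circular structure".
function safeStringifySketch(value: unknown, indent?: number): string {
  const seen = new WeakSet<object>();
  return JSON.stringify(
    value,
    (_key, val) => {
      if (typeof val === "object" && val !== null) {
        if (seen.has(val)) return "[Circular]";
        seen.add(val);
      }
      return val;
    },
    indent,
  );
}

const invocation: Record<string, unknown> = { name: "tool-call" };
invocation.self = invocation; // circular reference
console.log(safeStringifySketch(invocation)); // {"name":"tool-call","self":"[Circular]"}
```

Tool invocation args can carry back-references, so docs examples that serialize them should demonstrate the cycle-safe helper rather than the throwing builtin.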
🧹 Nitpick comments (1)
website/docs/api/endpoints/agents.md (1)
522-525: Clarify that this `useChat` snippet relies on server-side memory for continuity.
This example only sends `lastMessage` on Line 520, so prior turns are preserved only because the request also includes stable `memory.userId`/`memory.conversationId`. Please either send `messages` instead, or add a short note calling out that this abbreviated payload depends on runtime memory being enabled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/docs/api/endpoints/agents.md` around lines 522 - 525, The snippet for useChat currently only sends lastMessage while also including memory.userId and memory.conversationId, which relies on server-side memory for conversation continuity; either modify the example to send the full messages array (useChat(..., messages: [...] ) including previous turns) or add a short note next to the useChat/lastMessage example explicitly stating that continuity depends on runtime memory being enabled and stable memory.userId / memory.conversationId values. Reference the useChat call, lastMessage, messages, and memory.userId / memory.conversationId when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 3472-3483: The current nullish-coalescing chain for userId and
conversationId (contextResolvedMemory?.userId ?? operationContext?.userId ??
memory?.userId ?? options?.userId ?? parentUserId and similarly for
conversationId) lets empty string values win; change this to ignore
blank/empty-string envelope fields when resolving memory by selecting the first
non-blank value instead. Implement a small helper (e.g., isNonBlank or
firstNonBlank) and replace the ?? chain for userId and conversationId to test
each source like contextResolvedMemory?.userId, operationContext?.userId,
memory?.userId, options?.userId, parentUserId (and the conversationId
equivalents) and only accept values that are not null/undefined and not an empty
string so legacy/parent ids are used as fallbacks when envelopes are blank.
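The helper the prompt suggests can be sketched as follows. The name `firstNonBlank` comes from the review comment; the exact signature is an assumption:

```typescript
// Return the first candidate that is neither null/undefined nor a
// blank string, so an empty envelope field no longer shadows fallbacks.
function firstNonBlank(
  ...candidates: Array<string | undefined | null>
): string | undefined {
  for (const c of candidates) {
    if (c != null && c.trim() !== "") return c;
  }
  return undefined;
}

// An empty-string envelope value falls through to the legacy id.
console.log(firstNonBlank("", undefined, "legacy-user")); // "legacy-user"
console.log(firstNonBlank("memory-user", "legacy-user")); // "memory-user"
```

Replacing the `??` chain with this selection keeps the documented precedence (envelope, then legacy, then parent) while treating `""` the same as "not provided".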
---
Outside diff comments:
In `@website/docs/ui/ai-sdk-integration.md`:
- Around line 231-233: Replace direct JSON.stringify usage in the example with
the project-safe serializer: import safeStringify from '@voltagent/internal' and
use safeStringify(invocation.args, null, 2) for the pre block and
safeStringify(invocation.result) for the Result line (or simplify to a compact
representation using safeStringify) so the example uses safeStringify instead of
JSON.stringify for invocation.args and invocation.result.
---
Nitpick comments:
In `@website/docs/api/endpoints/agents.md`:
- Around line 522-525: The snippet for useChat currently only sends lastMessage
while also including memory.userId and memory.conversationId, which relies on
server-side memory for conversation continuity; either modify the example to
send the full messages array (useChat(..., messages: [...] ) including previous
turns) or add a short note next to the useChat/lastMessage example explicitly
stating that continuity depends on runtime memory being enabled and stable
memory.userId / memory.conversationId values. Reference the useChat call,
lastMessage, messages, and memory.userId / memory.conversationId when making the
change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1b00ec7d-f450-42bc-9c39-eb06dd473398
📒 Files selected for processing (4)
- packages/core/src/agent/agent.spec.ts
- packages/core/src/agent/agent.ts
- website/docs/api/endpoints/agents.md
- website/docs/ui/ai-sdk-integration.md
PR Checklist
Please check if your PR fulfills the following requirements:
Bugs / Features
What is the current behavior?
Memory identity and runtime memory behavior are passed through legacy top-level fields (`options.conversationId`, `options.userId`, `options.contextLimit`, `options.semanticMemory`, `options.conversationPersistence`).
This makes per-call memory overrides less structured and causes inconsistencies across layers (core runtime, server request schema, and resumable stream key resolution).
What is the new behavior?
Introduces a unified runtime memory envelope under `options.memory` while preserving backward compatibility with legacy top-level fields.

- `options.memory.conversationId` (thread)
- `options.memory.userId` (resource)
- `options.memory.options.contextLimit`
- `options.memory.options.semanticMemory`
- `options.memory.options.conversationPersistence`

`options.memory` values are prioritized. `@voltagent/server-core` schemas and handlers now accept/read `options.memory`. `@voltagent/resumable-streams` default resolvers now read `options.memory.{conversationId,userId}` first.

fixes (issue): N/A
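As an illustrative sketch, a per-call request carrying the envelope might look like the object below. Field values are made up; the `conversationPersistence` sub-fields follow names mentioned in the schema tests (`debounceMs`, `flushOnToolResult`) and are assumptions about the exact shape:

```typescript
// Hypothetical options payload shaped like the envelope described above;
// not a verified excerpt of the library's API surface.
const options = {
  memory: {
    conversationId: "conv-456", // thread identity
    userId: "user-123",         // resource identity
    options: {
      contextLimit: 12,
      semanticMemory: { enabled: true },
      conversationPersistence: { debounceMs: 250, flushOnToolResult: true },
    },
  },
};

console.log(options.memory.conversationId); // "conv-456"
```

Legacy callers can keep sending `options.userId`/`options.conversationId`; per the PR, envelope values win when both are present.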
Notes for reviewers
- `@voltagent/core` agent tests cover `options.memory`.
- `pnpm --filter @voltagent/core test`
- `pnpm --filter @voltagent/server-core test`
- `pnpm --filter @voltagent/server-core typecheck`
- `pnpm --filter @voltagent/resumable-streams typecheck`

Summary by cubic
Adds a per-call runtime memory envelope under options.memory and resolves it across the agent, server, and resumable streams. Resolved memory is stored on the operation context, enables user-scoped working memory when conversationId is not set, and safely falls back to legacy fields.
New Features
Migration
Written for commit f4fe8a6. Summary will update on new commits.
Summary by CodeRabbit
New Features
Deprecations
Tests
Documentation