Commit

cr
bracesproul committed Oct 10, 2024
1 parent a4b261c commit 3a341f6
Showing 6 changed files with 42 additions and 19 deletions.
2 changes: 1 addition & 1 deletion package.json
@@ -22,7 +22,7 @@
   "@langchain/anthropic": "^0.3.3",
   "@langchain/core": "^0.3.9",
   "@langchain/langgraph": "^0.2.10",
-  "@langchain/langgraph-sdk": "^0.0.14",
+  "@langchain/langgraph-sdk": "^0.0.16",
   "@langchain/openai": "^0.3.5",
   "@radix-ui/react-avatar": "^1.1.0",
   "@radix-ui/react-dialog": "^1.1.1",
1 change: 0 additions & 1 deletion src/agent/open-canvas/nodes/generateFollowup.ts
@@ -4,7 +4,6 @@ import { FOLLOWUP_ARTIFACT_PROMPT } from "../prompts";
 import { ensureStoreInConfig, formatReflections } from "@/agent/utils";
 import { Reflections } from "../../../types";
 import { LangGraphRunnableConfig } from "@langchain/langgraph";
-import { isHumanMessage } from "@langchain/core/messages";

/**
* Generate a followup message after generating or updating an artifact.
27 changes: 23 additions & 4 deletions src/agent/open-canvas/nodes/reflect.ts
@@ -28,10 +28,29 @@ export const reflectNode = async (
   const newThread = await langGraphClient.threads.create();
   // Create a new reflection run, but do not `wait` for it to finish.
   // Intended to be a background run.
-  await langGraphClient.runs.create(newThread.thread_id, "reflection", {
-    input: reflectionInput,
-    config: reflectionConfig,
-  });
+  await langGraphClient.runs.create(
+    // We enqueue the memory formation process on the same thread.
+    // This means that IF this thread doesn't receive more messages before `afterSeconds`,
+    // it will read from the shared state and extract memories for us.
+    // If a new request comes in for this thread before the scheduled run is executed,
+    // that run will be canceled, and a **new** one will be scheduled once
+    // this node is executed again.
+    newThread.thread_id,
+    // Pass the name of the graph to run.
+    "reflection",
+    {
+      input: reflectionInput,
+      config: reflectionConfig,
+      // This memory-formation run will be enqueued and run later.
+      // If a new run comes in before it is scheduled, it will be cancelled,
+      // then when this node is executed again, a *new* run will be scheduled.
+      multitaskStrategy: "enqueue",
+      // This lets us "debounce" repeated requests to the memory graph
+      // if the user is actively engaging in a conversation. This saves us $$ and
+      // can help reduce the occurrence of duplicate memories.
+      afterSeconds: 15,
+    }
+  );
 
   return {};
 };
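The reflect.ts change above debounces reflection runs with `multitaskStrategy: "enqueue"` plus `afterSeconds`. A minimal sketch of the options object being built, factored into a standalone helper — the option names come from the diff, but the helper itself is hypothetical and not part of this commit:

```typescript
// Hypothetical helper mirroring the run options used in reflect.ts.
// multitaskStrategy and afterSeconds are the real SDK option names from
// the diff; the function and its defaults are illustrative only.
interface ReflectionRunOptions {
  input: Record<string, unknown>;
  config: Record<string, unknown>;
  multitaskStrategy: "enqueue";
  afterSeconds: number;
}

function buildReflectionRunOptions(
  input: Record<string, unknown>,
  config: Record<string, unknown>,
  debounceSeconds = 15
): ReflectionRunOptions {
  // "enqueue" means a newer run scheduled on the same thread cancels the
  // pending one, so a burst of user messages yields a single reflection
  // run after the thread has been quiet for `debounceSeconds`.
  return {
    input,
    config,
    multitaskStrategy: "enqueue",
    afterSeconds: debounceSeconds,
  };
}
```

The same options object would then be passed as the third argument to `client.runs.create(threadId, "reflection", options)` as shown in the diff.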
9 changes: 6 additions & 3 deletions src/agent/reflection/prompts.ts
@@ -22,14 +22,17 @@ Your job is to take all of the context and existing reflections and re-generate
- Remove duplicate reflections, or combine multiple reflections into one if they are duplicating content.
- Do not remove reflections unless the conversation/artifact clearly demonstrates they should no longer be included.
This does NOT mean remove reflections if you see no evidence of them in the conversation/artifact, but instead remove them if the user indicates they are no longer relevant.
- Think of why a user said what they said when generating rules. This will help you generate more accurate reflections.
- Keep the rules you list high signal-to-noise - don't include unnecessary reflections, but make sure the ones you do add are descriptive.
This is very important. We do NOT want to confuse the assistant in future interactions by having lots and lots of rules and memories.
- Your reflections should be very descriptive and detailed, ensuring they are clear and will not be misinterpreted.
- Keep your style and user facts rule lists short. It's better to have individual rules be more detailed, than to have multiple rules that are too general.
- Do NOT generate rules off of suspicions. Your rules should be based on cold hard facts from the conversation and artifact.
- Keep the total number of style and user facts low. It's better to have individual rules be more detailed, than to have many rules that are vague.
- Do NOT generate rules off of suspicions. Your rules should be based on cold hard facts from the conversation, and changes to the artifact the user has requested.
You must be able to provide evidence and sources for each rule you generate if asked, so don't make assumptions.
- Content reflections should be based on the user's messages, not the generated artifacts. Ensure you follow this rule closely to ensure you do not record things generated by the assistant as facts about the user.
</system-guidelines>
I'll reiterate one final time: ensure the reflections you generate are kept at a reasonable length, are descriptive, and are based on the conversation and artifact provided.
Finally, use the 'generate_reflections' tool to generate the new, full list of reflections.`;

export const REFLECT_USER_PROMPT = `Here is my conversation:
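The revised prompt above caps both the number and vagueness of reflections before they are emitted via the 'generate_reflections' tool. A hypothetical sketch of how such a cap could be checked in code — the field names `styleRules` and `content` are illustrative assumptions, not taken from this diff:

```typescript
// Hypothetical argument shape for the 'generate_reflections' tool.
// Field names are illustrative guesses, not confirmed by this commit.
interface GenerateReflectionsArgs {
  styleRules: string[]; // short, detailed style rules
  content: string[]; // user facts grounded in the conversation
}

// Enforce the "keep the total number of style and user facts low"
// guideline from the prompt by rejecting overlong lists.
function withinReflectionBudget(
  args: GenerateReflectionsArgs,
  maxPerList = 10
): boolean {
  return (
    args.styleRules.length <= maxPerList &&
    args.content.length <= maxPerList
  );
}
```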
14 changes: 8 additions & 6 deletions src/hooks/useGraph.tsx
@@ -108,6 +108,13 @@ export function useGraph() {
       });
       return undefined;
     }
+    if (!assistantId) {
+      toast({
+        title: "Error",
+        description: "Assistant ID not found",
+      });
+      return undefined;
+    }
 
     const client = createClient();
 
@@ -132,13 +139,8 @@
       ...params,
     };
 
-    const stream = client.runs.stream(threadId, "agent", {
+    const stream = client.runs.stream(threadId, assistantId, {
       input,
-    config: {
-      configurable: {
-        assistant_id: assistantId,
-      },
-    },
       streamMode: "events",
     });

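The useGraph.tsx change swaps the hardcoded `"agent"` graph name for the resolved `assistantId`, passed directly as the second argument to `client.runs.stream` rather than threaded through `config.configurable`, and adds an early-return guard. A hypothetical sketch of that guard logic in isolation — `resolveStreamTarget` is illustrative, not part of the commit:

```typescript
// Hypothetical helper mirroring the new early-return guards in useGraph:
// missing IDs abort up front (the hook surfaces a toast) instead of
// failing deep inside the streaming call.
function resolveStreamTarget(
  threadId: string | undefined,
  assistantId: string | undefined
): { threadId: string; assistantId: string } | undefined {
  if (!threadId) return undefined; // hook would toast "Thread ID not found"
  if (!assistantId) return undefined; // hook would toast "Assistant ID not found"
  return { threadId, assistantId };
}
```

With a resolved target, the stream call becomes `client.runs.stream(target.threadId, target.assistantId, { input, streamMode: "events" })`, matching the diff.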
8 changes: 4 additions & 4 deletions yarn.lock
@@ -521,10 +521,10 @@
   dependencies:
     uuid "^10.0.0"
 
-"@langchain/langgraph-sdk@^0.0.14":
-  version "0.0.14"
-  resolved "https://registry.yarnpkg.com/@langchain/langgraph-sdk/-/langgraph-sdk-0.0.14.tgz#aae3495208f6bcc2438f7cd6616b21a0dfa91e6f"
-  integrity sha512-hDu5Q92px6M3frZbKPOg2jWb8cCxU83oEt+GtfOY0MzID60+XocjsHdwSv5EEj32X9yzINGq6jHlHg1EHqjZyA==
+"@langchain/langgraph-sdk@^0.0.16":
+  version "0.0.16"
+  resolved "https://registry.yarnpkg.com/@langchain/langgraph-sdk/-/langgraph-sdk-0.0.16.tgz#3dc415d78a912f13ab75d98c829cc736d55c978f"
+  integrity sha512-qbpImYeuDjRGZRno5HTaSbBQYa42wAI1Eeb3tZ+y5ek1rZxArqnyKCX8FH7Eje8NneSZWsGGZsudnOD6NXVwvA==
   dependencies:
     "@types/json-schema" "^7.0.15"
     p-queue "^6.6.2"
