docs[minor]: LangGraph Migration Guide (#5487)
* [Docs] LangGraph Migration Guide

* fixup

* link

* Update

* Update and polish

* Add to how to index page

---------

Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
hinthornw and jacoblee93 authored May 26, 2024
1 parent ea22597 commit 0a5988b
Showing 7 changed files with 1,715 additions and 64 deletions.
6 changes: 4 additions & 2 deletions deno.json
@@ -3,7 +3,7 @@
"langchain/": "npm:/langchain/",
"@faker-js/faker": "npm:@faker-js/faker",
"@langchain/anthropic": "npm:@langchain/anthropic",
"@langchain/community/": "npm:/@langchain/community/",
"@langchain/community/": "npm:/@langchain/community@0.2.2/",
"@langchain/openai": "npm:@langchain/openai",
"@langchain/cohere": "npm:@langchain/cohere",
"@langchain/textsplitters": "npm:@langchain/textsplitters",
@@ -12,9 +12,11 @@
"@langchain/core/": "npm:/@langchain/core/",
"@langchain/pinecone": "npm:@langchain/pinecone",
"@langchain/google-common": "npm:@langchain/google-common",
"@langchain/langgraph": "npm:/@langchain/langgraph@0.0.21",
"@langchain/langgraph/": "npm:/@langchain/langgraph@0.0.21/",
"@microsoft/fetch-event-source": "npm:@microsoft/fetch-event-source",
"@pinecone-database/pinecone": "npm:@pinecone-database/pinecone",
"cheerio": "npm:/cheerio",
"cheerio": "npm:cheerio",
"chromadb": "npm:/chromadb",
"dotenv/": "npm:/dotenv/",
"zod": "npm:/zod",
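The pinned `@langchain/langgraph@0.0.21` entries above let the Deno-powered docs notebooks import LangGraph by its bare specifier. Purely as an illustrative sketch (not part of this commit), a notebook cell could then write:

```typescript
// Deno resolves this bare specifier through the import map above to
// npm:@langchain/langgraph@0.0.21; the trailing-slash entry covers subpath
// imports such as "@langchain/langgraph/prebuilt".
import { StateGraph, END } from "@langchain/langgraph";

console.log(typeof StateGraph, END); // "function" "__end__"
```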
2 changes: 2 additions & 0 deletions docs/core_docs/.gitignore
@@ -107,6 +107,8 @@ docs/how_to/output_parser_fixing.md
docs/how_to/output_parser_fixing.mdx
docs/how_to/multiple_queries.md
docs/how_to/multiple_queries.mdx
docs/how_to/migrate_agent.md
docs/how_to/migrate_agent.mdx
docs/how_to/logprobs.md
docs/how_to/logprobs.mdx
docs/how_to/lcel_cheatsheet.md
121 changes: 62 additions & 59 deletions docs/core_docs/docs/how_to/agent_executor.ipynb
@@ -100,7 +100,7 @@
{
"data": {
"text/plain": [
"\u001b[32m`[{\"title\":\"Weather in San Francisco\",\"url\":\"https://www.weatherapi.com/\",\"content\":\"{'location': {'n`\u001b[39m... 1111 more characters"
"\u001b[32m`[{\"title\":\"Weather in San Francisco\",\"url\":\"https://www.weatherapi.com/\",\"content\":\"{'location': {'n`\u001b[39m... 1347 more characters"
]
},
"execution_count": 1,
@@ -109,6 +109,7 @@
}
],
"source": [
"import \"cheerio\"; // This is required in notebooks to use the `CheerioWebBaseLoader`\n",
"import { TavilySearchResults } from \"@langchain/community/tools/tavily_search\"\n",
"\n",
"const search = new TavilySearchResults({\n",
@@ -152,24 +153,24 @@
}
],
"source": [
"import \"cheerio\"; // This is required in notebooks to use the `CheerioWebBaseLoader`\n",
"import { CheerioWebBaseLoader } from \"langchain/document_loaders/web/cheerio\";\n",
"import { CheerioWebBaseLoader } from \"@langchain/community/document_loaders/web/cheerio\";\n",
"import { MemoryVectorStore } from \"langchain/vectorstores/memory\";\n",
"import { OpenAIEmbeddings } from \"@langchain/openai\";\n",
"import { RecursiveCharacterTextSplitter } from \"@langchain/textsplitters\";\n",
"\n",
"const loader = new CheerioWebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n",
"const docs = await loader.load()\n",
"const documents = await new RecursiveCharacterTextSplitter(\n",
" {\n",
" chunkSize: 1000,\n",
" chunkOverlap: 200\n",
" }\n",
").splitDocuments(docs)\n",
"const vectorStore = await MemoryVectorStore.fromDocuments(documents, new OpenAIEmbeddings())\n",
"const loader = new CheerioWebBaseLoader(\"https://docs.smith.langchain.com/overview\");\n",
"const docs = await loader.load();\n",
"const splitter = new RecursiveCharacterTextSplitter(\n",
" {\n",
" chunkSize: 1000,\n",
" chunkOverlap: 200\n",
" }\n",
");\n",
"const documents = await splitter.splitDocuments(docs);\n",
"const vectorStore = await MemoryVectorStore.fromDocuments(documents, new OpenAIEmbeddings());\n",
"const retriever = vectorStore.asRetriever();\n",
"\n",
"(await retriever.invoke(\"how to upload a dataset\"))[0]"
"(await retriever.invoke(\"how to upload a dataset\"))[0];"
]
},
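In the notebook, the retriever built above is then wrapped as a tool the agent can call (in a later, unchanged cell). A hedged sketch of that step — the tool name and description here are illustrative placeholders, not taken from this diff:

```typescript
import { createRetrieverTool } from "langchain/tools/retriever";

// `retriever` is the MemoryVectorStore retriever created in the cell above.
const retrieverTool = createRetrieverTool(retriever, {
  // Illustrative values; the real notebook may use different ones.
  name: "langsmith_search",
  description:
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});
```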
{
@@ -258,6 +259,9 @@
}
],
"source": [
"import { ChatOpenAI } from \"@langchain/openai\";\n",
"const model = new ChatOpenAI({ model: \"gpt-4\", temperature: 0 })\n",
"\n",
"import { HumanMessage } from \"@langchain/core/messages\";\n",
"\n",
"const response = await model.invoke([new HumanMessage(\"hi!\")]);\n",
@@ -336,9 +340,9 @@
" {\n",
" \"name\": \"tavily_search_results_json\",\n",
" \"args\": {\n",
" \"input\": \"weather in San Francisco\"\n",
" \"input\": \"current weather in San Francisco\"\n",
" },\n",
" \"id\": \"call_y0nn6mbVCV5paX6RrqqFUqdC\"\n",
" \"id\": \"call_VcSjZAZkEOx9lcHNZNXAjXkm\"\n",
" }\n",
"]\n"
]
@@ -370,11 +374,7 @@
"\n",
"Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts/#agent_types/).\n",
"\n",
"We can first choose the prompt we want to use to guide the agent.\n",
"\n",
"If you want to see the contents of this prompt in the hub, you can go to:\n",
"\n",
"[https://smith.langchain.com/hub/hwchase17/openai-functions-agent](https://smith.langchain.com/hub/hwchase17/openai-functions-agent)"
"We can first choose the prompt we want to use to guide the agent:"
]
},
{
@@ -394,19 +394,18 @@
" prompt: PromptTemplate {\n",
" lc_serializable: true,\n",
" lc_kwargs: {\n",
" template: \"You are a helpful assistant\",\n",
" inputVariables: [],\n",
" templateFormat: \"f-string\",\n",
" partialVariables: {}\n",
" template: \"You are a helpful assistant\"\n",
" },\n",
" lc_runnable: true,\n",
" name: undefined,\n",
" lc_namespace: [ \"langchain_core\", \"prompts\", \"prompt\" ],\n",
" inputVariables: [],\n",
" outputParser: undefined,\n",
" partialVariables: {},\n",
" template: \"You are a helpful assistant\",\n",
" partialVariables: undefined,\n",
" templateFormat: \"f-string\",\n",
" template: \"You are a helpful assistant\",\n",
" validateTemplate: true\n",
" }\n",
" },\n",
@@ -418,27 +417,26 @@
" prompt: PromptTemplate {\n",
" lc_serializable: true,\n",
" lc_kwargs: {\n",
" template: \"You are a helpful assistant\",\n",
" inputVariables: [],\n",
" templateFormat: \"f-string\",\n",
" partialVariables: {}\n",
" template: \"You are a helpful assistant\"\n",
" },\n",
" lc_runnable: true,\n",
" name: undefined,\n",
" lc_namespace: [ \"langchain_core\", \"prompts\", \"prompt\" ],\n",
" inputVariables: [],\n",
" outputParser: undefined,\n",
" partialVariables: {},\n",
" template: \"You are a helpful assistant\",\n",
" partialVariables: undefined,\n",
" templateFormat: \"f-string\",\n",
" template: \"You are a helpful assistant\",\n",
" validateTemplate: true\n",
" },\n",
" messageClass: undefined,\n",
" chatMessageClass: undefined\n",
" },\n",
" MessagesPlaceholder {\n",
" lc_serializable: true,\n",
" lc_kwargs: { optional: true, variableName: \"chat_history\" },\n",
" lc_kwargs: { variableName: \"chat_history\", optional: true },\n",
" lc_runnable: true,\n",
" name: undefined,\n",
" lc_namespace: [ \"langchain_core\", \"prompts\", \"chat\" ],\n",
@@ -451,19 +449,18 @@
" prompt: PromptTemplate {\n",
" lc_serializable: true,\n",
" lc_kwargs: {\n",
" template: \"{input}\",\n",
" inputVariables: [Array],\n",
" templateFormat: \"f-string\",\n",
" partialVariables: {}\n",
" template: \"{input}\"\n",
" },\n",
" lc_runnable: true,\n",
" name: undefined,\n",
" lc_namespace: [ \"langchain_core\", \"prompts\", \"prompt\" ],\n",
" inputVariables: [ \"input\" ],\n",
" outputParser: undefined,\n",
" partialVariables: {},\n",
" template: \"{input}\",\n",
" partialVariables: undefined,\n",
" templateFormat: \"f-string\",\n",
" template: \"{input}\",\n",
" validateTemplate: true\n",
" }\n",
" },\n",
@@ -475,43 +472,45 @@
" prompt: PromptTemplate {\n",
" lc_serializable: true,\n",
" lc_kwargs: {\n",
" template: \"{input}\",\n",
" inputVariables: [ \"input\" ],\n",
" templateFormat: \"f-string\",\n",
" partialVariables: {}\n",
" template: \"{input}\"\n",
" },\n",
" lc_runnable: true,\n",
" name: undefined,\n",
" lc_namespace: [ \"langchain_core\", \"prompts\", \"prompt\" ],\n",
" inputVariables: [ \"input\" ],\n",
" outputParser: undefined,\n",
" partialVariables: {},\n",
" template: \"{input}\",\n",
" partialVariables: undefined,\n",
" templateFormat: \"f-string\",\n",
" template: \"{input}\",\n",
" validateTemplate: true\n",
" },\n",
" messageClass: undefined,\n",
" chatMessageClass: undefined\n",
" },\n",
" MessagesPlaceholder {\n",
" lc_serializable: true,\n",
" lc_kwargs: { optional: false, variableName: \"agent_scratchpad\" },\n",
" lc_kwargs: { variableName: \"agent_scratchpad\", optional: true },\n",
" lc_runnable: true,\n",
" name: undefined,\n",
" lc_namespace: [ \"langchain_core\", \"prompts\", \"chat\" ],\n",
" variableName: \"agent_scratchpad\",\n",
" optional: false\n",
" optional: true\n",
" }\n",
"]\n"
]
}
],
"source": [
"import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
"import { pull } from \"langchain/hub\";\n",
"\n",
"// Get the prompt to use - you can modify this!\n",
"const prompt = await pull<ChatPromptTemplate>(\"hwchase17/openai-functions-agent\");\n",
"const prompt = ChatPromptTemplate.fromMessages([\n",
" [\"system\", \"You are a helpful assistant\"],\n",
" [\"placeholder\", \"{chat_history}\"],\n",
" [\"human\", \"{input}\"],\n",
" [\"placeholder\", \"{agent_scratchpad}\"],\n",
"]);\n",
"\n",
"console.log(prompt.promptMessages);"
]
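The cells that actually build the agent from this prompt are unchanged and therefore collapsed in this diff. A minimal sketch of the usual shape, assuming the `model`, `tools`, and `prompt` defined in the cells above:

```typescript
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

// `model`, `tools` (e.g. the search and retriever tools), and `prompt` are
// assumed to come from the earlier cells of the notebook.
const agent = await createToolCallingAgent({ llm: model, tools, prompt });
const agentExecutor = new AgentExecutor({ agent, tools });

// Example invocation, matching the kind of output shown below:
await agentExecutor.invoke({ input: "whats the weather in sf?" });
```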
@@ -617,7 +616,7 @@
"text/plain": [
"{\n",
" input: \u001b[32m\"how can langsmith help with testing?\"\u001b[39m,\n",
" output: \u001b[32m\"LangSmith can help with testing by providing a platform for building production-grade LLM applicatio\"\u001b[39m... 880 more characters\n",
" output: \u001b[32m\"LangSmith can be a valuable tool for testing in several ways:\\n\"\u001b[39m +\n",
" \u001b[32m\"\\n\"\u001b[39m +\n",
" \u001b[32m\"1. **Logging Traces**: LangSmith prov\"\u001b[39m... 960 more characters\n",
"}"
]
},
@@ -651,7 +652,7 @@
"text/plain": [
"{\n",
" input: \u001b[32m\"whats the weather in sf?\"\u001b[39m,\n",
" output: \u001b[32m\"The current weather in San Francisco is partly cloudy with a temperature of 64.0°F (17.8°C). The win\"\u001b[39m... 112 more characters\n",
" output: \u001b[32m\"The current weather in San Francisco, California is partly cloudy with a temperature of 12.2°C (54.0\"\u001b[39m... 176 more characters\n",
"}"
]
},
@@ -753,7 +754,7 @@
" }\n",
" ],\n",
" input: \u001b[32m\"what's my name?\"\u001b[39m,\n",
" output: \u001b[32m\"Your name is Bob! How can I help you, Bob?\"\u001b[39m\n",
" output: \u001b[32m\"Your name is Bob. How can I assist you further?\"\u001b[39m\n",
"}"
]
},
@@ -785,8 +786,8 @@
"\n",
"Because we have multiple inputs, we need to specify two things:\n",
"\n",
"- `input_messages_key`: The input key to use to add to the conversation history.\n",
"- `history_messages_key`: The key to add the loaded messages into.\n",
"- `inputMessagesKey`: The input key to use to add to the conversation history.\n",
"- `historyMessagesKey`: The key to add the loaded messages into.\n",
"\n",
"For more information on how to use this, see [this guide](/docs/how_to/message_history). "
]
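The two keys above are passed to the message-history wrapper that the following (unchanged) cells build around the agent executor. A minimal sketch, assuming an in-memory store and the `agentExecutor` from earlier in the notebook:

```typescript
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

// One chat history per session id; `agentExecutor` is assumed from earlier cells.
const store: Record<string, ChatMessageHistory> = {};

const agentWithChatHistory = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  getMessageHistory: (sessionId) => {
    if (store[sessionId] === undefined) {
      store[sessionId] = new ChatMessageHistory();
    }
    return store[sessionId];
  },
  inputMessagesKey: "input", // which input field is recorded as the human message
  historyMessagesKey: "chat_history", // where loaded messages are injected into the prompt
});
```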
@@ -819,22 +820,22 @@
" AIMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
" lc_kwargs: {\n",
" content: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m,\n",
" content: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m,\n",
" tool_calls: [],\n",
" invalid_tool_calls: [],\n",
" additional_kwargs: {},\n",
" response_metadata: {}\n",
" },\n",
" lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
" content: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m,\n",
" content: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m,\n",
" name: \u001b[90mundefined\u001b[39m,\n",
" additional_kwargs: {},\n",
" response_metadata: {},\n",
" tool_calls: [],\n",
" invalid_tool_calls: []\n",
" }\n",
" ],\n",
" output: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m\n",
" output: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m\n",
"}"
]
},
@@ -898,14 +899,14 @@
" AIMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
" lc_kwargs: {\n",
" content: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m,\n",
" content: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m,\n",
" tool_calls: [],\n",
" invalid_tool_calls: [],\n",
" additional_kwargs: {},\n",
" response_metadata: {}\n",
" },\n",
" lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
" content: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m,\n",
" content: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m,\n",
" name: \u001b[90mundefined\u001b[39m,\n",
" additional_kwargs: {},\n",
" response_metadata: {},\n",
@@ -928,22 +929,22 @@
" AIMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
" lc_kwargs: {\n",
" content: \u001b[32m\"Your name is Bob! How can I help you, Bob?\"\u001b[39m,\n",
" content: \u001b[32m\"Your name is Bob. How can I assist you further?\"\u001b[39m,\n",
" tool_calls: [],\n",
" invalid_tool_calls: [],\n",
" additional_kwargs: {},\n",
" response_metadata: {}\n",
" },\n",
" lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
" content: \u001b[32m\"Your name is Bob! How can I help you, Bob?\"\u001b[39m,\n",
" content: \u001b[32m\"Your name is Bob. How can I assist you further?\"\u001b[39m,\n",
" name: \u001b[90mundefined\u001b[39m,\n",
" additional_kwargs: {},\n",
" response_metadata: {},\n",
" tool_calls: [],\n",
" invalid_tool_calls: []\n",
" }\n",
" ],\n",
" output: \u001b[32m\"Your name is Bob! How can I help you, Bob?\"\u001b[39m\n",
" output: \u001b[32m\"Your name is Bob. How can I assist you further?\"\u001b[39m\n",
"}"
]
},
@@ -954,8 +955,8 @@
],
"source": [
"await agentWithChatHistory.invoke(\n",
" { input: \"what's my name?\" },\n",
" { configurable: { sessionId: \"<foo>\" }},\n",
" { input: \"what's my name?\" },\n",
" { configurable: { sessionId: \"<foo>\" }},\n",
")"
]
},
@@ -972,12 +973,14 @@
"id": "c029798f",
"metadata": {},
"source": [
"## Conclusion\n",
"## Next steps\n",
"\n",
"That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n",
"\n",
":::{.callout-important}\n",
"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/#langgraph).\n",
"\n",
"You can also see [this guide to help migrate to LangGraph](/docs/how_to/migrate_agent).\n",
":::"
]
}
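The new migration guide itself is not visible in this truncated diff. As a hedged sketch of the kind of LangGraph equivalent it points to — replacing `AgentExecutor` with LangGraph's prebuilt ReAct-style agent — the general shape (an assumption, not a copy of the new doc) looks like:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { HumanMessage } from "@langchain/core/messages";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const model = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const tools = [new TavilySearchResults({ maxResults: 2 })];

// The prebuilt graph loops between the model and the tools until no further
// tool calls are produced, then returns the accumulated message state.
const app = createReactAgent({ llm: model, tools });

const result = await app.invoke({
  messages: [new HumanMessage("whats the weather in sf?")],
});
console.log(result.messages[result.messages.length - 1].content);
```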
1 change: 1 addition & 0 deletions docs/core_docs/docs/how_to/index.mdx
@@ -163,6 +163,7 @@ For in depth how-to guides for agents, please check out [LangGraph](https://lang
:::

- [How to: use legacy LangChain Agents (AgentExecutor)](/docs/how_to/agent_executor)
- [How to: migrate from legacy LangChain agents to LangGraph](/docs/how_to/migrate_agent)

### Callbacks

(Diffs for the remaining 3 changed files in this commit are not shown here.)
