8 | 8 | "\n",
9 | 9 | ":::info Prerequisites\n",
10 | 10 | "\n",
11 |  | - "This guide assumes familiarity with the following:\n",
 | 11 | + "This guide assumes familiarity with the following concepts:\n",
12 | 12 | "\n",
13 |  | - "- [Chatbots](/docs/tutorials/chatbot)\n",
14 |  | - "- [Tools](/docs/concepts#tools)\n",
 | 13 | + "- [Chatbots](/docs/concepts/#messages)\n",
 | 14 | + "- [Agents](/docs/tutorials/agents)\n",
 | 15 | + "- [Chat history](/docs/concepts/#chat-history)\n",
15 | 16 | "\n",
16 | 17 | ":::\n",
17 | 18 | "\n",
18 | 19 | "This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n",
19 | 20 | "\n",
20 | 21 | "## Setup\n",
21 | 22 | "\n",
22 |  | - "For this guide, we’ll be using an OpenAI tools agent with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you’re using Tavily.\n",
 | 23 | + "For this guide, we’ll be using a [tool calling agent](/docs/how_to/agent_executor) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you’re using Tavily.\n",
23 | 24 | "\n",
24 | 25 | "You’ll need to [sign up for an account on the Tavily website](https://tavily.com), and install the following packages:\n",
25 | 26 | "\n",

71 | 72 | " ChatPromptTemplate,\n",
72 | 73 | "} from \"@langchain/core/prompts\";\n",
73 | 74 | "\n",
74 |  | - "// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent\n",
 | 75 | + "// Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent\n",
75 | 76 | "const prompt = ChatPromptTemplate.fromMessages([\n",
76 | 77 | " [\n",
77 | 78 | " \"system\",\n",

95 | 96 | "metadata": {},
96 | 97 | "outputs": [],
97 | 98 | "source": [
98 |  | - "import { AgentExecutor, createOpenAIToolsAgent } from \"langchain/agents\";\n",
 | 99 | + "import { AgentExecutor, createToolCallingAgent } from \"langchain/agents\";\n",
99 | 100 | "\n",
100 |  | - "const agent = await createOpenAIToolsAgent({\n",
 | 101 | + "const agent = await createToolCallingAgent({\n",
101 | 102 | " llm,\n",
102 | 103 | " tools,\n",
103 | 104 | " prompt,\n",

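The hunk above swaps the OpenAI-specific `createOpenAIToolsAgent` for the provider-agnostic `createToolCallingAgent`. Either way, the executor built from the agent runs the same basic loop: ask the model, execute any tool calls it requests, feed the results back, and stop when the model answers directly. The following is a dependency-free sketch of that loop — `fakeModel`, `tools`, and `runAgent` are invented stand-ins for illustration, not LangChain APIs:

```typescript
// Minimal sketch of an agent executor's tool-calling loop.
// All names here are illustrative; real code would use
// createToolCallingAgent / AgentExecutor from "langchain/agents".

type ToolCall = { name: string; args: string };
type ModelOutput = { content: string; toolCalls: ToolCall[] };

// A stubbed "model": requests the search tool once, then answers.
function fakeModel(messages: string[]): ModelOutput {
  const sawToolResult = messages.some((m) => m.startsWith("tool:"));
  if (!sawToolResult) {
    return { content: "", toolCalls: [{ name: "search", args: "coral cover" }] };
  }
  return { content: "The reef's coral cover is at a record high.", toolCalls: [] };
}

// A stubbed web-search tool standing in for Tavily.
const tools: Record<string, (args: string) => string> = {
  search: (args) => `results for "${args}"`,
};

// The executor loop: call the model, run requested tools, append the
// results to the transcript, and stop once no more tools are requested.
function runAgent(input: string): string {
  const messages = [`human: ${input}`];
  for (let step = 0; step < 5; step++) {
    const out = fakeModel(messages);
    if (out.toolCalls.length === 0) return out.content;
    for (const call of out.toolCalls) {
      messages.push(`tool: ${tools[call.name](call.args)}`);
    }
  }
  return "max steps reached";
}

const answer = runAgent("How is the Great Barrier Reef doing?");
```

The step cap mirrors the real executor's max-iterations guard, which prevents a model that keeps requesting tools from looping forever.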
139 | 140 | " response_metadata: {}\n",
140 | 141 | " }\n",
141 | 142 | " ],\n",
142 |  | - " output: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
 | 143 | + " output: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
143 | 144 | "}"
144 | 145 | ]
145 | 146 | },

187 | 188 | " response_metadata: {}\n",
188 | 189 | " }\n",
189 | 190 | " ],\n",
190 |  | - " output: \u001b[32m\"The current conservation status of the Great Barrier Reef is a cause for concern. The International \"\u001b[39m... 801 more characters\n",
 | 191 | + " output: \u001b[32m\"The Great Barrier Reef has recorded its highest amount of coral cover since the Australian Institute\"\u001b[39m... 688 more characters\n",
191 | 192 | "}"
192 | 193 | ]
193 | 194 | },

253 | 254 | " additional_kwargs: {},\n",
254 | 255 | " response_metadata: {},\n",
255 | 256 | " tool_calls: [],\n",
256 |  | - " invalid_tool_calls: []\n",
 | 257 | + " invalid_tool_calls: [],\n",
 | 258 | + " usage_metadata: \u001b[90mundefined\u001b[39m\n",
257 | 259 | " },\n",
258 | 260 | " HumanMessage {\n",
259 | 261 | " lc_serializable: \u001b[33mtrue\u001b[39m,\n",

294 | 296 | "cell_type": "markdown",
295 | 297 | "metadata": {},
296 | 298 | "source": [
297 |  | - "If preferred, you can also wrap the agent executor in a `RunnableWithMessageHistory` class to internally manage history messages. First, we need to slightly modify the prompt to take a separate input variable so that the wrapper can parse which input value to store as history:\n"
 | 299 | + "If preferred, you can also wrap the agent executor in a [`RunnableWithMessageHistory`](/docs/how_to/message_history/) class to internally manage history messages. Let's redeclare it this way:"
298 | 300 | ]
299 | 301 | },
300 | 302 | {

303 | 305 | "metadata": {},
304 | 306 | "outputs": [],
305 | 307 | "source": [
306 |  | - "// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent\n",
307 |  | - "const prompt2 = ChatPromptTemplate.fromMessages([\n",
308 |  | - " [\n",
309 |  | - " \"system\",\n",
310 |  | - " \"You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\",\n",
311 |  | - " ],\n",
312 |  | - " [\"placeholder\", \"{chat_history}\"],\n",
313 |  | - " [\"human\", \"{input}\"],\n",
314 |  | - " [\"placeholder\", \"{agent_scratchpad}\"],\n",
315 |  | - "]);\n",
316 |  | - "\n",
317 |  | - "const agent2 = await createOpenAIToolsAgent({\n",
 | 308 | + "const agent2 = await createToolCallingAgent({\n",
318 | 309 | " llm,\n",
319 | 310 | " tools,\n",
320 |  | - " prompt: prompt2,\n",
 | 311 | + " prompt,\n",
321 | 312 | "});\n",
322 | 313 | "\n",
323 | 314 | "const agentExecutor2 = new AgentExecutor({ agent: agent2, tools });"

332 | 323 | },
333 | 324 | {
334 | 325 | "cell_type": "code",
335 |  | - "execution_count": 9,
336 |  | - "metadata": {},
337 |  | - "outputs": [],
338 |  | - "source": [
339 |  | - "import { ChatMessageHistory } from \"langchain/stores/message/in_memory\";\n",
340 |  | - "import { RunnableWithMessageHistory } from \"@langchain/core/runnables\";\n",
341 |  | - "\n",
342 |  | - "const demoEphemeralChatMessageHistory = new ChatMessageHistory();\n",
343 |  | - "\n",
344 |  | - "const conversationalAgentExecutor = new RunnableWithMessageHistory({\n",
345 |  | - " runnable: agentExecutor2,\n",
346 |  | - " getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,\n",
347 |  | - " inputMessagesKey: \"input\",\n",
348 |  | - " outputMessagesKey: \"output\",\n",
349 |  | - " historyMessagesKey: \"chat_history\",\n",
350 |  | - "});"
351 |  | - ]
352 |  | - },
353 |  | - {
354 |  | - "cell_type": "code",
355 |  | - "execution_count": 10,
 | 326 | + "execution_count": 11,
356 | 327 | "metadata": {},
357 | 328 | "outputs": [
358 | 329 | {
359 | 330 | "data": {
360 | 331 | "text/plain": [
361 | 332 | "{\n",
362 |  | - " input: \u001b[32m\"I'm Nemo!\"\u001b[39m,\n",
363 |  | - " chat_history: [\n",
 | 333 | + " messages: [\n",
364 | 334 | " HumanMessage {\n",
365 | 335 | " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
366 | 336 | " lc_kwargs: {\n",

373 | 343 | " name: \u001b[90mundefined\u001b[39m,\n",
374 | 344 | " additional_kwargs: {},\n",
375 | 345 | " response_metadata: {}\n",
376 |  | - " },\n",
377 |  | - " AIMessage {\n",
378 |  | - " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
379 |  | - " lc_kwargs: {\n",
380 |  | - " content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
381 |  | - " tool_calls: [],\n",
382 |  | - " invalid_tool_calls: [],\n",
383 |  | - " additional_kwargs: {},\n",
384 |  | - " response_metadata: {}\n",
385 |  | - " },\n",
386 |  | - " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
387 |  | - " content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
388 |  | - " name: \u001b[90mundefined\u001b[39m,\n",
389 |  | - " additional_kwargs: {},\n",
390 |  | - " response_metadata: {},\n",
391 |  | - " tool_calls: [],\n",
392 |  | - " invalid_tool_calls: []\n",
393 | 346 | " }\n",
394 | 347 | " ],\n",
395 |  | - " output: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
 | 348 | + " output: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
396 | 349 | "}"
397 | 350 | ]
398 | 351 | },
399 |  | - "execution_count": 10,
 | 352 | + "execution_count": 11,
400 | 353 | "metadata": {},
401 | 354 | "output_type": "execute_result"
402 | 355 | }
403 | 356 | ],
404 | 357 | "source": [
 | 358 | + "import { ChatMessageHistory } from \"langchain/stores/message/in_memory\";\n",
 | 359 | + "import { RunnableWithMessageHistory } from \"@langchain/core/runnables\";\n",
 | 360 | + "\n",
 | 361 | + "const demoEphemeralChatMessageHistory = new ChatMessageHistory();\n",
 | 362 | + "\n",
 | 363 | + "const conversationalAgentExecutor = new RunnableWithMessageHistory({\n",
 | 364 | + " runnable: agentExecutor2,\n",
 | 365 | + " getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,\n",
 | 366 | + " inputMessagesKey: \"messages\",\n",
 | 367 | + " outputMessagesKey: \"output\",\n",
 | 368 | + "});\n",
 | 369 | + "\n",
405 | 370 | "await conversationalAgentExecutor.invoke(\n",
406 |  | - " { input: \"I'm Nemo!\" },\n",
 | 371 | + " { messages: [new HumanMessage(\"I'm Nemo!\")] },\n",
407 | 372 | " { configurable: { sessionId: \"unused\" } }\n",
408 | 373 | ");"
409 | 374 | ]
410 | 375 | },
411 | 376 | {
412 | 377 | "cell_type": "code",
413 |  | - "execution_count": 11,
 | 378 | + "execution_count": 12,
414 | 379 | "metadata": {},
415 | 380 | "outputs": [
416 | 381 | {
417 | 382 | "data": {
418 | 383 | "text/plain": [
419 | 384 | "{\n",
420 |  | - " input: \u001b[32m\"What is my name?\"\u001b[39m,\n",
421 |  | - " chat_history: [\n",
 | 385 | + " messages: [\n",
422 | 386 | " HumanMessage {\n",
423 | 387 | " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
424 | 388 | " lc_kwargs: {\n",

435 | 399 | " AIMessage {\n",
436 | 400 | " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
437 | 401 | " lc_kwargs: {\n",
438 |  | - " content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
 | 402 | + " content: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
439 | 403 | " tool_calls: [],\n",
440 | 404 | " invalid_tool_calls: [],\n",
441 | 405 | " additional_kwargs: {},\n",
442 | 406 | " response_metadata: {}\n",
443 | 407 | " },\n",
444 | 408 | " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
445 |  | - " content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
 | 409 | + " content: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
446 | 410 | " name: \u001b[90mundefined\u001b[39m,\n",
447 | 411 | " additional_kwargs: {},\n",
448 | 412 | " response_metadata: {},\n",
449 | 413 | " tool_calls: [],\n",
450 |  | - " invalid_tool_calls: []\n",
 | 414 | + " invalid_tool_calls: [],\n",
 | 415 | + " usage_metadata: \u001b[90mundefined\u001b[39m\n",
451 | 416 | " },\n",
452 | 417 | " HumanMessage {\n",
453 | 418 | " lc_serializable: \u001b[33mtrue\u001b[39m,\n",

461 | 426 | " name: \u001b[90mundefined\u001b[39m,\n",
462 | 427 | " additional_kwargs: {},\n",
463 | 428 | " response_metadata: {}\n",
464 |  | - " },\n",
465 |  | - " AIMessage {\n",
466 |  | - " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
467 |  | - " lc_kwargs: {\n",
468 |  | - " content: \u001b[32m\"Your name is Nemo!\"\u001b[39m,\n",
469 |  | - " tool_calls: [],\n",
470 |  | - " invalid_tool_calls: [],\n",
471 |  | - " additional_kwargs: {},\n",
472 |  | - " response_metadata: {}\n",
473 |  | - " },\n",
474 |  | - " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
475 |  | - " content: \u001b[32m\"Your name is Nemo!\"\u001b[39m,\n",
476 |  | - " name: \u001b[90mundefined\u001b[39m,\n",
477 |  | - " additional_kwargs: {},\n",
478 |  | - " response_metadata: {},\n",
479 |  | - " tool_calls: [],\n",
480 |  | - " invalid_tool_calls: []\n",
481 | 429 | " }\n",
482 | 430 | " ],\n",
483 | 431 | " output: \u001b[32m\"Your name is Nemo!\"\u001b[39m\n",
484 | 432 | "}"
485 | 433 | ]
486 | 434 | },
487 |  | - "execution_count": 11,
 | 435 | + "execution_count": 12,
488 | 436 | "metadata": {},
489 | 437 | "output_type": "execute_result"
490 | 438 | }
491 | 439 | ],
492 | 440 | "source": [
493 | 441 | "await conversationalAgentExecutor.invoke(\n",
494 |  | - " { input: \"What is my name?\" },\n",
 | 442 | + " { messages: [new HumanMessage(\"What is my name?\")] },\n",
495 | 443 | " { configurable: { sessionId: \"unused\" } }\n",
496 | 444 | ");"
497 | 445 | ]
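
The hunks above rewire `RunnableWithMessageHistory` to read inputs from a `messages` key and store outputs under `output`. The bookkeeping it performs each turn — fetch the session's stored messages, run the wrapped runnable with history prepended, then persist both the new inputs and the output — can be sketched without any dependencies. Every name below (`withMessageHistory`, `stubAgent`, the `Msg` shape) is invented for illustration; the real class lives in `@langchain/core/runnables`:

```typescript
// Dependency-free sketch of the bookkeeping RunnableWithMessageHistory
// does around a wrapped runnable. Names are invented for illustration.

type Msg = { role: "human" | "ai"; content: string };
type Runnable = (input: { messages: Msg[] }) => { output: string };

// Per-session in-memory stores, playing the role of ChatMessageHistory.
const histories = new Map<string, Msg[]>();

function withMessageHistory(runnable: Runnable) {
  return (input: { messages: Msg[] }, sessionId: string) => {
    const history = histories.get(sessionId) ?? [];
    // Prepend the stored history to the incoming messages...
    const result = runnable({ messages: [...history, ...input.messages] });
    // ...then persist both the new inputs and the output for next turn.
    histories.set(sessionId, [
      ...history,
      ...input.messages,
      { role: "ai", content: result.output },
    ]);
    return result;
  };
}

// A stubbed agent that can only recall a name from its message window.
const stubAgent: Runnable = ({ messages }) => ({
  output: messages.some((m) => m.content.includes("Nemo"))
    ? "Your name is Nemo!"
    : "I don't know your name.",
});

const wrapped = withMessageHistory(stubAgent);
wrapped({ messages: [{ role: "human", content: "I'm Nemo!" }] }, "unused");
const second = wrapped(
  { messages: [{ role: "human", content: "What is my name?" }] },
  "unused"
);
// Because stored history is threaded in, the second call still sees "I'm Nemo!".
```

This is why the second `invoke` in the diff can answer "Your name is Nemo!" even though its own input never mentions the name: the wrapper, not the agent, carries the memory between calls.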