
Commit a4a2d11

Add callout, update chatbot tools guide (#5883)
1 parent ff0ac6e commit a4a2d11

2 files changed (+48, -93 lines)


docs/core_docs/docs/concepts.mdx

Lines changed: 8 additions & 1 deletion
@@ -160,8 +160,15 @@ For specifics on how to use chat models, see the [relevant how-to guides here](/
 
 <span data-heading-keywords="llm,llms"></span>
 
+:::caution
+Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/docs/concepts/#chat-models),
+even for non-chat use cases.
+
+You are probably looking for [the section above instead](/docs/concepts/#chat-models).
+:::
+
 Language models that takes a string as input and returns a string.
-These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see below).
+These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see above).
 
 Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
 This gives them the same interface as [Chat Models](/docs/concepts/#chat-models).
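To make the distinction behind this callout concrete, here is a minimal sketch, assuming the `@langchain/openai` package; the model names are illustrative and not part of the diff:

```typescript
import { OpenAI, ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

// Text-in/text-out LLM wrapper: prompt string in, plain string out.
// The model name here is an assumption for illustration.
const llm = new OpenAI({ model: "gpt-3.5-turbo-instruct" });
const completion: string = await llm.invoke("Write a haiku about coral reefs.");

// The same wrapper also accepts messages, which is what gives it the
// chat model interface mentioned above...
const fromMessages = await llm.invoke([new HumanMessage("Write a haiku about coral reefs.")]);

// ...but for most modern use cases you would reach for a chat model directly,
// which returns an AIMessage with content, tool calls, usage metadata, etc.
const chatModel = new ChatOpenAI({ model: "gpt-4o-mini" });
const aiMessage = await chatModel.invoke([new HumanMessage("Write a haiku about coral reefs.")]);
```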

docs/core_docs/docs/how_to/chatbots_tools.ipynb

Lines changed: 40 additions & 92 deletions
@@ -8,18 +8,19 @@
 "\n",
 ":::info Prerequisites\n",
 "\n",
-"This guide assumes familiarity with the following:\n",
+"This guide assumes familiarity with the following concepts:\n",
 "\n",
-"- [Chatbots](/docs/tutorials/chatbot)\n",
-"- [Tools](/docs/concepts#tools)\n",
+"- [Chatbots](/docs/concepts/#messages)\n",
+"- [Agents](/docs/tutorials/agents)\n",
+"- [Chat history](/docs/concepts/#chat-history)\n",
 "\n",
 ":::\n",
 "\n",
 "This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n",
 "\n",
 "## Setup\n",
 "\n",
-"For this guide, we’ll be using an OpenAI tools agent with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you’re using Tavily.\n",
+"For this guide, we’ll be using a [tool calling agent](/docs/how_to/agent_executor) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you’re using Tavily.\n",
 "\n",
 "You’ll need to [sign up for an account on the Tavily website](https://tavily.com), and install the following packages:\n",
 "\n",
@@ -71,7 +72,7 @@
 " ChatPromptTemplate,\n",
 "} from \"@langchain/core/prompts\";\n",
 "\n",
-"// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent\n",
+"// Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent\n",
 "const prompt = ChatPromptTemplate.fromMessages([\n",
 " [\n",
 " \"system\",\n",
@@ -95,9 +96,9 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import { AgentExecutor, createOpenAIToolsAgent } from \"langchain/agents\";\n",
+"import { AgentExecutor, createToolCallingAgent } from \"langchain/agents\";\n",
 "\n",
-"const agent = await createOpenAIToolsAgent({\n",
+"const agent = await createToolCallingAgent({\n",
 " llm,\n",
 " tools,\n",
 " prompt,\n",
@@ -139,7 +140,7 @@
 " response_metadata: {}\n",
 " }\n",
 " ],\n",
-" output: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
+" output: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
 "}"
 ]
 },
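The output above would come from an invocation along these lines; the exact input key is not visible in this hunk, so `messages` is an inference from the later cells:

```typescript
import { HumanMessage } from "@langchain/core/messages";

// Assumes `agentExecutor` from the previous sketch.
await agentExecutor.invoke({
  messages: [new HumanMessage("I'm Nemo!")],
});
```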
@@ -187,7 +188,7 @@
 " response_metadata: {}\n",
 " }\n",
 " ],\n",
-" output: \u001b[32m\"The current conservation status of the Great Barrier Reef is a cause for concern. The International \"\u001b[39m... 801 more characters\n",
+" output: \u001b[32m\"The Great Barrier Reef has recorded its highest amount of coral cover since the Australian Institute\"\u001b[39m... 688 more characters\n",
 "}"
 ]
 },
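For a query that actually needs the search tool, the call presumably looks the same, just with a question the model cannot answer on its own; the question text and input key here are assumptions reconstructed from the output:

```typescript
import { HumanMessage } from "@langchain/core/messages";

// Assumes `agentExecutor` and the Tavily tool from the sketches above.
await agentExecutor.invoke({
  messages: [new HumanMessage("What is the current conservation status of the Great Barrier Reef?")],
});
```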
@@ -253,7 +254,8 @@
 " additional_kwargs: {},\n",
 " response_metadata: {},\n",
 " tool_calls: [],\n",
-" invalid_tool_calls: []\n",
+" invalid_tool_calls: [],\n",
+" usage_metadata: \u001b[90mundefined\u001b[39m\n",
 " },\n",
 " HumanMessage {\n",
 " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
@@ -294,7 +296,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"If preferred, you can also wrap the agent executor in a `RunnableWithMessageHistory` class to internally manage history messages. First, we need to slightly modify the prompt to take a separate input variable so that the wrapper can parse which input value to store as history:\n"
+"If preferred, you can also wrap the agent executor in a [`RunnableWithMessageHistory`](/docs/how_to/message_history/) class to internally manage history messages. Let's redeclare it this way:"
 ]
 },
 {
@@ -303,21 +305,10 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent\n",
-"const prompt2 = ChatPromptTemplate.fromMessages([\n",
-" [\n",
-" \"system\",\n",
-" \"You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\",\n",
-" ],\n",
-" [\"placeholder\", \"{chat_history}\"],\n",
-" [\"human\", \"{input}\"],\n",
-" [\"placeholder\", \"{agent_scratchpad}\"],\n",
-"]);\n",
-"\n",
-"const agent2 = await createOpenAIToolsAgent({\n",
+"const agent2 = await createToolCallingAgent({\n",
 " llm,\n",
 " tools,\n",
-" prompt: prompt2,\n",
+" prompt,\n",
 "});\n",
 "\n",
 "const agentExecutor2 = new AgentExecutor({ agent: agent2, tools });"
@@ -332,35 +323,14 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
-"metadata": {},
-"outputs": [],
-"source": [
-"import { ChatMessageHistory } from \"langchain/stores/message/in_memory\";\n",
-"import { RunnableWithMessageHistory } from \"@langchain/core/runnables\";\n",
-"\n",
-"const demoEphemeralChatMessageHistory = new ChatMessageHistory();\n",
-"\n",
-"const conversationalAgentExecutor = new RunnableWithMessageHistory({\n",
-" runnable: agentExecutor2,\n",
-" getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,\n",
-" inputMessagesKey: \"input\",\n",
-" outputMessagesKey: \"output\",\n",
-" historyMessagesKey: \"chat_history\",\n",
-"});"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 10,
+"execution_count": 11,
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
 "{\n",
-" input: \u001b[32m\"I'm Nemo!\"\u001b[39m,\n",
-" chat_history: [\n",
+" messages: [\n",
 " HumanMessage {\n",
 " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
 " lc_kwargs: {\n",
@@ -373,52 +343,46 @@
 " name: \u001b[90mundefined\u001b[39m,\n",
 " additional_kwargs: {},\n",
 " response_metadata: {}\n",
-" },\n",
-" AIMessage {\n",
-" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
-" lc_kwargs: {\n",
-" content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
-" tool_calls: [],\n",
-" invalid_tool_calls: [],\n",
-" additional_kwargs: {},\n",
-" response_metadata: {}\n",
-" },\n",
-" lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
-" content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
-" name: \u001b[90mundefined\u001b[39m,\n",
-" additional_kwargs: {},\n",
-" response_metadata: {},\n",
-" tool_calls: [],\n",
-" invalid_tool_calls: []\n",
 " }\n",
 " ],\n",
-" output: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
+" output: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
 "}"
 ]
 },
-"execution_count": 10,
+"execution_count": 11,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
+"import { ChatMessageHistory } from \"langchain/stores/message/in_memory\";\n",
+"import { RunnableWithMessageHistory } from \"@langchain/core/runnables\";\n",
+"\n",
+"const demoEphemeralChatMessageHistory = new ChatMessageHistory();\n",
+"\n",
+"const conversationalAgentExecutor = new RunnableWithMessageHistory({\n",
+" runnable: agentExecutor2,\n",
+" getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,\n",
+" inputMessagesKey: \"messages\",\n",
+" outputMessagesKey: \"output\",\n",
+"});\n",
+"\n",
 "await conversationalAgentExecutor.invoke(\n",
-" { input: \"I'm Nemo!\" },\n",
+" { messages: [new HumanMessage(\"I'm Nemo!\")] },\n",
 " { configurable: { sessionId: \"unused\" } }\n",
 ");"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 11,
+"execution_count": 12,
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
 "{\n",
-" input: \u001b[32m\"What is my name?\"\u001b[39m,\n",
-" chat_history: [\n",
+" messages: [\n",
 " HumanMessage {\n",
 " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
 " lc_kwargs: {\n",
@@ -435,19 +399,20 @@
 " AIMessage {\n",
 " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
 " lc_kwargs: {\n",
-" content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
+" content: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
 " tool_calls: [],\n",
 " invalid_tool_calls: [],\n",
 " additional_kwargs: {},\n",
 " response_metadata: {}\n",
 " },\n",
 " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
-" content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
+" content: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
 " name: \u001b[90mundefined\u001b[39m,\n",
 " additional_kwargs: {},\n",
 " response_metadata: {},\n",
 " tool_calls: [],\n",
-" invalid_tool_calls: []\n",
+" invalid_tool_calls: [],\n",
+" usage_metadata: \u001b[90mundefined\u001b[39m\n",
 " },\n",
 " HumanMessage {\n",
 " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
@@ -461,37 +426,20 @@
 " name: \u001b[90mundefined\u001b[39m,\n",
 " additional_kwargs: {},\n",
 " response_metadata: {}\n",
-" },\n",
-" AIMessage {\n",
-" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
-" lc_kwargs: {\n",
-" content: \u001b[32m\"Your name is Nemo!\"\u001b[39m,\n",
-" tool_calls: [],\n",
-" invalid_tool_calls: [],\n",
-" additional_kwargs: {},\n",
-" response_metadata: {}\n",
-" },\n",
-" lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
-" content: \u001b[32m\"Your name is Nemo!\"\u001b[39m,\n",
-" name: \u001b[90mundefined\u001b[39m,\n",
-" additional_kwargs: {},\n",
-" response_metadata: {},\n",
-" tool_calls: [],\n",
-" invalid_tool_calls: []\n",
 " }\n",
 " ],\n",
 " output: \u001b[32m\"Your name is Nemo!\"\u001b[39m\n",
 "}"
 ]
 },
-"execution_count": 11,
+"execution_count": 12,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
 "await conversationalAgentExecutor.invoke(\n",
-" { input: \"What is my name?\" },\n",
+" { messages: [new HumanMessage(\"What is my name?\")] },\n",
 " { configurable: { sessionId: \"unused\" } }\n",
 ");"
 ]
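A follow-up call with the same `sessionId`, taken straight from the added source lines above, then sees the stored history, which is what produces the "Your name is Nemo!" output:

```typescript
// Second turn against the same session: the wrapper injects the saved messages.
// Assumes `conversationalAgentExecutor` and `HumanMessage` from the previous sketch.
await conversationalAgentExecutor.invoke(
  { messages: [new HumanMessage("What is my name?")] },
  { configurable: { sessionId: "unused" } }
);
```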
