diff --git a/langgraph_engineer/data/docs.json b/langgraph_engineer/data/docs.json
deleted file mode 100644
index b361205..0000000
--- a/langgraph_engineer/data/docs.json
+++ /dev/null
@@ -1 +0,0 @@
-[{"lc": 1, "type": "constructor", "id": ["langchain", "schema", "document", "Document"], "kwargs": {"page_content": "# \ud83e\udd9c\ud83d\udd78\ufe0fLangGraph\n\n\u26a1 Building language agents as graphs \u26a1\n\n## Overview\n\nLangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain.\nIt extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.\nIt is inspired by Pregel and Apache Beam.\nThe current interface exposed is one inspired by NetworkX.\n\nThe main use is for adding cycles to your LLM application.\nCrucially, this is NOT a DAG framework.\nIf you want to build a DAG, you should just use LangChain Expression Language.\n\nCycles are important for agent-like behaviors, where you call an LLM in a loop, asking it what action to take next.\n\n## Installation\n\n```\npip install langgraph\n```\n\n## Quick Start\n\nHere we will go over an example of creating a simple agent that uses chat models and function calling.\nThis agent will represent all its state as a list of messages.\n\nWe will need to install some LangChain packages, as well as Tavily to use as an example tool.\n\n```\npip install -U langchain langchain_openai tavily-python\n```\n\nWe also need to export some environment variables for OpenAI and Tavily API access.\n\n```\nexport OPENAI_API_KEY=sk-...\nexport TAVILY_API_KEY=tvly-...\n```\n\nOptionally, we can set up LangSmith for best-in-class observability.\n\n```\nexport LANGCHAIN_TRACING_V2=\"true\"\nexport LANGCHAIN_API_KEY=ls__...\n```\n\n### Set up the tools\n\nWe will first define the tools we want to use.\nFor this simple example, we will use a built-in search tool via Tavily.\nHowever, it is really easy to create your own tools - see the documentation here on how to do that, and the sketch just below.\n\n```\nfrom langchain_community.tools.tavily_search import TavilySearchResults\n\ntools = [TavilySearchResults(max_results=1)]\n```\n\n
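For illustration, here is a minimal sketch of a custom tool, assuming the @tool decorator from langchain_core.tools (the word_count tool itself is hypothetical):\n\n```\nfrom langchain_core.tools import tool\n\n\n@tool\ndef word_count(text: str) -> int:\n    \"\"\"Count the number of words in a piece of text.\"\"\"\n    return len(text.split())\n\n\n# A custom tool can be included in the tools list just like a built-in one:\n# tools = [word_count]\n```\n\n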
We can now wrap these tools in a simple LangGraph ToolExecutor.\nThis is a simple class that receives ToolInvocation objects, calls the corresponding tool, and returns the output.\nToolInvocation is any class with tool and tool_input attributes.\n\n```\nfrom langgraph.prebuilt import ToolExecutor\n\ntool_executor = ToolExecutor(tools)\n```\n\n### Set up the model\n\nNow we need to load the chat model we want to use.\nImportantly, this should satisfy two criteria: it should work with messages, since we will represent all agent state as a list of messages, and it should support OpenAI function calling.\n\nNote: these model requirements are not requirements for using LangGraph - they are just requirements for this one example.\n\n```\nfrom langchain_openai import ChatOpenAI\n\n# We will set streaming=True so that we can stream tokens\n# See the streaming section for more information on this.\nmodel = ChatOpenAI(temperature=0, streaming=True)\n```\n\nAfter we've done this, we should make sure the model knows that it has these tools available to call.\nWe can do this by converting the LangChain tools into the format for OpenAI function calling, and then binding them to the model class.\n\n```\nfrom langchain.tools.render import format_tool_to_openai_function\n\nfunctions = [format_tool_to_openai_function(t) for t in tools]\nmodel = model.bind_functions(functions)\n```\n\n### Define the agent state\n\nThe main type of graph in langgraph is the StateGraph.\nThis graph is parameterized by a state object that it passes around to each node.\nEach node then returns operations to update that state.\nThese operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute.\nWhether to set or add is denoted by annotating the state object you construct the graph with.\n\nFor this example, the state we will track will just be a list of messages.\nWe want each node to just add messages to that list.\nTherefore, we will use a TypedDict with one key (messages) and annotate it so that the messages attribute is always added to.\n\n```\nfrom typing import TypedDict, Annotated, Sequence\nimport operator\n\nfrom langchain_core.messages import BaseMessage\n\n\nclass AgentState(TypedDict):\n    messages: Annotated[Sequence[BaseMessage], operator.add]\n```\n\n
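To make the SET-versus-ADD distinction concrete, here is a small illustrative sketch (plain Python, separate from the agent itself) of what the operator.add annotation implies for state updates:\n\n```\nimport operator\n\n# Without an annotation, a node's return value would overwrite the attribute.\n# With operator.add, the returned value is combined with the existing one:\nexisting = [\"first message\"]\nupdate = [\"second message\"]\nprint(operator.add(existing, update))  # ['first message', 'second message']\n```\n\n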
### Define the nodes\n\nWe now need to define a few different nodes in our graph.\nIn langgraph, a node can be either a function or a runnable.\nThere are two main nodes we need for this: the agent (responsible for deciding what, if any, action to take) and a function to invoke tools (which executes the action the agent chose).\n\nWe will also need to define some edges.\nSome of these edges may be conditional.\nThe reason they are conditional is that based on the output of a node, one of several paths may be taken.\nThe path that is taken is not known until that node is run (the LLM decides).\n\nConditional Edge: after the agent is called, we should either:\n\na. If the agent said to take an action, then the function to invoke tools should be called\n\nb. If the agent said that it was finished, then it should finish\n\nNormal Edge: after the tools are invoked, it should always go back to the agent to decide what to do next\n\nLet's define the nodes, as well as a function to decide which conditional edge to take.\n\n```\nfrom langgraph.prebuilt import ToolInvocation\nimport json\nfrom langchain_core.messages import FunctionMessage\n\n\n# Define the function that determines whether to continue or not\ndef should_continue(state):\n    messages = state['messages']\n    last_message = messages[-1]\n    # If there is no function call, then we finish\n    if \"function_call\" not in last_message.additional_kwargs:\n        return \"end\"\n    # Otherwise if there is, we continue\n    else:\n        return \"continue\"\n\n\n# Define the function that calls the model\ndef call_model(state):\n    messages = state['messages']\n    response = model.invoke(messages)\n    # We return a list, because this will get added to the existing list\n    return {\"messages\": [response]}\n\n\n# Define the function to execute tools\ndef call_tool(state):\n    messages = state['messages']\n    # Based on the continue condition\n    # we know the last message involves a function call\n    last_message = messages[-1]\n    # We construct a ToolInvocation from the function_call\n    action = ToolInvocation(\n        tool=last_message.additional_kwargs[\"function_call\"][\"name\"],\n        tool_input=json.loads(last_message.additional_kwargs[\"function_call\"][\"arguments\"]),\n    )\n    # We call the tool_executor and get back a response\n    response = tool_executor.invoke(action)\n    # We use the response to create a FunctionMessage\n    function_message = FunctionMessage(content=str(response), name=action.tool)\n    # We return a list, because this will get added to the existing list\n    return {\"messages\": [function_message]}\n```\n\n### Define the graph\n\nWe can now put it all together and define the graph!\n\n```\nfrom langgraph.graph import StateGraph, END\n\n# Define a new graph\nworkflow = StateGraph(AgentState)\n\n# Define the two nodes we will cycle between\nworkflow.add_node(\"agent\", call_model)\nworkflow.add_node(\"action\", call_tool)\n\n# Set the entrypoint as `agent`\n# This means that this node is the first one called\nworkflow.set_entry_point(\"agent\")\n\n# We now add a conditional edge\nworkflow.add_conditional_edges(\n    # First, we define the start node. We use `agent`.\n    # This means these are the edges taken after the `agent` node is called.\n    \"agent\",\n    # Next, we pass in the function that will determine which node is called next.\n    should_continue,\n    # Finally we pass in a mapping.\n    # The keys are strings, and the values are other nodes.\n    # END is a special node marking that the graph should finish.\n    # What will happen is we will call `should_continue`, and then the output of that\n    # will be matched against the keys in this mapping.\n    # Based on which one it matches, that node will then be called.\n    {\n        # If `continue`, then we call the tool node.\n        \"continue\": \"action\",\n        # Otherwise we finish.\n        \"end\": END\n    }\n)\n\n# We now add a normal edge from `action` to `agent`.\n# This means that after `action` is called, the `agent` node is called next.\nworkflow.add_edge('action', 'agent')\n\n# Finally, we compile it!\n# This compiles it into a LangChain Runnable,\n# meaning you can use it as you would any other runnable\napp = workflow.compile()\n```\n\n### Use it!\n\nWe can now use it!\nThis now exposes the same interface as all other LangChain runnables.\nThis runnable accepts a list of messages.\n\n```\nfrom langchain_core.messages import HumanMessage\n\ninputs = {\"messages\": [HumanMessage(content=\"what is the weather in sf\")]}\napp.invoke(inputs)\n```\n\nThis may take a little bit - it's making a few calls behind the scenes.\nIn order to start seeing some intermediate results as they happen, we can use streaming - see below for more information on that.\n\n
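As a usage note: because the state is a TypedDict whose messages list only ever grows, the final answer is simply the content of the last message in the returned state. A minimal sketch, assuming the app and inputs defined above:\n\n```\nresult = app.invoke(inputs)\n\n# invoke() returns the final state; the last message is the agent's answer\nprint(result[\"messages\"][-1].content)\n```\n\n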
## Streaming\n\nLangGraph has support for several different types of streaming.\n\n### Streaming Node Output\n\nOne of the benefits of using LangGraph is that it is easy to stream output as it's produced by each node.\n\n```\ninputs = {\"messages\": [HumanMessage(content=\"what is the weather in sf\")]}\n\nfor output in app.stream(inputs):\n    # stream() yields dictionaries with output keyed by node name\n    for key, value in output.items():\n        print(f\"Output from node '{key}':\")\n        print(\"---\")\n        print(value)\n        print(\"\\n---\\n\")\n```\n\n```\nOutput from node 'agent':\n---\n{'messages': [AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\\n \"query\": \"weather in San Francisco\"\\n}', 'name': 'tavily_search_results_json'}})]}\n\n---\n\nOutput from node 'action':\n---\n{'messages': [FunctionMessage(content=\"[{'url': 'https://weatherspark.com/h/m/557/2024/1/Historical-Weather-in-January-2024-in-San-Francisco-California-United-States', 'content': 'January 2024 Weather History in San Francisco California, United States Daily Precipitation in January 2024 in San Francisco Observed Weather in January 2024 in San Francisco San Francisco Temperature History January 2024 Hourly Temperature in January 2024 in San Francisco Hours of Daylight and Twilight in January 2024 in San FranciscoThis report shows the past weather for San Francisco, providing a weather history for January 2024. It features all historical weather data series we have available, including the San Francisco temperature history for January 2024. You can drill down from year to month and even day level reports by clicking on the graphs.'}]\", name='tavily_search_results_json')]}\n\n---\n\nOutput from node 'agent':\n---\n{'messages': [AIMessage(content=\"I couldn't find the current weather in San Francisco. However, you can visit [WeatherSpark](https://weatherspark.com/h/m/557/2024/1/Historical-Weather-in-January-2024-in-San-Francisco-California-United-States) to check the historical weather data for January 2024 in San Francisco.\")]}\n\n---\n\nOutput from node '__end__':\n---\n{'messages': [HumanMessage(content='what is the weather in sf'), AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\\n \"query\": \"weather in San Francisco\"\\n}', 'name': 'tavily_search_results_json'}}), FunctionMessage(content=\"[{'url': 'https://weatherspark.com/h/m/557/2024/1/Historical-Weather-in-January-2024-in-San-Francisco-California-United-States', 'content': 'January 2024 Weather History in San Francisco California, United States Daily Precipitation in January 2024 in San Francisco Observed Weather in January 2024 in San Francisco San Francisco Temperature History January 2024 Hourly Temperature in January 2024 in San Francisco Hours of Daylight and Twilight in January 2024 in San FranciscoThis report shows the past weather for San Francisco, providing a weather history for January 2024. It features all historical weather data series we have available, including the San Francisco temperature history for January 2024. You can drill down from year to month and even day level reports by clicking on the graphs.'}]\", name='tavily_search_results_json'), AIMessage(content=\"I couldn't find the current weather in San Francisco. However, you can visit [WeatherSpark](https://weatherspark.com/h/m/557/2024/1/Historical-Weather-in-January-2024-in-San-Francisco-California-United-States) to check the historical weather data for January 2024 in San Francisco.\")]}\n\n---\n```\n\n### Streaming LLM Tokens\n\nYou can also access the LLM tokens as they are produced by each node.\nIn this case only the \"agent\" node produces LLM tokens.\nIn order for this to work properly, you must be using an LLM that supports streaming and have enabled it when constructing the LLM (e.g. ChatOpenAI(model=\"gpt-3.5-turbo-1106\", streaming=True))\n\n```\ninputs = {\"messages\": [HumanMessage(content=\"what is the weather in sf\")]}\n\nasync for output in app.astream_log(inputs, include_types=[\"llm\"]):\n    # astream_log() yields the requested logs (here LLMs) in JSONPatch format\n    for op in output.ops:\n        if op[\"path\"] == \"/streamed_output/-\":\n            # this is the output from .stream()\n            ...\n        elif op[\"path\"].startswith(\"/logs/\") and op[\"path\"].endswith(\n            \"/streamed_output/-\"\n        ):\n            # because we chose to only include LLMs, these are LLM tokens\n            print(op[\"value\"])\n```\n\n```\ncontent='' additional_kwargs={'function_call': {'arguments': '', 'name': 'tavily_search_results_json'}}\ncontent='' additional_kwargs={'function_call': {'arguments': '{\\n', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': ' ', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': ' \"', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': 'query', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': '\":', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': ' \"', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': 'weather', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': ' in', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': ' San', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': ' Francisco', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': '\"\\n', 'name': ''}}\ncontent='' additional_kwargs={'function_call': {'arguments': '}', 'name': ''}}\ncontent=''\ncontent=''\ncontent='I'\ncontent=\"'m\"\ncontent=' sorry'\ncontent=','\ncontent=' but'\ncontent=' I'\ncontent=' couldn'\ncontent=\"'t\"\ncontent=' find'\ncontent=' the'\ncontent=' current'\ncontent=' weather'\ncontent=' in'\ncontent=' San'\ncontent=' Francisco'\ncontent='.'\ncontent=' However'\ncontent=','\ncontent=' you'\ncontent=' can'\ncontent=' check'\ncontent=' the'\ncontent=' historical'\ncontent=' weather'\ncontent=' data'\ncontent=' for'\ncontent=' January'\ncontent=' '\ncontent='202'\ncontent='4'\ncontent=' in'\ncontent=' San'\ncontent=' Francisco'\ncontent=' ['\ncontent='here'\ncontent=']('\ncontent='https'\ncontent='://'\ncontent='we'\ncontent='athers'\ncontent='park'\ncontent='.com'\ncontent='/h'\ncontent='/m'\ncontent='/'\ncontent='557'\ncontent='/'\ncontent='202'\ncontent='4'\ncontent='/'\ncontent='1'\ncontent='/H'\ncontent='istorical'\ncontent='-'\ncontent='Weather'\ncontent='-in'\ncontent='-Jan'\ncontent='uary'\ncontent='-'\ncontent='202'\ncontent='4'\ncontent='-in'\ncontent='-S'\ncontent='an'\ncontent='-F'\ncontent='r'\ncontent='anc'\ncontent='isco'\ncontent='-Cal'\ncontent='ifornia'\ncontent='-'\ncontent='United'\ncontent='-'\ncontent='States'\ncontent=').'\ncontent=''\n```\n\n
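Note that the snippet above uses async for at the top level, which works in a notebook. In a plain Python script you would need to drive it from an event loop yourself; a minimal sketch, assuming the same app and inputs as above:\n\n```\nimport asyncio\n\n\nasync def main():\n    async for output in app.astream_log(inputs, include_types=[\"llm\"]):\n        for op in output.ops:\n            if op[\"path\"].startswith(\"/logs/\") and op[\"path\"].endswith(\"/streamed_output/-\"):\n                # print each LLM token as it arrives\n                print(op[\"value\"])\n\n\nasyncio.run(main())\n```\n\n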
elif op[\"path\"].startswith(\"/logs/\") and op[\"path\"].endswith( \"/streamed_output/-\" ): # because we chose to only include LLMs, these are LLM tokens print(op[\"value\"])\n```\n\n```\ncontent='' additional_kwargs={'function_call': {'arguments': '', 'name': 'tavily_search_results_json'}}content='' additional_kwargs={'function_call': {'arguments': '{\\n', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': ' ', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': ' \"', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': 'query', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': '\":', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': ' \"', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': 'weather', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': ' in', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': ' San', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': ' Francisco', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': '\"\\n', 'name': ''}}content='' additional_kwargs={'function_call': {'arguments': '}', 'name': ''}}content=''content=''content='I'content=\"'m\"content=' sorry'content=','content=' but'content=' I'content=' couldn'content=\"'t\"content=' find'content=' the'content=' current'content=' weather'content=' in'content=' San'content=' Francisco'content='.'content=' However'content=','content=' you'content=' can'content=' check'content=' the'content=' historical'content=' weather'content=' data'content=' for'content=' January'content=' 'content='202'content='4'content=' in'content=' San'content=' Francisco'content=' ['content='here'content=']('content='https'content='://'content='we'content='athers'content='park'content='.com'content='/h'content='/m'content='/'content='557'content='/'content='202'content='4'content='/'content='1'content='/H'content='istorical'content='-'content='Weather'content='-in'content='-Jan'content='uary'content='-'content='202'content='4'content='-in'content='-S'content='an'content='-F'content='r'content='anc'content='isco'content='-Cal'content='ifornia'content='-'content='United'content='-'content='States'content=').'content=''\n```\n\n## When to Use\u200b\n\nWhen should you use this versus LangChain Expression Language?\n\nIf you need cycles.\n\nLangchain Expression Language allows you to easily define chains (DAGs) but does not have a good mechanism for adding in cycles.\nlanggraph adds that syntax.\n\n## Examples\u200b\n\n### ChatAgentExecutor: with function calling\u200b\n\nThis agent executor takes a list of messages as input and outputs a list of messages.\nAll agent state is represented as a list of messages.\nThis specifically uses OpenAI function calling.\nThis is recommended agent executor for newer chat based models that support function calling.\n\nModifications\n\nWe also have a lot of examples highlighting how to slightly modify the base chat agent executor. These all build off the getting started notebook so it is recommended you start with that first.\n\n### AgentExecutor\u200b\n\nThis agent executor uses existing LangChain agents.\n\nModifications\n\nWe also have a lot of examples highlighting how to slightly modify the base chat agent executor. 
### Streaming Tokens\n\nSometimes language models take a while to respond and you may want to stream tokens to end users.\nFor a guide on how to do this, see this documentation\n\n### Persistence\n\nLangGraph comes with built-in persistence, allowing you to save the state of the graph at any point and resume from there.\nFor a walkthrough on how to do that, see this documentation\n\n### Human-in-the-loop\n\nLangGraph comes with built-in support for human-in-the-loop workflows. This is useful when you want to have a human review the current state before proceeding to a particular node.\nFor a walkthrough on how to do that, see this documentation\n\n### Planning Agent Examples\n\nThe following notebooks implement agent architectures prototypical of the \"plan-and-execute\" style, where an LLM planner decomposes a user request into a program, an executor executes the program, and an LLM synthesizes a response (and/or dynamically replans) based on the program outputs.\n\n### Reflection / Self-Critique\n\nWhen output quality is a major concern, it's common to incorporate some combination of self-critique or reflection and external validation to refine your system's outputs. The following examples demonstrate research that implements this type of design.\n\n### Multi-agent Examples\n\n### Chatbot Evaluation via Simulation\n\nIt can often be tough to evaluate chatbots in multi-turn situations. One way to do this is with simulations.\n\n### Multimodal Examples\n\n## Documentation\n\nThere are only a few new APIs to use.\n\n### StateGraph\n\nThe main entrypoint is StateGraph.\n\n```\nfrom langgraph.graph import StateGraph\n```\n\nThis class is responsible for constructing the graph.\nIt exposes an interface inspired by NetworkX.\nThis graph is parameterized by a state object that it passes around to each node.\n\n#### __init__\n\n```\n    def __init__(self, schema: Type[Any]) -> None:\n```\n\nWhen constructing the graph, you need to pass in a schema for a state.\nEach node then returns operations to update that state.\nThese operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute.\nWhether to set or add is denoted by annotating the state object you construct the graph with.\n\nThe recommended way to specify the schema is with a typed dictionary: from typing import TypedDict\n\nYou can then annotate the different attributes using from typing import Annotated.\nCurrently, the only supported annotation is import operator; operator.add.\nThis annotation will make it so that any node that returns this attribute ADDS that new result to the existing value.\n\nLet's take a look at an example:\n\n```\nfrom typing import TypedDict, Annotated, Union\nfrom langchain_core.agents import AgentAction, AgentFinish\nimport operator\n\n\nclass AgentState(TypedDict):\n    # The input string\n    input: str\n    # The outcome of a given call to the agent\n    # Needs `None` as a valid type, since this is what this will start as\n    agent_outcome: Union[AgentAction, AgentFinish, None]\n    # List of actions and corresponding observations\n    # Here we annotate this with `operator.add` to indicate that operations to\n    # this state should be ADDED to the existing values (not overwrite it)\n    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]\n```\n\nWe can then use this like:\n\n```\n# Initialize the StateGraph with this state\ngraph = StateGraph(AgentState)\n\n# Create nodes and edges\n...\n\n# Compile the graph\napp = graph.compile()\n\n# The inputs should be a dictionary, because the state is a TypedDict\ninputs = {\n    # Let's assume this is the input\n    \"input\": \"hi\"\n    # Let's assume agent_outcome is set by the graph at some point\n    # It doesn't need to be provided, and it will be None by default\n    # Let's assume `intermediate_steps` is built up over time by the graph\n    # It doesn't need to be provided, and it will be an empty list by default\n    # The reason `intermediate_steps` is an empty list and not `None` is because\n    # it's annotated with `operator.add`\n}\n```\n\n#### .add_node\n\n```\n    def add_node(self, key: str, action: RunnableLike) -> None:\n```\n\nThis method adds a node to the graph.\nIt takes two arguments: key (a string identifying the node) and action (the function or runnable to run when this node is called).\n\n
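For example, a sketch reusing the call_model and call_tool functions from the Quick Start (the RunnableLambda wrapper is purely illustrative - a plain function works on its own):\n\n```\nfrom langchain_core.runnables import RunnableLambda\n\n# A node can be a plain function...\nworkflow.add_node(\"agent\", call_model)\n\n# ...or any runnable, e.g. a RunnableLambda wrapping a function\nworkflow.add_node(\"action\", RunnableLambda(call_tool))\n```\n\n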
#### .add_edge\n\n```\n    def add_edge(self, start_key: str, end_key: str) -> None:\n```\n\nCreates an edge from one node to the next.\nThis means that the output of the first node will be passed to the next node.\nIt takes two arguments: start_key (the name of the upstream node) and end_key (the name of the downstream node).\n\n#### .add_conditional_edges\n\n```\n    def add_conditional_edges(\n        self,\n        start_key: str,\n        condition: Callable[..., str],\n        conditional_edge_mapping: Dict[str, str],\n    ) -> None:\n```\n\nThis method adds conditional edges.\nWhat this means is that only one of the downstream edges will be taken, and which one that is depends on the results of the start node.\nThis takes three arguments: start_key (the name of the start node), condition (a function whose string output selects the branch), and conditional_edge_mapping (a mapping from the condition's possible outputs to the names of the downstream nodes).\n\n#### .set_entry_point\n\n```\n    def set_entry_point(self, key: str) -> None:\n```\n\nThe entrypoint to the graph.\nThis is the node that is first called.\nIt only takes one argument: key, the name of the node to call first.\n\n#### .set_finish_point\n\n```\n    def set_finish_point(self, key: str) -> None:\n```\n\nThis is the exit point of the graph.\nWhen this node is called, the results will be the final result from the graph.\nIt only has one argument: key, the name of the node that should finish the graph.\n\nNote: This does not need to be called if at any point you previously created an edge (conditional or normal) to END\n\n### Graph\n\n```\nfrom langgraph.graph import Graph\n\ngraph = Graph()\n```\n\nThis has the same interface as StateGraph, except that it doesn't update a state object over time; instead, it relies on passing the full state around from each step.\nThis means that whatever is returned from one node is the input to the next node as is.\n\n
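To make that difference concrete, here is a minimal sketch of a Graph whose nodes hand a plain string straight to each other (the node names and functions are illustrative, not part of the library):\n\n```\nfrom langgraph.graph import Graph\n\ngraph = Graph()\n\n# Each node receives exactly what the previous node returned\ngraph.add_node(\"upper\", lambda text: text.upper())\ngraph.add_node(\"exclaim\", lambda text: text + \"!\")\ngraph.add_edge(\"upper\", \"exclaim\")\n\ngraph.set_entry_point(\"upper\")\ngraph.set_finish_point(\"exclaim\")\n\napp = graph.compile()\nprint(app.invoke(\"hello\"))  # HELLO!\n```\n\n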
### END\n\n```\nfrom langgraph.graph import END\n```\n\nThis is a special node representing the end of the graph.\nThis means that anything passed to this node will be the final output of the graph.\nIt can be used in two places: as the end_key in add_edge, or as a value in the conditional_edge_mapping passed to add_conditional_edges.\n\n## Prebuilt Examples\n\nThere are also a few methods we've added to make it easy to use common, prebuilt graphs and components.\n\n### ToolExecutor\n\n```\nfrom langgraph.prebuilt import ToolExecutor\n```\n\nThis is a simple helper class to help with calling tools.\nIt is parameterized by a list of tools:\n\n```\ntools = [...]\ntool_executor = ToolExecutor(tools)\n```\n\nIt then exposes a runnable interface.\nIt can be used to call tools: you can pass in an AgentAction and it will look up the relevant tool and call it with the appropriate input.\n\n### chat_agent_executor.create_function_calling_executor\n\n```\nfrom langgraph.prebuilt import chat_agent_executor\n```\n\nThis is a helper function for creating a graph that works with a chat model that utilizes function calling.\nIt can be created by passing in a model and a list of tools.\nThe model must be one that supports OpenAI function calling.\n\n```\nfrom langchain_openai import ChatOpenAI\nfrom langchain_community.tools.tavily_search import TavilySearchResults\nfrom langgraph.prebuilt import chat_agent_executor\nfrom langchain_core.messages import HumanMessage\n\ntools = [TavilySearchResults(max_results=1)]\nmodel = ChatOpenAI()\n\napp = chat_agent_executor.create_function_calling_executor(model, tools)\n\ninputs = {\"messages\": [HumanMessage(content=\"what is the weather in sf\")]}\nfor s in app.stream(inputs):\n    print(list(s.values())[0])\n    print(\"----\")\n```\n\n### create_agent_executor\n\n```\nfrom langgraph.prebuilt import create_agent_executor\n```\n\nThis is a helper function for creating a graph that works with LangChain Agents.\nIt can be created by passing in an agent and a list of tools.\n\n```\nfrom langgraph.prebuilt import create_agent_executor\nfrom langchain_openai import ChatOpenAI\nfrom langchain import hub\nfrom langchain.agents import create_openai_functions_agent\nfrom langchain_community.tools.tavily_search import TavilySearchResults\n\ntools = [TavilySearchResults(max_results=1)]\n\n# Get the prompt to use - you can modify this!\nprompt = hub.pull(\"hwchase17/openai-functions-agent\")\n\n# Choose the LLM that will drive the agent\nllm = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")\n\n# Construct the OpenAI Functions agent\nagent_runnable = create_openai_functions_agent(llm, tools, prompt)\n\napp = create_agent_executor(agent_runnable, tools)\n\ninputs = {\"input\": \"what is the weather in sf\", \"chat_history\": []}\nfor s in app.stream(inputs):\n    print(list(s.values())[0])\n    print(\"----\")\n```\n\n", "metadata": {"source": "https://python.langchain.com/docs/langgraph/", "title": "\ud83e\udd9c\ud83d\udd78\ufe0fLangGraph | \ud83e\udd9c\ufe0f\ud83d\udd17 Langchain", "description": "\u26a1 Building language agents as graphs \u26a1", "language": "en"}}}]
\ No newline at end of file