RunnableWithMessageHistory does not work with ChatAnthropic - AsyncRootListenersTracer.on_chain_end error #26563

Comments
Okay, I am trying to replicate this issue and will see.
@selvaradov can you send the entire traceback please?
@keenborder786 Thanks! Unfortunately there is no further stack trace that shows up. Is there a way I can find it apart from in the terminal / notebook output?
Let me know if you can't replicate it, but the code above was sufficient to cause the issue in a fresh Colab notebook for me. (I think the
I was not able to replicate it, but can you run the following and see if it works for you? (This basically follows the old style of using memory, but it might work for your use case.)

```python
import asyncio

from langchain_openai import OpenAIEmbeddings
from langchain_anthropic import ChatAnthropic
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.tools.retriever import create_retriever_tool
from langchain_core.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
)
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain.retrievers import SelfQueryRetriever
from langchain.schema import Document
from langchain.vectorstores import Chroma
from langchain.memory import ConversationBufferMemory

# Initialize the LLM
llm = ChatAnthropic(
    model="claude-3-5-sonnet-20240620",
    max_tokens_to_sample=8192,
)

example_doc = Document("In 2014 something very important happened")
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents=[example_doc], embedding=embeddings)


def create_self_query_retriever(vectorstore):
    metadata_field_info = [
        AttributeInfo(
            name="date",
            description="The year associated with the information.",
            type="string",
        ),
    ]
    document_content_description = "Landmark developments in AI."
    return SelfQueryRetriever.from_llm(
        llm,
        vectorstore,
        document_content_description,
        metadata_field_info,
    )


self_query_retriever = create_self_query_retriever(vectorstore)

prompt = ChatPromptTemplate.from_messages(
    [
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

tool = create_retriever_tool(
    self_query_retriever,
    "ai_retriever",
    "Searches for information about developments in AI.",
)
tools = [tool]

# Old-style memory: a ConversationBufferMemory backed by a SQLChatMessageHistory
memory_history = SQLChatMessageHistory(
    session_id="",
    connection="sqlite:///chats.db",
    async_mode=False,
)
memory = ConversationBufferMemory(
    chat_memory=memory_history,
    input_key="input",
    memory_key="history",
    return_messages=True,
)

# Create the agent
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)


async def run_agent_with_updates(agent_executor, query, sid):
    config = {"configurable": {"session_id": sid}}
    async for event in agent_executor.astream_events(
        {"input": query},
        config,
        version="v2",
    ):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                print(content, end="", flush=True)


async def main(session_id: str):
    agent_executor.memory.chat_memory.session_id = session_id
    await run_agent_with_updates(
        agent_executor, "What was the main development in AI in 2014?", "123"
    )


asyncio.run(main("foo"))
```
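For readers comparing the two approaches: RunnableWithMessageHistory resolves a separate history object per `session_id` via a factory callable, whereas the workaround above pins a single ConversationBufferMemory to the executor and mutates its `session_id` before each run. The per-session lookup pattern can be sketched without langchain at all; the `InMemoryHistory` class and `_store` dict below are illustrative stand-ins, not langchain APIs:

```python
class InMemoryHistory:
    """Minimal stand-in for a chat message history object."""

    def __init__(self):
        self.messages = []

    def add_message(self, role, content):
        self.messages.append((role, content))


# One history object per session_id, mirroring the get_session_history
# callable that RunnableWithMessageHistory expects to be given.
_store = {}


def get_session_history(session_id):
    if session_id not in _store:
        _store[session_id] = InMemoryHistory()
    return _store[session_id]


history = get_session_history("123")
history.add_message("human", "What was the main development in AI in 2014?")

# The same session_id always resolves to the same history object,
# while a new session_id starts with an empty history.
assert get_session_history("123").messages == [
    ("human", "What was the main development in AI in 2014?")
]
assert get_session_history("456").messages == []
```

This separation is why the factory style works cleanly with concurrent sessions, while mutating a shared memory object's `session_id` between runs only works when runs are strictly sequential.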
Here is a notebook to replicate the issue precisely: https://colab.research.google.com/drive/1OIziMD6Bk9YEgVWNFycV7aGWEoWoTU8O?usp=sharing. You have to scroll all the way along to the right of the two output cells and you'll see the [Edit: I've added some more context to the notebook to show that this is only a problem which occurs with the Anthropic model, not OpenAI ones.] I'll try your solution and see if it works for my use case. One difference I can see now is that it's not using the async mode for the SQLChat history, which possibly matters.
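As an aside, the `astream_events` loop in the suggested code only acts on `on_chat_model_stream` events, so its filtering logic can be exercised in isolation with a hand-mocked event stream. The `FakeChunk` class and event dicts below are stand-ins shaped like langchain's v2 event payloads, not produced by the real API:

```python
import asyncio


class FakeChunk:
    """Stand-in for a streamed message chunk exposing a .content attribute."""

    def __init__(self, content):
        self.content = content


async def fake_astream_events():
    # Hand-written events shaped like the v2 event dicts consumed in the loop.
    yield {"event": "on_chain_start", "data": {}}
    yield {"event": "on_chat_model_stream", "data": {"chunk": FakeChunk("Hello")}}
    yield {"event": "on_chat_model_stream", "data": {"chunk": FakeChunk(" world")}}
    yield {"event": "on_chat_model_stream", "data": {"chunk": FakeChunk("")}}
    yield {"event": "on_chain_end", "data": {}}


async def collect_tokens(events):
    tokens = []
    async for event in events:
        if event["event"] == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:  # same guard as the loop above: skip empty chunks
                tokens.append(content)
    return tokens


# Only the non-empty chat-model chunks survive the filter.
assert asyncio.run(collect_tokens(fake_astream_events())) == ["Hello", " world"]
```

Testing the loop against a mock like this makes it easy to confirm that chain start/end events (including the `on_chain_end` callback implicated in the error) are simply ignored by the streaming printer.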
Yep, the solution you came up with is working. The main issue about the
Two questions about the new code:
Checked other resources
Example Code
Try the following code:
And then ask a question relying on context:
Error Message and Stack Trace (if applicable)
Description
chats.db SQLite database
System Info