Chat history not getting logged in Redis when using Anthropic based tool calling agent #25223
Replies: 2 comments
Hello, @raihankhan-rk! I'm here to help you with any questions or issues you have. Let's work together to solve this!

The issue you're encountering is likely due to the custom chat model never writing the AI's reply to Redis on its own. To resolve this, you need to add explicit logging code within the `_generate` method, for example:

```python
import json
from typing import Any, Dict, List, Optional, cast

import redis
from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    SystemMessage,
    messages_to_dict,
)
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.pydantic_v1 import root_validator
from langchain_anthropic import ChatAnthropic
# Helper utilities reused from the experimental AnthropicFunctions implementation
from langchain_experimental.llms.anthropic_functions import TagParser, _destrip, prompt


class AnthropicFunctions(BaseChatModel):
    """Chat model for interacting with Anthropic functions."""

    llm: BaseChatModel
    redis_client = redis.StrictRedis(host="localhost", port=6379, db=0)

    @root_validator(pre=True)
    def validate_environment(cls, values: Dict) -> Dict:
        values["llm"] = values.get("llm") or ChatAnthropic(**values)
        return values

    @property
    def model(self) -> BaseChatModel:
        """For backwards compatibility."""
        return self.llm

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        forced = False
        function_call = ""
        if "functions" in kwargs:
            if "function_call" in kwargs:
                function_call = kwargs["function_call"]
                del kwargs["function_call"]
            else:
                function_call = "auto"
            if function_call != "none":
                content = prompt.format(tools=json.dumps(kwargs["functions"], indent=2))
                system = SystemMessage(content=content)
                messages = [system] + messages
                if isinstance(function_call, dict):
                    forced = True
                    function_call_name = function_call["name"]
                    messages.append(
                        AIMessage(content=f"<tool>{function_call_name}</tool>")
                    )
            del kwargs["functions"]
            if stop is None:
                stop = ["</tool_input>"]
            else:
                stop.append("</tool_input>")
        else:
            if "function_call" in kwargs:
                raise ValueError(
                    "if `function_call` provided, `functions` must also be"
                )
        response = self.model.invoke(
            messages, stop=stop, callbacks=run_manager, **kwargs
        )
        completion = cast(str, response.content)
        if forced:
            tag_parser = TagParser()
            if "<tool_input>" in completion:
                tag_parser.feed(completion.strip() + "</tool_input>")
                v1 = tag_parser.parse_data["tool_input"][0]
                arguments = json.dumps(_destrip(v1))
            else:
                v1 = completion
                arguments = ""
            kwargs = {
                "function_call": {
                    "name": function_call_name,
                    "arguments": arguments,
                }
            }
            message = AIMessage(content="", additional_kwargs=kwargs)
            # Log the inputs *and* the AI reply; message objects are not
            # JSON-serializable, so convert them to plain dicts first.
            self.redis_client.lpush(
                "chat_history", json.dumps(messages_to_dict(messages + [message]))
            )
            return ChatResult(generations=[ChatGeneration(message=message)])
        elif "<tool>" in completion:
            tag_parser = TagParser()
            tag_parser.feed(completion.strip() + "</tool_input>")
            msg = completion.split("<tool>")[0].strip()
            v1 = tag_parser.parse_data["tool_input"][0]
            kwargs = {
                "function_call": {
                    "name": tag_parser.parse_data["tool"][0],
                    "arguments": json.dumps(_destrip(v1)),
                }
            }
            message = AIMessage(content=msg, additional_kwargs=kwargs)
            self.redis_client.lpush(
                "chat_history", json.dumps(messages_to_dict(messages + [message]))
            )
            return ChatResult(generations=[ChatGeneration(message=message)])
        else:
            response.content = cast(str, response.content).strip()
            self.redis_client.lpush(
                "chat_history", json.dumps(messages_to_dict(messages + [response]))
            )
            return ChatResult(generations=[ChatGeneration(message=response)])
```

This example assumes you have a Redis server running locally. Additionally, ensure that your Redis connection settings (`host`, `port`, `db`) match your environment. For more detailed information, you can refer to the LangChain documentation on managing chat history with Redis [1][2].
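A quick way to sanity-check the logging logic without a live Redis server is to round-trip a turn through an in-memory stand-in. `FakeRedis` and `log_turn` below are hypothetical names for illustration, not part of redis-py or LangChain; the point they demonstrate is that message objects must be reduced to plain dicts before `json.dumps`, or serialization raises a `TypeError` (for real LangChain messages, `langchain_core.messages.messages_to_dict` does this conversion):

```python
import json


class FakeRedis:
    """Minimal in-memory stand-in for redis.StrictRedis (lpush/lrange only)."""

    def __init__(self):
        self.store = {}

    def lpush(self, key, value):
        # Redis LPUSH prepends, so the newest entry sits at index 0.
        self.store.setdefault(key, []).insert(0, value)

    def lrange(self, key, start, end):
        data = self.store.get(key, [])
        end = len(data) if end == -1 else end + 1
        return data[start:end]


def log_turn(client, role, content, key="chat_history"):
    # Store plain dicts, not message objects: json.dumps on a
    # HumanMessage/AIMessage instance would raise TypeError.
    client.lpush(key, json.dumps({"role": role, "content": content}))


r = FakeRedis()
log_turn(r, "human", "What is my runway?")
log_turn(r, "ai", "About 12.9 months.")
history = [json.loads(x) for x in r.lrange("chat_history", 0, -1)]
```

Logging the AI turn explicitly, as `log_turn` does, is exactly what was missing when only the user query showed up in the history.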
@raihankhan-rk were you able to find a fix for this? I am having the same issue with an SQLite database and I can't see any info about this in the docs.
Description
In the code snippet I've provided, the AI's side of the conversation is not saved to Redis for some reason; only the user query ends up in the chat history. This code works perfectly fine when I use OpenAI models and OpenAI agent methods, but it hasn't worked since I switched to Claude.
I'm also receiving this error:

```
Error in AsyncRootListenersTracer.on_chain_end callback: AttributeError("'dict' object has no attribute 'type'")
```

This is what the response from the agent looks like:

```python
{'input': 'Just for context, today is Friday, 09 August 2024 and your data is forecasted until August 2025.\nWhat is my runway?', 'chat_history': [HumanMessage(content='Just for context, today is Friday, 09 August 2024 and your data is forecasted until August 2025.\nWhat is my runway?')], 'output': [{'text': '\n\nBased on the current date and forecasted data, your cash runway is approximately 12.9 months. This means you have sufficient funds to sustain operations until around late August 2025, given your current cash balance and projected cash flows.', 'type': 'text', 'index': 0}]}
```
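The list-shaped `output` and the `AttributeError("'dict' object has no attribute 'type'")` suggest Claude is returning a list of content-block dicts where downstream history code expects a plain string. One possible workaround is to flatten the blocks before handing the output to anything that stores history; `normalize_output` below is an illustrative helper, not a LangChain API:

```python
def normalize_output(output):
    """Flatten a Claude-style list of content blocks into a plain string.

    Anthropic tool-calling agents can return
    [{'text': '...', 'type': 'text', 'index': 0}] instead of a str,
    which history callbacks that expect message-like objects choke on.
    """
    if isinstance(output, str):
        return output
    return "".join(
        block.get("text", "")
        for block in output
        if isinstance(block, dict) and block.get("type") == "text"
    )
```

For the response above, `normalize_output(result["output"])` would yield the runway answer as a single string, which can then be wrapped in an `AIMessage` before saving.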
Any solutions or bugs that you notice?
System Info
python 3.11.6