
TypeError: langchain_core.language_models.chat_models.BaseChatModel.generate_prompt() got multiple values for keyword argument 'callbacks' #23379

Open
vectornguyen76 opened this issue Jun 25, 2024 · 3 comments
Labels
🔌: anthropic Primarily related to Anthropic integrations 🔌: aws Primarily related to Amazon Web Services (AWS) integrations 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: core Related to langchain-core

Comments


Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

```python
from dotenv import load_dotenv
from langchain_aws import ChatBedrock
from langchain_experimental.llms.anthropic_functions import AnthropicFunctions

load_dotenv()

# Initialize the LLM with the required parameters
llm = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",
    model_kwargs={"temperature": 0.1},
    region_name="us-east-1",
)

# Initialize AnthropicFunctions with the LLM
base_model = AnthropicFunctions(llm=llm)

# Define the function parameters for the model
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                },
            },
            "required": ["location"],
        },
    }
]

# Bind the functions to the model without causing keyword conflicts
model = base_model.bind(
    functions=functions,
    function_call={"name": "get_current_weather"},
)

# Invoke the model with the provided input
res = model.invoke("What's the weather in San Francisco?")

# Extract and print the function call from the response
function_call = res.additional_kwargs.get("function_call")
print("function_call", function_call)
```
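As an aside, the `functions` list above uses the OpenAI-style schema (a `parameters` field), while Anthropic's native tool-use API expects an `input_schema` field instead. A minimal sketch of the conversion; `to_anthropic_tool` is a hypothetical helper for illustration, not a LangChain API:

```python
def to_anthropic_tool(fn: dict) -> dict:
    """Convert an OpenAI-style function definition to the shape
    Anthropic's tool-use API expects (input_schema instead of parameters).
    Hypothetical helper, shown for illustration only."""
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "input_schema": fn["parameters"],
    }

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }
]

tools = [to_anthropic_tool(fn) for fn in functions]
print(tools[0]["input_schema"]["required"])  # ['location']
```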

Error Message and Stack Trace (if applicable)

TypeError: langchain_core.language_models.chat_models.BaseChatModel.generate_prompt() got multiple values for keyword argument 'callbacks'

Description

I am trying to use function calling with Anthropic models through Bedrock, but invoking the model raises the TypeError above. How can I fix this?
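For context on what the traceback means: Python raises this exact "multiple values for keyword argument" message when a function receives the same argument both positionally and via `**kwargs` unpacking. A minimal standalone reproduction of the bug pattern; the function names here are illustrative, not LangChain's actual internals:

```python
def generate_prompt(prompts, callbacks=None, **kwargs):
    return prompts, callbacks


def wrapper(prompts, callbacks=None, **kwargs):
    # Bug pattern: `callbacks` is forwarded positionally while a copy
    # of it is still present inside **kwargs, so the callee receives
    # the argument twice.
    kwargs["callbacks"] = callbacks
    return generate_prompt(prompts, callbacks, **kwargs)


try:
    wrapper(["hi"], callbacks=[])
except TypeError as exc:
    error_message = str(exc)
    print(error_message)
```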

System Info

I am using the latest version of LangChain and the related integration packages.

@dosubot dosubot bot added Ɑ: core Related to langchain-core 🔌: anthropic Primarily related to Anthropic integrations 🔌: aws Primarily related to Amazon Web Services (AWS) integrations 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Jun 25, 2024
@jhachirag7

Is there any workaround?


jhachirag7 commented Jul 8, 2024

```python
import json
import xml.etree.ElementTree as ET
from typing import Any, List, Optional

import jsonpatch  # type: ignore[import]

from langchain_core.exceptions import OutputParserException
from langchain_core.output_parsers import BaseCumulativeTransformOutputParser
from langchain_core.output_parsers.json import parse_partial_json
from langchain_core.outputs import ChatGeneration, Generation


class AnthropicJsonOutputFunctionsParser(BaseCumulativeTransformOutputParser[Any]):
    """Parse an output as a JSON object."""

    strict: bool = False
    """Whether to allow non-JSON-compliant strings.

    See: https://docs.python.org/3/library/json.html#encoders-and-decoders

    Useful when the parsed output may include unicode characters or new lines.
    """

    args_only: bool = True
    """Whether to only return the arguments of the function call."""

    @property
    def _type(self) -> str:
        return "json_functions"

    def _diff(self, prev: Optional[Any], next: Any) -> Any:
        return jsonpatch.make_patch(prev, next).patch

    def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
        if len(result) != 1:
            raise OutputParserException(
                f"Expected exactly one result, but got {len(result)}"
            )
        generation = result[0]
        if not isinstance(generation, ChatGeneration):
            raise OutputParserException(
                "This output parser can only be used with a chat generation."
            )
        message = generation.message.content

        if "function_calls" not in message:
            return None
        start = message.find("<function_calls>")
        end = message.find("</function_calls>") + len("</function_calls>")

        # Extract the XML string
        xml_string = message[start:end]

        # Parse the XML string
        root = ET.fromstring(xml_string)

        # Find the tool name
        tool_name = root.find(".//tool_name").text

        # Build the parameters dictionary
        parameters = {}
        for param in root.findall(".//parameters/*"):
            parameters[param.tag] = param.text
        data = {"next": tool_name}

        json_output = json.dumps(data)
        # Build the final dictionary with name and arguments
        additional_kwargs = {
            "function_call": {"arguments": json_output, "name": "route"}
        }
        print(additional_kwargs)

        try:
            function_call = additional_kwargs["function_call"]
        except KeyError as exc:
            if partial:
                return None
            else:
                raise OutputParserException(f"Could not parse function call: {exc}")
        try:
            if partial:
                try:
                    if self.args_only:
                        return parse_partial_json(
                            function_call["arguments"], strict=self.strict
                        )
                    else:
                        return {
                            **function_call,
                            "arguments": parse_partial_json(
                                function_call["arguments"], strict=self.strict
                            ),
                        }
                except json.JSONDecodeError:
                    return None
            else:
                if self.args_only:
                    try:
                        return json.loads(
                            function_call["arguments"], strict=self.strict
                        )
                    except (json.JSONDecodeError, TypeError) as exc:
                        raise OutputParserException(
                            f"Could not parse function call data: {exc}"
                        )
                else:
                    try:
                        return {
                            **function_call,
                            "arguments": json.loads(
                                function_call["arguments"], strict=self.strict
                            ),
                        }
                    except (json.JSONDecodeError, TypeError) as exc:
                        raise OutputParserException(
                            f"Could not parse function call data: {exc}"
                        )
        except KeyError:
            return None

    # This method would be called by the default implementation of
    # `parse_result`, but we override that method so it is not needed.
    def parse(self, text: str) -> Any:
        raise NotImplementedError()
```
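For reference, the parser above assumes the model emits its tool call as an XML `<function_calls>` block. The extraction logic can be checked standalone on a sample string, without any LangChain dependency:

```python
import xml.etree.ElementTree as ET

# A hypothetical model response containing the XML block the parser expects.
sample = (
    "Sure, calling the tool now.\n"
    "<function_calls>\n"
    "<invoke>\n"
    "<tool_name>get_current_weather</tool_name>\n"
    "<parameters><location>San Francisco, CA</location></parameters>\n"
    "</invoke>\n"
    "</function_calls>"
)

# Slice out the XML block, then parse it, mirroring parse_result above.
start = sample.find("<function_calls>")
end = sample.find("</function_calls>") + len("</function_calls>")
root = ET.fromstring(sample[start:end])

tool_name = root.find(".//tool_name").text
parameters = {p.tag: p.text for p in root.findall(".//parameters/*")}
print(tool_name, parameters)
```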

I have created my own parser and it actually works. However, when I have a supervisor agent plus two more agents and want them to talk to the supervisor, I get:

An error occurred (ValidationException) when calling the Converse operation: A conversation must alternate between user and assistant roles. Make sure the conversation alternates between user and assistant roles and try again.
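The Converse API rejects histories where two consecutive messages share the same role, which easily happens in multi-agent setups where several agents reply before the supervisor's user turn. One common workaround is to merge consecutive same-role messages before calling the model; here is a framework-agnostic sketch using plain role/content dicts rather than actual LangChain message objects:

```python
def normalize_roles(messages):
    """Merge consecutive messages that share a role so the history
    alternates between user and assistant, as Bedrock's Converse API
    requires. `messages` is a list of {"role": ..., "content": ...} dicts."""
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Concatenate content into the previous turn instead of
            # emitting two same-role turns in a row.
            merged[-1]["content"] += "\n" + msg["content"]
        else:
            merged.append(dict(msg))
    return merged


history = [
    {"role": "user", "content": "route this"},
    {"role": "assistant", "content": "agent A output"},
    {"role": "assistant", "content": "agent B output"},  # would break Converse
]
normalized = normalize_roles(history)
print(normalized)
```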

@djprawns

Hi, is there any resolution for this issue?
