Type mismatch issue using MLX Chat Model via MLXPipeline #25134

Open · 5 tasks done
kozachynskyi opened this issue Aug 7, 2024 · 0 comments
Labels: Ɑ: agent (Related to agents module) · 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature)


Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from langchain_community.llms import MLXPipeline
from langchain_community.chat_models.mlx import ChatMLX
from langchain.agents import AgentExecutor, load_tools

from langchain_core.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.tools.render import render_text_description
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import (
    ReActJsonSingleInputOutputParser,
)

system = '''a'''  # minimal placeholder system prompt for this repro
human = '''{input}
{agent_scratchpad}
'''
def get_custom_prompt():
    messages = [
            SystemMessagePromptTemplate.from_template(system),
            HumanMessagePromptTemplate.from_template(human),
    ]
    input_variables = ["agent_scratchpad", "input", "tool_names", "tools"]
    return ChatPromptTemplate(input_variables=input_variables, messages=messages)

llm = MLXPipeline.from_model_id(
    model_id="mlx-community/Meta-Llama-3-8B-Instruct-4bit",
)
chat_model = ChatMLX(llm=llm)

prompt = get_custom_prompt()
prompt = prompt.partial(
    tools=render_text_description([]),  # no tools in this minimal repro
    tool_names=", ".join([t.name for t in []]),
)

chat_model_with_stop = chat_model.bind(stop=["\nObservation"])  # stop before the ReAct Observation step
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
    }
    | prompt
    | chat_model_with_stop
    | ReActJsonSingleInputOutputParser()
)

# instantiate AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=[], verbose=True)

agent_executor.invoke(
    {
        "input": "What is your name?"
    }
)

Error Message and Stack Trace (if applicable)

File "/Users/==/.pyenv/versions/hack/lib/python3.11/site-packages/langchain_community/chat_models/mlx.py", line 184, in _stream
text = self.tokenizer.decode(token.item())
^^^^^^^^^^
AttributeError: 'int' object has no attribute 'item'
Uncaught exception. Entering post mortem debugging
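The failing line assumes the token coming out of the MLX generation step is an array-like scalar, but here it arrives as a plain Python int (newer mlx-lm versions appear to yield ints directly), and ints have no .item() method. A minimal illustration (the token value is made up):

token = 128000   # what the generation step yields here: a plain int, not an mx.array
token.item()     # raises AttributeError: 'int' object has no attribute 'item'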

Description

Hi there,

I assume this bug is similar to issue #20561: if you locally apply the changes from patch ad48f77 (from line 174 onward) to langchain_community/chat_models/mlx.py, the bug disappears.
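For reference, a minimal sketch of the kind of guard that patch applies, transplanted to the decode call in _stream; this illustrates the fix pattern under the assumption above, not the verbatim diff:

# Unwrap only when the generation step yields an array-like scalar;
# plain ints from newer mlx-lm have no .item() and can be decoded directly.
if not isinstance(token, int):
    token = token.item()
text = self.tokenizer.decode(token)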

Best wishes

System Info

langchain==0.2.12
langchain-community==0.2.11
langchain-core==0.2.28
langchain-experimental==0.0.64
langchain-huggingface==0.0.3
langchain-text-splitters==0.2.2

Platform: macOS
Python: 3.11.9

@dosubot bot added the Ɑ: agent and 🤖:bug labels on Aug 7, 2024