
Is there support for LangChain agents?  #1610

Open
@kovla

Description


ADDENDUM 20/08/24

Having posted the issue, I've been working on the agent output in the ui.Chat() component in my own application. The "support for agents" I originally asked about turns out, in practice, to mean support for different kinds of messages. Looking at the documentation of LangGraph (which can be seen as the next evolution of LangChain's agentic implementations), it can produce multiple kinds of streaming output: the entire agent state, state updates, LLM tokens from within chain steps, and so on.

https://langchain-ai.github.io/langgraph/how-tos/stream-values/
https://langchain-ai.github.io/langgraph/how-tos/stream-updates/
https://langchain-ai.github.io/langgraph/how-tos/streaming-tokens/

I imagine it is cumbersome to support out-of-the-box GUI rendering for every kind of streamed output, but what could be improved, I believe, is support for distinct formatting of:

  • tool calls
  • tool messages
  • log events from agent nodes

The key idea would be to indicate visually to the user which type each output corresponds to, while having the option to include some metadata with the message (e.g. for tool calls: state in plain language that the tool is being called, and then include a JSON with the tool name and call parameters as a collapsible element).
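To illustrate the tool-call case (the function name and event shape here are my own sketch, not an existing API), the plain-language line plus collapsible JSON could be produced as markdown with an HTML `<details>` element:

```python
import json


def format_tool_call(tool_name: str, args: dict) -> str:
    """Render a tool call as a plain-language line plus a collapsible
    <details> element holding the tool name and call parameters as JSON."""
    payload = json.dumps({"tool": tool_name, "args": args}, indent=2)
    return (
        f"Calling tool **{tool_name}**...\n\n"
        "<details><summary>Call details</summary>\n\n"
        f"<pre>{payload}</pre>\n\n"
        "</details>"
    )
```

Since ui.Chat messages are rendered as markdown (which passes inline HTML through), something like this could be appended per tool-call event.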

There is an example with a competing library that attempts this, though also without native support from the library: https://huggingface.co/spaces/fehmikaya/rag_agent_langgraph/blob/main/app.py

Another competing library takes a step in this direction as well:
https://www.gradio.app/guides/agents-and-tool-usage#the-metadata-key

It should be up to the user to transform the specific kind of message they want into the right format, given that there is support for various types of message formats, such as outlined above.

NB: I have yet to explore how transforming functions work in Shiny, but at first glance there doesn't seem to be any conditionality to enable multiple types. Perhaps the transformation type could become an additional parameter for append_message*()?
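To sketch what I mean by conditionality (everything here is hypothetical, not the actual Shiny API): a single transform could dispatch on a message-type tag supplied alongside the content:

```python
def transform_message(content: str, msg_type: str = "text") -> str:
    """Hypothetical dispatcher: format a chunk differently depending on
    whether it is a tool call, a tool message, a log event, or plain text."""
    if msg_type == "tool_call":
        return f"*Calling a tool...*\n\n{content}"
    if msg_type == "tool_message":
        return f"> Tool result: {content}"
    if msg_type == "log":
        return f"`[agent log]` {content}"
    # plain LLM token stream: pass through unchanged
    return content
```

The `msg_type` tag is what I imagine being passed as the extra parameter.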

ORIGINAL POST/ISSUE

Hi,

It was good to learn that Shiny has out-of-the-box support for LLM chats. I wonder whether LLM agents are also supported in that component. All the examples I've seen are straight LLM interactions with no tools (https://github.com/posit-dev/py-shiny/tree/main/examples/chat).

I tried adjusting the examples to work with an agent, but that did not work: there are no errors, the user can enter the prompt, the AI icon appears with an empty response, and then nothing further happens. Here is the code:

from shiny import App, ui
import dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.prompts.chat import MessagesPlaceholder
from langchain.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent

dotenv.load_dotenv()
# Initialize the language model
llm = ChatOpenAI(model="gpt-4o", streaming=True)

# Define a basic tool (this one doesn't do anything, just a placeholder)
@tool
def basic_tool(input: str) -> str:
    """
    Basic tool that does nothing
    """
    return f"Processed input: {input}"

# Define a simple prompt template
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

tools = [basic_tool]

# Create the agent with no memory and a simple tool
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)


app_ui = ui.page_fillable(
    ui.panel_title("Hello Shiny Chat"),
    ui.chat_ui("chat"),
    fillable_mobile=True,
)

# Create a welcome message
welcome = ui.markdown(
    """
    Hi! How can I help you today?
    """
)


def server(input, output, session):
    chat = ui.Chat(id="chat", messages=[welcome])

    # Define a callback to run when the user submits a message
    @chat.on_user_submit
    async def _():
        # Get messages currently in the chat
        messages = chat.messages(format="langchain")
        # Create a response message stream
        response = agent_executor.stream({'input': messages})
        # Append the response stream into the chat
        await chat.append_message_stream(response)


app = App(app_ui, server)
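For what it's worth, my guess is that the empty response comes from AgentExecutor.stream() yielding dict chunks (with keys like "actions", "steps", "output") rather than strings, so the chat component has no text to render. A minimal adapter, as a sketch under that assumption:

```python
def stream_text(chunks):
    """Pass through only renderable text from AgentExecutor-style chunks:
    plain strings as-is, and the final-answer text under the "output" key."""
    for chunk in chunks:
        if isinstance(chunk, str):
            yield chunk
        elif isinstance(chunk, dict) and "output" in chunk:
            yield chunk["output"]


# usage in the callback above:
# await chat.append_message_stream(stream_text(response))
```

This at least surfaces the final answer; the intermediate tool-call chunks would still need the type-specific formatting discussed in the addendum.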
