Hi, first thanks for building this awesome library!
I work with an existing chat app that represents a chat as a list of messages, and the goal is to get the next message.
The last message is not always sent by a user; there are situations where the last message is from the assistant and you still want to generate a new one.
So doing this would be nice:
```python
result = agent.run_sync(
    [
        {"type": "user", "content": 'Where does "hello world" come from?'},
        {"type": "assistant", "content": "Let me look it up..."},
    ]
)
```
I know I can format the chat into the user prompt like this:
```python
messages = [
    {"type": "user", "content": 'Where does "hello world" come from?'},
    {"type": "assistant", "content": "Let me look it up..."},
]
chat_formatted = "\n".join(
    f"{msg['type']}: {msg['content']}" for msg in messages
)
prompt = """
Please generate an assistant answer in this chat:
{chat}
"""
result = agent.run_sync(prompt.format(chat=chat_formatted))
```
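If I go the flattening route, I would probably wrap it in a small helper so the formatting lives in one place (`run_on_chat` is just a hypothetical name here, and `agent` is an existing agent instance):

```python
# Minimal sketch of the flattening workaround as a reusable helper.
# `run_on_chat` is a hypothetical name; `agent` is assumed to already exist.
PROMPT_TEMPLATE = """
Please generate an assistant answer in this chat:
{chat}
"""


def run_on_chat(agent, messages: list[dict]):
    # Flatten the chat list into a single user prompt string.
    chat_formatted = "\n".join(
        f"{msg['type']}: {msg['content']}" for msg in messages
    )
    return agent.run_sync(PROMPT_TEMPLATE.format(chat=chat_formatted))
```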
But I'm not fully sure what downsides this would have. I can imagine:
- Generation quality being worse, at least for some models
- We use an observability and tracing tool, and the flattened prompt will likely represent the input worse in its UI. Currently I can also add observed generations to a dataset, where a list representation is also used
Would it be possible to allow the agent to run on a list of messages? Or is it already possible in some way and I missed it?
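For context, the closest thing I found so far is the `message_history` parameter on `run_sync` together with the types in `pydantic_ai.messages` (assuming the library here is pydantic-ai, which `agent.run_sync` suggests; the exact types are my assumption). A rough sketch of what I mean, under that assumption:

```python
# Rough sketch, assuming pydantic-ai's message types work as I understand them.
# The limitation is that run_sync still seems to expect a fresh user prompt,
# which is exactly what I don't have when the assistant message is last.
from pydantic_ai.messages import ModelRequest, ModelResponse, TextPart, UserPromptPart

history = [
    ModelRequest(parts=[UserPromptPart(content='Where does "hello world" come from?')]),
    ModelResponse(parts=[TextPart(content="Let me look it up...")]),
]

result = agent.run_sync(
    "Please continue the conversation.",  # filler prompt I would rather not need
    message_history=history,
)
```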