Currently, if you want to teach `Chat.append_message_stream()` about response formats it doesn't already know about, you have to register a new "message normalizer" strategy with an internal registry object. Although convenient for developers, and still potentially worth exporting (it's currently internal), this is a lot to ask most users to learn and implement. It would be much easier if you could just pass a function to `.append_message_stream()` that grabs the relevant content from each iteration of the stream.
For example, something like this (from #1610):
```python
from shiny.ui._chat_normalize import BaseMessageNormalizer, message_normalizer_registry


class LangchainAgentResponseNormalizer(BaseMessageNormalizer):
    # For each chunk of a .append_message_stream()
    def normalize_chunk(self, chunk):
        return chunk["messages"][0].content

    def can_normalize_chunk(self, chunk):
        return "messages" in chunk and len(chunk["messages"]) > 0

    # For .append_message()
    def normalize(self, message):
        return message["messages"][0].content

    def can_normalize(self, message):
        return "messages" in message and len(message["messages"]) > 0


message_normalizer_registry.register(
    "langchain-agents", LangchainAgentResponseNormalizer()
)
```
could instead become something like:

```python
chat.append_message_stream(response, lambda x: x["messages"][0].content)
```
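To illustrate the idea behind the proposal, here is a minimal, self-contained sketch of what such a callback would do conceptually. Note this is not the Shiny API: `normalize_stream`, `_Msg`, and the sample chunks are all hypothetical stand-ins for how the extractor function would be applied to each chunk of a stream.

```python
# Hypothetical sketch (not Shiny's implementation): apply a user-supplied
# extractor to each raw chunk of a stream, yielding just the text content.
def normalize_stream(stream, extract):
    """Yield extract(chunk) for each chunk in the stream."""
    for chunk in stream:
        yield extract(chunk)


# Stand-in for a LangChain-agent-style message object with a .content attribute.
class _Msg:
    def __init__(self, content):
        self.content = content


# Fake chunks shaped like {"messages": [<message>, ...]}.
raw_chunks = [{"messages": [_Msg("Hel")]}, {"messages": [_Msg("lo")]}]

# The same lambda the proposal would pass to .append_message_stream().
text = "".join(normalize_stream(raw_chunks, lambda x: x["messages"][0].content))
print(text)  # Hello
```

The appeal of the callback form is that this one-line lambda replaces the four-method normalizer class and registry call above.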