
Conversation

@ccurme (Contributor) commented Dec 16, 2025

No description provided.

@ccurme requested a review from lnhsingh as a code owner, December 16, 2025 21:20
@github-actions bot added the `langchain` (For docs changes to LangChain oss) and `internal` labels, Dec 16, 2025
@github-actions (bot): Mintlify preview ID generated: preview-ccstre-1765920086-1fd0a8a

1. Partial JSON as [tool calls](/oss/langchain/models#tool-calling) are generated
2. The completed, parsed tool calls that are executed

To do this, apply both the [`"messages"`](#llm-tokens) and [`"updates"`](#agent-progress) streaming modes. The `"messages"` mode will include [message chunks](/oss/langchain/messages#streaming-and-chunks) from all LLM calls in the agent; the `"updates"` mode will include completed messages with tool calls before they are routed to tools for execution.
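As a rough sketch of what combining the two modes looks like (assuming an `agent` built with `create_agent`; the input shape is illustrative, not the docs' final example):

```python
# Sketch: `agent` is assumed to come from create_agent.
inputs = {"messages": [{"role": "user", "content": "What's the weather in SF?"}]}

for mode, chunk in agent.stream(inputs, stream_mode=["messages", "updates"]):
    if mode == "messages":
        # Partial AIMessageChunks, including tool-call JSON fragments as they stream
        token, metadata = chunk
        print(token.content, end="", flush=True)
    elif mode == "updates":
        # Completed messages per node update, including fully parsed tool calls
        for node, update in chunk.items():
            for message in update.get("messages", []):
                if getattr(message, "tool_calls", None):
                    print(message.tool_calls)
```

With a list of stream modes, each yielded item is a `(mode, payload)` tuple, so the two streams can be demultiplexed in a single loop.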
Collaborator:

Could we avoid relying on `updates` for this information and instead show how to aggregate the tool message? There's no guarantee that the message comes from the same source.

Contributor (Author):

There are alternatives, although I'm not sure any of them is great. The issue is that at the moment there is no ID available during `stream_mode="messages"` that can be used to group streams from individual LLM calls.

`langgraph_checkpoint_ns` is available and will work for sub-agents, but not for direct chat model calls in nodes.

Some options:

- Update integrations to attach meaningful, consistent IDs on all chunks
  - To avoid breaking changes, we'd need to make sure the aggregated ID is unchanged, which means collecting the provider's message ID on the first chunk.
  - Major integrations could be updated in ~1 day, assuming they provide the ID on the first chunk, but this will be unreliable in general.
- Use `run_id`, which does appear to identify LLM calls (see the sketch after this list)
  - Can likely be exposed in `stream_mode="messages"` metadata
  - Available via the callback system / `astream_events`
- Apply tags (at model init, or with something like `model.with_config({"tags": [str(uuid.uuid4())]}).invoke(...)`)
  - Band-aid / ugly; points to needed functionality
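For reference, a minimal sketch of the `run_id` route via `astream_events` (model name and prompt are placeholders):

```python
import asyncio
from collections import defaultdict

from langchain.chat_models import init_chat_model


async def main():
    model = init_chat_model("openai:gpt-4o-mini")  # placeholder model
    chunks_by_run = defaultdict(list)
    async for event in model.astream_events("What's the weather in SF?", version="v2"):
        if event["event"] == "on_chat_model_stream":
            # run_id is stable for the duration of a single LLM call,
            # so it can key the grouping of streamed chunks
            chunks_by_run[event["run_id"]].append(event["data"]["chunk"])
    print(len(chunks_by_run))  # one entry per LLM call


asyncio.run(main())
```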

The pro of using `stream_mode="updates"` is that if you're using `create_agent`, it works in typical cases and captures the actual tool call that is sent to tools (let me know if there are failure modes with `create_agent`).

Also, unless we use the callback system (e.g., as in `astream_events`) or `stream_mode="updates"`, we're duplicating work by aggregating the messages ourselves, so ideally we'd figure out a way to expose the aggregated message. It may be possible to update `stream_mode="messages"` to do this.
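For what it's worth, the client-side aggregation I mean looks roughly like this (sketch; `agent` and `inputs` assumed as above):

```python
from langchain_core.messages import AIMessageChunk

full = None
for token, metadata in agent.stream(inputs, stream_mode="messages"):
    if isinstance(token, AIMessageChunk):
        # AIMessageChunk supports `+`, which merges content and tool-call fragments
        full = token if full is None else full + token

if full is not None:
    # Caveat from above: without a per-call ID, chunks from *different*
    # LLM calls in the agent get merged into one message here.
    print(full.tool_calls)
```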

```python
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"  # placeholder implementation
```
Collaborator:

Empty line

@github-actions (bot): Mintlify preview ID generated: preview-ccstre-1765987209-ecc12d4

@github-actions (bot): Mintlify preview ID generated: preview-ccstre-1766006521-72e89bc

@github-actions (bot): Mintlify preview ID generated: preview-ccstre-1766082137-0a5ddd8
