A cross.stream command and Nushell module for interacting with Anthropic's Claude AI models. This add-on leverages cross.stream's event-sourced architecture to provide persistent, stateful conversations with Claude that can be integrated into your terminal workflow.
Requirements:

- cross.stream
- anthropic-text-editor (for the `--with-tools` option): a micro-CLI that applies tool calls from Anthropic's built-in `text_editor_20250124` computer-use tool
Quick start with the `llm` module:

- Load the module overlay:

  ```nushell
  overlay use -p ./llm
  help llm
  ```
- Initialize your API key and register the cross.stream command:

  ```nushell
  $env.ANTHROPIC_API_KEY | llm init-store
  ```
- Make a test call:

  ```
  > llm call
  Enter prompt: hola
  Text:
  ¡Hola! ¿En qué puedo ayudarte hoy?
  ```

You're ready to go!
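To keep a thread going, follow up with the `--respond` flag. A sketch, assuming `llm call` treats a piped string as the next prompt (see the options list below):

```nushell
# continue the conversation from the last response
"and how would I say that in French?" | llm call -r
```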
Features:

- An interactive harness for processing Claude's built-in `bash_20250124` and `text_editor_20250124` tools
- Rich documents, e.g. PDFs
- Message caching: control which messages are cached using the `--cache` flag
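A hypothetical tool-enabled call might look like this (the harness asks for confirmation before executing anything, as shown in the sequence diagram below):

```nushell
# ask Claude to use its built-in bash_20250124 tool; each tool call
# is surfaced for approval before it runs
"what are the five largest files in this directory?" | llm call --with-tools
```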
TODO: document how to run `llm.call` without registering it:

```nushell
let c = source xs-command-llm.call-anthropic.nu ; do $c.process ("hi" | .append go)
```
Working with the response:

```nushell
.head llm.response | .cas | from json
```
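From there, ordinary Nushell operations apply. For instance, assuming the stored body follows the Anthropic Messages API shape (a `content` list of typed blocks), you could pull out just the text:

```nushell
# collect the text blocks from the most recent response
.head llm.response | .cas | from json | get content | where type == "text" | get text | str join "\n"
```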
Adhoc request: translate the current clipboard to English:

```nushell
[
  (bp) # our current clipboard: but really you want to "pin" a
       # snippet of content
  "please translate to English" # the instruction for Claude
]
# we should be able to pipe a list of strings directly into llm.call
| str join "\n\n---\n\n"
| (.append
  -c 03dg9w21nbjwon13m0iu6ek0a # the context which has llm.define and is generally considered adhoc
  llm.call
)
```
Using the `--cache` flag with large documents or inputs:

```nushell
# load a large document and process it with caching enabled
open large_document.pdf | llm call --cache
llm call "Summarize the key points from the document"
# the document content is marked as ephemeral in Claude's context;
# this reduces token usage in subsequent exchanges while still
# allowing Claude to reference the semantic content
```
View outstanding calls:
.cat | where topic in ["llm.call" "llm.error" "llm.response"] | reduce --fold {} {|frame acc|
if $frame.topic == "llm.call" {
return ($acc | insert $frame.id "pending")
}
$acc | upsert $frame.meta.frame_id ($frame | reject meta)
}
The `llm call` command supports the following options:

- `--with-tools`: Enable Claude to use the bash and text editor tools
- `--cache`: Mark messages as ephemeral, which prevents them from being used in subsequent responses. This is useful for excluding context-heavy content (like large documents) from being re-tokenized in future exchanges while preserving the semantic understanding from those messages.
- `--respond (-r)`: Continue from the last response
- `--json (-j)`: Treat input as JSON-formatted content
- `--separator (-s)`: Specify a custom separator when joining lists of strings (default: `"\n\n---\n\n"`)
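A hypothetical use of the separator flag, assuming `llm call` accepts a list of strings as input (as the adhoc example above suggests):

```nushell
# join the snippets with a custom delimiter before sending one prompt
["first snippet" "second snippet"] | llm call --separator "\n\n===\n\n"
```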
The sequence below shows how a call flows through the [cross.stream](https://github.com/cablehead/xs) store:

```mermaid
sequenceDiagram
    participant User
    participant CLI as llm-anthropic.nu CLI
    participant Store as cross.stream Store
    participant Command as llm.call Command
    participant API as Anthropic API

    User->>CLI: "Hello Claude" | .llm
    CLI->>Store: .append llm.call
    Store-->>Command: Executes Command
    Command->>Store: .head ANTHROPIC_API_KEY
    Store-->>Command: API Key
    Command->>Store: traverse-thread <id>
    Store-->>Command: Previous messages
    Command->>API: HTTP POST /v1/messages
    API-->>Command: SSE Stream (text chunks)

    loop For each response chunk
        Command->>Store: .append llm.recv
        Store-->>CLI: Stream response chunk
        CLI-->>User: Display streaming text
    end

    Command->>Store: .append llm.response

    alt Tool Use Request
        CLI->>User: Display tool use request
        User->>CLI: Confirm execution
        CLI->>Store: .append with tool results
        Store-->>Command: Continue with results
    end
```
The cross.stream framework offers significant advantages over a direct request/response API integration. Because all interactions are stored as a linked chain of events, powerful capabilities fall out:
- Streaming Responses: Any UI (terminal, web, desktop) can subscribe to see Claude's responses as they arrive
- Temporal Navigation: Browse conversation history at any point, fork discussions from previous messages
- Resilience: Interrupted responses retain all partial data
- Asynchronous Processing: LLM calls run independently in the background, managed by the cross.stream process
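For example, streaming isn't tied to the client that initiated the call. A sketch, assuming response chunks land on the `llm.recv` topic as in the diagram above:

```nushell
# follow the event stream from any shell session and print response
# chunks as they arrive
.cat --follow | where topic == "llm.recv" | each {|frame| $frame | .cas }
```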
By registering `llm.call` as a cross.stream command:
- Operations run independently of client processes
- State is managed through the event stream rather than memory
- Multiple consumers can observe the same operation
- Persistence is maintained across client restarts
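A sketch of what registration looks like, assuming the cross.stream convention of appending a handler script to the `llm.define` topic (the quick start's `llm init-store` handles this for you):

```nushell
# register the handler by hand (hypothetical; `llm init-store` is the
# supported path)
open xs-command-llm.call-anthropic.nu | .append llm.define
```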
And because the client side is plain Nushell, the module:

- Seamlessly integrates with developer command-line workflows
- Leverages Nushell's powerful data manipulation capabilities
- Creates composable pipelines between AI outputs and other tools
- Provides a foundation for custom tooling built around LLM interactions
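A hypothetical composition, assuming `llm call` emits the response text into the pipeline:

```nushell
[
  (git log --oneline -n 20)  # raw commit list, as a single string
  "draft short release notes from these commits"
]
| llm call
| save release-notes.md
```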
This approach creates a clean separation between API mechanisms and clients, making it easier to build specialized interfaces while maintaining a centralized conversation store.