xs-command-llm-anthropic

A cross.stream command and Nushell module for interacting with Anthropic's Claude AI models. This add-on leverages cross.stream's event-sourced architecture to provide persistent, stateful conversations with Claude that can be integrated into your terminal workflow.

Requirements

  • Nushell
  • A running cross.stream (xs) store, with its Nushell helpers (.append, .cat, .head, .cas) loaded
  • An Anthropic API key, exported as $env.ANTHROPIC_API_KEY

Onboarding

Quick start with the llm module:

  1. Load the module overlay:

     overlay use -p ./llm
     help llm

  2. Initialize your API key and register the cross.stream command:

     $env.ANTHROPIC_API_KEY | llm init-store

  3. Make a test call:

     llm call
     Enter prompt: hola
     Text:
     ¡Hola! ¿En qué puedo ayudarte hoy?

You're ready to go!
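
Prompts can also be piped straight into the call (a sketch, assuming string input is accepted the same way it is in the cache example further down):

# assumes llm call accepts a piped string prompt, as in the --cache example below
"translate 'buenos días' to English" | llm call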

Features

To document

  • document how to run llm.call without registering it
let c = source xs-command-llm.call-anthropic.nu ; do $c.process ("hi" | .append go)
  • Working with the response
.head llm.response | .cas | from json
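
For example, to pull just the text out of the stored response (a sketch, assuming the frame holds raw Anthropic Messages API JSON, i.e. a content list of typed blocks):

# assumes the stored JSON has the Messages API shape: { content: [{type: "text", text: "..."}] }
.head llm.response | .cas | from json | get content | where type == "text" | get text | str join "\n"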

Ad hoc request: translate the current clipboard to English

[
    (bp)               # our current clipboard: but really you want to "pin" a
                       # snippet of content
    "please translate to english"  # tool selection
]
# we should be able to pipe a list of strings directly into llm.call
| str join "\n\n---\n\n"
| (.append
    -c 03dg9w21nbjwon13m0iu6ek0a # the context which has llm.define and is generally considered adhoc
    llm.call
    )

Using the cache flag with large documents or inputs:

# Load a large document and process it with caching enabled
open large_document.pdf | llm call --cache
llm call "Summarize the key points from the document"

# The document content is marked as ephemeral in Claude's context
# This reduces token usage in subsequent exchanges
# while still allowing Claude to reference the semantic content
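
To keep drawing on the cached document in the same thread, a follow-up call can continue from the last response (a sketch, combining the --respond flag from the reference below with the positional prompt shown above):

# assumes --respond threads this prompt onto the previous (cached) exchange
llm call --respond "Now list any action items the document mentions"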

View outstanding calls:

.cat | where topic in ["llm.call" "llm.error" "llm.response"] | reduce --fold {} {|frame, acc|
    if $frame.topic == "llm.call" {
      $acc | insert $frame.id "pending"
    } else {
      $acc | upsert $frame.meta.frame_id ($frame | reject meta)
    }
  }
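
To narrow that down to just the calls still waiting on a response, the resulting record can be transposed and filtered (a sketch; $calls is a hypothetical name standing in for the output of the pipeline above):

# $calls: record of frame id -> "pending" | response frame, produced by the pipeline above
$calls
| transpose id status        # record -> table with id and status columns
| where status == "pending"  # keep entries never overwritten by a response or error
| get id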

Reference

Command Options

The llm call command supports the following options:

  • --with-tools: Enable Claude to use bash and text editor tools
  • --cache: Mark the input as ephemeral so context-heavy content (like large documents) is not re-tokenized in full in later exchanges, while its semantic content remains available to Claude
  • --respond (-r): Continue from the last response
  • --json (-j): Treat input as JSON formatted content
  • --separator (-s): Specify a custom separator when joining lists of strings (default: "\n\n---\n\n")
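
These flags compose; as a sketch (assuming list input is joined with the separator before being sent, as the --separator description above implies, and that --respond threads the prompt onto the previous exchange):

# hypothetical follow-up: join two snippets with a custom separator and continue the last thread
["first snippet" "second snippet"] | llm call --respond --separator "\n\n===\n\n"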

The sequence below traces a call through the system:

sequenceDiagram
    participant User
    participant CLI as llm-anthropic.nu CLI
    participant Store as [cross.stream](https://github.com/cablehead/xs) Store
    participant Command as llm.call Command
    participant API as Anthropic API

    User->>CLI: "Hello Claude" | .llm
    CLI->>Store: .append llm.call
    Store-->>Command: Executes Command

    Command->>Store: .head ANTHROPIC_API_KEY
    Store-->>Command: API Key

    Command->>Store: traverse-thread <id>
    Store-->>Command: Previous messages

    Command->>API: HTTP POST /v1/messages

    API-->>Command: SSE Stream (text chunks)

    loop For each response chunk
        Command->>Store: .append llm.recv
        Store-->>CLI: Stream response chunk
        CLI-->>User: Display streaming text
    end

    Command->>Store: .append llm.response

    alt Tool Use Request
        CLI->>User: Display tool use request
        User->>CLI: Confirm execution
        CLI->>Store: .append with tool results
        Store-->>Command: Continue with results
    end

Why Use This Approach

The cross.stream framework offers significant advantages over traditional AI integration approaches:

Event-Sourced Architecture

This system stores all interactions as a linked chain of events, creating powerful capabilities:

  • Streaming Responses: Any UI (terminal, web, desktop) can subscribe to see Claude's responses as they arrive (see the sketch after this list)
  • Temporal Navigation: Browse conversation history at any point, fork discussions from previous messages
  • Resilience: Interrupted responses retain all partial data
  • Asynchronous Processing: LLM calls run independently in the background, managed by the cross.stream process
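
For instance, a second terminal could watch a response stream in as chunks are appended (a sketch: the --follow flag on .cat and the llm.recv frame layout are assumptions, inferred from the sequence diagram above):

# assumes .cat can follow the stream live and that each llm.recv frame stores its text chunk in the CAS
.cat --follow | where topic == "llm.recv" | each {|frame| $frame | .cas }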

Command-Based Integration

By registering llm.call as a cross.stream command:

  • Operations run independently of client processes
  • State is managed through the event stream rather than memory
  • Multiple consumers can observe the same operation
  • Persistence is maintained across client restarts

Terminal-Native Workflow

  • Seamlessly integrates with developer command-line workflows
  • Leverages Nushell's powerful data manipulation capabilities
  • Creates composable pipelines between AI outputs and other tools (sketched below)
  • Provides a foundation for custom tooling built around LLM interactions
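
As a composability sketch (assuming structured input is passed via the --json flag described in the reference above):

# hypothetical pipeline: hand Claude structured data straight from Nushell
ls *.md | select name size modified | to json | llm call --json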

This approach creates a clean separation between API mechanisms and clients, making it easier to build specialized interfaces while maintaining a centralized conversation store.