llm-echo


Debug plugin for LLM. Adds a model which echoes its input without hitting an API or executing a local LLM.

Installation

Install this plugin in the same environment as LLM.

llm install llm-echo

Usage

The plugin adds an echo model which simply echoes the prompt details back to you as JSON.

llm -m echo prompt -s 'system prompt'

Output:

{
  "prompt": "prompt",
  "system": "system prompt",
  "attachments": [],
  "stream": true,
  "previous": []
}
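
You can exercise the same model from the Python API and parse the echoed JSON. A minimal sketch, assuming LLM's documented llm.get_model() and model.prompt() interface:

import json

import llm

model = llm.get_model("echo")
response = model.prompt("prompt", system="system prompt")

# The response text is the echoed prompt details as a JSON string
details = json.loads(response.text())
assert details["prompt"] == "prompt"
assert details["system"] == "system prompt"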

You can also pass an example option like this:

llm -m echo prompt -o example_bool 1

Output:

{
  "prompt": "prompt",
  "system": "",
  "attachments": [],
  "stream": true,
  "previous": [],
  "options": {
    "example_bool": true
  }
}
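
From Python, options are passed as keyword arguments to prompt(). A short sketch, assuming the same API and that -o example_bool 1 maps to example_bool=True:

import json

import llm

model = llm.get_model("echo")
# Options go through as keyword arguments to prompt()
response = model.prompt("prompt", example_bool=True)
assert json.loads(response.text())["options"] == {"example_bool": True}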

Tool calling

You can use llm-echo to test tool calling without needing to run prompts through an actual LLM. In your prompt, send something like this:

{
  "prompt": "This will be treated as the prompt",
  "tool_calls": [
    {
      "name": "example",
      "arguments": {
        "input": "Hello, world!"
      }
    }
  ]
}

You can assemble a test that looks like this:

import json

import llm


def example(input: str) -> str:
    return f"Example output for {input}"

model = llm.get_model("echo")
chain_response = model.chain(
    json.dumps(
        {
            "tool_calls": [
                {
                    "name": "example",
                    "arguments": {"input": "test"},
                }
            ],
            "prompt": "prompt",
        }
    ),
    system="system",
    tools=[example],
)
responses = list(chain_response.responses())
tool_calls = responses[0].tool_calls()
assert tool_calls == [
    llm.ToolCall(name="example", arguments={"input": "test"}, tool_call_id=None)
]
assert responses[1].prompt.tool_results == [
    llm.models.ToolResult(
        name="example", output="Example output for test", tool_call_id=None
    )
]

Or you can read the JSON from the last response in the chain:

response_info = json.loads(responses[-1].text())

And run assertions against the "tool_results" key, which should look something like this:

{
  "prompt": "",
  "system": "",
  "...": "...",
  "tool_results": [
    {
      "name": "example",
      "output": "Example output for test",
      "tool_call_id": null
    }
  ]
}
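
The corresponding assertions against response_info are then plain dictionary lookups; for example, given the example tool above:

tool_result = response_info["tool_results"][0]
assert tool_result["name"] == "example"
assert tool_result["output"] == "Example output for test"
assert tool_result["tool_call_id"] is None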

Take a look at the test suite for llm-tools-simpleeval for an example of how to write tests against tools.

Raw responses

Sometimes it can be useful to output an exact string, for example if you are testing the --extract option in LLM.

If your prompt is JSON with a "raw" key, that string is the only thing that will be returned. For example:

{
  "raw": "This is the raw response"
}

Will return:

This is the raw response
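
This makes exact-output assertions easy to write in a test. A minimal sketch using the Python API, under the same assumptions as the earlier examples:

import json

import llm

model = llm.get_model("echo")
response = model.prompt(json.dumps({"raw": "This is the raw response"}))
# No JSON envelope here: the raw string comes back verbatim
assert response.text() == "This is the raw response"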

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-echo
python -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

python -m pip install -e '.[test]'

To run the tests:

python -m pytest
