LLM response caching for tests #484

@elroy-bot

Description

I would like a cached version of the LLM client that writes its output to local files during test flows. This implies converting the existing client into a class that is initialized and incorporated into ElroyContext.

The test LLM client should still be able to call real endpoints as it does currently, but it should write responses to a fixtures dir inside the test dir. These fixtures can be checked into version control (manually, by the user), so remote runs can just use the cached data rather than re-running LLM queries.

This caching mechanism should ONLY affect tests; there should be no caching that affects production.

The cache should be written as JSON files in a test/fixtures/llm_cache dir.
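For concreteness, one cached entry might look like the fragment below (filename and field names are illustrative only; the issue does not prescribe a schema):

```json
{
  "request": {
    "model": "example-model",
    "messages": [{"role": "user", "content": "Hello"}]
  },
  "response": {
    "content": "Hi there!"
  }
}
```

Storing the originating request alongside the response keeps the fixtures self-describing when they are reviewed in version control.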
