--extract support for templates, closes #681
simonw committed Dec 19, 2024
1 parent 67d4a99 commit 000e984
Showing 4 changed files with 31 additions and 7 deletions.
21 changes: 17 additions & 4 deletions docs/templates.md
@@ -26,6 +26,11 @@ You can also save default parameters:
llm --system 'Summarize this text in the voice of $voice' \
--model gpt-4 -p voice GlaDOS --save summarize
```
If you add `--extract`, the setting to {ref}`extract the first fenced code block <usage-extract-fenced-code>` will be persisted in the template.
```bash
llm --system 'write a Python function' --extract --save python-function
llm -t python-function 'reverse a string'
```
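Under the hood, `--save` writes a YAML file to the templates directory. Based on the `to_save` logic added in this commit, the `python-function` template saved by the command above would contain roughly:

```yaml
system: write a Python function
extract: true
```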
## Using a template

You can execute a named template using the `-t/--template` option:
@@ -100,7 +105,7 @@ curl -s 'https://til.simonwillison.net/macos/imovie-slides-and-audio' | \
Output:
> In a fantastical steampunk world, Simon Willison decided to merge an old MP3 recording with slides from the talk using iMovie. After exporting the slides as images and importing them into iMovie, he had to disable the default Ken Burns effect using the "Crop" tool. Then, Simon manually synchronized the audio by adjusting the duration of each image. Finally, he published the masterpiece to YouTube, with the whimsical magic of steampunk-infused illustrations leaving his viewers in awe.

## System templates
### System templates

When working with models that support system prompts (such as `gpt-3.5-turbo` and `gpt-4`) you can set a system prompt using a `system:` key like so:

@@ -116,7 +121,7 @@ system: You speak like an excitable Victorian adventurer
prompt: 'Summarize this: $input'
```

## Additional template variables
### Additional template variables

Templates that work against the user's normal input (content that is either piped to the tool via standard input or passed as a command-line argument) use just the `$input` variable.

@@ -157,7 +162,7 @@ I got this:
> My previous test subject seemed to have learned something new about iMovie. They exported keynote slides as individual images [...] Quite impressive for a human.

(prompt-default-parameters)=
## Specifying default parameters
### Specifying default parameters

You can also specify default values for parameters, using a `defaults:` key.

@@ -185,7 +190,15 @@ I got this:

> Text, summarize in Yoda's voice, I will: "Hmm, young padawan. Summary of this text, you seek. Hmmm. ...

## Setting a default model for a template
### Configuring code extraction

To configure the {ref}`extract first fenced code block <usage-extract-fenced-code>` setting for the template, add this:

```yaml
extract: true
```

### Setting a default model for a template

Templates executed using `llm -t template-name` will execute using the default model that the user has configured for the tool - or `gpt-3.5-turbo` if they have not configured their own default.

9 changes: 6 additions & 3 deletions llm/cli.py
@@ -262,9 +262,6 @@ def prompt(

model_aliases = get_model_aliases()

if extract:
no_stream = True

def read_prompt():
nonlocal prompt

@@ -319,6 +316,8 @@ def read_prompt():
to_save["system"] = system
if param:
to_save["defaults"] = dict(param)
if extract:
to_save["extract"] = True
path.write_text(
yaml.dump(
to_save,
@@ -335,6 +334,7 @@ def read_prompt():
if system:
raise click.ClickException("Cannot use -t/--template and --system together")
template_obj = load_template(template)
extract = template_obj.extract
prompt = read_prompt()
try:
prompt, system = template_obj.evaluate(prompt, params)
@@ -343,6 +343,9 @@ def read_prompt():
if model_id is None and template_obj.model:
model_id = template_obj.model

if extract:
no_stream = True

conversation = None
if conversation_id or _continue:
# Load the conversation - loads most recent if no ID provided
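The extraction step itself is not shown in this diff — the CLI only sets `no_stream = True` so the full response is available before scanning it. A minimal sketch of pulling out the first fenced code block (the helper name and regex are assumptions for illustration, not llm's actual implementation):

```python
import re
from typing import Optional


def extract_first_fenced_code_block(text: str) -> Optional[str]:
    """Return the body of the first ``` fenced block, or None if there is none."""
    # Allow an optional language tag (e.g. ```python) after the opening fence
    match = re.search(r"```[\w+-]*\n(.*?)```", text, re.DOTALL)
    return match.group(1) if match else None


response = "Here you go:\n```python\ndef reverse(s):\n    return s[::-1]\n```\nHope that helps!"
print(extract_first_fenced_code_block(response))
```

Disabling streaming makes sense here: extraction needs the complete response before the first fence can be located.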
2 changes: 2 additions & 0 deletions llm/templates.py
@@ -9,6 +9,8 @@ class Template(BaseModel):
system: Optional[str] = None
model: Optional[str] = None
defaults: Optional[Dict[str, Any]] = None
# Should first fenced code block be extracted?
extract: Optional[bool] = None

class Config:
extra = "forbid"
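Because the model sets `extra = "forbid"`, unknown keys in a template YAML file raise a validation error instead of being silently ignored — so a typo like `extrct:` fails loudly. A self-contained sketch (reproducing only the fields visible in this diff; the real class in `llm/templates.py` may have more):

```python
import yaml
from typing import Any, Dict, Optional

from pydantic import BaseModel, ValidationError


class Template(BaseModel):
    prompt: Optional[str] = None
    system: Optional[str] = None
    model: Optional[str] = None
    defaults: Optional[Dict[str, Any]] = None
    # Should first fenced code block be extracted?
    extract: Optional[bool] = None

    class Config:
        extra = "forbid"  # reject unknown keys in template files


loaded = Template(**yaml.safe_load("system: write a Python function\nextract: true"))
print(loaded.extract)  # True

try:
    Template(extrct=True)  # misspelled key
except ValidationError:
    print("unknown key rejected")
```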
6 changes: 6 additions & 0 deletions tests/test_templates.py
@@ -91,6 +91,12 @@ def test_templates_list(templates_path, args):
{"prompt": "Say hello as $name", "defaults": {"name": "default-name"}},
None,
),
# -x/--extract should be persisted:
(
["--system", "write python", "--extract"],
{"system": "write python", "extract": True},
None,
),
),
)
def test_templates_prompt_save(templates_path, args, expected_prompt, expected_error):
