Version 0.9.7 adds dynamic directives, a better rewrite interface, streaming support in the gptel request API, and more flexible model/backend configuration.
Breaking changes
gptel-rewrite-menu has been obsoleted. Use gptel-rewrite instead.
Backends
- Add support for OpenAI's o1-preview and o1-mini
- Add support for Anthropic's Claude 3.5 Haiku
- Add support for xAI (contributed by @WuuBoLin)
- Add support for Novita AI (contributed by @jasonhp)
Notable new features and UI changes
gptel's directives (see gptel-directives) can now be dynamic, and can include more than the system message: you can "pre-fill" a conversation with canned user/LLM messages. Directives can also be functions that dynamically generate the system message and conversation history based on the current context. This paves the way for fully flexible task-specific templates, which the UI does not yet support in full. This design was suggested by @meain. (#375)
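As a sketch of what a function-valued directive might look like (the entry name and the exact return shape — a system string followed by alternating user/LLM messages — are assumptions based on the description above):

```emacs-lisp
;; Hypothetical entry in `gptel-directives': a function that builds the
;; system message and a canned opening exchange from the current context.
(add-to-list
 'gptel-directives
 `(commit-msg
   . ,(lambda ()
        (list
         ;; System message, generated dynamically
         (format "You write git commit messages for the project in %s."
                 default-directory)
         ;; Pre-filled user message
         "Summarize the staged changes as a commit message."
         ;; Pre-filled LLM response
         "Understood. Please show me the diff."))))
```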
gptel's rewrite interface has been reworked. If using a streaming endpoint, the rewritten text is streamed in as a preview placed over the original. In all cases, clicking on the preview brings up a dispatch you can use to easily diff, ediff, merge, accept or reject the changes (4ae9c1b), and you can configure gptel to run one of these actions automatically. See the README for examples. This design was suggested by @meain. (#375)
gptel-abort, used to cancel requests in progress, now works across the board, including when not using Curl or with gptel-rewrite (7277c00).
The gptel-request API now explicitly supports streaming responses (7277c00), making it easy to write your own helpers or features with streaming support. The API also supports gptel-abort to stop and clean up responses.
You can now unset the system message -- different from setting it to an empty string. gptel will also automatically disable the system message when using models that don't support it (0a2c07a).
Support for including PDFs with requests to Anthropic models has been added. (These queries are cached, so you pay only 10% of the token cost of the PDF in follow-up queries.) Note that document support (PDFs etc) for Gemini models has been available since v0.9.5. (0f173ba, #459)
When defining a gptel model or backend, you can specify arbitrary parameters to be sent with each request. This includes the (many) API options across all APIs that gptel does not yet provide explicit support for (bcbbe67). This feature was suggested by @tillydray (#471).
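For illustration, a backend definition carrying extra API options might look like the following sketch; the `:request-params` keyword and the plist shape are assumptions, and the parameter values are arbitrary:

```emacs-lisp
;; Sketch: attaching arbitrary API options to a backend definition.
;; These key-value pairs would be merged into every request payload.
(gptel-make-openai "OpenAI-tuned"
  :key #'gptel-api-key
  :models '(gpt-4o-mini)
  :request-params '(:temperature 0.2 :seed 42))
```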
New transient command option to easily remove all included context chunks (a844612), suggested by @metachip and @gavinhughes.
Bug fixes and other news
- Pressing RET on included files in the context inspector buffer now pops up the file correctly.
- API keys are stripped of whitespace before sending.
- Multiple UI, backend and prompt construction bugs have been fixed.
Pull requests
- Remove chatgpt-arcana from the list of alternatives by @CarlQLange in #431
- gptel-anthropic: Add upgraded Claude 3.5 Sonnet model by @benthamite in #436
- docs: add Novita AI in README by @jasonhp in #448
- gptel-curl: don't try converting CR-LF on Windows by @ssafar in #456
- README: Add support for xAI by @WuuBoLin in #466
- fix typo by @tillydray in #469
- fix(private-gpt): convert model name from symbol to string in request by @rosenstrauch in #470
- Update gptel--anthropic-models by @benthamite in #483
- Added a section telling new users how to select a backend explicitly by @blais in #467
New Contributors
- @CarlQLange made their first contribution in #431
- @jasonhp made their first contribution in #448
- @ssafar made their first contribution in #456
- @WuuBoLin made their first contribution in #466
- @tillydray made their first contribution in #469
- @rosenstrauch made their first contribution in #470
- @blais made their first contribution in #467
Full Changelog: v0.9.6...v0.9.7