GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models/backends.
| LLM Backend | Supports | Requires |
|-------------|----------|----------|
| ChatGPT     | ✓        | API key |
| Azure       | ✓        | Deployment and API key |
| Ollama      | ✓        | Ollama running locally |
| GPT4All     | ✓        | GPT4All running locally |
| Gemini      | ✓        | API key |
| PrivateGPT  | Planned  | - |
| Llama.cpp   | Planned  | - |
General usage demo (YouTube): intro-demo.mp4, intro-demo-2.mp4

Multi-LLM support demo: gptel-multi.mp4
- It’s async and fast, and streams responses.
- Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever).
- LLM responses are in Markdown or Org markup.
- Supports conversations and multiple independent sessions.
- Save chats as regular Markdown/Org/Text files and resume them later.
- You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
GPTel uses Curl if available, but falls back to the built-in `url-retrieve` to work without external dependencies.
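If you want to skip Curl even when it is available, you can force the fallback; a minimal sketch (this just toggles the `gptel-use-curl` option described under Additional Configuration):

```emacs-lisp
;; Use Emacs' built-in url-retrieve instead of Curl for all requests.
(setq gptel-use-curl nil)
```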
- Installation
- Setup
- Usage
- FAQ
- Additional Configuration
- The gptel API
- Alternatives
- Breaking Changes
- Acknowledgments
**MELPA:** GPTel is on MELPA. Ensure that MELPA is in your list of package sources, then install gptel with `M-x package-install RET gptel`. (Optional: install `markdown-mode`.)
**Manual:** Clone or download this repository and run `M-x package-install-file RET` on the repository directory. Installing the `markdown-mode` package is optional.
**Doom Emacs:** In `packages.el`:

```emacs-lisp
(package! gptel)
```

In `config.el`:

```emacs-lisp
(use-package! gptel
  :config
  (setq! gptel-api-key "your key"))
```
**Spacemacs:** After installation with `M-x package-install RET gptel`:

- Add `gptel` to `dotspacemacs-additional-packages`.
- Add `(require 'gptel)` to `dotspacemacs/user-config`.
**ChatGPT:** Procure an OpenAI API key.

Optional: Set `gptel-api-key` to the key. Alternatively, you may choose a more secure method such as:

- Storing it in `~/.authinfo`. By default, “api.openai.com” is used as HOST and “apikey” as USER:

  ```
  machine api.openai.com login apikey password TOKEN
  ```

- Setting it to a function that returns the key.
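For instance, here is a sketch of the function approach, looking up the key with Emacs' built-in auth-source library from the `~/.authinfo` entry shown above:

```emacs-lisp
;; gptel-api-key may be a function of no arguments that returns the key.
(setq gptel-api-key
      (lambda ()
        (auth-source-pick-first-password :host "api.openai.com"
                                         :user "apikey")))
```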
**Azure:** Register a backend with

```emacs-lisp
(gptel-make-azure
 "Azure-1"                              ;Name, whatever you'd like
 :protocol "https"                      ;Optional -- https is the default
 :host "YOUR_RESOURCE_NAME.openai.azure.com"
 :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
 :stream t                              ;Enable streaming responses
 :key #'gptel-api-key
 :models '("gpt-3.5-turbo" "gpt-4"))
```
Refer to the documentation of `gptel-make-azure` to set more parameters.

You can pick this backend from the transient menu when using gptel (see Usage). If you want it to be the default, set it as the default value of `gptel-backend`:

```emacs-lisp
(setq-default gptel-backend
              (gptel-make-azure
               "Azure-1"
               ...))
```
**GPT4All:** Register a backend with

```emacs-lisp
(gptel-make-gpt4all
 "GPT4All"                              ;Name of your choosing
 :protocol "http"
 :host "localhost:4891"                 ;Where it's running
 :models '("mistral-7b-openorca.Q4_0.gguf")) ;Available models
```

These are the required parameters; refer to the documentation of `gptel-make-gpt4all` for more.

You can pick this backend from the transient menu when using gptel (see Usage), or set it as the default value of `gptel-backend`. Additionally, you may want to increase the response token size, since GPT4All uses very short (often truncated) responses by default:

```emacs-lisp
;; OPTIONAL configuration
(setq-default gptel-model "mistral-7b-openorca.Q4_0.gguf" ;Pick your default model
              gptel-backend (gptel-make-gpt4all "GPT4All" :protocol ...))
(setq-default gptel-max-tokens 500)
```
**Ollama:** Register a backend with

```emacs-lisp
(gptel-make-ollama
 "Ollama"                               ;Any name of your choosing
 :host "localhost:11434"                ;Where it's running
 :models '("mistral:latest")            ;Installed models
 :stream t)                             ;Stream responses
```

These are the required parameters; refer to the documentation of `gptel-make-ollama` for more.

You can pick this backend from the transient menu when using gptel (see Usage), or set it as the default value of `gptel-backend`:

```emacs-lisp
;; OPTIONAL configuration
(setq-default gptel-model "mistral:latest" ;Pick your default model
              gptel-backend (gptel-make-ollama "Ollama" :host ...))
```
**Gemini:** Register a backend with

```emacs-lisp
;; :key can be a function that returns the API key.
(gptel-make-gemini
 "Gemini"
 :key "YOUR_GEMINI_API_KEY"
 :stream t)
```

These are the required parameters; refer to the documentation of `gptel-make-gemini` for more.

You can pick this backend from the transient menu when using gptel (see Usage), or set it as the default value of `gptel-backend`:

```emacs-lisp
;; OPTIONAL configuration
(setq-default gptel-model "gemini-pro" ;Pick your default model
              gptel-backend (gptel-make-gemini "Gemini" :host ...))
```
(There is also a video demo showing various uses of gptel.)
| Command | Description |
|---|---|
| `gptel-send` | Send conversation up to `(point)`, or selection if region is active. Works anywhere in Emacs. |
| `gptel` | Create a new dedicated chat buffer. Not required to use gptel. |
| `C-u gptel-send` | Transient menu for preferences, input/output redirection etc. |
| `gptel-menu` | (Same) |
| `gptel-set-topic` | (Org-mode only) Limit conversation context to an Org heading. |
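Since `gptel-send` works from anywhere, you may want a convenient binding for it. A sketch, with a hypothetical key choice:

```emacs-lisp
;; Hypothetical global binding for gptel-send; pick any free key.
(global-set-key (kbd "C-c g") #'gptel-send)
```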
- Call `M-x gptel-send` to send the text up to the cursor. The response will be inserted below. Continue the conversation by typing below the response.
- If a region is selected, the conversation will be limited to its contents.
- Call `M-x gptel-send` with a prefix argument to
  - set chat parameters (GPT model, directives etc) for this buffer,
  - read the prompt from elsewhere or redirect the response elsewhere,
  - or replace the prompt with the response.
With a region selected, you can also rewrite prose or refactor code from here.
- Run `M-x gptel` to start or switch to the chat buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (`C-u M-x gptel`) to start a new session.
- In the gptel buffer, send your prompt with `M-x gptel-send`, bound to `C-c RET`.
- Set chat parameters (LLM provider, model, directives etc) for the session by calling `gptel-send` with a prefix argument (`C-u C-c RET`).
That’s it. You can go back and edit previous prompts and responses if you want.
The default mode is `markdown-mode` if available, else `text-mode`. You can set `gptel-default-mode` to `org-mode` if desired.
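For example:

```emacs-lisp
;; Use Org mode in dedicated chat buffers.
(setq gptel-default-mode 'org-mode)
```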
Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on `gptel-mode` before editing the buffer.
To be minimally annoying, GPTel does not scroll the window as the response is inserted. Add the following to your configuration to enable auto-scrolling:

```emacs-lisp
(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)
```
Likewise, GPTel does not move the cursor by default. Add the following to your configuration to move the cursor to the end of the response after it is inserted:

```emacs-lisp
(add-hook 'gptel-post-response-hook 'gptel-end-of-response)
```

You can also call `gptel-end-of-response` as a command at any time.
To change the prompt and response prefixes, customize `gptel-prompt-prefix-alist` and `gptel-response-prefix-alist`. You can set a different pair for each major-mode.
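For instance, a sketch with illustrative prefix strings:

```emacs-lisp
;; Illustrative per-major-mode prompt prefixes.
(setq gptel-prompt-prefix-alist
      '((markdown-mode . "### ")
        (org-mode . "*** ")
        (text-mode . "### ")))
```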
Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:

- Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated gptel buffer just adds some visual flair to the interaction.
- Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.
| Connection options | |
|---|---|
| `gptel-use-curl` | Use Curl (default), fallback to Emacs’ built-in `url`. |
| `gptel-proxy` | Proxy server for requests, passed to Curl via `--proxy`. |
| `gptel-api-key` | Variable/function that returns the API key for the active backend. |

| LLM options | (Note: not supported uniformly across LLMs) |
|---|---|
| `gptel-backend` | Default LLM backend. |
| `gptel-model` | Default model to use; depends on the backend. |
| `gptel-stream` | Enable streaming responses, if the backend supports it. |
| `gptel-directives` | Alist of system directives, can switch on the fly. |
| `gptel-max-tokens` | Maximum token count (in query + response). |
| `gptel-temperature` | Randomness in response text, 0 to 2. |

| Chat UI options | |
|---|---|
| `gptel-default-mode` | Major mode for dedicated chat buffers. |
| `gptel-prompt-prefix-alist` | Text inserted before queries. |
| `gptel-response-prefix-alist` | Text inserted before responses. |
| `gptel-use-header-line` | Display status messages in the header-line (default) or minibuffer. |
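As an illustration, a few of these options set in an init file; the values shown here are illustrative, not recommendations:

```emacs-lisp
;; Illustrative settings for some of the options above.
(setq gptel-stream t         ;Stream responses when the backend supports it
      gptel-temperature 0.7  ;Less random responses
      gptel-max-tokens 1000) ;Maximum token count
```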
GPTel’s default usage pattern is simple, and will stay this way: read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (`C-u M-x gptel-send`).

For more programmable usage, gptel provides a general `gptel-request` function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by `gptel-send`. See the documentation of `gptel-request`, and the wiki for examples.
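As an illustration, a minimal sketch of a one-off query built on `gptel-request`, using its `:callback` keyword argument (see its documentation for the full argument list):

```emacs-lisp
;; A sketch: send a fixed prompt and show the response in the echo area.
(gptel-request
 "In one sentence, what is a closure in Emacs Lisp?"
 :callback (lambda (response info)
             (if response
                 (message "gptel: %s" response)
               (message "gptel request failed: %s"
                        (plist-get info :status)))))
```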
These are packages that depend on GPTel to provide additional functionality:
- gptel-extensions: Extra utility functions for GPTel.
- ai-blog.el: Streamline generation of blog posts in Hugo.
Other Emacs clients for LLMs include
- chatgpt-shell: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- org-ai: Interaction through special `#+begin_ai ... #+end_ai` Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.
There are several more: chatgpt-arcana, leafy-mode, chat.el
- Possible breakage, see #120: If streaming responses stop working for you after upgrading to v0.5, try reinstalling gptel and deleting its native comp eln cache in `native-comp-eln-load-path`.
- The user option `gptel-host` is deprecated. If the defaults don’t work for you, use `gptel-make-openai` (which see) to customize server settings.
- `gptel-api-key-from-auth-source` now searches for the API key using the host address for the active LLM backend, i.e. “api.openai.com” when using ChatGPT. You may need to update your `~/.authinfo`.
- Alexis Gallagher and Diego Alvarez for fixing a nasty multi-byte bug with `url-retrieve`.
- Jonas Bernoulli for the Transient library.