
GPTel: A simple LLM client for Emacs


GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends.

LLM Backend      Requires
ChatGPT          API key
Azure            Deployment and API key
Ollama           Ollama running locally
GPT4All          GPT4All running locally
Gemini           API key
Llama.cpp        Llama.cpp running locally
Llamafile        Local Llamafile server
Kagi FastGPT     API key
Kagi Summarizer  API key
together.ai      API key
Anyscale         API key

General usage: (YouTube Demo)

[Video: intro-demo.mp4, intro-demo-2.mp4]

Multi-LLM support demo:

[Video: gptel-multi.mp4]
  • It’s async and fast, streams responses.
  • Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever).
  • LLM responses are in Markdown or Org markup.
  • Supports conversations and multiple independent sessions.
  • Save chats as regular Markdown/Org/Text files and resume them later.
  • You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
  • Don’t like gptel’s workflow? Use it to create your own for any supported model/backend with a simple API.

GPTel uses Curl if available, but falls back to url-retrieve to work without external dependencies.
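
To force the fallback even when Curl is installed, disable it explicitly:

(setq gptel-use-curl nil)               ;Always use the built-in url-retrieve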


Installation

GPTel is on MELPA. Ensure that MELPA is in your list of sources, then install gptel with M-x package-install⏎ gptel.

(Optional: Install markdown-mode.)

Straight

(straight-use-package 'gptel)

Installing the markdown-mode package is optional.

Manual

Clone or download this repository and run M-x package-install-file⏎ on the repository directory.

Installing the markdown-mode package is optional.

Doom Emacs

In packages.el

(package! gptel)

In config.el

(use-package! gptel
 :config
 (setq! gptel-api-key "your key"))

“your key” can be the API key itself, or (safer) a function that returns the key. Setting gptel-api-key is optional; you will be asked for a key if it’s not found.
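
For example, a function that looks the key up in your ~/.authinfo instead of hard-coding it (a sketch; the host and user match the authinfo entry shown under Setup below):

(use-package! gptel
 :config
 (require 'auth-source)                 ;For auth-source-pick-first-password
 (setq! gptel-api-key
        (lambda ()
          (auth-source-pick-first-password
           :host "api.openai.com" :user "apikey"))))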

Spacemacs

After installation with M-x package-install⏎ gptel

  • Add gptel to dotspacemacs-additional-packages
  • Add (require 'gptel) to dotspacemacs/user-config

Setup

ChatGPT

Procure an OpenAI API key.

Optional: Set gptel-api-key to the key. Alternatively, you may choose a more secure method such as:

  • Storing in ~/.authinfo. By default, “api.openai.com” is used as HOST and “apikey” as USER.
    machine api.openai.com login apikey password TOKEN
        
  • Setting it to a function that returns the key.
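
For the latter, gptel ships a helper that reads the key from the authinfo entry above (see also the note under Breaking Changes):

(setq gptel-api-key #'gptel-api-key-from-auth-source)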

Other LLM backends

Azure

Register a backend with

(gptel-make-azure "Azure-1"             ;Name, whatever you'd like
  :protocol "https"                     ;Optional -- https is the default
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
  :stream t                             ;Enable streaming responses
  :key #'gptel-api-key
  :models '("gpt-3.5-turbo" "gpt-4"))

Refer to the documentation of gptel-make-azure to set more parameters.

You can pick this backend from the menu when using gptel. (see Usage)

If you want it to be the default, set it as the default value of gptel-backend:

(setq-default gptel-backend (gptel-make-azure "Azure-1" ...)
              gptel-model "gpt-3.5-turbo")

GPT4All

Register a backend with

(gptel-make-gpt4all "GPT4All"           ;Name of your choosing
 :protocol "http"                       
 :host "localhost:4891"                 ;Where it's running
 :models '("mistral-7b-openorca.Q4_0.gguf")) ;Available models

These are the required parameters; refer to the documentation of gptel-make-gpt4all for more.

You can pick this backend from the menu when using gptel (see Usage), or set this as the default value of gptel-backend. Additionally you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default:

;; OPTIONAL configuration
(setq-default gptel-model "mistral-7b-openorca.Q4_0.gguf" ;Pick your default model
              gptel-backend (gptel-make-gpt4all "GPT4All" :protocol ...))
(setq-default gptel-max-tokens 500)

Ollama

Register a backend with

(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models '("mistral:latest"))          ;List of models

These are the required parameters; refer to the documentation of gptel-make-ollama for more.

You can pick this backend from the menu when using gptel (see Usage), or set this as the default value of gptel-backend:

;; OPTIONAL configuration
(setq-default gptel-model "mistral:latest" ;Pick your default model
              gptel-backend (gptel-make-ollama "Ollama" :host ...))

Gemini

Register a backend with

;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini"
  :key "YOUR_GEMINI_API_KEY"
  :stream t)

These are the required parameters; refer to the documentation of gptel-make-gemini for more.

You can pick this backend from the menu when using gptel (see Usage), or set this as the default value of gptel-backend:

;; OPTIONAL configuration
(setq-default gptel-model "gemini-pro" ;Pick your default model
              gptel-backend (gptel-make-gemini "Gemini" :key ...))

Llama.cpp or Llamafile

(If using a llamafile, run a server llamafile instead of a “command-line llamafile”, with a model that supports text generation.)

Register a backend with

;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp"          ;Any name
  :stream t                             ;Stream responses
  :protocol "http"
  :host "localhost:8000"                ;Llama.cpp server location
  :models '("test"))                    ;Any names, doesn't matter for Llama

These are the required parameters; refer to the documentation of gptel-make-openai for more.

You can pick this backend from the menu when using gptel (see Usage), or set this as the default value of gptel-backend:

;; OPTIONAL configuration
(setq-default gptel-backend (gptel-make-openai "llama-cpp" ...)
              gptel-model   "test")

Kagi (FastGPT & Summarizer)

Kagi’s FastGPT model and the Universal Summarizer are both supported. A couple of notes:

  1. Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.
  2. Kagi models do not support multi-turn conversations, interactions are “one-shot”. They also do not support streaming responses.

Register a backend with

(gptel-make-kagi "Kagi"                    ;any name
  :key "YOUR_KAGI_API_KEY")                ;can be a function that returns the key

These are the required parameters; refer to the documentation of gptel-make-kagi for more.

You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel. Alternatively you can set this as the default value of gptel-backend:

;; OPTIONAL configuration
(setq-default gptel-model "fastgpt"
              gptel-backend (gptel-make-kagi "Kagi" :key ...))

The alternatives to fastgpt include summarize:cecil, summarize:agnes, summarize:daphne, and summarize:muriel. The difference between the summarizer engines is documented here.
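
The same pattern works for the summarizer engines, e.g.:

;; OPTIONAL configuration
(setq-default gptel-model "summarize:agnes"
              gptel-backend (gptel-make-kagi "Kagi" :key ...))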

together.ai

Register a backend with

;; Together.ai offers an OpenAI compatible API
(gptel-make-openai "TogetherAI"         ;Any name you want
  :host "api.together.xyz"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check together.ai
            "mistralai/Mixtral-8x7B-Instruct-v0.1"
            "codellama/CodeLlama-13b-Instruct-hf"
            "codellama/CodeLlama-34b-Instruct-hf"))

You can pick this backend from the menu when using gptel (see Usage), or set this as the default value of gptel-backend:

;; OPTIONAL configuration
(setq-default gptel-backend (gptel-make-openai "TogetherAI" ...)
              gptel-model   "mistralai/Mixtral-8x7B-Instruct-v0.1")

Anyscale

Register a backend with

;; Anyscale offers an OpenAI compatible API
(gptel-make-openai "Anyscale"           ;Any name you want
  :host "api.endpoints.anyscale.com"
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(;; has many more, check anyscale
            "mistralai/Mixtral-8x7B-Instruct-v0.1"))

You can pick this backend from the menu when using gptel (see Usage), or set this as the default value of gptel-backend:

;; OPTIONAL configuration
(setq-default gptel-backend (gptel-make-openai "Anyscale" ...)
              gptel-model   "mistralai/Mixtral-8x7B-Instruct-v0.1")

Usage

(This is also a video demo showing various uses of gptel.)

Command          Description
gptel-send       Send conversation up to (point), or selection if region is active. Works anywhere in Emacs.
gptel            Create a new dedicated chat buffer. Not required to use gptel.
C-u gptel-send   Transient menu for preferences, input/output redirection etc.
gptel-menu       (Same)
gptel-set-topic  (Org-mode only) Limit conversation context to an Org heading.

In any buffer:

  1. Call M-x gptel-send to send the text up to the cursor. The response will be inserted below. Continue the conversation by typing below the response.
  2. If a region is selected, the conversation will be limited to its contents.
  3. Call M-x gptel-send with a prefix argument to
  • set chat parameters (GPT model, directives etc) for this buffer,
  • read the prompt from elsewhere or redirect the response elsewhere,
  • or replace the prompt with the response.
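
Since gptel-send works from any buffer, a global binding can be convenient (the key below is just an example, pick your own):

(keymap-global-set "C-c g" #'gptel-send)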

https://user-images.githubusercontent.com/8607532/230770018-9ce87644-6c17-44af-bd39-8c899303dce1.png

With a region selected, you can also rewrite prose or refactor code from here:

Code:

https://user-images.githubusercontent.com/8607532/230770162-1a5a496c-ee57-4a67-9c95-d45f238544ae.png

Prose:

https://user-images.githubusercontent.com/8607532/230770352-ee6f45a3-a083-4cf0-b13c-619f7710e9ba.png

In a dedicated chat buffer:

  1. Run M-x gptel to start or switch to the chat buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (C-u M-x gptel) to start a new session.
  2. In the gptel buffer, send your prompt with M-x gptel-send, bound to C-c RET.
  3. Set chat parameters (LLM provider, model, directives etc) for the session by calling gptel-send with a prefix argument (C-u C-c RET):

https://user-images.githubusercontent.com/8607532/224946059-9b918810-ab8b-46a6-b917-549d50c908f2.png

That’s it. You can go back and edit previous prompts and responses if you want.

The default mode is markdown-mode if available, else text-mode. You can set gptel-default-mode to org-mode if desired.
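
For example:

(setq gptel-default-mode 'org-mode)     ;Use Org markup in chat buffers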

Save and restore your chat sessions

Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on gptel-mode before editing the buffer.

FAQ

I want the window to scroll automatically as the response is inserted

To be minimally annoying, GPTel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.

(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)

I want the cursor to move to the next prompt after the response is inserted

To be minimally annoying, GPTel does not move the cursor by default. Add the following to your configuration to move the cursor:

(add-hook 'gptel-post-response-functions 'gptel-end-of-response)

You can also call gptel-end-of-response as a command at any time.

I want to change the formatting of the prompt and LLM response

For dedicated chat buffers: customize gptel-prompt-prefix-alist and gptel-response-prefix-alist. You can set a different pair for each major-mode.
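
A sketch of what this looks like for Org chat buffers (the prefix strings here are made-up examples, not the defaults):

(setf (alist-get 'org-mode gptel-prompt-prefix-alist) "@user\n")
(setf (alist-get 'org-mode gptel-response-prefix-alist) "@assistant\n")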

Anywhere in Emacs: Use gptel-pre-response-hook and gptel-post-response-functions, which see.

I want the transient menu options to be saved so I only need to set them once

Any model options you set are saved for the current buffer. But the redirection options in the menu are set for the next query only:

https://github.com/karthink/gptel/assets/8607532/2ecc6be9-aa52-4287-a739-ba06e1369ec2

You can make them persistent across this Emacs session by pressing C-x C-s:

https://github.com/karthink/gptel/assets/8607532/b8bcb6ad-c974-41e1-9336-fdba0098a2fe

(You can also cycle through presets you’ve saved with C-x p and C-x n.)

Now these will be enabled whenever you send a query from the transient menu. If you want to use these options without invoking the transient menu, you can use a keyboard macro:

;; Replace with your key to invoke the transient menu:
(keymap-global-set "<f6>" "C-u C-c <return> <return>")

See this comment by Tianshu Wang for an Elisp solution.

I want to use gptel in a way that’s not supported by gptel-send or the options menu

GPTel’s default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (C-u M-x gptel-send).

For more programmable usage, gptel provides a general gptel-request function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by gptel-send. See the documentation of gptel-request, and the wiki for examples.
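
A minimal sketch of gptel-request usage (per its documentation, the callback receives the response string, or nil on failure, and an info plist):

(gptel-request "Why is the sky blue?"
  :callback
  (lambda (response info)
    (if response
        (message "gptel: %s" response)
      (message "gptel request failed: %s" (plist-get info :status)))))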

(Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode

Doom binds RET in Org mode to +org/dwim-at-point, which appears to conflict with gptel’s transient menu bindings for some reason.

Two solutions:

  • Press C-m instead of the return key.
  • Change the send key from return to a key of your choice:
    (transient-suffix-put 'gptel-menu (kbd "RET") :key "<f8>")
        

Why another LLM client?

Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:

  1. Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated gptel buffer just adds some visual flair to the interaction.
  2. Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.

Additional Configuration

Connection options
gptel-use-curl               Use Curl (default), fallback to Emacs’ built-in url.
gptel-proxy                  Proxy server for requests, passed to curl via --proxy.
gptel-api-key                Variable/function that returns the API key for the active backend.

LLM options (Note: not supported uniformly across LLMs)
gptel-backend                Default LLM backend.
gptel-model                  Default model to use, depends on the backend.
gptel-stream                 Enable streaming responses, if the backend supports it.
gptel-directives             Alist of system directives, can switch on the fly.
gptel-max-tokens             Maximum token count (in query + response).
gptel-temperature            Randomness in response text, 0 to 2.

Chat UI options
gptel-default-mode           Major mode for dedicated chat buffers.
gptel-prompt-prefix-alist    Text inserted before queries.
gptel-response-prefix-alist  Text inserted before responses.
gptel-use-header-line        Display status messages in header-line (default) or minibuffer.
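
A combined sketch of a few of these options (the values are illustrative assumptions, not recommendations):

;; OPTIONAL configuration
(setq gptel-proxy "socks5://localhost:9050" ;Passed to curl via --proxy
      gptel-temperature 0.7                 ;0 to 2, higher is more random
      gptel-max-tokens 500)                 ;Cap on query + response tokens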

Alternatives

Other Emacs clients for LLMs include

  • chatgpt-shell: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
  • org-ai: Interaction through special #+begin_ai ... #+end_ai Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.

There are several more: chatgpt-arcana, leafy-mode and chat.el.

Extensions using GPTel

These are packages that depend on GPTel to provide additional functionality.

Breaking Changes

  • gptel-post-response-hook has been renamed to gptel-post-response-functions, and functions in this hook are now called with two arguments: the start and end buffer positions of the response. This should make it easy to act on the response text without having to locate it first.
  • Possible breakage, see #120: If streaming responses stop working for you after upgrading to v0.5, try reinstalling gptel and deleting its native comp eln cache in native-comp-eln-load-path.
  • The user option gptel-host is deprecated. If the defaults don’t work for you, use gptel-make-openai (which see) to customize server settings.
  • gptel-api-key-from-auth-source now searches for the API key using the host address for the active LLM backend, i.e. “api.openai.com” when using ChatGPT. You may need to update your ~/.authinfo.

Acknowledgments
