Work in progress
- Lightweight, responsive AI chat plugin that streams AI output to a buffer
- Supports various models and layouts
- The model can be changed mid-conversation
- Easily include content from the clipboard register using the `/paste` command in the prompt
- Less complex than other AI plugins for Neovim. No completion - just a chat interface
- Fun way to learn writing plugins
Dependencies: `curl`, `plenary.nvim`
Install with lazy.nvim:

```lua
{
    "juliusolson/gpt.nvim",
    config = function()
        local gpt = require("gpt")
        gpt.setup({
            model = "gpt-4o-mini",
            layout = "float",
            provider = "openai",
        })
        vim.keymap.set("n", "<leader>ai", gpt.open, { silent = true })
    end,
    dependencies = {
        "nvim-lua/plenary.nvim",
    },
}
```
You also need an OpenAI/Gemini API key accessible as an environment variable:
```sh
# .bashrc / .bash_profile
export OPENAI_API_KEY="<your-key>"
export GEMINI_API_KEY="<your-key>"
```
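Since the variable must be exported before Neovim starts, it can be worth checking that the key actually reached the editor's environment. A quick sanity check from inside Neovim (assuming the plugin reads the key from the environment):

```lua
-- Run inside Neovim with :lua print(vim.env.OPENAI_API_KEY ~= nil)
-- Prints true if the variable is visible to the editor.
print(vim.env.OPENAI_API_KEY ~= nil)
```

If this prints `false`, restart your shell (or source your profile) and relaunch Neovim.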
| Option | Default | Constraints |
|---|---|---|
| `model` | `gpt-4o-mini` | any OpenAI/Ollama/Gemini model |
| `output_width` | `100` | int (only for split view) |
| `prompt_height` | `5` | int (only for split view) |
| `provider` | `openai` | `{openai, ollama, gemini}` |
| `layout` | `split` | `{split, float, fullscreen}` |
| `system_prompt` | n/a | |
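As an illustration of these options, a split-layout setup might look like the sketch below (the model name is just an example; use any model your provider serves):

```lua
-- Example configuration using the options from the table above.
require("gpt").setup({
    provider = "ollama",
    model = "llama3",       -- example model name; any Ollama model works
    layout = "split",
    output_width = 80,      -- width of the output split
    prompt_height = 8,      -- height of the prompt split
    system_prompt = "You are a concise coding assistant.",
})
```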
- Write your prompt in the prompt window, enter normal mode, and hit enter to send the prompt
- The answer will be streamed into the output window
- Choose model: `M` in normal mode in the prompt window
- Select layout: `L` in normal mode in the prompt/output window
- Close: `q` in normal mode in the prompt/output window
- Open / go to interface: `<leader>ai` (set through plugin conf)
- Toggle chat with persistent state
- Layout options: `{split, float}`
- Model switching on the fly
- Add whatever is in the clipboard register (`'` register) by using the `/paste` command in the prompt
- The content from the register will replace the command in the prompt text
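For the curious, the `/paste` substitution could be implemented roughly like this. This is a sketch, not necessarily the plugin's actual code, and it reads the unnamed register as an illustrative assumption:

```lua
-- Sketch: replace every "/paste" in the prompt with register contents.
-- Uses the unnamed register here for illustration; the plugin may use
-- a different register.
local function expand_paste(prompt)
    local reg = vim.fn.getreg('"')
    -- Parentheses drop gsub's second return value (the match count).
    return (prompt:gsub("/paste", reg))
end
```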
- Handle API errors
- Handle closing of buffer/window mid-stream
- Support other APIs
- More customizable / extensible:
  - layouts
  - keymaps
  - commands
- Save conversation to file
- Reset conversation/context
- Remove last message from context