# Neovim Integrations
This page documents how VectorCode may be used with some other Neovim plugins.
The integrations are implemented under `lua/vectorcode/integrations`.
If you want to customise how VectorCode works with other plugins, or try to use it
with a plugin that is not yet supported, feel free to take a look at the code.

This page is updated alongside the `main` branch of VectorCode. Some features may
not be in the release yet, but if it's documented here, you can expect it to be in
a release within a few days.
- General Tips
- milanglacier/minuet-ai.nvim
- olimorris/codecompanion.nvim
- nvim-lualine/lualine.nvim
- CopilotC-Nvim/CopilotChat.nvim
- fidget.nvim
- Model Context Protocol (MCP)
## General Tips

When you call VectorCode APIs or integration functions as part of another plugin's
configuration, it's important to make sure that VectorCode is loaded BEFORE the
plugin you're trying to use. For example, in lazy.nvim, it's not sufficient to
simply add VectorCode as a dependency. You should also wrap the `opts` table in a
function:
```lua
{
  "olimorris/codecompanion.nvim",
  opts = function()
    return your_opts_here
  end,
}
```
If you pass a table, instead of a function, as the value for the `opts` key,
Neovim will try to load the VectorCode components immediately on startup
(potentially even before the plugin is added to the `rtp`).
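As a concrete illustration, a minimal lazy.nvim spec might look like the sketch
below. The plugin source `Davidyz/VectorCode` and the option values are assumptions
for the sake of the example; adjust them to match your setup.

```lua
{
  "olimorris/codecompanion.nvim",
  dependencies = {
    -- Make sure VectorCode is installed and available on the runtime path.
    "Davidyz/VectorCode",
  },
  opts = function()
    -- Because this is a function, VectorCode is only required when
    -- codecompanion actually loads, not at startup.
    return {
      -- your codecompanion options here
    }
  end,
}
```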
## milanglacier/minuet-ai.nvim

It's recommended to use the async caching API with minuet-ai because this helps
reduce unnecessary query calls. The buffer usually doesn't change very
significantly between edits/keystrokes, so there's no point in running a retrieval
for every completion request.

See the minuet-ai documentation and Prompt Gallery for instructions on modifying
the prompts to use VectorCode context for completion.
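For reference, below is a rough sketch of the async-cache workflow: register the
buffers you work on, then read from the cache (instead of running a fresh query)
when building the completion prompt. The API names (`get_cacher_backend`,
`register_buffer`, `query_from_cache`) and the result fields (`path`, `document`)
reflect recent VectorCode versions and should be treated as assumptions; check the
main VectorCode documentation for the exact signatures.

```lua
-- Register buffers so that VectorCode keeps a cache of retrieval results up to
-- date in the background (an autocmd is one way to do this; you can also run
-- :VectorCode register manually).
local cacher = require("vectorcode.config").get_cacher_backend()
vim.api.nvim_create_autocmd("BufReadPost", {
  callback = function(args)
    cacher.register_buffer(args.buf, { n_query = 5 })
  end,
})

-- When assembling the prompt for a completion request, read from the cache
-- instead of issuing a new query on every keystroke.
local function vectorcode_context(bufnr)
  local chunks = {}
  for _, result in ipairs(cacher.query_from_cache(bufnr) or {}) do
    table.insert(chunks, "-- file: " .. result.path .. "\n" .. result.document)
  end
  return table.concat(chunks, "\n\n")
end
```

The string returned by `vectorcode_context` can then be prepended to the prompt in
your minuet-ai provider template, as described in the Prompt Gallery.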
## olimorris/codecompanion.nvim

Since codecompanion.nvim v15.0.0, you can use the `extensions` field to easily
register both the VectorCode tool and slash command. This also works if you're on
the `has-xml-tools` branch.
```lua
opts = {
  extensions = {
    vectorcode = {
      opts = {
        add_tool = true,
        add_slash_command = true,
        ---@type VectorCode.CodeCompanion.ToolOpts
        tool_opts = {
          max_num = { chunk = -1, document = -1 },
          default_num = { chunk = 50, document = 10 },
          include_stderr = false,
          use_lsp = false,
          auto_submit = { ls = false, query = false },
          ls_on_start = false,
          no_duplicate = true,
          chunk_mode = false,
        },
      },
    },
  },
}
```
This will register both the VectorCode tool and slash command with codecompanion.
The tool gives the LLM the ability to ask for what it needs from VectorCode, and
the slash command sends the async cache to the LLM as extra context. The tool is
named `@vectorcode` and the slash command is named `/vectorcode`.

The `tool_opts` table will be passed to the `make_tool` function for
customisation. See the Tool section for instructions on configuring the tool.

If you're on an older release of codecompanion, you'll need to manually register
the tool and/or the slash command. The tool feature requires CodeCompanion.nvim
v13.5.0 or above.
### Tool

The tool gives the LLM the ability to "ask for what it needs". The LLM will
generate a search query itself. Compared to the slash command, the tool makes it
possible for the LLM to find out about things that you didn't know would be
useful.
```lua
opts = {
  -- your other codecompanion configs
  strategies = {
    chat = {
      adapter = "your adapter",
      tools = {
        vectorcode = {
          description = "Run VectorCode to retrieve the project context.",
          callback = require("vectorcode.integrations").codecompanion.chat.make_tool(
            ---@type VectorCode.CodeCompanion.ToolOpts
            {
              -- your options go here
            }
          ),
        },
      },
    },
  },
}
```
After this, you can use the `@vectorcode` tool in the chat buffer, so that the LLM
will query the codebase using keywords that it feels are appropriate. The tool
runs asynchronously and may take some time (depending on the size of your indexed
codebase and the performance of your machine), but there's a `Tool processing ...`
virtual text indicator which disappears when the retrieval is complete.

This tool gives the LLM access to (the equivalent of) the `ls` and `query`
commands. The former allows it to see the indexed projects, and the latter allows
it to query a specific project (defaulting to the current project root, marked by
`.vectorcode` or `.git`).

Depending on the intelligence of the model, you may need to tell it to use the
tool: `using the @vectorcode tool to retrieve 5 files from the repo, explain how
reranker works in this project`.
You can also tell the LLM to query another project (given that it's been indexed),
as long as the LLM is given the project root for that repo.

Some LLMs may be reluctant to call the `ls` command for a list of projects. In
this case, you can tell it to "use the `ls` command to see other projects", or set
the `ls_on_start` option to `true` so that it sees a list of indexed projects when
it's given this tool. You can also just manually type the path to the project in
the prompt, if you don't want to send a list of your local projects to the LLM
provider out of privacy concerns.
The `make_tool` function optionally accepts a table with the following keys to
configure the tool (a combined example follows the list):

- `max_num`: integer, the maximum number of files to retrieve. Default: `-1`
  (unlimited). When set to a positive integer, any excess documents will be
  discarded. This is useful if you want to avoid saturating the context window of
  your LLM;
- `default_num`: integer, the default number of files to retrieve. Default: `10`.
  The system prompt will ask the LLM to retrieve `default_num` files by default.
  You can of course ask it to retrieve more or fewer than this in the chat buffer,
  but remember that this does not supersede `max_num`;
- `include_stderr`: boolean, when set to `false`, the `stderr` content will not be
  sent to the LLM. This helps to suppress warnings from the embedding engine (if
  any), and potentially saves context window. Default: `false`;
- `use_lsp`: whether to use the LSP backend to run the queries. Default: `true` if
  `async_backend` is set to `"lsp"` in `setup()`; otherwise, it'll be `false`;
- `auto_submit`: `{string:boolean}`, whether to automatically submit the result of
  a command execution when a tool call is ready. The keys are the command types
  (`query` or `ls`) and the values are booleans. Setting a value to `true` will
  block user input while CodeCompanion.nvim is running the tool. The result will
  then be automatically submitted when the tool finishes, and the chat will be
  unblocked after the LLM response. Default: `{ ls = false, query = false }`;
- `ls_on_start`: boolean (default `false`), when set to `true`, the tool will call
  `vectorcode ls` and send a list of indexed projects to the LLM;
- `no_duplicate`: boolean, whether the query calls should use the `--exclude` flag
  to exclude files that have been retrieved and provided in previous turns of the
  current chat. This helps save tokens and increases the chance of retrieving the
  correct files when the previous retrievals failed to do so. Default: `true`.
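Putting these options together, a manual tool registration with the documented
defaults spelled out might look like the following sketch. The adapter name is a
placeholder; the option values are simply the defaults listed above.

```lua
opts = {
  strategies = {
    chat = {
      adapter = "your adapter", -- placeholder; use your actual adapter name
      tools = {
        vectorcode = {
          description = "Run VectorCode to retrieve the project context.",
          callback = require("vectorcode.integrations").codecompanion.chat.make_tool({
            max_num = -1,     -- no upper limit on retrieved files
            default_num = 10, -- ask the LLM for 10 files by default
            include_stderr = false,
            use_lsp = false,
            auto_submit = { ls = false, query = false },
            ls_on_start = false,
            no_duplicate = true,
          }),
        },
      },
    },
  },
}
```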
### Slash Command

The slash command adds the retrieval results from the async cache to the prompt.
This works for a buffer that has been registered, and provides the LLM with
knowledge related to the current file.
```lua
opts = {
  -- your other codecompanion configs
  strategies = {
    chat = {
      adapter = "your adapter",
      slash_commands = {
        -- add the vectorcode command here.
        codebase = require("vectorcode.integrations").codecompanion.chat.make_slash_command(),
      },
    },
  },
}
```
You can pass `component_cb` to the `make_slash_command` function to customise how
the retrieval results are organised in the prompt.
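A rough sketch is shown below. The exact callback contract is an assumption here:
it is assumed that `component_cb` is passed as a key of an opts table, receives one
retrieval result at a time (with `path` and `document` fields), and returns the
text for that component. Check the `VectorCode.CodeCompanion` annotations in your
installed version for the actual signature.

```lua
require("vectorcode.integrations").codecompanion.chat.make_slash_command({
  -- Assumed signature: called once per retrieved result, returning the text
  -- that represents it in the prompt.
  component_cb = function(result)
    return string.format("<file path=%q>\n%s\n</file>", result.path, result.document)
  end,
})
```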
## nvim-lualine/lualine.nvim

A `lualine` component that shows the status of the async job and the number of
cached retrieval results.
```lua
tabline = {
  lualine_y = {
    require("vectorcode.integrations").lualine(opts),
  },
}
```
`opts` is a table with the following configuration option:

- `show_job_count`: boolean, whether to show the number of running jobs for the
  buffer (see the example below). Default: `false`.
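For example, to also display the running job count:

```lua
tabline = {
  lualine_y = {
    -- Same component as above, with the job count enabled.
    require("vectorcode.integrations").lualine({ show_job_count = true }),
  },
}
```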
Using the component like this will, however, start VectorCode when lualine starts
(which usually means when Neovim starts). If this bothers you, you can use the
following snippet:
```lua
tabline = {
  lualine_y = {
    {
      function()
        return require("vectorcode.integrations").lualine(opts)[1]()
      end,
      cond = function()
        if package.loaded["vectorcode"] == nil then
          return false
        else
          return require("vectorcode.integrations").lualine(opts).cond()
        end
      end,
    },
  },
}
```
This will further delay the loading of VectorCode until the moment you (or one of
the plugins that actually retrieve context from VectorCode) load it.
## CopilotC-Nvim/CopilotChat.nvim

CopilotC-Nvim/CopilotChat.nvim is a Neovim plugin that provides an interface to
GitHub Copilot Chat. The VectorCode integration enriches the conversations by
providing relevant repository context.

VectorCode offers a dedicated integration with CopilotChat.nvim that provides
contextual information about your codebase to enhance Copilot's responses. Add
this to your CopilotChat configuration:
```lua
local vectorcode_ctx = require('vectorcode.integrations.copilotchat').make_context_provider({
  prompt_header = "Here are relevant files from the repository:", -- Customize header text
  prompt_footer = "\nConsider this context when answering:", -- Customize footer text
  skip_empty = true, -- Skip adding context when no files are retrieved
})
```
```lua
require('CopilotChat').setup({
  -- Your other CopilotChat options...

  contexts = {
    -- Add the VectorCode context provider
    vectorcode = vectorcode_ctx,
  },

  -- Enable VectorCode context in your prompts
  prompts = {
    Explain = {
      prompt = "Explain the following code in detail:\n$input",
      context = {"selection", "vectorcode"}, -- Add vectorcode to the context
    },
    -- Other prompts...
  },
})
```
The `make_context_provider` function accepts these options:

- `prompt_header`: Text that appears before the code context (default: "The
  following are relevant files from the repository. Use them as extra context for
  helping with code completion and understanding:")
- `prompt_footer`: Text that appears after the code context (default: "\nExplain
  and provide a strategy with examples about: \n")
- `skip_empty`: Whether to skip adding context when no files are retrieved
  (default: `true`)
- `format_file`: A function that formats each retrieved file (takes a file result
  object and returns a formatted string); see the sketch after this list
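For instance, a custom `format_file` might look like the sketch below. The `path`
and `document` field names on the result object are assumptions based on the
output of `vectorcode query`; check the actual result shape in your installed
version.

```lua
require('vectorcode.integrations.copilotchat').make_context_provider({
  -- Assumed result shape: { path = "<file path>", document = "<file content>" }
  format_file = function(result)
    return string.format("File: %s\n%s\n", result.path, result.document)
  end,
})
```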
Some tips for using the integration:

- Register your buffers with VectorCode (`:VectorCode register`) to enable context
  fetching
- Create different prompt templates with or without VectorCode context depending
  on your needs
- For large codebases, consider adjusting the number of retrieved documents using
  `n_query` when registering buffers
The integration includes caching to avoid sending duplicate context to the LLM, which helps reduce token usage when asking multiple questions about the same codebase.
You can configure VectorCode to be part of your sticky prompts, ensuring every conversation includes relevant codebase context automatically:
```lua
require('CopilotChat').setup({
  -- Your other CopilotChat options...
  sticky = {
    "Using the model $claude-3.7-sonnet-thought",
    "#vectorcode", -- Automatically includes repository context in every conversation
  },
})
```
This configuration will include both the model specification and repository context in every conversation with CopilotChat.
## fidget.nvim

If you're using LSP mode, there will be a notification when there's a pending
request for queries. As long as the LSP backend is working, no special
configuration is needed for this.
## Model Context Protocol (MCP)

The Python package contains an optional `mcp` dependency group. After installing
this, you can use the MCP server with any MCP client. For example, to use it with
mcphub.nvim, simply add this server in the JSON config:
```json
{
  "mcpServers": {
    "vectorcode-mcp-server": {
      "command": "vectorcode-mcp-server",
      "args": []
    }
  }
}
```