Including the contents of the open buffers in the context when $buffers is in the prompt #48
base: main
Conversation
sorry for the arbitrary reformatting. This is what I'm getting whenever I save:

```lua
if buf ~= 0 and name ~= "" and vim.loop.fs_stat(name) ~= nil then
  if not vim.api.nvim_buf_is_loaded(buf) then
    -- read the file from disk
    local file, err = io.open(name, "r")
```
Can't we just get the buffer content from Neovim?
This check is specifically looking to see if the buffer is actually loaded. This scenario happens if you have multiple buffers open when you close Neovim, and then you re-open Neovim and restore your session. This loads up all the same buffers you had before, but until you actually use a buffer it's not loaded into memory, so the context on those files was empty.
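A minimal sketch of that fallback (`buffer_contents` is a made-up name, not the PR's exact code; the checks mirror the quoted diff above): take the lines from Neovim when the buffer is loaded, and read the file from disk when the session restored it without loading it.

```lua
-- Illustration only: gather the content of every named buffer in the session.
local function buffer_contents()
  local chunks = {}
  for _, buf in ipairs(vim.api.nvim_list_bufs()) do
    local name = vim.api.nvim_buf_get_name(buf)
    if name ~= "" and vim.loop.fs_stat(name) ~= nil then
      if vim.api.nvim_buf_is_loaded(buf) then
        -- Loaded buffer: read the lines straight from Neovim.
        local lines = vim.api.nvim_buf_get_lines(buf, 0, -1, false)
        chunks[#chunks + 1] = table.concat(lines, "\n")
      else
        -- Restored but unloaded buffer: Neovim has no lines for it yet,
        -- so fall back to reading the file from disk.
        local file = io.open(name, "r")
        if file then
          chunks[#chunks + 1] = file:read("*a")
          file:close()
        end
      end
    end
  end
  return table.concat(chunks, "\n\n")
end
```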
Thank you, @kjjuno! I'm not sure if it wouldn't be better to only take the current buffer into account. In general, there are many open buffers and local LLMs don't have a big context window. Maybe for now, you could choose a similar approach as in #50 and do it programmatically outside of the plugin.

Thanks again and best regards,
@David-Kunz That makes sense to me. It seems to depend quite a bit on which model you choose and the hardware you have powering Ollama. I have an M2 Max with 32 GB of RAM and I've been running codellama:34b. That configuration seems to accept multiple files of context pretty well, though I certainly haven't done a lot of testing to see exactly where the limit is on the size of the context. I would be happy to follow something like #50 to handle multiple buffers of context. But I like the suggestion of including the current buffer.
Thank you, @kjjuno. I think we shouldn't touch the
You can sort of get around a lot of this by dynamically creating a prompt in Lua code and then making a temporary prompt. I created a function that registers a throwaway prompt, runs it, and removes it again. In this vein, you could iterate over all loaded buffers, or do whatever you want, really, to create your prompt. This is done simply like so:

```lua
local function custom_thing()
  local g = require('gen')
  -- Register a temporary prompt; any string works here. You can use the
  -- nvim API to get the current buffer's content, for example.
  g.prompts["tmp"] = { prompt = "Any string here." }
  vim.cmd("Gen tmp")
  -- Remove the temporary prompt again.
  g.prompts["tmp"] = nil
end
```

Note: you may want to strip out any of the Gen plugin's keywords, like `$text` or `$input`, from your file.
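Pulling the current buffer's content into the temporary prompt, as the comment suggests, could look like this (a hedged sketch: `gen_with_current_buffer` and the prompt text are invented; only `require('gen').prompts` and `:Gen` are taken from the comment above):

```lua
-- Illustration only: inline the current buffer into a throwaway prompt.
local function gen_with_current_buffer()
  local g = require('gen')
  local lines = vim.api.nvim_buf_get_lines(0, 0, -1, false)
  g.prompts["tmp"] = {
    prompt = "Explain this file:\n\n" .. table.concat(lines, "\n"),
  }
  vim.cmd("Gen tmp")
  g.prompts["tmp"] = nil
end
```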
Would love this feature (anything that will increase the context for the LLM). Maybe individual files in the future too!
Hi, yes, it's possible to dynamically set the prompts; I think for now this should be the way to go.

I'm a bit torn about the context, as usually this should be decoupled from the prompt and defined outside. Example:

Bad:

```lua
prompt1 = "Simplify this text: $buffer"
prompt2 = "Simplify this text: $input"
prompt3 = "Simplify this text: $text"
-- ...
```

Better:

```lua
prompt = "Simplify this text: $context"
invoke(prompt, "buffer")
invoke(prompt, "input")
invoke(prompt, "text")
```

I have to think more about this.
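One way to read the "Better" shape (purely a sketch; `invoke` and the resolver table are hypothetical, not gen.nvim APIs): resolve the requested context first, then substitute it for `$context` in the shared prompt.

```lua
-- Hypothetical decoupling of context from prompt, per the comment above.
local resolvers = {
  buffer = function()
    return table.concat(vim.api.nvim_buf_get_lines(0, 0, -1, false), "\n")
  end,
  input = function()
    return vim.fn.input("Context: ")
  end,
  text = function()
    return vim.fn.getreg('"') -- e.g. the last yanked/selected text
  end,
}

local function invoke(prompt, kind)
  local context = resolvers[kind]()
  -- Use a function replacement so '%' in the context is not interpreted.
  local resolved = prompt:gsub("%$context", function() return context end)
  -- hand `resolved` to the model runner here
  return resolved
end
```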
This allows you to specify `$buffers` in the chat prompt, which will load the contents of each open buffer into the context of the prompt.
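If the placeholder lands as described, usage could look like this (a sketch: the table key and prompt text are invented; the `prompts` table shape follows the earlier comments in this thread):

```lua
-- Hypothetical prompt combining the PR's $buffers placeholder with the
-- existing $input placeholder.
require('gen').prompts["Ask_About_Buffers"] = {
  prompt = "Here are my open files:\n\n$buffers\n\nQuestion: $input",
}
```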