Commit c98301c
Add support for copilot extension agents
https://docs.github.com/en/copilot/building-copilot-extensions/about-building-copilot-extensions

- Change @buffer and @buffers to #buffer and #buffers
- Add support for @agent agent selection
- Add support for config.agent for specifying default agent
- Add :CopilotChatAgents for listing agents (and showing selected agent)
- Remove :CopilotChatModel, instead show which model is selected in :CopilotChatModels
- Remove early errors from curl so we can actually get response body for the error
- Add info to README about models, agents and contexts

Closes #466

Signed-off-by: Tomas Slusny <slusnucky@gmail.com>
1 parent a1d97c7 commit c98301c

File tree

5 files changed: +308 −117 lines

README.md

Lines changed: 38 additions & 4 deletions
````diff
@@ -110,7 +110,7 @@ Verify "[Copilot chat in the IDE](https://github.com/settings/copilot)" is enabled
 - `:CopilotChatLoad <name>?` - Load chat history from file
 - `:CopilotChatDebugInfo` - Show debug information
 - `:CopilotChatModels` - View and select available models. This is reset when a new instance is made. Please set your model in `init.lua` for persistence.
-- `:CopilotChatModel` - View the currently selected model.
+- `:CopilotChatAgents` - View and select available agents. This is reset when a new instance is made. Please set your agent in `init.lua` for persistence.
 
 #### Commands coming from default prompts
 
@@ -122,6 +122,39 @@ Verify "[Copilot chat in the IDE](https://github.com/settings/copilot)" is enabled
 - `:CopilotChatTests` - Please generate tests for my code
 - `:CopilotChatCommit` - Write commit message for the change with commitizen convention
 
+### Models, Agents and Contexts
+
+#### Models
+
+You can list available models with `:CopilotChatModels` command. Model determines the AI model used for the chat.
+Default models are:
+
+- `gpt-4o` - This is the default Copilot Chat model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Gpt-4o is hosted on Azure.
+- `claude-3.5-sonnet` - This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. GitHub Copilot uses Claude 3.5 Sonnet hosted on Amazon Web Services.
+- `o1-preview` - This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the gpt-4o model. You can make 10 requests to this model per day. o1-preview is hosted on Azure.
+- `o1-mini` - This is the faster version of the o1-preview model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. You can make 50 requests to this model per day. o1-mini is hosted on Azure.
+
+For more information about models, see [here](https://docs.github.com/en/copilot/using-github-copilot/asking-github-copilot-questions-in-your-ide#ai-models-for-copilot-chat)
+You can use more models from [here](https://github.com/marketplace/models) by using `@models` agent from [here](https://github.com/marketplace/models-github) (example: `@models Using Mistral-small, what is 1 + 11`)
+
+#### Agents
+
+Agents are used to determine the AI agent used for the chat. You can list available agents with `:CopilotChatAgents` command.
+You can set the agent in the prompt by using `@` followed by the agent name.
+Default "noop" agent is `copilot`.
+
+For more information about extension agents, see [here](https://docs.github.com/en/copilot/using-github-copilot/using-extensions-to-integrate-external-tools-with-copilot-chat)
+You can install more agents from [here](https://github.com/marketplace?type=apps&copilot_app=true)
+
+#### Contexts
+
+Contexts are used to determine the context of the chat.
+You can set the context in the prompt by using `#` followed by the context name.
+Supported contexts are:
+
+- `buffers` - Includes all open buffers in chat context
+- `buffer` - Includes only the current buffer in chat context
+
 ### API
 
 ```lua
@@ -202,8 +235,10 @@ Also see [here](/lua/CopilotChat/config.lua):
   allow_insecure = false, -- Allow insecure server connections
 
   system_prompt = prompts.COPILOT_INSTRUCTIONS, -- System prompt to use
-  model = 'gpt-4o', -- GPT model to use, see ':CopilotChatModels' for available models
-  temperature = 0.1, -- GPT temperature
+  model = 'gpt-4o', -- Default model to use, see ':CopilotChatModels' for available models
+  agent = 'copilot', -- Default agent to use, see ':CopilotChatAgents' for available agents (can be specified manually in prompt via @).
+  context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via #).
+  temperature = 0.1, -- GPT result temperature
 
   question_header = '## User ', -- Header to use for user questions
   answer_header = '## Copilot ', -- Header to use for AI answers
@@ -218,7 +253,6 @@ Also see [here](/lua/CopilotChat/config.lua):
   clear_chat_on_new_prompt = false, -- Clears chat on every new prompt
   highlight_selection = true, -- Highlight selection in the source buffer when in the chat window
 
-  context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via @).
   history_path = vim.fn.stdpath('data') .. '/copilotchat_history', -- Default path to stored history
   callback = nil, -- Callback to use when ask response is received
 
````
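Taken together, the README additions above amount to a small new configuration surface. A minimal `setup()` sketch using the new options (illustrative only; `context = 'buffer'` here is an example override, not the default, and all other options keep their usual values):

```lua
-- Sketch of a CopilotChat.nvim setup using the options added in this commit.
require('CopilotChat').setup({
  model = 'gpt-4o',   -- default model; list alternatives with :CopilotChatModels
  agent = 'copilot',  -- default agent; list alternatives with :CopilotChatAgents
  context = 'buffer', -- or 'buffers', or nil
})
```

A per-prompt override still wins over these defaults, e.g. a prompt beginning with `@models` or containing `#buffers`.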
lua/CopilotChat/config.lua

Lines changed: 6 additions & 4 deletions
```diff
@@ -69,6 +69,8 @@ local select = require('CopilotChat.select')
 ---@field allow_insecure boolean?
 ---@field system_prompt string?
 ---@field model string?
+---@field agent string?
+---@field context string?
 ---@field temperature number?
 ---@field question_header string?
 ---@field answer_header string?
@@ -80,7 +82,6 @@ local select = require('CopilotChat.select')
 ---@field auto_insert_mode boolean?
 ---@field clear_chat_on_new_prompt boolean?
 ---@field highlight_selection boolean?
----@field context string?
 ---@field history_path string?
 ---@field callback fun(response: string, source: CopilotChat.config.source)?
 ---@field selection nil|fun(source: CopilotChat.config.source):CopilotChat.config.selection?
@@ -94,8 +95,10 @@ return {
   allow_insecure = false, -- Allow insecure server connections
 
   system_prompt = prompts.COPILOT_INSTRUCTIONS, -- System prompt to use
-  model = 'gpt-4o', -- GPT model to use, see ':CopilotChatModels' for available models
-  temperature = 0.1, -- GPT temperature
+  model = 'gpt-4o', -- Default model to use, see ':CopilotChatModels' for available models
+  agent = 'copilot', -- Default agent to use, see ':CopilotChatAgents' for available agents (can be specified manually in prompt via @).
+  context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via #).
+  temperature = 0.1, -- GPT result temperature
 
   question_header = '## User ', -- Header to use for user questions
   answer_header = '## Copilot ', -- Header to use for AI answers
@@ -110,7 +113,6 @@ return {
   clear_chat_on_new_prompt = false, -- Clears chat on every new prompt
   highlight_selection = true, -- Highlight selection
 
-  context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via @).
   history_path = vim.fn.stdpath('data') .. '/copilotchat_history', -- Default path to stored history
   callback = nil, -- Callback to use when ask response is received
 
```
lua/CopilotChat/copilot.lua

Lines changed: 109 additions & 7 deletions
```diff
@@ -13,6 +13,7 @@
 ---@field end_row number?
 ---@field system_prompt string?
 ---@field model string?
+---@field agent string?
 ---@field temperature number?
 ---@field on_progress nil|fun(response: string):nil
 
@@ -29,6 +30,7 @@
 ---@field load fun(self: CopilotChat.Copilot, name: string, path: string):table
 ---@field running fun(self: CopilotChat.Copilot):boolean
 ---@field list_models fun(self: CopilotChat.Copilot):table
+---@field list_agents fun(self: CopilotChat.Copilot):table
 
 local async = require('plenary.async')
 local log = require('plenary.log')
@@ -340,6 +342,7 @@ local Copilot = class(function(self, proxy, allow_insecure)
   self.sessionid = nil
   self.machineid = machine_id()
   self.models = nil
+  self.agents = nil
   self.claude_enabled = false
   self.current_job = nil
   self.request_args = {
@@ -362,9 +365,6 @@ local Copilot = class(function(self, proxy, allow_insecure)
       '--no-keepalive', -- Don't reuse connections
       '--tcp-nodelay', -- Disable Nagle's algorithm for faster streaming
       '--no-buffer', -- Disable output buffering for streaming
-      '--fail', -- Return error on HTTP errors (4xx, 5xx)
-      '--silent', -- Don't show progress meter
-      '--show-error', -- Show errors even when silent
     },
   }
 end)
@@ -461,6 +461,39 @@ function Copilot:fetch_models()
   return out
 end
 
+function Copilot:fetch_agents()
+  if self.agents then
+    return self.agents
+  end
+
+  local response, err = curl_get(
+    'https://api.githubcopilot.com/agents',
+    vim.tbl_extend('force', self.request_args, {
+      headers = self:authenticate(),
+    })
+  )
+
+  if err then
+    error(err)
+  end
+
+  if response.status ~= 200 then
+    error('Failed to fetch agents: ' .. tostring(response.status))
+  end
+
+  local agents = vim.json.decode(response.body)['agents']
+  local out = {}
+  for _, agent in ipairs(agents) do
+    out[agent['slug']] = agent
+  end
+
+  out['copilot'] = { name = 'Copilot', default = true }
+
+  log.info('Agents fetched')
+  self.agents = out
+  return out
+end
+
 function Copilot:enable_claude()
   if self.claude_enabled then
     return true
@@ -510,6 +543,7 @@ function Copilot:ask(prompt, opts)
   local selection = opts.selection or {}
   local system_prompt = opts.system_prompt or prompts.COPILOT_INSTRUCTIONS
   local model = opts.model or 'gpt-4o-2024-05-13'
+  local agent = opts.agent or 'copilot'
   local temperature = opts.temperature or 0.1
   local on_progress = opts.on_progress
   local job_id = uuid()
@@ -522,10 +556,21 @@ function Copilot:ask(prompt, opts)
   log.debug('Filename: ' .. filename)
   log.debug('Filetype: ' .. filetype)
   log.debug('Model: ' .. model)
+  log.debug('Agent: ' .. agent)
   log.debug('Temperature: ' .. temperature)
 
   local models = self:fetch_models()
-  local capabilities = models[model] and models[model].capabilities
+  local agents = self:fetch_agents()
+  local agent_config = agents[agent]
+  if not agent_config then
+    error('Agent not found: ' .. agent)
+  end
+  local model_config = models[model]
+  if not model_config then
+    error('Model not found: ' .. model)
+  end
+
+  local capabilities = model_config.capabilities
   local max_tokens = capabilities.limits.max_prompt_tokens -- FIXME: Is max_prompt_tokens the right limit?
   local max_output_tokens = capabilities.limits.max_output_tokens
   local tokenizer = capabilities.tokenizer
@@ -582,6 +627,7 @@ function Copilot:ask(prompt, opts)
   local errored = false
   local finished = false
   local full_response = ''
+  local full_references = ''
 
   local function finish_stream(err, job)
     if err then
@@ -631,6 +677,22 @@ function Copilot:ask(prompt, opts)
       return
     end
 
+    if content.copilot_references then
+      for _, reference in ipairs(content.copilot_references) do
+        local metadata = reference.metadata
+        if metadata and metadata.display_name and metadata.display_url then
+          full_references = full_references
+            .. '\n'
+            .. '['
+            .. metadata.display_name
+            .. ']'
+            .. '('
+            .. metadata.display_url
+            .. ')'
+        end
+      end
+    end
+
     if not content.choices or #content.choices == 0 then
       return
     end
@@ -668,8 +730,13 @@ function Copilot:ask(prompt, opts)
     self:enable_claude()
   end
 
+  local url = 'https://api.githubcopilot.com/chat/completions'
+  if not agent_config.default then
+    url = 'https://api.githubcopilot.com/agents/' .. agent .. '?chat'
+  end
+
   local response, err = curl_post(
-    'https://api.githubcopilot.com/chat/completions',
+    url,
     vim.tbl_extend('force', self.request_args, {
       headers = self:authenticate(),
       body = temp_file(body),
@@ -694,6 +761,25 @@ function Copilot:ask(prompt, opts)
   end
 
   if response.status ~= 200 then
+    if response.status == 401 then
+      local ok, content = pcall(vim.json.decode, response.body, {
+        luanil = {
+          object = true,
+          array = true,
+        },
+      })
+
+      if ok and content.authorize_url then
+        error(
+          'Failed to authenticate. Visit following url to authorize '
+            .. content.slug
+            .. ':\n'
+            .. content.authorize_url
+        )
+        return
+      end
+    end
+
     error('Failed to get response: ' .. tostring(response.status) .. '\n' .. response.body)
     return
   end
@@ -708,6 +794,14 @@ function Copilot:ask(prompt, opts)
     return
   end
 
+  if full_references ~= '' then
+    full_references = '\n\n**`References:`**' .. full_references
+    full_response = full_response .. full_references
+    if on_progress then
+      on_progress(full_references)
+    end
+  end
+
   log.trace('Full response: ' .. full_response)
   log.debug('Last message: ' .. vim.inspect(last_message))
 
@@ -727,10 +821,10 @@
 end
 
 --- List available models
+---@return table
 function Copilot:list_models()
   local models = self:fetch_models()
 
-  -- Group models by version and shortest ID
   local version_map = {}
   for id, model in pairs(models) do
     local version = model.version
@@ -739,10 +833,18 @@
     end
   end
 
-  -- Map to IDs and sort
   local result = vim.tbl_values(version_map)
   table.sort(result)
+  return result
+end
 
+--- List available agents
+---@return table
+function Copilot:list_agents()
+  local agents = self:fetch_agents()
+
+  local result = vim.tbl_keys(agents)
+  table.sort(result)
   return result
 end
 
```
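The endpoint routing that `Copilot:ask()` gains in the hunk above is easy to isolate as a pure function. A sketch (the `completion_url` name and the standalone table are ours, not the plugin's; the agent table shape follows `fetch_agents()` in the diff, where only the built-in `copilot` entry carries `default = true`):

```lua
-- Illustrative sketch of the endpoint selection added to Copilot:ask().
-- `agents` maps agent slug -> agent config; the built-in 'copilot' entry
-- is marked `default = true`, so it keeps the chat completions endpoint.
local function completion_url(agents, agent)
  local agent_config = agents[agent]
  if not agent_config then
    error('Agent not found: ' .. agent)
  end
  if agent_config.default then
    return 'https://api.githubcopilot.com/chat/completions'
  end
  return 'https://api.githubcopilot.com/agents/' .. agent .. '?chat'
end

-- Example table in the shape fetch_agents() produces.
local agents = {
  copilot = { name = 'Copilot', default = true },
  models = { name = 'Models' }, -- hypothetical extension agent
}
print(completion_url(agents, 'copilot')) -- default agent: chat completions endpoint
print(completion_url(agents, 'models'))  -- extension agent: per-agent endpoint
```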