> written in norg ahhh syntax
* THINGS TO COPY FROM OTHER PROJS
- (x) how to calculate the context range that should be passed to the LLM?
-- by AST
- (x) should I debounce edit events? (I think I should, but how? see the sketch below)
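A minimal debounce sketch, assuming a libuv timer; `make_debounced` and the 300 ms
delay are illustrative names/values, not part of the plugin:
@code lua
-- a minimal debounce helper (hypothetical name `make_debounced`)
local function make_debounced(fn, delay_ms)
  local timer = assert(vim.uv.new_timer()) -- vim.loop.new_timer() on older Neovim
  return function(...)
    local args = { ... }
    timer:stop() -- drop the previously scheduled call
    timer:start(delay_ms, 0, vim.schedule_wrap(function()
      fn(unpack(args))
    end))
  end
end

-- e.g. collapse a burst of TextChanged events into one prediction request:
-- local request = make_debounced(request_edit_predictions, 300)
@end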
* THINGS TO IMPLEMENT FOR MVP
- (x) generate diff on textchange
- (x) generate context range on textchange
- (x) count token from text
- (x) get the TS node under the cursor, get its text
- (x) from (line,col) range to line-only range
- (x) compute edits from original lines and predicted lines (see the sketch after this list)
- `compute_edits(old, new, offset) -> Edit[]`
- `Edit: { value: string, range: { { int, int }, { int, int } } }`
- `offset` is the offset from the original document
- (x) test this
- (x) apply a list of edits to the editable_range (easy, just use `nvim_buf_set_lines`;
see the sketch after this list)
- (x) calculate edit offsets while applying them
- (x) ability to show predicted lines in virtual text
- (x) ability to send an http request and parse the json response (see the sketch after this list)
- (x) put extmarks (eol virttext for now) representing the predicted edit positions (see the sketch after this list)
- (x) add an api to accept predicted edits (this won't apply them, only show the edit)
- (x) test that everything works with a fake server
- (_) block additional predictions while accepting previous predictions.
make an "accepting mode" flag and check it before running request_edit_predictions.
for "accepting mode" to work, each prediction should be a group of edits
- (x) use the `zeta.Prediction` class.
`vim.b.edit_prediction` will be of type `zeta.Prediction?`
- (x) `request_predict_completion` will be called from textchange/modechange
events. These are tracked with direct autocmds, not a buffer-attach callback
(see the autocmd sketch after this list).
Since we have the modechange event, textchange inside insert mode can be
ignored by default.
`request_predict_completion` will receive recent events from `state` and the
generated prompts from `prompt`.
It will create a request body and pass it to the client.
- (x) `api.accept` will be called from normal mode; it moves the cursor to the
first edit and starts a chain of confirmations. After the confirmations, even if
the user skipped ("No") some edits, the prediction will be cleared from the buffer.
- ( ) listen to general events like fileopen, fileclose, filerename
- ( ) mock `vim.lsp.util.rename()` to listen for `filerename` events
(see the wrapper sketch after this list).
let the user do the mocking from config
- (_) gather text change events by mode changes (send when the user is back in normal mode)
- (x) refactor event listener
- (x) make sure debouncing won't lose any events
- (x) reduce the event count
- ( ) move hard-coded constants to a config table
- ( ) make things more stable!
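A rough sketch of `compute_edits` from the list above, assuming Neovim's built-in
`vim.diff` with `result_type = "indices"`; the exact range convention and the
handling of pure-insertion hunks are assumptions, not the final design:
@code lua
-- Edit: { value: string, range: { { int, int }, { int, int } } }
-- `offset` is the 0-based first line of the editable range in the original document
local function compute_edits(old_lines, new_lines, offset)
  local edits = {}
  local hunks = vim.diff(
    table.concat(old_lines, "\n") .. "\n",
    table.concat(new_lines, "\n") .. "\n",
    { result_type = "indices" }
  )
  for _, h in ipairs(hunks) do
    local start_a, count_a, start_b, count_b = h[1], h[2], h[3], h[4]
    -- replacement text comes from the predicted lines
    local value = table.concat(
      vim.list_slice(new_lines, start_b, start_b + count_b - 1), "\n")
    edits[#edits + 1] = {
      value = value,
      -- line-only range (columns fixed to 0), shifted by `offset`;
      -- pure insertions (count_a == 0) would need extra care here
      range = { { offset + start_a - 1, 0 }, { offset + start_a - 1 + count_a, 0 } },
    }
  end
  return edits
end
@end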
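Applying the edits with `nvim_buf_set_lines` while tracking how earlier edits shift
later line numbers; this assumes the line-only `Edit` shape sketched above:
@code lua
-- apply edits top-to-bottom, accumulating the line delta of previous edits
local function apply_edits(bufnr, edits)
  local delta = 0
  for _, edit in ipairs(edits) do
    local start_line = edit.range[1][1] + delta
    local end_line = edit.range[2][1] + delta
    local lines = edit.value == "" and {} or vim.split(edit.value, "\n")
    vim.api.nvim_buf_set_lines(bufnr, start_line, end_line, false, lines)
    -- later edits move by (inserted - removed) lines
    delta = delta + #lines - (end_line - start_line)
  end
end
@end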
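One way to send the http request and parse the json response without extra
dependencies is to shell out to curl via `vim.system` and decode with
`vim.json.decode`; the endpoint URL below is a placeholder:
@code lua
-- POST the request body to the prediction server and hand the decoded
-- response to a callback (runs asynchronously)
local function request_predictions(body, callback)
  vim.system({
    "curl", "-s", "-X", "POST",
    "-H", "Content-Type: application/json",
    "-d", vim.json.encode(body),
    "http://localhost:8000/predict", -- placeholder endpoint
  }, { text = true }, function(out)
    if out.code ~= 0 then return end
    local ok, decoded = pcall(vim.json.decode, out.stdout)
    if ok then
      vim.schedule(function() callback(decoded) end)
    end
  end)
end
@end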
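Marking the predicted edit position with an eol virtual-text extmark; the
namespace name and highlight group are made up:
@code lua
local ns = vim.api.nvim_create_namespace("zeta.predictions")

local function mark_prediction(bufnr, line)
  vim.api.nvim_buf_set_extmark(bufnr, ns, line, 0, {
    virt_text = { { " predicted edit", "Comment" } },
    virt_text_pos = "eol",
  })
end

-- clear them later with:
-- vim.api.nvim_buf_clear_namespace(bufnr, ns, 0, -1)
@end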
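A sketch of the direct autocmds that would drive `request_predict_completion`;
the augroup name and the `ModeChanged` pattern are illustrative:
@code lua
local group = vim.api.nvim_create_augroup("zeta.events", { clear = true })

-- plain TextChanged only fires in normal mode; insert-mode changes are picked
-- up when the ModeChanged autocmd below fires on leaving insert mode
vim.api.nvim_create_autocmd("TextChanged", {
  group = group,
  callback = function(args)
    request_predict_completion(args.buf) -- entry point from the list above
  end,
})

-- fires when coming back to normal mode from insert/visual/etc.
vim.api.nvim_create_autocmd("ModeChanged", {
  group = group,
  pattern = "*:n",
  callback = function(args)
    request_predict_completion(args.buf)
  end,
})
@end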
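Mocking `vim.lsp.util.rename()` to surface `filerename` events could be a thin
wrapper like the one below; the `User` autocmd pattern is a made-up name, and
installing the wrapper would be left to the user's config as noted above:
@code lua
-- wrap the original function and emit a custom event before delegating
local original_rename = vim.lsp.util.rename

vim.lsp.util.rename = function(old_fname, new_fname, opts)
  vim.api.nvim_exec_autocmds("User", {
    pattern = "ZetaFileRename", -- hypothetical event name
    data = { old = old_fname, new = new_fname },
  })
  return original_rename(old_fname, new_fname, opts)
end
@end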
* ROADMAP
- ( ) [sense.nvim] as a dependency
- ( ) generate an inline diff of the predicted lines from the original lines
ability to segment a line edit into multiple word edits (see the sketch after this list).
We can still show each one on a virtual line as a preview.
- ( ) ability to scroll while in the confirm window
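For segmenting a line edit into word edits, one option (an assumption, not a
settled design) is to reuse `vim.diff` at word granularity by putting one word
per line:
@code lua
-- diff two single lines word-by-word by joining their words with newlines
-- and reading vim.diff's index output; a rough sketch only
local function word_hunks(old_line, new_line)
  local old_words = vim.split(old_line, "%s+", { trimempty = true })
  local new_words = vim.split(new_line, "%s+", { trimempty = true })
  local hunks = vim.diff(
    table.concat(old_words, "\n") .. "\n",
    table.concat(new_words, "\n") .. "\n",
    { result_type = "indices" }
  )
  return hunks, old_words, new_words
end
@end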
Neovim is a modal editor and has quite different editor events than other GUI-based
editors. It would make sense to gather Neovim-specific data and train a model on it.
A g4dn.xlarge instance on AWS costs about $400 a month (when running 24h).
One example of the difference between a modal editor and a GUI-based editor is deleting a line.
If the user deletes some text in insert mode, it is very unlikely that they are going
to delete a whole line, because they could just use `dd` instead.