feat: add local mode support#39

Open
hazre wants to merge 2 commits into sweepai:main from hazre:feat/local-mode
Conversation


@hazre hazre commented Feb 5, 2026

closes #26

This adds a local mode so the extension can talk to an OpenAI-compatible server (such as llama-cpp's llama-server) instead of the hosted Sweep API. The request path is now shared between both modes, with a small local adapter that builds the prompt and parses OpenAI-style completions. Activation now includes a quick mode chooser, and inline edits use a debounced, cancellable request flow to keep things responsive, since local LLMs are slower on consumer GPUs.
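The debounced + cancellable flow described above can be sketched roughly like this (the names and structure here are illustrative assumptions, not the PR's actual code):

```typescript
// Sketch of a debounced + cancellable request flow. `fn` receives an
// AbortSignal so an in-flight request can be cancelled when a newer
// edit supersedes it. Names are illustrative, not the PR's identifiers.
function debounceCancellable<T>(
  fn: (signal: AbortSignal) => Promise<T>,
  delayMs: number,
): () => Promise<T | undefined> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let controller: AbortController | undefined;

  return () =>
    new Promise<T | undefined>((resolve) => {
      // A newer call cancels the pending timer and aborts any in-flight
      // request. (Superseded calls' promises are left pending in this
      // sketch; a fuller version would resolve them with undefined.)
      if (timer !== undefined) clearTimeout(timer);
      controller?.abort();

      timer = setTimeout(() => {
        controller = new AbortController();
        // Resolve with undefined on abort or error instead of rejecting,
        // so callers can simply ignore stale results.
        fn(controller.signal).then(resolve, () => resolve(undefined));
      }, delayMs);
    });
}
```

In the extension, `fn` would presumably POST the built prompt to the configured endpoint, passing the signal through to `fetch` so an abort actually cancels the HTTP request.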

I ran my llama-server with this command:

llama-server -m 'sweep-next-edit-1.5b.q8_0.v2.gguf' -c 8192 --port 8080 -ngl 99 -v

It seems to work fine on my end.
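For reference, llama-server's OpenAI-compatible completion endpoint returns a `choices`-shaped JSON body; a minimal parser (shape assumed here, not the PR's actual adapter) could look like:

```typescript
// Minimal parser for an OpenAI-style completion response. The shape is
// an assumption based on the OpenAI completions format, not the PR's
// actual adapter code.
interface CompletionResponse {
  choices: { text: string; finish_reason?: string }[];
}

function parseCompletion(body: string): string | undefined {
  const data = JSON.parse(body) as CompletionResponse;
  return data.choices?.[0]?.text;
}

// Example payload in the shape an OpenAI-compatible server returns:
const example = JSON.stringify({
  choices: [{ text: "edited code here", finish_reason: "stop" }],
});
// parseCompletion(example) → "edited code here"
```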


pvaret commented Feb 12, 2026

Thank you for this PR, excited to see it land! Would you consider making the model name configurable as well, for convenience?



Development

Successfully merging this pull request may close these issues.

Support for local sweepai/sweep-next-edit-1.5B model planned?
