Hi TWIX team,
I’d love to contribute an Ollama integration for local LLM inference. This would let users run models locally, keeping their data private and avoiding calls to external APIs.
I’ve started a branch with an Ollama-compatible interface, and the code is largely in place (note: the existing functions aren’t working as expected, so once you fix them I can test the integration fully). A rough sketch of what I have in mind follows the questions below. Before opening a PR, I wanted to check:
- Is anyone else working on this?
- Would you be open to a PR adding Ollama as a local LLM backend?
- Are there any contribution guidelines or design preferences I should follow?
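To make the proposal concrete, here is a minimal sketch of what the backend could look like, assuming a Python codebase and a simple generate-style interface. The class name, method signature, and default model are placeholders; they would be adapted to whatever LLM-backend abstraction TWIX actually uses. It talks to Ollama’s local REST API (`/api/generate` on port 11434).

```python
import requests


class OllamaBackend:
    """Sketch of a local LLM backend backed by Ollama.

    The class/method names are placeholders and would need to match
    TWIX's existing LLM interface.
    """

    def __init__(self, model: str = "llama3", host: str = "http://localhost:11434"):
        self.model = model
        self.host = host

    def generate(self, prompt: str, **options) -> str:
        # Ollama's /api/generate returns the full completion in the
        # "response" field when streaming is disabled.
        resp = requests.post(
            f"{self.host}/api/generate",
            json={
                "model": self.model,
                "prompt": prompt,
                "stream": False,
                "options": options,  # e.g. temperature, num_predict
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]


if __name__ == "__main__":
    backend = OllamaBackend(model="llama3")
    print(backend.generate("Say hello in one short sentence."))
```

This keeps the integration to a thin HTTP wrapper with no extra dependencies beyond `requests`, so users only need a running Ollama daemon and a pulled model.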
I think this could be a great privacy-focused option for users. Let me know if I should move ahead with a PR.
Thanks!