Description
We want to integrate LLMs as part of Livebook itself. There are at least four distinct levels at which this can happen:
1. Code completion (may or may not need an LLM) (options: Codeium, Copilot, fine-tuned Repl.it model)
2. Chat-based assistance (which also includes selecting code and asking to document it, as well as explaining exceptions) (options: Codeium, Copilot, OpenAI, Claude)
3. Semantic search over all installed packages in the runtime (may or may not need an LLM) (options: Codeium, Copilot, OpenAI, Claude)
4. Function completion (we can allow kino/smart-cells to register functions which we hook into prompts, similar to HF Agents) (options: OpenAI)
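To make level 3 concrete, semantic search over installed packages typically means embedding documentation snippets once, then ranking them by cosine similarity against an embedded query. The sketch below is illustrative only: the function names, the toy index, and the embedding vectors are all hypothetical (a real implementation would obtain vectors from an embedding model).

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "index": doc entries mapped to hypothetical embedding vectors.
# In practice these would come from embedding each package's docs.
doc_index = {
    "Enum.map/2": [0.9, 0.1, 0.0],
    "Kino.DataTable.new/1": [0.1, 0.8, 0.3],
}

def semantic_search(query_vec, index, top_k=1):
    # Rank indexed entries by similarity to the query embedding.
    ranked = sorted(index.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

Whether an LLM is needed at all depends on this choice: pure embedding retrieval works without one, while answer synthesis on top of the retrieved docs would require one.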
This is a meta-issue and we are currently doing proofs of concept in the different areas. There is no clear decision yet. We will most likely allow users to "bring their own LLM" (either commercial or self-hosted) for most of these categories, especially 2, 3, and 4.
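For level 4, the registration idea could be bridged to OpenAI's function-calling interface, where registered functions are described as JSON-schema tool descriptors passed in the chat request's `tools` field. The sketch below only builds that payload; the registry helpers and the `plot_dataframe` function are hypothetical examples, not an existing kino API.

```python
# Hypothetical registry: smart cells "register" functions, and we
# translate them into the descriptor shape OpenAI's chat API accepts
# in its `tools` parameter.
registry = {}

def register_function(name, description, parameters):
    registry[name] = {"description": description, "parameters": parameters}

def build_tools():
    # Produce the list passed as `tools` in a chat completion request.
    return [
        {"type": "function", "function": {"name": name, **spec}}
        for name, spec in registry.items()
    ]

register_function(
    "plot_dataframe",  # hypothetical smart-cell function
    "Plot a dataframe bound in the notebook",
    {
        "type": "object",
        "properties": {"variable": {"type": "string"}},
        "required": ["variable"],
    },
)
```

When the model responds with a tool call, Livebook would look the name up in the registry and invoke the corresponding smart-cell function, which is roughly how HF Agents wires tools into prompts.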