Support Alpaca and Llama models #8
Comments
Ideally the model configuration stuff should be abstracted out to something like LangChain; it's too bad there's no TS port for it yet.
Yes there is; we already use it in Autodoc: https://github.com/hwchase17/langchainjs
Maybe consider using https://github.com/rustformers/llama-rs in that case?
Have you seen this project? https://github.com/microsoft/semantic-kernel

Semantic Kernel (SK) is a lightweight SDK enabling integration of AI Large Language Models (LLMs) with conventional programming languages. The SK extensible programming model combines natural language semantic functions, traditional code native functions, and embeddings-based memory, unlocking new potential and adding value to applications with AI. SK supports prompt templating, function chaining, vectorized memory, and intelligent planning capabilities out of the box.
Ollama supports the OpenAI API.
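Because Ollama exposes an OpenAI-compatible endpoint, a client written against the OpenAI chat-completions API can be pointed at a local server instead. A minimal sketch of the request shape is below; the base URL (`http://localhost:11434/v1`) assumes a default local Ollama install, and the model name `llama2` is an illustrative assumption, not something Autodoc ships with.

```typescript
// Assumed default base URL for a local Ollama install exposing
// the OpenAI-compatible API.
const OLLAMA_BASE_URL = "http://localhost:11434/v1";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  temperature?: number;
}

// Build the same JSON body an OpenAI client would send; only the
// target URL changes when talking to Ollama.
function buildChatRequest(model: string, prompt: string): ChatCompletionRequest {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    temperature: 0,
  };
}

// The request would be POSTed to `${OLLAMA_BASE_URL}/chat/completions`.
const req = buildChatRequest("llama2", "Summarize this repository.");
console.log(JSON.stringify(req));
```

The point of the sketch is that no Ollama-specific client code is needed: an existing OpenAI integration becomes a local-model integration by swapping the base URL.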
Autodoc is currently reliant on OpenAI for access to cutting-edge language models. Going forward, we would like to support models running locally or at providers other than OpenAI, like Llama, or Alpaca. This gives developers more control over how their code is indexed, and allows indexing of private code that cannot be shared with OpenAI.
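One way to frame the work is a provider abstraction: indexing code depends on a small interface, and OpenAI, Llama, or Alpaca backends each implement it. The sketch below is hypothetical; none of these names exist in the current Autodoc codebase.

```typescript
// Hypothetical provider interface: the minimum surface indexing
// code would need from any model backend.
interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Stub implementation standing in for a local Llama/Alpaca backend.
class LocalEchoProvider implements LLMProvider {
  name = "local-echo";
  async complete(prompt: string): Promise<string> {
    return `[local] ${prompt}`;
  }
}

// Indexing logic depends only on the interface, so swapping OpenAI
// for a local model becomes a configuration change rather than a
// code change.
async function summarize(provider: LLMProvider, code: string): Promise<string> {
  return provider.complete(`Summarize:\n${code}`);
}

summarize(new LocalEchoProvider(), "const x = 1;").then((out) =>
  console.log(out)
);
```

This keeps private code on-device by construction: only the chosen provider ever sees the prompt.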
This is a big undertaking that will be an ongoing process. A few thoughts for someone who wants to start hacking on this.
This issue is high priority. If you're interested in working on it, please reach out.