Using the official OpenAI library for .NET to access locally running LLMs and SLMs via Ollama or llamafiles
In this repository, we use the official OpenAI library for .NET to create a simple console application. This application implements a Shakespearean chat using either OpenAI's models, locally running LLMs or SLMs via Ollama, or locally running LLMs or SLMs packaged as llamafiles.
You need to create an OpenAI account on this website. Using the API is not free, so make sure that you add your payment information. After that, you can create an API key here for further use.
If you want to run local LLMs or SLMs, simply install Ollama and download a model of your choice.
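Assuming Ollama is already installed, downloading a model is a single command; `phi3` is used here purely as an example model name:

```shell
# Download a model into the local Ollama store (phi3 is just an example).
ollama pull phi3

# List the locally available models to verify the download.
ollama list
```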
If you want to run local LLMs or SLMs, get a llamafile from the official GitHub repository. On Windows, you need to change the file extension from .llamafile to .exe to be able to execute the file.
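On Windows, the rename and launch can look like this in a command prompt (`model.llamafile` is a placeholder for the file you actually downloaded):

```shell
rem Rename the downloaded llamafile so Windows treats it as an executable.
ren model.llamafile model.exe

rem Start it; the llamafile serves a local API that the app can talk to.
model.exe
```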
Just run the app and follow the steps displayed on the screen.
Here you can see the console application in action:
The first screenshot shows the host selection.
Depending on your selection, you will be asked for the required parameters. In this case, we need to enter the OpenAI API key.
Next we need to select the OpenAI model.
Finally you can start chatting with William Shakespeare.
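Under the hood, the chat against OpenAI can be sketched with the official library roughly as follows; the model name, system prompt, and reading the key from an environment variable are assumptions for illustration, not necessarily what this repository does:

```csharp
using OpenAI.Chat;

// Read the key from an environment variable rather than hard-coding it.
string apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;

// The model name is only an example; use the one you selected in the app.
ChatClient client = new("gpt-4o-mini", apiKey);

// A system message sets the Shakespearean persona for every reply.
ChatCompletion completion = client.CompleteChat(
    new SystemChatMessage("You are William Shakespeare. Answer in his style."),
    new UserChatMessage("How should I greet an old friend?"));

Console.WriteLine(completion.Content[0].Text);
```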
If you select Ollama (Local LLM) as the host, you need to specify the downloaded Ollama model.
In my case, I've downloaded the well-known Phi-3 model provided by Microsoft.
Finally you can start chatting with William Shakespeare.
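Because Ollama exposes an OpenAI-compatible API, the same library can be pointed at the local server. A minimal sketch, assuming Ollama's default endpoint on port 11434 and the `phi3` model from above:

```csharp
using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// Ollama serves an OpenAI-compatible API at this endpoint by default.
OpenAIClientOptions options = new()
{
    Endpoint = new Uri("http://localhost:11434/v1")
};

// Ollama ignores the API key, but the client requires a non-empty value.
ChatClient client = new("phi3", new ApiKeyCredential("ollama"), options);

ChatCompletion completion = client.CompleteChat(
    new SystemChatMessage("You are William Shakespeare. Answer in his style."),
    new UserChatMessage("What light through yonder window breaks?"));

Console.WriteLine(completion.Content[0].Text);
```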
If you select Llamafile (Local LLM) as the host, you can start chatting with Shakespeare directly.
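No model selection is needed here because the llamafile bundles its own model. A sketch of the connection, assuming the llamafile's default local server on port 8080; the model name passed to the client is a placeholder, since the llamafile serves whatever model it contains:

```csharp
using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// A running llamafile serves an OpenAI-compatible API on port 8080 by default.
OpenAIClientOptions options = new()
{
    Endpoint = new Uri("http://localhost:8080/v1")
};

// Model name and key are placeholders; the llamafile ignores them,
// but the client requires non-empty values.
ChatClient client = new("local-model", new ApiKeyCredential("llamafile"), options);

ChatCompletion completion = client.CompleteChat(
    new SystemChatMessage("You are William Shakespeare. Answer in his style."),
    new UserChatMessage("Compose a short greeting."));

Console.WriteLine(completion.Content[0].Text);
```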
If you are interested in more details, please see the following posts on medium.com or in my personal blog:
- Using the official OpenAI library for .NET to access llamafile LLMs
- Using the official OpenAI library for .NET to access locally running LLMs and SLMs
- Using Ollama to run local LLMs on your computer
- Creating a copilot using OpenAI and/or Azure OpenAI
- Setting up OpenAI
- Setting up Azure OpenAI