Listed in Ollama Community Integrations
Most of the time, it's C++ libraries that get wrapped so they can be used from C#.
But have you ever seen a C#-written library for C++ — that even talks to LLM models via Ollama?
If not — congratulations! You've just found one.
OllamaPlusPlus is a lightweight and incredibly simple library for using Ollama from C++, powered by C# and a native bridge.
The low-level communication with Ollama is implemented in C#, while the native bridge is written in C++.
This means you can talk to LLM models from C++ using just a few lines of code:
#include "OllamaPlusPlus.h"
InitOllama("OllamaNET.dll", "deepseek-r1:7b");
auto response = PromptOllama("When was GitHub created?");
std::cout << response;
FreeOllama(response);
> **Note**
> The base logic for communicating with Ollama is written in C#, taking into account the needs of a C++ interface; the C++ part acts as a native bridge. If you ever want to modify the internals, on either the C# or the C++ side, you'll only need basic knowledge of both languages. In total, the entire library is just around 120 lines of code across the two.
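To give a feel for what the native-bridge side of such a design involves, here is a hypothetical Win32 sketch of loading a DLL at runtime and resolving an exported function. The export name `Prompt` and its signature are invented for illustration; they are not the library's actual internals.

```cpp
// Illustrative only: runtime loading of a DLL with the Win32 API.
// "Prompt" is an invented export name, not an actual OllamaNET.dll symbol.
#include <windows.h>
#include <iostream>

using PromptFn = const char* (*)(const char*);

int main() {
    HMODULE lib = LoadLibraryA("OllamaNET.dll");   // load the bridge DLL
    if (!lib) { std::cerr << "Could not load DLL\n"; return 1; }

    auto fn = reinterpret_cast<PromptFn>(GetProcAddress(lib, "Prompt"));
    if (!fn) { std::cerr << "Export not found\n"; FreeLibrary(lib); return 1; }

    std::cout << fn("Hello!") << "\n";
    FreeLibrary(lib);                              // unload when done
    return 0;
}
```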
It works with any model downloaded via the Ollama CLI.
Requirement: Download and install Ollama first.
- Download the latest `.zip` archive from the Releases page
- Extract it in your C++ project
- Open `Demo.cpp` to see an example of usage (a complete sketch follows below)
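For reference, a complete, compilable program built from the snippet above might look roughly like this. It is a sketch, not necessarily the shipped `Demo.cpp`:

```cpp
// Hypothetical, minimal main() built from the introductory snippet;
// the Demo.cpp shipped in the release may differ.
#include <iostream>
#include "OllamaPlusPlus.h"

int main() {
    // Load the C# bridge and pick a model already pulled with the Ollama CLI.
    InitOllama("OllamaNET.dll", "deepseek-r1:7b");

    // Ask a question and print the reply.
    const char* response = PromptOllama("When was GitHub created?");
    std::cout << response << std::endl;

    // Free the response buffer and unload the library.
    FreeOllama(response);
    return 0;
}
```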
OllamaPlusPlus is a very simple C++ library, so it exposes only three functions for communicating with Ollama. Here is what you can do with them:
To initialize Ollama and load the `OllamaNET.dll` library, use the `InitOllama()` function:

```cpp
InitOllama("OllamaNET.dll", "deepseek-r1:7b");
```
This function has the following signature:

```cpp
void InitOllama(const char* path, const char* modelName);
```

- `path` – path to `OllamaNET.dll`
- `modelName` – name of the model you want to use
> **Note**
> If `OllamaNET.dll` is in the same folder as your `.cpp` file, you can just write `"OllamaNET.dll"`. Otherwise, specify the full path, for example `C:\Users\YourName\Desktop\OllamaNET.dll` (see the snippet below for how to write that path in a C++ string literal).
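One small C++ detail worth keeping in mind (this concerns string literals in general, not anything specific to the library): backslashes in a Windows path must be escaped, or the path written as a raw string literal:

```cpp
// Plain string literals need escaped backslashes in Windows paths...
InitOllama("C:\\Users\\YourName\\Desktop\\OllamaNET.dll", "deepseek-r1:7b");
// ...or use a raw string literal to avoid the escaping:
// InitOllama(R"(C:\Users\YourName\Desktop\OllamaNET.dll)", "deepseek-r1:7b");
```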
To send a prompt to the model and receive a response, use the `PromptOllama()` function.

```cpp
auto response = PromptOllama("When was GitHub created?");
```
This function has the following signature:

```cpp
const char* PromptOllama(const char* prompt);
```

- `prompt` – the prompt to send to the model

It returns the LLM's response as a C string.
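Because the return value is a raw C string that later has to be handed back to `FreeOllama()` (described below), one convenient pattern, a suggestion rather than something the library requires, is to copy it into a `std::string` right away:

```cpp
#include <string>

// Copy the native buffer into an owned std::string before releasing it.
const char* raw = PromptOllama("When was GitHub created?");
std::string answer = raw ? raw : "";  // defensive null check; an assumption, not documented behaviour
FreeOllama(raw);
// `answer` stays valid even though the native buffer is gone.
```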
To free the allocated memory and unload the library, use the `FreeOllama()` function.

```cpp
FreeOllama(response);
```
This function has the following signature:

```cpp
void FreeOllama(const char* response);
```

- `response` – the response returned by `PromptOllama()`
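Since `FreeOllama()` should be called exactly once per response, a small RAII wrapper can make the cleanup automatic. This is only a sketch layered on top of the three functions above, not part of the library itself:

```cpp
#include <iostream>
#include "OllamaPlusPlus.h"

// Hypothetical helper: owns the response and frees it (which also unloads
// the library) when it goes out of scope.
struct OllamaResponse {
    const char* text;
    explicit OllamaResponse(const char* t) : text(t) {}
    ~OllamaResponse() { if (text) FreeOllama(text); }
    OllamaResponse(const OllamaResponse&) = delete;
    OllamaResponse& operator=(const OllamaResponse&) = delete;
};

int main() {
    InitOllama("OllamaNET.dll", "deepseek-r1:7b");
    OllamaResponse r(PromptOllama("When was GitHub created?"));
    std::cout << (r.text ? r.text : "") << std::endl;
    return 0;  // r's destructor calls FreeOllama(r.text) here
}
```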
This project is licensed under the MIT License.
See LICENSE for full terms.
HardCodeDev
💬 Got feedback, found a bug, or want to contribute? Open an issue or fork the repo on GitHub!