16 changes: 8 additions & 8 deletions README.md
@@ -1,13 +1,12 @@
# Cellm
Cellm is an Excel extension that lets you use Large Language Models (LLMs) like ChatGPT in cell formulas.

- [Example](#example)
- [Getting Started](#getting-started)
- [Usage](#usage)
- [Use Cases](#use-cases)
- [Run Models Locally](#run-models-locally)
- [Dos and Don'ts](#dos-and-donts)
- [Why did you make Cellm?](#why-did-you-make-cellm)
- [License](#license)

## What is Cellm?
Similar to Excel's `=SUM()` function that outputs the sum of a range of numbers, Cellm's `=PROMPT()` function outputs the AI response to a range of text.
@@ -18,7 +17,7 @@ For example, you can write `=PROMPT(A1:A10, "Extract all person names mentioned
This extension does one thing and one thing well.

- Calls LLMs in formulas and returns short answers suitable for cells.
- Supports models from Anthropic, OpenAI, and Google as well as other providers that mirror one of these APIs, e.g. local Ollama, llama.cpp or vLLM servers.
- Supports models from Anthropic, Mistral, OpenAI, and Google as well as locally hosted models via Llamafiles, Ollama, or vLLM.

## Example
Say you're reviewing medical studies and need to quickly identify papers relevant to your research. Here's how Cellm can help with this task:
@@ -52,7 +51,7 @@ Cellm must be built from source and installed via Excel. Follow the steps below.
- [Docker](https://www.docker.com/products/docker-desktop/) (optional)
- A GPU and [NVIDIA CUDA Toolkit 12.4](https://developer.nvidia.com/cuda-downloads) or higher (optional)

You can run small models with Llamafile without docker or a GPU. For Ollama and vLLM docker compose files in this repository you will need docker, and for higher quality models you will need a GPU.
To get started, you can run small models with Llamafile on your CPU. Cellm can automatically download and run these models for you. For Ollama and vLLM you will need Docker, and for higher-quality models you will need a GPU.

### Build

@@ -75,7 +74,7 @@ You can run small models with Llamafile without docker or a GPU. For Ollama and
}
```

Cellm uses Anthropic as the default model provider. You can also use models from OpenAI, Google, or run models locally. See the `appsettings.Local.*.json` files for examples.
Cellm uses Anthropic as the default model provider. You can also use models from OpenAI, Mistral, or Google, or run models locally. See the `appsettings.Local.*.json` files for examples.

4. Install dependencies:
```cmd
@@ -129,7 +128,7 @@ PROMPTWITH(providerAndModel: string or cell, cells: range, [instruction: range |
Allows you to specify the model as the first argument.

- **providerAndModel (Required)**: A string of the form "provider/model".
- Default: anthropic/claude-3-5-sonnet-20240620
- Example: anthropic/claude-3-5-sonnet-20240620

Example usage:

@@ -145,7 +144,8 @@ Cellm is useful for repetitive tasks on both structured and unstructured data. H
Use classification prompts to quickly categorize large volumes of text, e.g. open-ended survey responses.

2. **Model Comparison**
Make a sheet with user queries in column A and different models in row 1. Write this prompt in the cell B2:

Make a sheet with user queries in the first column and provider/model pairs in the first row. Write this prompt in cell B2:
```excel
=PROMPTWITH(B$1,$A2,"Answer the question in column A")
```
@@ -247,7 +247,7 @@ Don't:
## Why did you make Cellm?
My girlfriend was writing a systematic review paper. She had to compare 7,500 papers against inclusion and exclusion criteria. I told her this was a great use case for LLMs but quickly realized that individually copying 7,500 papers in and out of chat windows was a total pain. This sparked the idea to make an AI tool to automate repetitive tasks for people like her who would rather avoid programming.

I think Cellm is really cool because it enables everyone to automate repetitive tasks with AI to a level that was previously available only to programmers. She still did her analysis manually, of course, because she cares about scientific integrity.
I think Cellm is really cool because it enables everyone to automate repetitive tasks with AI to a level that was previously available only to programmers. My girlfriend still did her analysis manually, of course, because she cares about scientific integrity.

## License

10 changes: 5 additions & 5 deletions src/Cellm/Models/Llamafile/AsyncLazy.cs
@@ -9,15 +9,15 @@ public sealed class AsyncLazy<T>
/// <summary>
/// The underlying lazy task.
/// </summary>
private readonly Lazy<Task<T>> instance;
private readonly Lazy<Task<T>> _instance;

/// <summary>
/// Initializes a new instance of the <see cref="AsyncLazy{T}"/> class.
/// </summary>
/// <param name="factory">The delegate that is invoked on a background thread to produce the value when it is needed.</param>
public AsyncLazy(Func<T> factory)
{
instance = new Lazy<Task<T>>(() => Task.Run(factory));
_instance = new Lazy<Task<T>>(() => Task.Run(factory));
}

/// <summary>
@@ -26,22 +26,22 @@ public AsyncLazy(Func<T> factory)
/// <param name="factory">The asynchronous delegate that is invoked on a background thread to produce the value when it is needed.</param>
public AsyncLazy(Func<Task<T>> factory)
{
instance = new Lazy<Task<T>>(() => Task.Run(factory));
_instance = new Lazy<Task<T>>(() => Task.Run(factory));
}

/// <summary>
/// Asynchronous infrastructure support. This method permits instances of <see cref="AsyncLazy{T}"/> to be awaited.
/// </summary>
public TaskAwaiter<T> GetAwaiter()
{
return instance.Value.GetAwaiter();
return _instance.Value.GetAwaiter();
}

/// <summary>
/// Starts the asynchronous initialization, if it has not already started.
/// </summary>
public void Start()
{
_ = instance.Value;
_ = _instance.Value;
}
}
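
For readers skimming the diff, `AsyncLazy<T>` wraps a `Lazy<Task<T>>` so the factory runs at most once, on a background thread, and the result can be awaited directly thanks to `GetAwaiter()`. A minimal usage sketch (the delay and path below are illustrative only, not taken from the repository):

```csharp
// Hypothetical caller of AsyncLazy<T>. The factory runs once, on first
// await or on an explicit Start(); later awaits reuse the cached task.
var modelPath = new AsyncLazy<string>(async () =>
{
    await Task.Delay(1000);              // stand-in for a slow download
    return @"C:\models\example.gguf";    // illustrative path
});

modelPath.Start();                       // optionally begin the work early
var path = await modelPath;              // first await yields the result
var samePath = await modelPath;          // factory is not run again
```

Because the underlying `Lazy<Task<T>>` caches the task rather than the value, a faulted factory also stays cached and rethrows on every await, which is the usual trade-off of this pattern.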
17 changes: 8 additions & 9 deletions src/Cellm/Models/Llamafile/LlamafileClient.cs
@@ -24,14 +24,14 @@ internal class LlamafileClient : IClient
public LlamafileClient(IOptions<CellmConfiguration> cellmConfiguration,
IOptions<LlamafileConfiguration> llamafileConfiguration,
IOptions<OpenAiConfiguration> openAiConfiguration,
IClientFactory clientFactory,
IClient openAiClient,
HttpClient httpClient,
LLamafileProcessManager llamafileProcessManager)
{
_cellmConfiguration = cellmConfiguration.Value;
_llamafileConfiguration = llamafileConfiguration.Value;
_openAiConfiguration = openAiConfiguration.Value;
_openAiClient = clientFactory.GetClient("openai");
_openAiClient = openAiClient;
_httpClient = httpClient;
_llamafileProcessManager = llamafileProcessManager;

@@ -56,7 +56,7 @@ public async Task<Prompt> Send(Prompt prompt, string? provider, string? model)
await _llamafilePath;
await _llamafileModelPath;
await _llamafileProcess;
return await _openAiClient.Send(prompt, provider, model);
return await _openAiClient.Send(prompt, provider ?? "Llamafile", model ?? _llamafileConfiguration.DefaultModel);
}

private async Task<Process> StartProcess()
@@ -75,12 +75,10 @@ private async Task<Process> StartProcess()
processStartInfo.Arguments += $"-ngl {_llamafileConfiguration.GpuLayers} ";
}

var process = Process.Start(processStartInfo) ?? throw new CellmException("Failed to start Llamafile server");
var process = Process.Start(processStartInfo) ?? throw new CellmException("Failed to run Llamafile");

try
{
Thread.Sleep(5000);
// await WaitForLlamafile(process);
_llamafileProcessManager.AssignProcessToCellm(process);
return process;
}
@@ -94,7 +92,7 @@ private async Task<Process> StartProcess()
private static async Task<string> DownloadFile(Uri uri, string filename, HttpClient httpClient)
{
var filePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), nameof(Cellm), filename);
Directory.CreateDirectory(Path.GetDirectoryName(filePath) ?? throw new CellmException("Failed to create Llamafile path"));
Directory.CreateDirectory(Path.GetDirectoryName(filePath) ?? throw new CellmException("Failed to create Llamafile folder"));

if (File.Exists(filePath))
{
@@ -115,7 +113,7 @@ private static async Task<string> DownloadFile(Uri uri, string filename, HttpCli
using (var httpStream = await response.Content.ReadAsStreamAsync())
{

await httpStream.CopyToAsync(fileStream).ConfigureAwait(false);
await httpStream.CopyToAsync(fileStream);
}

File.Move(filePathPart, filePath);
@@ -157,7 +155,8 @@ private async Task WaitForLlamafile(Process process)
}

process.Kill();
throw new CellmException("Timeout waiting for Llamafile server to be ready");

throw new CellmException("Timeout waiting for Llamafile server to start");
}
}
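
The `StartProcess` hunk above drops the fixed `Thread.Sleep(5000)` in favour of the `WaitForLlamafile` readiness check whose tail appears in the last hunk. The repository's implementation isn't fully visible in this diff, but a readiness poll of this kind generally looks like the sketch below (a generic illustration under assumed values: the health endpoint, port, and timeout are placeholders, and it throws a standard `TimeoutException` rather than the project's `CellmException`):

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

internal static class ReadinessProbe
{
    // Polls a health endpoint until the server answers or a deadline passes.
    // If the deadline is hit, the server process is killed and an error is thrown.
    public static async Task WaitForServer(Process process, HttpClient httpClient)
    {
        var deadline = DateTime.UtcNow + TimeSpan.FromSeconds(30);   // assumed timeout

        while (DateTime.UtcNow < deadline)
        {
            try
            {
                // "/health" is a placeholder; llama.cpp-style servers expose such an endpoint.
                var response = await httpClient.GetAsync("http://localhost:8080/health");

                if (response.IsSuccessStatusCode)
                {
                    return;                                          // server is ready
                }
            }
            catch (HttpRequestException)
            {
                // Server is not accepting connections yet; fall through and retry.
            }

            await Task.Delay(TimeSpan.FromMilliseconds(500));        // back off briefly
        }

        process.Kill();
        throw new TimeoutException("Timeout waiting for server to start");
    }
}
```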

10 changes: 10 additions & 0 deletions src/Cellm/appsettings.Local.Mistral.json
@@ -0,0 +1,10 @@
{
"OpenAiConfiguration": {
"BaseAddress": "https://api.mistral.ai",
"DefaultModel": "mistral-small-latest",
"ApiKey": "YOUR_MISTRAL_APIKEY"
},
"CellmConfiguration": {
"DefaultModelProvider": "OpenAI"
}
}
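
This file appears to work because Mistral's API is OpenAI-compatible, so pointing the existing OpenAI provider at `https://api.mistral.ai` is enough; no new client is needed. For context, a section like this is typically bound with the options pattern so it can be injected as `IOptions<OpenAiConfiguration>`, which is how `LlamafileClient` above receives it. The sketch below infers the class shape from the JSON keys; the repository's actual `OpenAiConfiguration` and registration code may differ:

```csharp
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Assumed shape: property names mirror the JSON keys above.
public class OpenAiConfiguration
{
    public Uri BaseAddress { get; set; } = new("https://api.openai.com");
    public string DefaultModel { get; set; } = string.Empty;
    public string ApiKey { get; set; } = string.Empty;
}

public static class OpenAiConfigurationSetup
{
    public static IServiceCollection AddOpenAiConfiguration(
        this IServiceCollection services, IConfiguration configuration)
    {
        // Binds the "OpenAiConfiguration" section so consumers can request
        // IOptions<OpenAiConfiguration> through constructor injection.
        services.Configure<OpenAiConfiguration>(
            configuration.GetSection(nameof(OpenAiConfiguration)));

        return services;
    }
}
```

Switching providers then comes down to which `appsettings.Local.*.json` overlay is loaded, which is presumably why this file only overrides `BaseAddress`, `DefaultModel`, `ApiKey`, and the default provider.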