AI Summator

This is a CLI application written in Go that uses a local LLM (phi3 via Ollama) to sum two numbers. It demonstrates how to integrate Go with local LLMs using LangChain for Go (langchaingo).

Prerequisites

  1. Go: Ensure you have Go installed (version 1.25.3 or later recommended).
  2. Ollama: You need to have Ollama installed and running.
  3. Phi3 Model: GitHub Codespaces have limited resources, and we found this model to be a good compromise between accuracy and resource needs. Pull the required model:
    ollama pull phi3
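
After pulling, you can confirm the model is available locally with Ollama's built-in listing command:

ollama list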

Installation

  1. Clone the repository (if applicable) or navigate to the project directory.
  2. Install dependencies:
    go mod tidy
  3. Build the application:
    go build -o ai-summator main.go
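
During development you can also run the CLI directly, without building a binary first:

go run main.go 5 3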

Usage

Run the built binary with two numeric arguments:

./ai-summator 5 3

Example output:

Result: 8.000000

Floating-point numbers are also supported:

./ai-summator 1.5 2.7
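
Expected output, assuming the model computes the sum correctly:

Result: 4.200000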

Testing

The project includes both unit tests and integration tests.

To run all tests:

go test -v ./...

Note: The integration tests require Ollama to be running and the phi3 model to be available. If Ollama is not reachable, the integration tests will fail.
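
For illustration, here is a minimal sketch of how a unit test might mock the model. The names (completer, sumWith), the prompt wording, and the injection seam are assumptions made for the sketch, not the repository's actual code:

package summator

import (
	"context"
	"fmt"
	"strconv"
	"strings"
	"testing"
)

// completer is an assumed seam: any function that maps a prompt to a reply.
type completer func(ctx context.Context, prompt string) (string, error)

// sumWith is a hypothetical variant of the summator that takes the model
// as a parameter, which is what makes it testable without Ollama.
func sumWith(ctx context.Context, c completer, a, b float64) (float64, error) {
	prompt := fmt.Sprintf("What is %f + %f? Reply with only the result.", a, b)
	reply, err := c(ctx, prompt)
	if err != nil {
		return 0, err
	}
	return strconv.ParseFloat(strings.TrimSpace(reply), 64)
}

func TestSumWithMockedLLM(t *testing.T) {
	// The mock returns a canned reply; no server is contacted.
	mock := func(context.Context, string) (string, error) {
		return " 8.0\n", nil
	}
	got, err := sumWith(context.Background(), mock, 5, 3)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if got != 8 {
		t.Errorf("got %v, want 8", got)
	}
}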

Project Structure

  • main.go: Entry point for the CLI.
  • summator/: Contains the core logic and tests.
    • summator.go: Implementation of the summator using langchaingo (see the sketch after this list).
    • summator_test.go: Unit tests with mocked LLM.
    • integration_test.go: Integration tests against a real Ollama instance.
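
As a rough sketch of the general shape of summator.go, the core call into langchaingo might look like the following. The function name, prompt wording, and error handling are illustrative assumptions; ollama.New, ollama.WithModel, and llms.GenerateFromSinglePrompt are real langchaingo APIs:

package summator

import (
	"context"
	"fmt"
	"strconv"
	"strings"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

// Sum asks the local phi3 model to add two numbers and parses its reply.
func Sum(ctx context.Context, a, b float64) (float64, error) {
	// Connect to the locally running Ollama server, selecting the phi3 model.
	llm, err := ollama.New(ollama.WithModel("phi3"))
	if err != nil {
		return 0, fmt.Errorf("connecting to Ollama: %w", err)
	}

	prompt := fmt.Sprintf("What is %f + %f? Reply with only the numeric result.", a, b)
	reply, err := llms.GenerateFromSinglePrompt(ctx, llm, prompt)
	if err != nil {
		return 0, fmt.Errorf("generating completion: %w", err)
	}

	// Models sometimes add whitespace or stray words; trim before parsing.
	return strconv.ParseFloat(strings.TrimSpace(reply), 64)
}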

DevContainer / GitHub Codespaces

This project includes a DevContainer configuration. You can open this project in GitHub Codespaces or VS Code with the Dev Containers extension.

The DevContainer is configured to:

  1. Install Go.
  2. Install Ollama.
  3. Automatically start the Ollama server.
  4. Pull the phi3 model during the creation phase.
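
Once the container is ready, you can check from the terminal that the Ollama server is reachable on its default port (11434) and that phi3 was pulled:

curl http://localhost:11434/api/tags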

Note: Running LLMs in a cloud environment (like standard Codespaces) might be slow due to lack of GPU acceleration.
