# @nan0web/llimo.app

NaN•Web LLiMo is a laconic java•script way to build apps with the LLM Chat API.

LLiMo is a language model‑powered CLI assistant for software development and content generation. It uses the AI SDK to integrate with models from OpenAI, Cerebras, Hugging Face, and OpenRouter.

## Description

LLiMo provides a seamless way to:

- Chat with AI models via `llimo chat` (interactive or batch mode).
- Pack project files into prompts for model input.
- Unpack AI responses (markdown with file checklists) back to your filesystem.
- Simulate responses from log files in test mode, without API calls.
- Manage models with automatic selection and caching.

Core components:

- `AI` / `TestAI` — wrappers for AI providers and test simulation.
- `Chat` — manages chat history and file persistence.
- `ModelInfo` / `ModelProvider` — handle model metadata and selection.
- CLI tools: `llimo chat`, `llimo pack`, `llimo unpack`.

Supports commands like `@bash`, `@get`, `@ls`, `@rm`, `@summary`, and `@validate` in responses.

## Installation

How to install with npm?

```bash
npm install @nan0web/llimo.app
```

How to install with pnpm?

```bash
pnpm add @nan0web/llimo.app
```

How to install with yarn?

```bash
yarn add @nan0web/llimo.app
```

Start an interactive chat with your input file.

How to start an interactive chat?

```js
import { AI, Chat, ModelInfo } from '@nan0web/llimo.app'
const ai = new AI()
const chat = new Chat({ id: "test-chat" })
// Load an existing chat or initialize a new one
chat.add({ role: "user", content: "Hello, AI!" })
const model = new ModelInfo({ id: "openai/gpt-4o", provider: "openrouter" })
// Stream the response (in real use, iterate result.textStream asynchronously)
const result = ai.streamText(model, chat.messages)
```

## Usage

### Basic Chat

How to use test mode for simulation?

```js
import { TestAI, Chat } from '@nan0web/llimo.app'
const ai = new TestAI()
const chat = new Chat({ id: "test-simulation" })
// Load simulated responses from test log files
const result = await ai.streamText("test-model", chat.messages, { cwd: ".", step: 1 })
console.info("Simulation mode using test files")
const stream = result.textStream
for await (const chunk of stream) {
	console.info(String(chunk))
}
```

### CLI Commands

LLiMo provides several CLI entry points.

How to run the main chat CLI?

```js
// Run interactively with a prompt file
import { runSpawn } from '@nan0web/test'
console.info("Running: npx llimo chat me.md")
const { code, text } = await runSpawn("npx", ["llimo", "chat", "me.md"])
// Expect code 0 for a successful chat start
```

How to pack files into a prompt?

```js
import { writeFile } from 'node:fs/promises'
import { runSpawn } from '@nan0web/test'
// Pack markdown with a file checklist; shell redirection (`> prompt.md`)
// does not work inside spawned argv, so capture stdout and write it out
console.info("Running: npx llimo pack checklist.md > prompt.md")
const { code, text } = await runSpawn("npx", ["llimo", "pack", "checklist.md"])
await writeFile("prompt.md", text)
```

How to unpack AI response markdown?

```js
import { runSpawn } from '@nan0web/test'
// Unpack files from AI output to the filesystem
console.info("Running: npx llimo unpack response.md")
await runSpawn("npx", ["llimo", "unpack", "response.md"])
// Extracts files and runs commands (@bash, etc.)
```

### Model Management

Load and select models dynamically.

How to load models from providers?

```js
import { ModelProvider } from '@nan0web/llimo.app'
const provider = new ModelProvider()
// Fetch available models
const models = await provider.getAll()
console.info("Loaded models:", models.size)
```
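The `models` CLI accepts filters like `--filter "id~gpt"`. A minimal sketch of the same idea in plain code, assuming `getAll()` returns a `Map` keyed by model id (the ids below are illustrative, not real provider data):

```javascript
// Sketch: filter a models Map by id substring, mirroring --filter "id~gpt".
// The Map contents here are made up for illustration.
const models = new Map([
	["openai/gpt-4o", { id: "openai/gpt-4o" }],
	["openai/gpt-4o-mini", { id: "openai/gpt-4o-mini" }],
	["meta-llama/llama-3-70b", { id: "meta-llama/llama-3-70b" }],
])
const matches = [...models.values()].filter(m => m.id.includes("gpt"))
console.log(matches.map(m => m.id)) // [ 'openai/gpt-4o', 'openai/gpt-4o-mini' ]
```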

## Advanced: Bash Scripts in Responses

LLiMo supports executing bash commands via `@bash` in AI responses. This allows the AI to run shell scripts directly in the project context.

### Example: Installing Dependencies

To run a simple installation script, the AI response includes a named `@bash` block:

#### [Install deps](@bash)

```bash
pnpm install
```

When unpacked, this executes `pnpm install` and logs output to the chat.

### Example: Custom Script for Testing

For more complex scripts, like running tests:

#### [Run tests](@bash)

```bash
pnpm test
echo "Tests complete. Check results above."
```

Output (stdout/stderr) is captured and saved in the chat history.

### Bash Script Guidelines

- Use standard bash syntax inside ```` ```bash ... ``` ```` fences.
- Scripts run in the project root (cwd).
- Multi-line scripts are supported.
- Always include error handling if needed, e.g., `set -e` to stop on errors.
- For safety, avoid destructive commands; the AI should confirm via chat if needed.

Note: scripts are executed after unpacking, and their output is appended to the prompt for context in the next iteration.

Bash scripts in responses:

```js
// Example in code: simulate unpacking a @bash response
import { runSpawn } from '@nan0web/test'
console.info("Running: npx llimo unpack response-with-bash.md")
await runSpawn("npx", ["llimo", "unpack", "response-with-bash.md"])
// Expected: extracts files, runs bash, captures output
```

## Single Binary Usage

All tools are now unified under the `llimo` binary for simpler usage:

```bash
npx llimo chat me.md                     # Interactive chat
npx llimo models --filter "id~gpt"       # List models
npx llimo pack checklist.md > prompt.md  # Pack files
npx llimo unpack response.md             # Unpack response
```

This replaces separate scripts like `llimo-chat` and `llimo-models`. Run `npx llimo --help` for full options.

## API

### AI & TestAI

Main wrappers for AI interactions.

- `streamText(model, messages)` — streams from the provider.
- `generateText(modelId, messages)` — non-streaming generation.
- `TestAI` simulates from log files for offline testing.
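The object returned by `streamText` exposes a `textStream` async iterable (as in the test-mode example above). A minimal sketch of the consumption pattern, using a mocked stream instead of a real provider call:

```javascript
// Mocked async iterable standing in for result.textStream from streamText()
async function* mockChunks() {
	yield "Hello, "
	yield "world!"
}

// Collect all streamed chunks into a single string
async function collect(textStream) {
	let text = ""
	for await (const chunk of textStream) {
		text += chunk
	}
	return text
}

collect(mockChunks()).then(text => console.log(text)) // "Hello, world!"
```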

### Chat

- `add(message)` — adds to the message history.
- `load()` / `save()` — persist and load chat state.
- `getTokensCount()` — estimates tokens in messages.
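The tokenizer behind `getTokensCount()` is internal to the library; a common rough heuristic is about four characters per token. A sketch of such an estimate (an assumption for illustration, not the library's actual implementation):

```javascript
// Rough token estimate: ~4 characters per token (heuristic, not the real tokenizer)
function estimateTokens(messages) {
	const chars = messages.reduce((sum, m) => sum + m.content.length, 0)
	return Math.ceil(chars / 4)
}

const messages = [
	{ role: "user", content: "Hello, AI!" },       // 10 chars
	{ role: "assistant", content: "Hello, human" } // 12 chars
]
console.log(estimateTokens(messages)) // 6 (22 chars / 4, rounded up)
```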

### ModelProvider

- `getAll()` — loads models from providers (cached).

### ModelInfo

- Properties: `id`, `provider`, `context_length`, `pricing`, etc.

### Usage & Pricing

- `Usage` tracks token consumption.
- `Pricing.calc(usage)` — computes cost.
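A cost calculation like `Pricing.calc(usage)` boils down to multiplying token counts by per-token rates. A self-contained sketch of that arithmetic (the function name, field names, and rates below are illustrative assumptions, not the library's API):

```javascript
// Sketch: cost = prompt tokens * prompt rate + completion tokens * completion rate.
// Rates are expressed in USD per million tokens and are made up for illustration.
function calcCost(usage, pricing) {
	return (usage.promptTokens * pricing.promptPerM
		+ usage.completionTokens * pricing.completionPerM) / 1e6
}

const usage = { promptTokens: 1200, completionTokens: 300 }
const pricing = { promptPerM: 5, completionPerM: 15 } // $5 / $15 per 1M tokens (illustrative)
console.log(calcCost(usage, pricing)) // 0.0105
```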

All exported classes should pass a basic test to ensure the API examples work:

```js
import { AI, Chat, ModelProvider, ModelInfo, Usage, Architecture, Pricing } from '@nan0web/llimo.app'
```

## Contributing

How to contribute? - check here

## License

How to license? ISC - see the LICENSE file.
