Merged

Commits (37)
- `a16e5d9` draft provider sdk (fogfish, Jul 22, 2025)
- `96d0c88` core data types with unified content (fogfish, Jul 25, 2025)
- `c2785ec` remove cached reply if decoding fails (fogfish, Jul 25, 2025)
- `8daa020` enable testing for bedrock/llm/llama (fogfish, Jul 25, 2025)
- `4781e8e` remove unused example (fogfish, Jul 25, 2025)
- `c0aa661` update (fogfish, Jul 25, 2025)
- `4973579` set go1.23 as default (fogfish, Jul 25, 2025)
- `b1f4eb1` fix go requirement (fogfish, Jul 25, 2025)
- `d6d18d4` fix go version (fogfish, Jul 25, 2025)
- `a2f74cc` unit test for titan encoder (fogfish, Jul 25, 2025)
- `9b27c2d` add decoder unit test for titan (fogfish, Jul 25, 2025)
- `260067d` unit tests for provider abstraction (fogfish, Jul 25, 2025)
- `cb51cc3` update license (fogfish, Jul 25, 2025)
- `67143d4` update license (fogfish, Jul 25, 2025)
- `097917c` invoke api for nova (fogfish, Jul 25, 2025)
- `c18c458` add support for converse api (fogfish, Jul 25, 2025)
- `7d34c66` unit tests for decoder (fogfish, Jul 25, 2025)
- `55d056e` update license (fogfish, Jul 25, 2025)
- `96a18bd` openai text embeddings models (fogfish, Jul 26, 2025)
- `e614ff8` enable openai provider in testing (fogfish, Jul 26, 2025)
- `648c70f` add unit test for decoder (fogfish, Jul 26, 2025)
- `0ed2a61` rename `model` -> `foundation` (fogfish, Jul 26, 2025)
- `ea4cc1c` add gpt foundation model (fogfish, Jul 26, 2025)
- `ebc964f` add decoder unit test for gpt family (fogfish, Jul 26, 2025)
- `f35e0b8` update license (fogfish, Jul 26, 2025)
- `ebb7166` enable dimension as mandatory arg for embedding (fogfish, Jul 26, 2025)
- `4bd98e8` enable autoconfig (fogfish, Jul 27, 2025)
- `44d04ed` enable autoconfig (fogfish, Jul 27, 2025)
- `3b10545` update license info (fogfish, Jul 27, 2025)
- `1dff582` setup only defined inference params (fogfish, Jul 27, 2025)
- `eefea7e` remove unused functions (fogfish, Jul 27, 2025)
- `9ee4a39` set version v0.10.0 for chatter library (fogfish, Jul 27, 2025)
- `51f5cb3` enable bedrock batch inference and migrate aws cdk (fogfish, Jul 27, 2025)
- `8cd8960` update autoconfig deps (fogfish, Jul 27, 2025)
- `6d157a0` remove outdated implementation (fogfish, Jul 27, 2025)
- `d506b02` update README with link to bedrock documentation (fogfish, Jul 27, 2025)
- `9492e31` disable ci/cd for legacy features (fogfish, Jul 27, 2025)
2 changes: 1 addition & 1 deletion .github/workflows/build.yml

@@ -14,7 +14,7 @@ jobs:
    runs-on: ubuntu-latest
    strategy:
      matrix:
-       module: [".", "llm/autoconfig", "llm/bedrock", "llm/bedrockbatch", "llm/converse", "llm/openai"]
+       module: ["."]

    steps:
      - uses: actions/setup-go@v5
2 changes: 1 addition & 1 deletion .github/workflows/check.yml

@@ -14,7 +14,7 @@ jobs:
    runs-on: ubuntu-latest
    strategy:
      matrix:
-       module: [".", "llm/autoconfig", "llm/bedrock", "llm/bedrockbatch", "llm/converse", "llm/openai"]
+       module: [".", "provider/autoconfig", "provider/bedrock", "provider/openai"]

    steps:
256 changes: 190 additions & 66 deletions README.md

@@ -1,6 +1,6 @@
<p align="center">
<h3 align="center">chatter</h3>
<p align="center"><strong>adapter over LLMs interface</strong></p>
<p align="center"><strong>a universal toolkit for working with LLMs</strong></p>

<p align="center">
<!-- Build Status -->
@@ -32,19 +32,19 @@
<img src="https://img.shields.io/badge/doc-chatter-007d9c?logo=go&logoColor=white&style=platic" />
</a></td>
<td>
Core types and helper utilities
</td></tr>
<!-- Provider bedrock -->
<tr><td><a href=".">
<img src="https://img.shields.io/github/v/tag/kshard/chatter?label=version&filter=llm/bedrock/*"/>
<img src="https://img.shields.io/github/v/tag/kshard/chatter?label=version&filter=provider/bedrock/*"/>
</a></td>
<td><a href="https://pkg.go.dev/github.com/kshard/chatter/llm/bedrock">
<td><a href="https://pkg.go.dev/github.com/kshard/chatter/provider/bedrock">
<img src="https://img.shields.io/badge/doc-bedrock-007d9c?logo=go&logoColor=white&style=platic" />
</a></td>
<td>
AWS Bedrock AI models
</td></tr>
<!-- Module bedrock batch
<tr><td><a href=".">
<img src="https://img.shields.io/github/v/tag/kshard/chatter?label=version&filter=llm/bedrockbatch/*"/>
</a></td>
Expand All @@ -54,102 +54,151 @@
<td>
AWS Bedrock Batch Inference
</td></tr>
-->
<!-- Provider openai -->
<tr><td><a href=".">
<img src="https://img.shields.io/github/v/tag/kshard/chatter?label=version&filter=llm/openai/*"/>
<img src="https://img.shields.io/github/v/tag/kshard/chatter?label=version&filter=provider/openai/*"/>
</a></td>
<td><a href="https://pkg.go.dev/github.com/kshard/chatter/llm/openai">
<td><a href="https://pkg.go.dev/github.com/kshard/chatter/provider/openai">
<img src="https://img.shields.io/badge/doc-openai-007d9c?logo=go&logoColor=white&style=platic" />
</a></td>
<td>
OpenAI models (+ compatible API)
</td></tr>
</tbody>
</table>
</p>

---

> Abstract LLMs. Switch backends. Stay consistent.

Large Language Model (LLM) APIs are clunky and tightly coupled to a specific provider. There is no consistent, extensible interface that works across all models.

`chatter` is an adapter that integrates with popular Large Language Models (LLMs) and hosting solutions under the umbrella of a unified interface. Portability is the primary problem addressed by this library.

## Quick Start

```go
package main

import (
	"context"
	"fmt"

	"github.com/kshard/chatter"
	// Note: the options package path below is assumed from the repository
	// layout; it provides the openai.WithSecret helper used here.
	"github.com/kshard/chatter/provider/openai"
	"github.com/kshard/chatter/provider/openai/llm/gpt"
)

func main() {
	llm := gpt.Must(gpt.New("gpt-4o",
		openai.WithSecret("sk-your-access-key"),
	))

	reply, err := llm.Prompt(context.Background(),
		[]chatter.Message{
			chatter.Text("Enumerate rainbow colors."),
		},
	)
	if err != nil {
		panic(err)
	}

	fmt.Printf("==> (%+v)\n%s\n", llm.Usage(), reply)
}
```

## What is the library about?

From an application perspective, Large Language Models (LLMs) are non-deterministic functions `ƒ: Prompt ⟼ Output`, which generate output based on an input prompt. Originally, LLMs were created as human-centric assistants for working with unstructured text.

Recently, they have evolved to support rich content (images, video, audio) and, more importantly, to enable machine-to-machine interaction — for example in RAG pipelines and agent systems. These use cases require more structured formats for both prompts and outputs.

However, the fast-moving AI landscape has created a fragmented ecosystem: OpenAI, Anthropic, Meta, Google, and others each provide models with different, often incompatible, APIs and formats. This makes integrating with and switching between providers difficult in real applications.

"github.com/kshard/chatter"
"github.com/kshard/chatter/llm/bedrock"
)
This library (`chatter`) introduces a high-level abstraction over non-deterministic LLM functions to standardize access to popular models in Go (Golang). It allows developers to switch providers, run A/B testing, or refactor pipelines with minimal changes to application code.
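
For instance, here is a sketch of runtime provider switching; it assumes both `gpt.New` and `converse.New` return values satisfying `chatter.Chatter`, consistent with the examples elsewhere in this README:

```go
import (
	"fmt"

	"github.com/kshard/chatter"
	"github.com/kshard/chatter/provider/bedrock/llm/converse"
	"github.com/kshard/chatter/provider/openai"
	"github.com/kshard/chatter/provider/openai/llm/gpt"
)

// newLLM picks a backend at runtime; application code depends only on the
// chatter.Chatter interface, never on a concrete provider.
func newLLM(provider string) (chatter.Chatter, error) {
	switch provider {
	case "openai":
		return gpt.New("gpt-4o", openai.WithSecret("sk-your-access-key"))
	case "bedrock":
		return converse.New("us.anthropic.claude-3-7-sonnet-20250219-v1:0")
	default:
		return nil, fmt.Errorf("unknown provider: %s", provider)
	}
}
```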

The library abstracts LLMs as an I/O protocol, with a prompt Encoder and a reply Decoder. This way, implementing a new provider becomes a mechanical task — just following the specification.

```mermaid
%%{init: {'theme':'neutral'}}%%
graph LR
A[chatter.Prompt]
B[chatter.Reply]
A --> E
D --> B
subgraph adapter
E[Encoder]
D[Decoder]
G((LLM I/O))
E --> G
G --> D
end
```
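
In Go terms, the adapter in the diagram reduces to a pair of codecs. The interfaces below are an illustrative sketch of that contract, not the library's actual SPI:

```go
// Encoder renders portable messages into the provider's wire format.
type Encoder interface {
	Encode([]chatter.Message, ...chatter.Opt) ([]byte, error)
}

// Decoder parses the provider's response back into a portable reply.
type Decoder interface {
	Decode([]byte) (*chatter.Reply, error)
}
```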

To be fair, the AWS Bedrock Converse API is the first semi-standardized effort to unify access to multiple LLMs. But it works only inside the AWS ecosystem; you cannot easily combine models from OpenAI, Google, and others.

This library provides that missing flexibility.

Please note, this library is not about Agents, RAGs, or similar high-level concepts. It is a pure low-level interface to use LLMs as non-deterministic functions.


## Getting Started

The latest version of the library is available at the `main` branch of this repository. All development, including new features and bug fixes, takes place on the `main` branch using forking and pull requests as described in the contribution guidelines. The stable version is available via Golang modules.

The library is organized into multiple submodules for better dependency control and a natural development process.

Core data types are defined in the root module: `github.com/kshard/chatter`. This module defines how prompts are structured and how results are passed back to the application.
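
For example, a conversation is expressed with these core types alone (a short sketch; `chatter.Text` appears in the Quick Start above):

```go
// The conversation history is a plain slice of messages;
// chatter.Text builds a text content block.
seq := []chatter.Message{
	chatter.Text("You are a haiku generator."),
	chatter.Text("Write a haiku about portability."),
}
```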

All LLM adapters follow the structure:
```go
import "github.com/kshard/chatter/provider/{provider}/{capability}/{family}"
```

For the list of supported providers, see the [provider](./provider/) folder.

Each provider module encapsulates access to various **capabilities** — distinct categories of AI services such as:
* `embedding` — vector embedding service, which transforms text into numerical representations for search, clustering, or similarity tasks.
* `foundation` — interface for general-purpose large language model capabilities, including chat and text completion.

Within each capability, implementations are further organized by **model families**, which group related models that share API characteristics.

Thus, the overall module hierarchy reflects this layered design:

```
provider/ # AI service provider (e.g., openai, mistral)
├─ capability/ # Service category (e.g., embedding, llm)
│ └─ family/ # Model family (e.g., gpt, claude, text2vec)
```

For example:
* `github.com/kshard/chatter/provider/bedrock/llm/converse` implements the [AWS Bedrock Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html)
* `github.com/kshard/chatter/provider/openai/llm/gpt` implements [OpenAI Chat Completion](https://platform.openai.com/docs/api-reference/chat) for GPT models

In addition to model adapters, the library includes composable utilities (in `github.com/kshard/chatter/aio`) for common tasks like caching, rate limiting, and more, helping to build efficient and scalable AI applications.


### LLM I/O

`Chatter` is the main interface for interacting with all supported models. It takes a list of messages representing the conversation history and returns the LLM's reply.

Both `Message` and `Reply` are built from `Content` blocks — this allows flexible structure for text, images, or other modalities.

```go
type Chatter interface {
	Usage() Usage
	Prompt(context.Context, []chatter.Message, ...chatter.Opt) (*chatter.Reply, error)
}
```
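
Because every adapter satisfies this small interface, cross-cutting utilities such as the ones in `aio` can be written as plain decorators. The following is a minimal, illustrative caching decorator, not the actual `aio` API:

```go
import (
	"context"
	"fmt"
	"strings"

	"github.com/kshard/chatter"
)

// Cache wraps any Chatter and memoizes replies per rendered conversation.
type Cache struct {
	chatter.Chatter // delegates Usage() and everything else to the backend
	memo map[string]*chatter.Reply
}

func NewCache(llm chatter.Chatter) *Cache {
	return &Cache{Chatter: llm, memo: make(map[string]*chatter.Reply)}
}

// Prompt returns a cached reply when the same conversation was seen before.
func (c *Cache) Prompt(ctx context.Context, seq []chatter.Message, opts ...chatter.Opt) (*chatter.Reply, error) {
	key := cacheKey(seq)
	if reply, ok := c.memo[key]; ok {
		return reply, nil
	}
	reply, err := c.Chatter.Prompt(ctx, seq, opts...)
	if err != nil {
		return nil, err
	}
	c.memo[key] = reply
	return reply, nil
}

// cacheKey renders messages with %v to avoid assuming that chatter.Message
// implements fmt.Stringer.
func cacheKey(seq []chatter.Message) string {
	var sb strings.Builder
	for _, m := range seq {
		fmt.Fprintf(&sb, "%v|", m)
	}
	return sb.String()
}
```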

### Prompt

> A good prompt has 4 key elements: Role, Task, Requirements, Instructions.
["Are You AI Ready? Investigating AI Tools in Higher Education – Student Guide"](https://ucddublin.pressbooks.pub/StudentResourcev1_od/chapter/the-structure-of-a-good-prompt/)

In the research community, there has been an attempt to define a [standardized taxonomy of prompts](https://aclanthology.org/2023.findings-emnlp.946.pdf) for large language models (LLMs) to solve complex tasks. It encourages the community to adopt the TELeR taxonomy to achieve meaningful comparisons among LLMs, facilitating more accurate conclusions and helping the community reach consensus on state-of-the-art LLM performance more efficiently.

Package `chatter` provides utilities for creating and managing structured prompts for language models.

Expand Down Expand Up @@ -208,6 +257,77 @@ prompt.WithInput(
)
```

### Reply

TBD.


## Advanced Usage


### Autoconfig: Model Initialization from External Configuration

This library includes an `autoconfig` provider that offers a simple interface for creating models from external configuration. It is particularly useful when developing command-line applications or scripts where hardcoding model details is undesirable.

By default, `autoconfig` reads configuration from your `~/.netrc` file, allowing you to specify the provider, model, and any provider- or model-specific options in a centralized, reusable way.

```go
import (
"github.com/kshard/chatter/provider/autoconfig"
)

model, err := autoconfig.FromNetRC("myservice")
```
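
Assuming the returned model satisfies the `Chatter` interface described above, the rest of the code is provider-agnostic:

```go
reply, err := model.Prompt(context.Background(),
	[]chatter.Message{chatter.Text("Enumerate rainbow colors.")},
)
```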

#### `.netrc` Format

Your `~/.netrc` file must include at least the `provider` and `model` fields under a named service entry. For example:

```
machine myservice
provider provider:bedrock/foundation/converse
model us.anthropic.claude-3-7-sonnet-20250219-v1:0
```

* `provider` specifies the full path to the provider's capability (e.g., `provider:bedrock/foundation/converse`). The path resembles the import path of the providers implemented by this library.
* `model` specifies the exact model name as recognized by the provider.


Each provider and model family may support additional options. These can also be added under the same `machine` entry and will be passed into the corresponding provider implementation.

```
region // used by Bedrock providers
host // used by OpenAI providers
secret // used by OpenAI providers
timeout // used by OpenAI providers
dimensions // used by embedding families
```
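
Putting it together, a hypothetical OpenAI entry with extra options might look like this (field values are illustrative):

```
machine myservice
  provider provider:openai/llm/gpt
  model gpt-4o
  secret sk-your-access-key
  timeout 30s
```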

### LM Studio

The `openai` provider supports any service with an OpenAI-compatible API, for example LM Studio. You only need to set the model host address explicitly in the configuration.

```go
import (
	// Assumed path for the option helpers (openai.WithHost), based on the
	// repository layout.
	"github.com/kshard/chatter/provider/openai"
	"github.com/kshard/chatter/provider/openai/llm/gpt"
)

assistant, err := gpt.New("gemma-3-27b-it",
	openai.WithHost("http://localhost:1234"),
)
```

### AWS Bedrock Inference Profile

See this [explanation of using models with an inference profile](https://repost.aws/questions/QUEU82wbYVQk2oU4eNwyiong/bedrock-api-invocation-error-on-demand-throughput-isn-s-supported).

```go
import (
"github.com/kshard/chatter/provider/bedrock/llm/converse"
)

converse.New("us.anthropic.claude-3-7-sonnet-20250219-v1:0")
```

## How To Contribute

@@ -219,7 +339,7 @@ The library is [MIT](LICENSE) licensed and accepts contributions via GitHub pull
4. Push to the branch (`git push origin my-new-feature`)
5. Create new Pull Request

The build and testing process requires [Go](https://golang.org) version 1.23 or later.

**Build** and **test** the library.

@@ -229,6 +349,10 @@
cd chatter
go test ./...
```

### API documentation
* [AWS Bedrock API Params & Models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html)
* [AWS Bedrock Foundation Models](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html)

### commit message

The commit message helps us write good release notes and speeds up the review process. The message should address two questions: what changed and why. The project follows the template defined by the chapter [Contributing to a Project](http://git-scm.com/book/ch5-2.html) of the Git book.