**BETA RELEASE** - The v1beta API is stable and recommended for all new projects. While still in beta, the core APIs work well and are ready for testing. We continue to refine features and welcome feedback and contributions!
API Versioning Plan:

- Current (v0.x): the `v1beta` package is the recommended API (formerly `vnext`)
- v1.0 Release: `v1beta` will become the primary `v1` package
- Legacy APIs: both the `core` and `core/vnext` packages will be removed in v1.0
Robust Go framework for building intelligent multi-agent AI systems
The most productive way to build AI agents in Go. AgenticGoKit provides a unified, streaming-first API for creating intelligent agents with built-in workflow orchestration, tool integration, and memory management. Start with simple single agents and scale to complex multi-agent workflows.
- v1beta APIs: Modern, streaming-first agent interface with comprehensive error handling
- Multimodal Support: Native support for images, audio, and video inputs alongside text
- Real-time Streaming: Watch your agents think and respond in real-time
- Multi-Agent Workflows: Sequential, parallel, DAG, and loop orchestration patterns
- Multiple LLM Providers: Seamlessly switch between OpenAI, Ollama, Azure OpenAI, HuggingFace, and more
- High Performance: Compiled Go binaries with minimal overhead
- Batteries Included: Built-in memory and RAG by default (zero config needed, swappable with pgvector/custom)
- Rich Integrations: Memory providers, tool discovery, MCP protocol support
- Active Development: Beta status with stable core APIs and ongoing improvements
Start building immediately with the modern v1beta API:
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/agenticgokit/agenticgokit/v1beta"
)

func main() {
	// Create a chat agent with Ollama
	agent, err := v1beta.NewBuilder("ChatAgent").
		WithConfig(&v1beta.Config{
			Name:         "ChatAgent",
			SystemPrompt: "You are a helpful assistant",
			LLM: v1beta.LLMConfig{
				Provider: "ollama",
				Model:    "gemma3:1b",
				BaseURL:  "http://localhost:11434",
			},
		}).
		Build()
	if err != nil {
		log.Fatal(err)
	}

	// Basic execution
	result, err := agent.Run(context.Background(), "Explain Go channels in 50 words")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Response:", result.Content)
}
```

Note: The `agentcli` scaffolding tool is being deprecated and will be replaced by the `agk` CLI in a future release.
AgenticGoKit handles the complexities of building AI systems so you can focus on logic.
Orchestrate multiple agents using robust patterns. Pass data between agents, handle errors, and manage state automatically.
- Patterns: Sequential, Parallel, DAG, Loop.
- Example: Sequential Workflow Demo
Built from the ground up for streaming. Receive tokens and tool updates as they happen, suitable for real-time UI experiences.
- Example: Streaming Workflow
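A minimal sketch of the streaming pattern using a plain Go channel (the `streamTokens` helper is illustrative; the library's actual stream type will differ):

```go
package main

import (
	"fmt"
	"strings"
)

// streamTokens simulates a streaming LLM response by sending tokens
// on a channel as they are produced, closing it when generation ends.
func streamTokens(text string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for _, tok := range strings.Fields(text) {
			out <- tok + " "
		}
	}()
	return out
}

func main() {
	// Consume tokens as they arrive; a UI would render each one
	// immediately instead of waiting for the full response.
	for tok := range streamTokens("Go channels make streaming natural") {
		fmt.Print(tok)
	}
	fmt.Println()
}
```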
### Memory & RAG

Batteries included: agents come with working memory out of the box (chromem embedded vector DB).
- Features: Chat history preservation, semantic search, and document ingestion.
- Configurable: swap the default for `pgvector` or custom providers easily.
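The retrieval half of RAG can be sketched with a toy in-memory store and cosine similarity (the `doc` and `topK` names are illustrative, not chromem's API): documents are embedded as vectors, and the chunks most similar to the query embedding are injected into the prompt.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// doc pairs a text chunk with its embedding vector.
type doc struct {
	text string
	vec  []float64
}

func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// topK returns the k documents most similar to the query vector --
// the semantic-search step an embedded vector store performs.
func topK(docs []doc, query []float64, k int) []doc {
	sort.Slice(docs, func(i, j int) bool {
		return cosine(docs[i].vec, query) > cosine(docs[j].vec, query)
	})
	if k > len(docs) {
		k = len(docs)
	}
	return docs[:k]
}

func main() {
	docs := []doc{
		{"goroutines are lightweight threads", []float64{0.9, 0.1, 0.0}},
		{"channels pass values between goroutines", []float64{0.8, 0.3, 0.1}},
		{"paris is the capital of france", []float64{0.0, 0.1, 0.9}},
	}
	query := []float64{0.85, 0.2, 0.05} // an embedded question about concurrency
	for _, d := range topK(docs, query, 2) {
		fmt.Println(d.text)
	}
}
```

A real store uses an embedding model to produce the vectors and an index for fast lookup; the ranking idea is the same.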
### Multimodal Input
Native support for Images, Audio, and Video inputs. Works seamlessly with models like GPT-4 Vision, Gemini Pro Vision, etc.
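Multimodal providers commonly accept images either as URLs or as base64 data URLs embedded in the request; a minimal sketch of that encoding (the `toDataURL` helper is hypothetical, not part of the library):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// toDataURL wraps raw bytes as a base64 data URL, the inline image
// format many multimodal chat APIs accept.
func toDataURL(mime string, raw []byte) string {
	return fmt.Sprintf("data:%s;base64,%s", mime, base64.StdEncoding.EncodeToString(raw))
}

func main() {
	// A real image would normally be read with os.ReadFile("photo.png");
	// a few literal bytes stand in here so the example is self-contained.
	raw := []byte{0x89, 'P', 'N', 'G'}
	fmt.Println(toDataURL("image/png", raw)) // data:image/png;base64,iVBORw==
}
```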
### Tool Integration
Extend agents with tools using standard Go functions or the Model Context Protocol (MCP) for standardized tool discovery.
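The dispatch pattern behind function tools can be sketched with a plain map of Go functions (the `Tool` and `Registry` types here are illustrative; the library's tool schema is richer):

```go
package main

import (
	"fmt"
	"strings"
)

// Tool is a plain Go function the agent can call by name with a
// single string argument.
type Tool func(arg string) (string, error)

// Registry maps tool names to implementations.
type Registry map[string]Tool

// Call looks up a tool and invokes it -- what an agent runtime does
// when the model emits a tool call during generation.
func (r Registry) Call(name, arg string) (string, error) {
	tool, ok := r[name]
	if !ok {
		return "", fmt.Errorf("unknown tool %q", name)
	}
	return tool(arg)
}

func main() {
	tools := Registry{
		"upper": func(arg string) (string, error) { return strings.ToUpper(arg), nil },
		"words": func(arg string) (string, error) {
			return fmt.Sprintf("%d", len(strings.Fields(arg))), nil
		},
	}
	out, err := tools.Call("upper", "hello tools")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // HELLO TOOLS
}
```

MCP adds standardized discovery on top of this idea: tools advertise their names and schemas so agents can find them without hard-coded registration.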
AgenticGoKit works with all major LLM providers out of the box:
| Provider | Model Examples | Use Case |
|---|---|---|
| OpenAI | GPT-4, GPT-4 Vision, GPT-3.5-turbo | Production-grade conversational and multimodal AI |
| Azure OpenAI | GPT-4, GPT-3.5-turbo | Enterprise deployments with Azure |
| Ollama | Llama 3, Gemma, Mistral, Phi | Local development and privacy-focused apps |
| HuggingFace | Llama-2, Mistral, Falcon | Open-source model experimentation |
| OpenRouter | Multiple models | Access to various providers via single API |
| BentoML | Any model packaged as Bento | Self-hosted ML models with production features |
| MLFlow | Models via MLFlow AI Gateway | ML model deployment and management |
| vLLM | Llama-2, Mistral, etc. | High-throughput LLM serving with PagedAttention |
| Custom | Any OpenAI-compatible API | Bring your own provider |
- Getting Started - Build your first agent
- API Reference - Comprehensive API docs
- Memory & RAG - Deep dive into memory systems
- Story Writer Chat v2 - A complete real-time collaborative writing app
- Ollama Quickstart - Local LLM development
- MCP Integration - Using Model Context Protocol
- HuggingFace Quickstart - Using HF Inference Endpoints
- BentoML Quickstart - Self-hosted ML models
- MLFlow Gateway Demo - MLFlow AI Gateway integration
- vLLM Quickstart - High-throughput inference
- Recommended: use the `v1beta` package for all new projects
- Import Path: `github.com/agenticgokit/agenticgokit/v1beta`
- Stability: Beta - core APIs are stable and functional, suitable for testing and development
- Status: APIs may evolve based on feedback before the v1.0 release
- Note: `v1beta` is the evolution of the former `core/vnext` package
What's Changing:

- The `v1beta` package will become the primary `v1` API
- Legacy `core` and `core/vnext` packages will be removed entirely
- Clean, stable API with semantic versioning guarantees

Migration Path:

- If you're using `v1beta` or `vnext`: minimal changes (import path update only)
- If you're using `core`: migrate to `v1beta` now to prepare
- `core/vnext` users: `vnext` has been renamed to `v1beta` - update your imports

Timeline:

- v0.x (Current): `v1beta` stabilization and testing
- v1.0 (Planned): `v1beta` becomes `v1`; the `core` package is removed
The v1beta package represents our next-generation API design:
- ✅ Streaming-first architecture
- ✅ Unified builder pattern
- ✅ Better error handling
- ✅ Workflow composition
- ✅ Stable core APIs (beta status)
- ⚠️ Minor changes possible before v1.0
By using v1beta today, you're getting access to the latest features and helping shape the v1.0 release with your feedback.
- Website: www.agenticgokit.com
- Documentation: docs.agenticgokit.com
- Examples: examples/
- Discussions: GitHub Discussions
- Issues: GitHub Issues
We welcome contributions! See docs/contributors/ContributorGuide.md for getting started.
Apache 2.0 - see LICENSE for details.