⚠️ ALPHA RELEASE - This project is in active development. APIs may change and features are still being stabilized. Use in production at your own risk. We welcome feedback and contributions!
Go framework for building intelligent multi-agent AI systems
The most productive way to build AI agents in Go. AgenticGoKit provides a unified, streaming-first API for creating intelligent agents with built-in workflow orchestration, tool integration, and memory management. Start with simple single agents and scale to complex multi-agent workflows.
- vnext APIs: Modern, streaming-first agent interface with comprehensive error handling
 - Real-time Streaming: Watch your agents think and respond in real-time
 - Multi-Agent Workflows: Sequential, parallel, and DAG orchestration patterns
 - High Performance: Compiled Go binaries with minimal overhead
 - Rich Integrations: Memory providers, tool discovery, MCP protocol support
 - Zero Dependencies: Works with OpenAI, Ollama, Azure OpenAI out of the box
 
Start building immediately with the modern vnext API:
package main
import (
    "context"
    "fmt"
    "log"
    
    "github.com/kunalkushwaha/agenticgokit/core/vnext"
)
func main() {
    // Create a chat agent with Ollama
    agent, err := vnext.NewChatAgent("assistant", 
        vnext.WithLLM("ollama", "gemma3:1b", "http://localhost:11434"),
    )
    if err != nil {
        log.Fatal(err)
    }
    // Basic execution
    result, err := agent.Run(context.Background(), "Explain Go channels in 50 words")
    if err != nil {
        log.Fatal(err)
    }
    
    fmt.Println("Response:", result.Content)
    fmt.Printf("Duration: %.2fs | Tokens: %d\n", result.Duration.Seconds(), result.TokensUsed)
}
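
The same agent can also stream its output as it is generated. The following is a minimal sketch built only from calls shown elsewhere in this README (agent.RunStream, stream.Chunks, stream.Wait); it assumes agent-level streams emit the same chunk types as the workflow streams shown below.

```go
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/kunalkushwaha/agenticgokit/core/vnext"
)

func main() {
    agent, err := vnext.NewChatAgent("assistant",
        vnext.WithLLM("ollama", "gemma3:1b", "http://localhost:11434"),
    )
    if err != nil {
        log.Fatal(err)
    }

    // Stream the answer instead of waiting for the full result.
    stream, err := agent.RunStream(context.Background(), "Explain Go channels in 50 words")
    if err != nil {
        log.Fatal(err)
    }

    // Print tokens as they arrive (chunk types assumed to match workflow streams).
    for chunk := range stream.Chunks() {
        if chunk.Type == vnext.ChunkTypeDelta {
            fmt.Print(chunk.Delta)
        }
    }

    // Wait returns the final, aggregated result once streaming ends.
    result, err := stream.Wait()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("\nDuration: %.2fs | Tokens: %d\n", result.Duration.Seconds(), result.TokensUsed)
}
```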

Generate complete projects with agentcli:

# Install CLI
go install github.com/kunalkushwaha/agenticgokit/cmd/agentcli@latest
# Create project with scaffolding
agentcli create my-agent --template basic
cd my-agent
# Set up environment 
export AZURE_OPENAI_API_KEY=your-key
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
# Run generated code
go run .
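
If you wire an agent to Azure OpenAI by hand instead of using the generated project, the exported endpoint can be passed straight to WithLLM. This is an illustrative sketch, not the generated code; it assumes the framework reads the API key from AZURE_OPENAI_API_KEY as configured above.

```go
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/kunalkushwaha/agenticgokit/core/vnext"
)

func main() {
    // Endpoint from the environment set up above; the API key is assumed
    // to be picked up from AZURE_OPENAI_API_KEY by the framework.
    endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")

    agent, err := vnext.NewChatAgent("assistant",
        vnext.WithLLM("azure", "gpt-4", endpoint),
    )
    if err != nil {
        log.Fatal(err)
    }

    result, err := agent.Run(context.Background(), "Say hello in one sentence")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result.Content)
}
```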

Watch your multi-agent workflows execute in real-time:

package main
import (
    "context"
    "fmt"
    "time"

    "github.com/kunalkushwaha/agenticgokit/core/vnext"
)
func main() {
    // Create specialized agents
    researcher, _ := vnext.NewResearchAgent("researcher")
    analyzer, _ := vnext.NewDataAgent("analyzer")
    
    // Build workflow
    workflow, _ := vnext.NewSequentialWorkflow(&vnext.WorkflowConfig{
        Timeout: 300 * time.Second,
    })
    workflow.AddStep(vnext.WorkflowStep{Name: "research", Agent: researcher})
    workflow.AddStep(vnext.WorkflowStep{Name: "analyze", Agent: analyzer})
    
    // Execute with streaming
    stream, _ := workflow.RunStream(context.Background(), "Research Go best practices")
    
    for chunk := range stream.Chunks() {
        if chunk.Type == vnext.ChunkTypeDelta {
            fmt.Print(chunk.Delta) // Real-time token streaming!
        }
    }
    
    result, _ := stream.Wait()
    fmt.Printf("\nComplete: %s\n", result.Content)
}

- Unified Agent Interface: Single API for all agent operations
 - Real-time Streaming: Watch tokens generate in real-time
 - Multi-Agent Workflows: Sequential, parallel, DAG orchestration
 - Memory & RAG: Built-in persistence and retrieval
 - Tool Integration: MCP protocol, function calling
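
The memory and tool options above can be combined on a single agent. Below is a minimal sketch that uses only options already shown in this README (WithMemory(EnableRAG()), WithTools); exactly how the local memory provider and the named tools behave is not covered by this sketch.

```go
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/kunalkushwaha/agenticgokit/core/vnext"
)

func main() {
    // Chat agent with RAG-enabled memory and two tools, using the option
    // names from the configuration examples further down.
    agent, err := vnext.NewChatAgent("helper",
        vnext.WithLLM("ollama", "gemma3:1b", "http://localhost:11434"),
        vnext.WithMemory(vnext.EnableRAG()),
        vnext.WithTools("web_search", "calculator"),
    )
    if err != nil {
        log.Fatal(err)
    }

    result, err := agent.Run(context.Background(), "What is 15% of 2048?")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result.Content)
}
```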
 
- CLI Scaffolding: Generate complete projects with agentcli create
 - Multiple Templates: Chat, research, RAG, workflow patterns
 - Configuration: TOML-based with environment overrides
 - Visualization: Auto-generated Mermaid workflow diagrams
 
Start coding immediately with clean, modern APIs:
import "github.com/kunalkushwaha/agenticgokit/core/vnext"
// Single agent
agent, _ := vnext.NewChatAgent("bot")
result, _ := agent.Run(ctx, "Hello world")
// Streaming agent  
stream, _ := agent.RunStream(ctx, "Write a story")
for chunk := range stream.Chunks() { /* real-time output */ }
// Multi-agent workflow
workflow, _ := vnext.NewSequentialWorkflow(config)
stream, _ := workflow.RunStream(ctx, input)

Generate complete projects with configuration:
my-agent/                 # Generated by agentcli create
├── main.go              # Entry point with vnext APIs
├── agentflow.toml       # Configuration 
├── go.mod               # Dependencies
├── agents/              # Custom agent implementations
└── docs/                # Generated diagrams
# agentflow.toml - Configuration for generated projects
[orchestration]
mode = "sequential" 
timeout_seconds = 300
[llm]
provider = "ollama"
model = "gemma3:1b"
[memory]
provider = "local"
enable_rag = true

// Basic chat agent
agent, _ := vnext.NewChatAgent("helper")
// With custom configuration  
agent, _ := vnext.NewChatAgent("helper",
    vnext.WithLLM("ollama", "gemma3:1b", "http://localhost:11434"),
    vnext.WithMemory(vnext.EnableRAG()),
    vnext.WithTools("web_search", "calculator"),
)
// Workflow orchestration
workflow, _ := vnext.NewParallelWorkflow(config)
workflow.AddStep(vnext.WorkflowStep{Name: "research", Agent: researchAgent})
workflow.AddStep(vnext.WorkflowStep{Name: "fact-check", Agent: factChecker}) agentcli create my-bot --template basic           # Simple chat agent

agentcli create my-bot --template basic           # Simple chat agent
agentcli create research-team --template research  # Multi-agent research
agentcli create kb-system --template rag-system   # Knowledge base + RAG
agentcli create workflow --template chat-system   # Conversational workflows

// Works with any LLM provider
vnext.WithLLM("openai", "gpt-4", "")                    // OpenAI
vnext.WithLLM("azure", "gpt-4", "https://your.azure")  // Azure OpenAI  
vnext.WithLLM("ollama", "gemma3:1b", "http://localhost:11434") // Local OllamaWorking Examples (examples/)

Working Examples (examples/):
- vnext Streaming Workflow - Real-time multi-agent workflows
 - Simple Agent - Basic agent setup
 - Multi-Agent Collaboration - Team coordination
 - RAG Knowledge Base - Memory & retrieval
 - MCP Tool Integration - Dynamic tool discovery
 
- vnext API Reference - Complete API documentation
 - Workflow Streaming Guide - Real-time execution
 - Migration Guide - Upgrade from legacy APIs
 - Performance Benchmarks - Overhead analysis
 
# Clone and build
git clone https://github.com/kunalkushwaha/agenticgokit.git
cd agenticgokit
make build
# Run tests
make test
# Run examples
cd examples/01-simple-agent
go run .

- Website: www.agenticgokit.com
 - Documentation: docs.agenticgokit.com
 - Examples: examples/
 - Discussions: GitHub Discussions
 - Issues: GitHub Issues
 
We welcome contributions! See docs/contributors/ContributorGuide.md for getting started.
Apache 2.0 - see LICENSE for details.