Abbreviated as `lango` (Chinese name: 懒狗, "lazy dog").
Website: http://lango.rpcx.io
🔀 Forked from paulnegz/langgraphgo - Enhanced with streaming, visualization, observability, and production-ready features.
This fork aims for feature parity with the Python LangGraph library, adding support for parallel execution, persistence, advanced state management, pre-built agents, and human-in-the-loop workflows.
Real-world applications built with LangGraphGo:
Insight - An AI-powered knowledge management and insight generation platform that uses LangGraphGo to build intelligent analysis workflows, helping users extract key insights from massive amounts of information.
NoteX - An intelligent note-taking and knowledge organization tool that leverages AI for automatic categorization, tag extraction, and content association, making knowledge management more efficient.
```bash
go get github.com/smallnest/langgraphgo
```

Note: This repository uses Git submodules for the showcases directory. When cloning, use one of the following methods:

```bash
# Method 1: Clone with submodules
git clone --recurse-submodules https://github.com/smallnest/langgraphgo

# Method 2: Clone first, then initialize submodules
git clone https://github.com/smallnest/langgraphgo
cd langgraphgo
git submodule update --init --recursive
```
Core Runtime:
- Parallel Execution: Concurrent node execution (fan-out) with thread-safe state merging.
- Runtime Configuration: Propagate callbacks, tags, and metadata via `RunnableConfig`.
- Generic Types: Type-safe state management with generic `StateGraph` implementations.
- LangChain Compatible: Works seamlessly with `langchaingo`.
Persistence & Reliability:
- Checkpointers: Redis, Postgres, SQLite, and File implementations for durable state.
- File Checkpointing: Lightweight file-based checkpointing without external dependencies.
- State Recovery: Pause and resume execution from checkpoints.
Advanced Capabilities:
- State Schema: Granular state updates with custom reducers (e.g., `AppendReducer`).
- Smart Messages: Intelligent message merging with ID-based upserts (`AddMessages`).
- Command API: Dynamic control flow and state updates directly from nodes.
- Ephemeral Channels: Temporary state values that clear automatically after each step.
- Subgraphs: Compose complex agents by nesting graphs within graphs.
- Enhanced Streaming: Real-time event streaming with multiple modes (`updates`, `values`, `messages`).
- Pre-built Agents: Ready-to-use `ReAct`, `CreateAgent`, and `Supervisor` agent factories.
- Programmatic Tool Calling (PTC): The LLM generates code that calls tools programmatically, reducing latency and token usage by 10x.
Developer Experience:
- Visualization: Export graphs to Mermaid, DOT, and ASCII with conditional edge support.
- Human-in-the-loop (HITL): Interrupt execution, inspect state, edit history (`UpdateState`), and resume.
- Observability: Built-in tracing and metrics support.
- Tools: Integrated `Tavily` and `Exa` search tools.
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/smallnest/langgraphgo/graph"
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	ctx := context.Background()

	model, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	// 1. Create Graph
	g := graph.NewStateGraph[map[string]any]()

	// 2. Add Nodes
	g.AddNode("generate", "generate", func(ctx context.Context, state map[string]any) (map[string]any, error) {
		input, ok := state["input"].(string)
		if !ok {
			return nil, fmt.Errorf("invalid input")
		}
		response, err := model.Call(ctx, input)
		if err != nil {
			return nil, err
		}
		state["output"] = response
		return state, nil
	})

	// 3. Define Edges
	g.AddEdge("generate", graph.END)
	g.SetEntryPoint("generate")

	// 4. Compile
	runnable, err := g.Compile()
	if err != nil {
		log.Fatal(err)
	}

	// 5. Invoke
	initialState := map[string]any{
		"input": "Hello, LangGraphGo!",
	}
	result, err := runnable.Invoke(ctx, initialState)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```

This project includes 90+ comprehensive examples organized into categories:
- ReAct Agent - Reason and Action agent using tools
- RAG Pipeline - Complete retrieval-augmented generation
- Chat Agent - Multi-turn conversation with session management
- Supervisor - Multi-agent orchestration
- Tree of Thoughts - Search-based reasoning with multiple solution paths
- Planning Agent - Dynamic workflow plan creation
- PEV Agent - Plan-Execute-Verify with self-correction
- Reflection Agent - Iterative improvement through self-reflection
- Mental Loop - Simulator-in-the-loop for safe action testing
- Reflexive Metacognitive Agent - Self-aware agent with explicit capabilities model
- Basic Concepts - Simple LLM integration, LangChain compatibility
- State Management - State schema, custom reducers, smart messages
- Graph Structure - Conditional routing, subgraphs, generics
- Parallel Execution - Fan-out/fan-in with state merging
- Streaming & Events - Real-time updates, listeners, logging
- Persistence - Checkpointing with file, memory, databases
- Human-in-the-Loop - Interrupts, approval, time travel
- Pre-built Agents - ReAct, Supervisor, Chat, Planning agents
- Programmatic Tool Calling - PTC for 10x latency reduction
- Memory - Buffer, sliding window, summarization strategies
- RAG - Vector stores, GraphRAG with FalkorDB
- Tools & Integrations - Search tools, GoSkills, MCP
LangGraphGo automatically executes nodes in parallel when they share the same starting node. Results are merged using the graph's state merger or schema.
```go
g.AddEdge("start", "branch_a")
g.AddEdge("start", "branch_b")
// branch_a and branch_b run concurrently
```

Pause execution to allow for human approval or input.
```go
config := &graph.Config{
	InterruptBefore: []string{"human_review"},
}

// Execution stops before the "human_review" node
state, err := runnable.InvokeWithConfig(ctx, input, config)

// Resume execution
resumeConfig := &graph.Config{
	ResumeFrom: []string{"human_review"},
}
runnable.InvokeWithConfig(ctx, state, resumeConfig)
```

Quickly create complex agents using factory functions.
```go
// Create a ReAct agent
agent, err := prebuilt.CreateReactAgent(model, tools)

// Create an agent with options
agent, err := prebuilt.CreateAgent(model, tools, prebuilt.WithSystemMessage("System prompt"))

// Create a Supervisor agent
supervisor, err := prebuilt.CreateSupervisor(model, agents)
```

Generate code that calls tools directly, reducing API round-trips and token usage.
```go
// Create a PTC agent
agent, err := ptc.CreatePTCAgent(ptc.PTCAgentConfig{
	Model:         model,
	Tools:         toolList,
	Language:      ptc.LanguagePython, // or ptc.LanguageGo
	ExecutionMode: ptc.ModeDirect,     // subprocess (default) or ptc.ModeServer
	MaxIterations: 10,
})

// The LLM generates code that calls tools programmatically
result, err := agent.Invoke(ctx, initialState)
```

See the PTC README for detailed documentation.
```go
exporter := runnable.GetGraph()
fmt.Println(exporter.DrawMermaid()) // Generates a Mermaid flowchart
```

- Graph Operations: ~14-94μs depending on format
- Tracing Overhead: ~4μs per execution
- Event Processing: 1000+ events/second
- Streaming Latency: <100ms
```bash
go test ./... -v
```

This project is open for contributions! If you are interested in becoming a contributor, please create a feature issue first, then submit a PR.
MIT License - see original repository for details.


