The Bloat Moat! A lightweight, production-ready library for LLM communication and agentic workflows. Born from the need for something simpler than LangChain, Fence gives you powerful LLM orchestration without the heavyweight dependencies.
Think of it as the Swiss Army knife for LLM interactions: sharp, reliable, and it won't weigh down your backpack (or your Docker image).
The short answer: By accident.
The slightly longer answer: LangChain used to be (is?) a pretty big package with a ton of dependencies. Great for PoCs, but in production? Not so much.
- It's BIG. Takes up serious space (problematic in Lambda, containers, edge environments)
- It's COMPLEX. Overwhelming for new users, hard to debug in production
- It BREAKS. Frequent breaking changes, version jumps that made us cry
As a result, many developers (especially those in large production environments) started building lightweight, custom solutions that favor stability and robustness over feature bloat.
We started building basic components from scratch for our Bedrock-heavy production environment. First came the Link class (wink wink), then templates, then agents... and before we knew it, we had a miniature package that was actually fun to use.
Fence strikes the perfect balance between convenience and flexibility.
Note: Fence isn't trying to replace LangChain for complex PoCs. But if you want a simple, lightweight, production-ready package that's easy to understand and extend, you're in the right place.
```shell
pip install fence-llm
```

That's it. Seriously. No 500MB of transitive dependencies.
```python
from fence.links import Link
from fence.templates.string import StringTemplate
from fence.models.openai import GPT4omini

# Create a link
link = Link(
    model=GPT4omini(),
    template=StringTemplate("Write a haiku about {topic}"),
    name='haiku_generator'
)

# Run it
output = link.run(topic='fencing')['state']
print(output)
```

Output:
```
[2024-10-04 17:45:15] [ℹ️ INFO] [links.run:203] Executing <haiku_generator> Link

Blades flash in the light,
En garde, the dance begins now,
Touch, victory's mine.
```

Much wow. Very poetry.
Fence is built around a few core concepts that work together beautifully:
Uniform interface across AWS Bedrock (Claude, Nova), OpenAI (GPT-4o), Anthropic, Google Gemini, Ollama, and Mistral. Switch models with a single line change.
See all supported models →
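The "switch models with a single line change" promise works because every provider wrapper exposes the same call surface. Here is a minimal sketch of that idea in plain Python with stand-in classes; these are illustrative, not Fence's actual model API:

```python
from typing import Protocol


class Model(Protocol):
    """Shared contract: every provider wrapper answers invoke(prompt) -> str."""

    def invoke(self, prompt: str) -> str: ...


class EchoModelA:
    """Stand-in for provider A (e.g. an OpenAI wrapper)."""

    def invoke(self, prompt: str) -> str:
        return f"A:{prompt}"


class EchoModelB:
    """Stand-in for provider B (e.g. a Bedrock wrapper)."""

    def invoke(self, prompt: str) -> str:
        return f"B:{prompt}"


def run(model: Model, prompt: str) -> str:
    # Calling code never changes; only the model instance you pass in does.
    return model.invoke(prompt)


print(run(EchoModelA(), "hi"))  # A:hi
print(run(EchoModelB(), "hi"))  # B:hi
```

Because the orchestration code only depends on the shared `invoke` contract, swapping providers really is a one-line change at the call site.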
Composable building blocks that combine models, templates, and parsers. Chain them together for complex workflows.
Learn about Links & Chains →
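The composition idea is easy to see in miniature. The toy `MiniLink` below is an illustrative sketch of the pattern (fill a template, call a model, pass state along), not the real `Link` class; the lambda stands in for an LLM so it runs without API keys:

```python
class MiniLink:
    """Toy stand-in for a Link: fill a template, call the model, wrap the output."""

    def __init__(self, model, template, name="link"):
        self.model = model
        self.template = template
        self.name = name

    def run(self, **kwargs):
        prompt = self.template.format(**kwargs)
        return {"state": self.model(prompt)}


# A fake "model" so the sketch runs offline
fake_model = lambda prompt: prompt.upper()

write = MiniLink(fake_model, "Write about {topic}")
summarize = MiniLink(fake_model, "Summarize: {state}")

# Chaining by hand: the first link's 'state' feeds the next link's template
out = write.run(topic="fencing")
out = summarize.run(state=out["state"])
print(out["state"])  # SUMMARIZE: WRITE ABOUT FENCING
```

A chain is just this hand-off automated: each link's `'state'` output becomes the next link's template input.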
The crown jewel! Production-ready agents using the ReAct pattern:
- `Agent` - Classic ReAct with tool use and multi-level delegation
- `BedrockAgent` - Native Bedrock tool calling with streaming
- `ChatAgent` - Conversational agents for multi-agent systems
Dive into Agents →
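The ReAct pattern itself fits in a few lines: the model alternates between choosing an action (a tool call) and reading the observation it produces, until it emits a final answer. The loop below is a toy sketch with a scripted "model", not Fence's `Agent` internals:

```python
def react_loop(llm, tools, question, max_steps=5):
    """Minimal ReAct skeleton: think -> act -> observe, until an answer appears."""
    scratchpad = f"Question: {question}\n"
    for _ in range(max_steps):
        # The model sees everything so far and replies with either
        # "ACTION <tool> <input>" or "ANSWER <text>".
        step = llm(scratchpad)
        if step.startswith("ANSWER"):
            return step.removeprefix("ANSWER").strip()
        _, tool_name, tool_input = step.split(" ", 2)
        observation = tools[tool_name](tool_input)
        scratchpad += f"{step}\nObservation: {observation}\n"
    return "No answer within step budget"


# Scripted stand-in for an LLM, so the sketch runs offline
script = iter(["ACTION calculator 1337*42+999", "ANSWER 57153"])
llm = lambda _: next(script)
tools = {"calculator": lambda expr: eval(expr)}  # demo only; never eval untrusted input

print(react_loop(llm, tools, "What is 1337 * 42 + 999?"))  # 57153
```

Real agents replace the scripted model with an LLM call and parse its output more robustly, but the control flow is the same loop.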
First-class support for the Model Context Protocol. Connect to MCP servers and automatically expose their tools to your agents.
Explore MCP Integration →
Build collaborative agent systems with RoundTable where multiple agents discuss and solve problems together.
Build Multi-Agent Systems →
Persistent and ephemeral memory backends (DynamoDB, SQLite, in-memory) for stateful conversations.
Configure Memory →
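Swappable backends work because they share one small contract: append a message, replay the history. The sketch below is an illustrative in-memory version of that contract, not Fence's actual memory classes:

```python
from abc import ABC, abstractmethod


class Memory(ABC):
    """Common contract every backend implements."""

    @abstractmethod
    def add(self, role, content): ...

    @abstractmethod
    def history(self): ...


class InMemory(Memory):
    """Ephemeral backend: history disappears when the process exits."""

    def __init__(self):
        self._messages = []

    def add(self, role, content):
        self._messages.append({"role": role, "content": content})

    def history(self):
        return list(self._messages)


memory = InMemory()
memory.add("user", "Who stole the painting?")
memory.add("assistant", "Investigating...")
print(memory.history())
```

A DynamoDB or SQLite backend swaps the list for a table behind the same two methods, so the agent code using the memory never changes.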
Custom tool creation, built-in tools, retry logic, parallelization, output parsers, logging callbacks, and benchmarking.
Explore Tools & Utilities →
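Retry logic for flaky provider calls is a pattern worth seeing once. This is an illustrative wrapper with exponential backoff, not Fence's actual retry implementation:

```python
import time


def with_retries(fn, attempts=3, base_delay=0.0):
    """Wrap fn so transient failures are retried with exponential backoff."""

    def wrapper(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of attempts: surface the error
                time.sleep(base_delay * 2 ** attempt)

    return wrapper


# Simulate a provider that fails twice before succeeding
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient provider error")
    return "ok"

print(with_retries(flaky)())  # ok (after two retried failures)
```

In production you would typically narrow the `except` to retryable errors (throttling, timeouts) rather than catching everything.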
- Models - All supported LLM providers and how to use them
- Links & Chains - Building blocks for LLM workflows
- Agents - ReAct agents, tool use, and delegation
- MCP Integration - Model Context Protocol support
- Multi-Agent Systems - RoundTable and collaborative agents
- Memory - Persistent and ephemeral memory backends
- Tools & Utilities - Custom tools, parsers, and helpers
```python
from fence.agents import Agent
from fence.models.openai import GPT4omini
from fence.tools.math import CalculatorTool

agent = Agent(
    identifier="math_wizard",
    model=GPT4omini(source="demo"),
    tools=[CalculatorTool()],
)

result = agent.run("What is 1337 * 42 + 999?")
print(result)  # Agent thinks, uses calculator, and answers!
```

```python
from fence.agents.bedrock import BedrockAgent
from fence.mcp.client import MCPClient
from fence.models.bedrock import Claude37Sonnet

# Connect to MCP server
mcp_client = MCPClient(
    transport_type="streamable_http",
    url="https://your-mcp-server.com/mcp"
)

# Create agent with MCP tools
agent = BedrockAgent(
    identifier="mcp_agent",
    model=Claude37Sonnet(region="us-east-1"),
    mcp_clients=[mcp_client],  # Tools auto-registered!
)

result = agent.run("Search for customer data")
```

```python
from fence.troupe import RoundTable
from fence.agents import ChatAgent
from fence.models.openai import GPT4omini

# Create specialized agents
detective = ChatAgent(
    identifier="Detective",
    model=GPT4omini(source="roundtable"),
    profile="You are a sharp detective."
)
scientist = ChatAgent(
    identifier="Scientist",
    model=GPT4omini(source="roundtable"),
    profile="You are a forensic scientist."
)

# Let them collaborate
round_table = RoundTable(agents=[detective, scientist])
transcript = round_table.run(
    prompt="A painting was stolen. Let's investigate!",
    max_rounds=3
)
```

More examples:
- Jupyter Notebooks - Interactive tutorials
- Demo Scripts - Runnable examples
We welcome contributions! Whether it's:
- Bug fixes
- New features (especially new model providers!)
- Documentation improvements
- More tests
- Better examples
Check out CONTRIBUTING.md for guidelines.
MIT License - see LICENSE.txt for details.
Inspired by LangChain, built for production, made with ❤️ by developers who got tired of dependency hell.
Now go build something awesome!
