A modular framework for building agentic AI systems with advanced LLM orchestration, tool/function calling, and OpenRouter integration. Designed for researchers and developers to rapidly prototype and deploy AI agents with structured outputs, async support, and extensible toolkits.
- OpenRouter & Anthropic patterns: Out-of-the-box support for OpenRouter and Anthropic-style agent design.
- Tool/function calling: Register Python functions as tools for LLMs to call (OpenAI-compatible schema).
- Structured outputs: Use Pydantic schemas to enforce structured, type-safe LLM responses.
- Async support: Fully asynchronous agent execution for scalable workflows.
- File support: Agents can process and extract data from files.
- Advanced logging: Colorful, context-aware logging (with planned lineage and usage summaries).
- CI pipeline: Continuous integration for reliability.
- Extensible toolkit: Easily add your own tools and response schemas.
- Linter included: Code quality enforced.
- Examples: Prompt chaining, file upload, orchestrator-worker, and more.
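As a sketch of what OpenAI-compatible tool registration involves, a Python function's type hints can be turned into a function-calling schema. This is illustrative only — `function_to_tool_schema` and `get_weather` are hypothetical names, not the framework's actual registration API:

```python
from typing import get_type_hints

# Map Python annotations to JSON Schema type names.
_PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def function_to_tool_schema(fn) -> dict:
    """Build an OpenAI-compatible tool schema from a function's type hints."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # the return type is not part of the schema
    properties = {
        name: {"type": _PY_TO_JSON.get(tp, "string")} for name, tp in hints.items()
    }
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(properties),
            },
        },
    }

def get_weather(city: str, days: int) -> str:
    """Return a toy weather forecast."""
    return f"{city}: sunny for {days} days"
```

The resulting dictionary is what an OpenAI-compatible chat completion endpoint expects in its `tools` parameter.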
```bash
pip3 install -e .
```
- The `-e` flag is recommended for development.
- Requires Python 3.7+.
- Dependencies: `python-json-logger`, `openai`, `pydantic`, `dotenv` (see `requirements.txt`).
- API Key: Set your OpenRouter API key as an environment variable:
```bash
export OPEN_ROUTER_API_KEY=your-api-key-here
```
- (Optional) Toolkits: If using tool calling, ensure your toolkit modules are imported so functions are registered.
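OpenRouter exposes an OpenAI-compatible endpoint, so configuration boils down to a base URL plus the key above. A minimal sketch of reading and validating the key early (`load_openrouter_config` is a hypothetical helper, not part of the framework):

```python
import os

# OpenRouter's OpenAI-compatible API endpoint.
OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"

def load_openrouter_config() -> dict:
    """Read the API key from the environment and fail early if it is missing."""
    api_key = os.environ.get("OPEN_ROUTER_API_KEY")
    if not api_key:
        raise RuntimeError("OPEN_ROUTER_API_KEY is not set")
    return {"base_url": OPENROUTER_BASE_URL, "api_key": api_key}
```

Failing at startup rather than on the first request makes missing-key errors much easier to diagnose.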
Here's a minimal example of creating and running an agent:
```python
from agentic_ai import AIAgent
from pydantic import BaseModel

class Word(BaseModel):
    guessed_word: str

async def run_example():
    LLMAgent = AIAgent(
        agent_name="WordGuesser",
        sys_instructions="You are a player of the famous wordle game. Explain what you do at each step",
        response_schema=Word,
        tools=[],
    )
    message = "Guess a 7-letter word. Topic of word is: programming."
    response = await LLMAgent.prompt(message=message)
    print(response.parsed_response.guessed_word)
```
- See `examples/` for more advanced use cases (tool calling, prompt chaining, file upload, orchestrator-worker, etc.).
- To run an example:
```bash
python3 -m examples.01-no-tools.run
```
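Conceptually, the structured output in the example above amounts to validating the model's JSON reply against the Pydantic schema. A minimal sketch of that idea (`parse_reply` is illustrative, not the framework's actual parsing code):

```python
import json

from pydantic import BaseModel

class Word(BaseModel):
    guessed_word: str

def parse_reply(raw: str) -> Word:
    """Validate a raw JSON reply against the schema; raises if it does not fit."""
    return Word(**json.loads(raw))
```

If the reply is missing a field or is not valid JSON, validation raises instead of silently returning malformed data — which is the point of type-safe responses.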
The framework includes a comprehensive test suite that validates examples across multiple LLM backends.
Test all examples with all supported backends:
```bash
python3 -m tests.run_examples
```
Test a specific backend:
```bash
# Test with Ollama
python3 -m tests.run_examples --backend ollama --model qwen3:8b

# Test with OpenRouter
python3 -m tests.run_examples --backend openrouter --model google/gemini-2.5-pro

# Test with OpenAI
python3 -m tests.run_examples --backend openai --model gpt-4o-mini
```
Supported backends:
- ollama: Local models (default: qwen3:8b)
- openrouter: Cloud models via OpenRouter (default: google/gemini-2.5-pro)
- openai: OpenAI models (requires OPENAI_API_KEY)
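The backend-to-default-model mapping listed above can be sketched as a small lookup; the values come from this README, and the test runner's real implementation may differ (`resolve_model` is a hypothetical helper):

```python
from typing import Optional

# Defaults as documented above; openai has no documented default and
# always needs an explicit --model value in this sketch.
DEFAULT_MODELS = {
    "ollama": "qwen3:8b",
    "openrouter": "google/gemini-2.5-pro",
}

def resolve_model(backend: str, model: Optional[str] = None) -> str:
    """Use the explicit --model value if given, else the backend's default."""
    if model:
        return model
    if backend not in DEFAULT_MODELS:
        raise ValueError(f"no default model for backend: {backend}")
    return DEFAULT_MODELS[backend]
```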
The test suite validates:
- Basic agent functionality (no tools)
- Tool calling (hybrid, parallel, sequential)
- Prompt chaining workflows
- Orchestrator-worker patterns
- File processing capabilities
- Structured output validation
- Multi-backend compatibility
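The prompt-chaining pattern the suite exercises can be sketched in a few lines: each step's output becomes the next step's input. Here `call_llm` is a stand-in for a real agent call, not the framework's API:

```python
import asyncio

async def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real agent would query a model here."""
    return f"answer({prompt})"

async def chain(steps, first_input: str) -> str:
    """Run prompt templates in order, feeding each result into the next step."""
    result = first_input
    for template in steps:
        result = await call_llm(template.format(previous=result))
    return result
```

The orchestrator-worker pattern generalizes this: instead of a fixed linear chain, an orchestrator agent decides which worker agent (and prompt) runs next.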
The framework's tool calling logic is based on the following flow:
- OpenRouter support
- Anthropic design patterns examples
- File support
- Linter
- CI pipeline
- Advanced LLM params
- Async support
- Docs
- Logger (usage summary, agent lineage)
- Memory mechanisms
- Fault tolerance (retry on certain exceptions)
- Prompt caching
- Testing framework
- Ollama interface
- Benchmarking
- IMPRESS use case integration
- Support for DeepAgent
- Fine-tuning models (e.g., Qwen) for tool calling

MIT License. See LICENSE for details.