CUGA (ConfigUrable Generalist Agent) is an open-source generalist agent framework from IBM Research, purpose-built for enterprise automation. Designed for developers, CUGA combines and improves on the best of foundational agentic patterns such as ReAct, CodeAct, and Planner-Executor, bringing them together in a modular architecture that enables trustworthy, policy-aware, and composable automation across web interfaces, APIs, and custom enterprise systems.
CUGA achieves state-of-the-art performance on leading benchmarks:
- 🥇 #1 on AppWorld, a benchmark with 750 real-world tasks across 457 APIs
- 🥈 #2 on WebArena, a complex benchmark for autonomous web agents across application domains
- Complex task execution: state-of-the-art results across web and API benchmarks.
- Flexible tool integrations: CUGA works across REST APIs via OpenAPI specs, MCP servers, and custom connectors.
- Composable agent architecture: CUGA itself can be exposed as a tool to other agents, enabling nested reasoning and multi-agent collaboration.
- Configurable reasoning modes: Choose between fast heuristics or deep planning depending on your task's complexity and latency needs.
- Policy-aware instructions (Experimental): CUGA components can be configured with policy-aware instructions to improve alignment of the agent behavior.
- Save & Reuse (Experimental): CUGA captures and reuses successful execution paths, enabling consistent and faster behavior across repeated tasks.
Explore the Roadmap to see what's ahead, or join the 🤝 Call for the Community to get involved.
Watch CUGA seamlessly combine web and API operations in a single workflow:
Example Task: get top account by revenue from digital sales, then add it to current page
demo_1.mp4
Would you like to test this? (Advanced Demo)
Experience CUGA's hybrid capabilities by combining API calls with web interactions:
1. Switch to hybrid mode:

        # Edit ./src/cuga/settings.toml and change:
        mode = 'hybrid'  # under [advanced_features] section

2. Install browser API support:
   - Installs the Playwright browser API and the Chromium browser
   - The `playwright` installer should already be included after installing with Quick Start

        playwright install chromium

3. Start the demo:

        cuga start demo

4. Enable the browser extension:
   - Click the extension puzzle icon in your browser
   - Toggle the CUGA extension to activate it
   - This will open the CUGA side panel

5. Open the test application:
   - Navigate to: Sales app

6. Try the hybrid task:

        get top account by revenue from digital sales then add it to current page
🎯 What you'll see: CUGA will fetch data from the Digital Sales API and then interact with the web page to add the account information directly to the current page - demonstrating seamless API-to-web workflow integration!
Watch CUGA pause for human approval during critical decision points:
Example Task: get best accounts
demo_2.mp4
Would you like to try this? (HITL Demo)
Experience CUGA's Human-in-the-Loop capabilities where the agent pauses for human approval at key decision points:
1. Enable HITL mode:

        # Edit ./src/cuga/settings.toml and ensure:
        api_planner_hitl = true  # under [advanced_features] section

2. Start the demo:

        cuga start demo

3. Try the HITL task:

        get best accounts

🎯 What you'll see: CUGA will pause at critical decision points, showing you the planned actions and waiting for your approval before proceeding.
Prerequisites
- Python 3.12+ - Download here
- uv package manager - Installation guide
🔧 Optional: Local Digital Sales API Setup (only if the remote endpoint fails)
The demo comes pre-configured with the Digital Sales API (see the API Docs).
Only follow these steps if you encounter issues with the remote Digital Sales endpoint:
# Start the Digital Sales API locally on port 8000
uv run digital_sales_openapi
# Then update ./src/cuga/backend/tools_env/registry/config/mcp_servers.yaml to use localhost:
# Change the digital_sales URL from the remote endpoint to:
# http://localhost:8000
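If you run the Digital Sales API locally, you can optionally confirm it is serving before re-running the demo. A minimal sketch, assuming the service exposes the standard FastAPI `/openapi.json` route on port 8000 (an assumption; adjust the URL if your local setup differs):

```python
import json
from urllib.request import urlopen

# Sanity check: fetch the OpenAPI spec from the locally running
# Digital Sales API (assumes the default FastAPI /openapi.json route).
with urlopen("http://localhost:8000/openapi.json", timeout=10) as resp:
    spec = json.load(resp)

print(spec["info"]["title"], "-", len(spec.get("paths", {})), "endpoints")
```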
# In terminal, clone the repository and navigate into it
git clone https://github.com/cuga-project/cuga-agent.git
cd cuga-agent
# 1. Create and activate virtual environment
uv venv --python=3.12 && source .venv/bin/activate
# 2. Install dependencies
uv sync
# 3. Set up environment variables
# Create .env file with your API keys
echo "OPENAI_API_KEY=your-openai-api-key-here" > .env
# 4. Start the demo
cuga start demo
# Chrome will open automatically at https://localhost:8005
# then try sending your task to CUGA: 'get top account by revenue from digital sales'
🤖 LLM Configuration - Advanced Options
Refer to .env.example for detailed examples.
CUGA supports multiple LLM providers with flexible configuration options. You can configure models through TOML files or override specific settings using environment variables.
## Supported Platforms
- **OpenAI** - GPT models via OpenAI API (also supports LiteLLM via base URL override)
- **IBM WatsonX** - IBM's enterprise LLM platform
- **Azure OpenAI** - Microsoft's Azure OpenAI service
- **RITS** - Internal IBM research platform
## Configuration Priority
1. **Environment Variables** (highest priority)
2. **TOML Configuration** (medium priority)
3. **Default Values** (lowest priority)
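As a rough illustration of this precedence (not CUGA's actual loader; the `model_name` key and file path below are hypothetical), resolution works roughly like this:

```python
import os
import tomllib  # Python 3.11+ standard library

DEFAULT_MODEL = "gpt-4o"  # lowest priority: built-in default

def resolve_model(toml_path: str = "settings.openai.toml") -> str:
    """Illustrative only: an env var beats the TOML value, which beats the default."""
    toml_value = None
    if os.path.exists(toml_path):
        with open(toml_path, "rb") as f:
            # 'model_name' is a hypothetical key used for illustration
            toml_value = tomllib.load(f).get("model_name")
    return os.getenv("MODEL_NAME") or toml_value or DEFAULT_MODEL
```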
### Option 1: OpenAI
**Setup Instructions:**
1. Create an account at [platform.openai.com](https://platform.openai.com)
2. Generate an API key from your [API keys page](https://platform.openai.com/api-keys)
3. Add to your `.env` file:
```env
# OpenAI Configuration
OPENAI_API_KEY=sk-...your-key-here...
AGENT_SETTING_CONFIG="settings.openai.toml"
# Optional overrides
MODEL_NAME=gpt-4o # Override model name
OPENAI_BASE_URL=https://api.openai.com/v1 # Override base URL
OPENAI_API_VERSION=2024-08-06 # Override API version
```
Default Values:
- Model: gpt-4o
- API Version: OpenAI's default API version
- Base URL: OpenAI's default endpoint
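To check that your OpenAI settings are picked up, an optional sanity check with the official `openai` Python SDK might look like the following sketch (it reads `OPENAI_API_KEY` from the environment; the prompt is arbitrary):

```python
import os
from openai import OpenAI

# Reads OPENAI_API_KEY from the environment; base_url falls back to the
# default OpenAI endpoint when OPENAI_BASE_URL is not set.
client = OpenAI(base_url=os.getenv("OPENAI_BASE_URL") or None)

response = client.chat.completions.create(
    model=os.getenv("MODEL_NAME", "gpt-4o"),
    messages=[{"role": "user", "content": "Reply with a single word: pong"}],
)
print(response.choices[0].message.content)
```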
### Option 2: IBM WatsonX

**Setup Instructions:**

1. Access IBM WatsonX
2. Create a project and get your credentials:
   - Project ID
   - API Key
   - Region/URL
3. Add to your `.env` file:

        # WatsonX Configuration
        WATSONX_API_KEY=your-watsonx-api-key
        WATSONX_PROJECT_ID=your-project-id
        WATSONX_URL=https://us-south.ml.cloud.ibm.com  # or your region
        AGENT_SETTING_CONFIG="settings.watsonx.toml"

        # Optional override
        MODEL_NAME=meta-llama/llama-4-maverick-17b-128e-instruct-fp8  # Override model for all agents

Default Values:
- Model: meta-llama/llama-4-maverick-17b-128e-instruct-fp8
### Option 3: Azure OpenAI

**Setup Instructions:**

1. Add to your `.env` file:

        AGENT_SETTING_CONFIG="settings.azure.toml"  # Default config uses ETE
        AZURE_OPENAI_API_KEY="<your azure apikey>"
        AZURE_OPENAI_ENDPOINT="<your azure endpoint>"
        OPENAI_API_VERSION="2024-08-01-preview"
### LiteLLM (via OpenAI settings)

CUGA supports LiteLLM through the OpenAI configuration by overriding the base URL:

1. Add to your `.env` file:

        # LiteLLM Configuration (using OpenAI settings)
        OPENAI_API_KEY=your-api-key
        AGENT_SETTING_CONFIG="settings.openai.toml"

        # Override for LiteLLM
        MODEL_NAME=Azure/gpt-4o  # Override model name
        OPENAI_BASE_URL=https://your-litellm-endpoint.com  # Override base URL
        OPENAI_API_VERSION=2024-08-06  # Override API version
CUGA uses TOML configuration files located in src/cuga/configurations/models/:
- `settings.openai.toml` - OpenAI configuration (also supports LiteLLM via base URL override)
- `settings.watsonx.toml` - WatsonX configuration
- `settings.azure.toml` - Azure OpenAI configuration
Each file contains agent-specific model settings that can be overridden by environment variables.
Running with a secure code sandbox

CUGA supports isolated code execution using Docker/Podman containers for enhanced security.

1. Install container runtime: Download and install Rancher Desktop or Docker.

2. Install sandbox dependencies:

        uv sync --group sandbox

3. Start with remote sandbox enabled:

        cuga start demo --sandbox

   This automatically configures CUGA to use Docker/Podman for code execution instead of local execution.

4. Test your sandbox setup (optional):

        # Test local sandbox (default)
        cuga test-sandbox

        # Test remote sandbox with Docker/Podman
        cuga test-sandbox --remote

   You should see the output:

        ('test succeeded\n', {})

Note: Without the --sandbox flag, CUGA uses local Python execution (default), which is faster but provides less isolation.
⚙️ Reasoning modes - Switch between Fast/Balanced/Accurate modes
| Mode | File | Description |
|---|---|---|
| `fast` | `./configurations/modes/fast.toml` | Optimized for speed |
| `balanced` | `./configurations/modes/balanced.toml` | Balance between speed and precision (default) |
| `accurate` | `./configurations/modes/accurate.toml` | Optimized for precision |
| `custom` | `./configurations/modes/custom.toml` | User-defined settings |
configurations/
├── modes/fast.toml
├── modes/balanced.toml
├── modes/accurate.toml
└── modes/custom.toml
Edit settings.toml:
[features]
cuga_mode = "fast"  # or "balanced" or "accurate" or "custom"

Documentation: ./docs/flags.html
🎯 Task Mode Configuration - Switch between API/Web/Hybrid modes
| Mode | Description |
|---|---|
| `api` | API-only mode - executes API tasks (default) |
| `web` | Web-only mode - executes web tasks using browser extension |
| `hybrid` | Hybrid mode - executes both API tasks and web tasks using browser extension |
**API mode:**
- Opens tasks in a regular web browser
- Best for API/Tools-focused workflows and testing

**Web mode:**
- Interface inside a browser extension (available next to the browser)
- Optimized for web-specific tasks and interactions
- Direct access to web page content and controls

**Hybrid mode:**
- Opens inside the browser extension like web mode
- Can execute both API/Tools tasks and web page tasks simultaneously
- Starts from a configurable URL defined in `demo_mode.start_url`
- Most versatile mode for complex workflows combining web and API operations
Edit ./src/cuga/settings.toml:
[demo_mode]
start_url = "https://opensource-demo.orangehrmlive.com/web/index.php/auth/login" # Starting URL for hybrid mode
[advanced_features]
mode = 'api'  # 'api', 'web', or 'hybrid'

Special Instructions Configuration
Each .md file contains specialized instructions that are automatically integrated into CUGA's internal prompts when that component is active. Simply edit the markdown files to customize behavior for each node type.
Available instruction sets: answer, api_planner, code_agent, plan_controller, reflection, shortlister, task_decomposition
configurations/
└── instructions/
    ├── instructions.toml
    ├── default/
    │   ├── answer.md
    │   ├── api_planner.md
    │   ├── code_agent.md
    │   ├── plan_controller.md
    │   ├── reflection.md
    │   ├── shortlister.md
    │   └── task_decomposition.md
    └── [other instruction sets]/
Edit configurations/instructions/instructions.toml:
[instructions]
instruction_set = "default"  # or any instruction set above

💾 Save & Reuse
- Change ./src/cuga/settings.toml: cuga_mode = "save_reuse_fast"
- Run: cuga start demo
- First run: get top account by revenue
  - This is a new flow (first time)
  - Wait for the task to finish
  - Approve to save the workflow
  - Provide another example to help generalize the flow, e.g. get top 2 accounts by revenue
- The flow will now be saved:
  - May take some time
  - Flow will be saved successfully
- Verify reuse: get top 4 accounts by revenue
  - Should run faster using the saved workflow
🔧 Adding Tools: Comprehensive Examples
CUGA supports three types of tool integrations. Each approach has its own use cases and benefits:
| Tool Type | Best For | Configuration | Runtime Loading |
|---|---|---|---|
| OpenAPI | REST APIs, existing services | `mcp_servers.yaml` | ❌ Build |
| MCP | Custom protocols, complex integrations | `mcp_servers.yaml` | ❌ Build |
| LangChain | Python functions, rapid prototyping | Direct import | ✅ Runtime |
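To give a feel for the LangChain path, a runtime-loaded tool is just a decorated Python function. The sketch below uses LangChain's standard `@tool` decorator; the function name and body are illustrative placeholders, and wiring it into CUGA follows the Adding Tools guide linked below:

```python
from langchain_core.tools import tool

@tool
def top_accounts_by_revenue(limit: int = 5) -> list[dict]:
    """Return the top `limit` accounts ordered by revenue (illustrative stub)."""
    # A real tool would call your CRM or Digital Sales backend here.
    return [{"name": f"Account {i}", "revenue": 1_000_000 - i * 10_000} for i in range(limit)]
```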
- Tool Registry: ./src/cuga/backend/tools_env/registry/README.md
- Comprehensive example with different tools + MCP (Adding Tools): ./docs/examples/cuga_with_runtime_tools/README.md
- CUGA as MCP: ./docs/examples/cuga_as_mcp/README.md
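For the MCP route, a minimal server that CUGA's registry could point to might look like the following sketch. It uses the MCP Python SDK's `FastMCP` helper; the server name and tool are placeholders, and the corresponding `mcp_servers.yaml` entry is described in the Tool Registry README above:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("digital-sales-demo")  # hypothetical server name

@mcp.tool()
def get_account_revenue(account_name: str) -> float:
    """Return the revenue for an account (illustrative stub)."""
    return 42_000.0

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```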
The test suite covers various execution modes across different scenarios:
| Scenario | Fast Mode | Balanced Mode | Accurate Mode | Save & Reuse Mode |
|---|---|---|---|---|
| Find VP Sales High-Value Accounts | ✅ | ✅ | ✅ | - |
| Get top account by revenue | ✅ | ✅ | ✅ | ✅ |
| List my accounts | ✅ | ✅ | ✅ | - |
Unit Tests
- Variables Manager: Core functionality, metadata handling, singleton pattern, reset operations
- Value Preview: Intelligent truncation, nested structure preservation, length-aware formatting
Integration Tests
- API Response Handling: Error cases, validation, timeout scenarios, parameter extraction
- Registry Services: OpenAPI integration, MCP server functionality, mixed service configurations
- Tool Environment: Service loading, parameter handling, function calling, isolation testing
Focused suites:
./src/scripts/run_tests.sh

For information on how to evaluate, see the CUGA Evaluation Documentation.
- Example applications
- 📧 Contact: CUGA Team
- Alon Oved
- Asaf Adi
- Avi Yaeli
- Harold Ship
- Ido Levy
- Nir Mashkif
- Offer Akrabi
- Sami Marreed
- Segev Shlomov
- Yinon Goldshtein
CUGA is open source because we believe trustworthy enterprise agents must be built together.
Here's how you can help:
- Share use cases: Show us how you'd use CUGA in real workflows.
- Request features: Suggest capabilities that would make it more useful.
- Report bugs: Help improve stability by filing clear, reproducible reports.
All contributions are welcome through GitHub Issues - whether it's sharing use cases, requesting features, or reporting bugs!
Among other things, we're exploring the following directions:
- Policy support: procedural SOPs, domain knowledge, input/output guards, context- and tool-based constraints
- Performance improvements: dynamic reasoning strategies that adapt to task complexity
Please follow the contribution guide in CONTRIBUTING.md.