This repository hosts a comprehensive, interactive survey of the AI agent orchestration landscape as of early 2026. It serves as a centralized resource for developers, researchers, and enterprises to explore the rapidly evolving ecosystem of agent frameworks, benchmarking tools, and datasets.
The project features a responsive web interface that categorizes tools into key segments:
- Code-First Frameworks: Libraries designed for developers building complex, multi-agent systems (e.g., LangGraph, AutoGen).
- Visual / Low-Code Platforms: Tools enabling non-technical users to design agent workflows (e.g., n8n, Flowise).
- Enterprise Solutions: Managed services focusing on security, scale, and compliance (e.g., Azure AI Agent Service).
- Benchmarking: Essential utilities for evaluating agent performance, reliability, and safety.
Contributions are welcome:
- Add a Framework: If a tool is missing, please submit a Pull Request adding it to the `rawData` array in `index.html`. Ensure all fields (philosophy, proficiency, setup, etc.) are completed; see the sketch after this list.
- Update Data: Correct outdated information regarding features, pricing models, or documentation links.
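A minimal sketch of what a new entry might look like. The exact object shape is an assumption here (only the philosophy, proficiency, and setup fields are named above); always match the structure of the existing entries in `index.html`:

```js
// Hypothetical rawData entry — copy the shape of an existing entry in
// index.html and fill in every field before opening a PR.
{
  name: "ExampleFramework",            // display name (assumed field)
  category: "Code-First Frameworks",   // one of the survey's segments (assumed field)
  philosophy: "Graph-based orchestration of multi-agent workflows",
  proficiency: "Intermediate",         // expected user skill level
  setup: "pip install exampleframework",
  docs: "https://example.com/docs"     // documentation link (assumed field)
}
```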
The web page itself can be improved in many ways; for now, it is just a quick single-page HTML file.
- Enhance the UI: Improvements to the visualization, filtering logic, or design are encouraged.
- Improve Code Structure: Separate logic from data (move CSS and JS to separate files).
- Add Examples: Find good examples and add them in a way that fits the design.
This project is open-source and available under the MIT License.