Documentation | Website & Demos | Blog | Chinese Blog | CAMEL-AI
CRAB is a framework for building LLM agent benchmark environments in a Python-centric way.
🌐 Cross-platform and Multi-environment
- Build agent environments with various deployment options (in-memory, Docker-hosted, virtual machines, or distributed physical machines), as long as they are accessible via Python functions.
- Let the agent access all the environments at the same time through a unified interface.
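To make the unified-interface idea concrete, here is a minimal illustrative sketch (not CRAB's actual API; the class and method names below are assumptions for illustration) of a single client that routes action calls to several environments through one uniform entry point:

```python
from typing import Callable

class MultiEnvClient:
    """Hypothetical client exposing several environments behind one interface."""

    def __init__(self, envs: dict[str, dict[str, Callable]]):
        # envs maps an environment name to that environment's action table.
        self.envs = envs

    def act(self, env_name: str, action: str, **kwargs):
        """Dispatch a single action call to the named environment."""
        return self.envs[env_name][action](**kwargs)

client = MultiEnvClient({
    "desktop": {"screenshot": lambda: "desktop.png"},
    "android": {"tap": lambda x, y: f"tap({x},{y})"},
})
print(client.act("desktop", "screenshot"))     # desktop.png
print(client.act("android", "tap", x=1, y=2))  # tap(1,2)
```

In CRAB itself, each backend (local process, Docker container, VM, or physical device) only needs to expose its actions as Python functions for this kind of routing to work.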
⚙️ Easy-to-use Configuration
- Add a new action by simply adding an `@action` decorator to a Python function.
- Define an environment by integrating several actions together.
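The decorator-based pattern can be sketched as follows. This is a self-contained illustration of the idea, not CRAB's exact implementation; the registry, the `action` decorator body, and the `Environment` class here are assumptions made for the example:

```python
from dataclasses import dataclass, field
from typing import Callable

_registry: dict[str, Callable] = {}

def action(func: Callable) -> Callable:
    """Register a plain Python function as an agent action."""
    _registry[func.__name__] = func
    return func

@action
def click(x: int, y: int) -> str:
    # In a real environment this would drive a GUI backend.
    return f"clicked at ({x}, {y})"

@dataclass
class Environment:
    """Bundle registered actions into one environment."""
    actions: dict[str, Callable] = field(default_factory=lambda: dict(_registry))

    def step(self, name: str, **kwargs):
        return self.actions[name](**kwargs)

env = Environment()
print(env.step("click", x=3, y=4))  # clicked at (3, 4)
```

The appeal of this style is that the action's Python signature doubles as its schema: the framework can introspect parameter names and types to describe the action to the agent.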
📐 Novel Benchmarking Suite
- Define tasks and the corresponding evaluators in an intuitive Python-native way.
- Introduce a novel graph evaluator method that provides fine-grained metrics.
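The intuition behind a graph evaluator can be sketched with a toy example: a task is decomposed into sub-goal nodes with dependency edges, and the metric is the fraction of sub-goals completed in a valid order, rather than a single pass/fail bit. The graph, node names, and scoring rule below are illustrative assumptions, not CRAB's actual evaluator definitions:

```python
from graphlib import TopologicalSorter

# Sub-goal graph: each node maps to the set of sub-goals it depends on.
task_graph = {
    "open_browser": set(),
    "search_query": {"open_browser"},
    "open_result": {"search_query"},
}

def graph_metric(completed: set[str]) -> float:
    """Credit each completed sub-goal whose prerequisites are also completed."""
    credit = 0
    for node in TopologicalSorter(task_graph).static_order():
        if node in completed and task_graph[node] <= completed:
            credit += 1
    return credit / len(task_graph)

# An agent that opened the browser and searched, but never opened a result,
# still earns partial credit: 2 of 3 sub-goals.
print(graph_metric({"open_browser", "search_query"}))
```

This is what makes the metric fine-grained: two agents that both fail the overall task can still be distinguished by how far along the sub-goal graph they progressed.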
- Python 3.10 or newer
pip install crab-framework[client]
All datasets and experiment code are in the crab-benchmark-v0 directory. Please read the benchmark tutorial carefully before using our benchmark.
export OPENAI_API_KEY=<your api key>
python examples/single_env.py
python examples/multi_env.py
Please cite our paper if you use CRAB or anything related to it in your work:
@misc{xu2024crab,
title={CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents},
author={Tianqi Xu and Linyao Chen and Dai-Jie Wu and Yanjun Chen and Zecheng Zhang and Xiang Yao and Zhiqiang Xie and Yongchao Chen and Shilong Liu and Bochen Qian and Philip Torr and Bernard Ghanem and Guohao Li},
year={2024},
eprint={2407.01511},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2407.01511},
}