CausalBench is a transparent, fair, and easy-to-use benchmarking platform for causal learning. It aims to (a) advance research in causal learning by facilitating scientific collaboration on novel algorithms, datasets, and metrics, and (b) promote scientific objectivity, reproducibility, fairness, and awareness of bias in causal learning research. CausalBench provides services for downloading and exploring benchmarking data, algorithms, models, metrics, and benchmark results.
- Introducing CausalBench: A Flexible Benchmark Framework for Causal Analysis and Machine Learning
- CausalBench: Causal Learning Research Streamlined (Tutorial)
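To make concrete the kind of evaluation such a benchmark standardizes, below is a minimal, self-contained sketch (plain NumPy, not the CausalBench API) of one common causal-discovery metric: the structural Hamming distance (SHD) between an algorithm's estimated causal graph and the ground-truth graph. The function name and the example graphs are illustrative, not taken from CausalBench.

```python
import numpy as np

def structural_hamming_distance(true_adj: np.ndarray, est_adj: np.ndarray) -> int:
    """SHD variant that counts each missing, extra, or reversed edge as one
    edit between two directed graphs given as 0/1 adjacency matrices
    (adj[i, j] == 1 means an edge i -> j)."""
    diff = 0
    n = true_adj.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            # Compare the (i, j) pair in both directions at once so that a
            # reversed edge counts as one edit, not two.
            if (true_adj[i, j], true_adj[j, i]) != (est_adj[i, j], est_adj[j, i]):
                diff += 1
    return diff

# Illustrative ground-truth DAG over 4 variables: X0 -> X1 -> X3, X0 -> X2 -> X3
true_graph = np.array([[0, 1, 1, 0],
                       [0, 0, 0, 1],
                       [0, 0, 0, 1],
                       [0, 0, 0, 0]])

# Hypothetical algorithm output: X0 -> X1 is reversed, X2 -> X3 is missing
estimated_graph = np.array([[0, 0, 1, 0],
                            [1, 0, 0, 1],
                            [0, 0, 0, 0],
                            [0, 0, 0, 0]])

print(structural_hamming_distance(true_graph, estimated_graph))  # prints 2
```

CausalBench's role is to package metrics like this, together with the datasets and algorithm outputs they score, as shared components, so that results reported by different groups are directly comparable and reproducible.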
Please cite CausalBench using:
@inproceedings{10.1145/3627673.3679218,
author = {Kapki\c{c}, Ahmet and Mandal, Pratanu and Wan, Shu and Sheth, Paras and Gorantla, Abhinav and Choi, Yoonhyuk and Liu, Huan and Candan, K. Sel\c{c}uk},
title = {Introducing CausalBench: A Flexible Benchmark Framework for Causal Analysis and Machine Learning},
year = {2024},
isbn = {9798400704369},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3627673.3679218},
doi = {10.1145/3627673.3679218},
abstract = {While witnessing the exceptional success of machine learning (ML) technologies in many applications, users are starting to notice a critical shortcoming of ML: correlation is a poor substitute for causation. The conventional way to discover causal relationships is to use randomized controlled experiments (RCT); in many situations, however, these are impractical or sometimes unethical. Causal learning from observational data offers a promising alternative. While being relatively recent, causal learning aims to go far beyond conventional machine learning, yet several major challenges remain. Unfortunately, advances are hampered due to the lack of unified benchmark datasets, algorithms, metrics, and evaluation service interfaces for causal learning. In this paper, we introduce CausalBench, a transparent, fair, and easy-to-use evaluation platform, aiming to (a) enable the advancement of research in causal learning by facilitating scientific collaboration in novel algorithms, datasets, and metrics and (b) promote scientific objectivity, reproducibility, fairness, and awareness of bias in causal learning research. CausalBench provides services for benchmarking data, algorithms, models, and metrics, impacting the needs of a broad range of scientific and engineering disciplines.},
booktitle = {Proceedings of the 33rd ACM International Conference on Information and Knowledge Management},
pages = {5220–5224},
numpages = {5},
keywords = {benchmark, causality, dataset, machine learning, metric, model},
location = {Boise, ID, USA},
series = {CIKM '24}
}
CausalBench is supported in part by the following grants:

- K. Selcuk Candan and Huan Liu. 2023. NSF OAC Grant #2311716: Elements: CausalBench: A Cyberinfrastructure for Causal-Learning Benchmarking for Efficacy, Reproducibility, and Scientific Collaboration. https://www.nsf.gov/awardsearch/showAward?AWD_ID=2311716
- Zheng O’Neill, K. Selcuk Candan, Teresa Wu, Jin Wen, and Christina Rosan. 2022. NSF OISE Grant #2230748: PIRE: Building Decarbonization via AI-empowered District Heat Pump Systems. https://www.nsf.gov/awardsearch/showAward?AWD_ID=2230748
- Nina H. Fefferman, K. Selcuk Candan, Sadie J. Ryan, Lydia Bourouiba, and Shelby N. Wilson. 2024. NSF DBI Grant #2412115: PIPP Phase II: Analysis and Prediction of Pandemic Expansion (APPEX). https://www.nsf.gov/awardsearch/showAward?AWD_ID=2412115
- K. Selcuk Candan, Huan Liu, Tianfang Xu, Theodore Pavlic, Giuseppe Mascaro, Ross Maciejewski, Rebecca Muenich, Amber Wutich, and Jorge Sefair. 2021. USACE Grant #GR40695: Designing nature to enhance resilience of built infrastructure in western US landscapes. https://experts.azregents.edu/en/projects/designing-nature-to-enhance-resilience-of-built-infrastructure-in


