Experimental research framework for running AI benchmarks at scale
AI Firewall and guardrails for LLM-based Elixir applications
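The guardrail idea can be illustrated with a tiny screening pipeline: a prompt passes through a chain of checks and is either forwarded or blocked with a reason. A minimal sketch, not the library's actual API; the module name, length cap, and deny-list pattern below are all illustrative:

```elixir
defmodule PromptGate do
  @max_length 8_000

  # Screen a prompt before it reaches the model: pass it through, or block with a reason.
  def screen(prompt) when is_binary(prompt) do
    with :ok <- check_length(prompt),
         :ok <- check_patterns(prompt) do
      {:ok, prompt}
    end
  end

  defp check_length(prompt) when byte_size(prompt) <= @max_length, do: :ok
  defp check_length(_prompt), do: {:blocked, :too_long}

  defp check_patterns(prompt) do
    # Toy deny-list; a real guardrail layer would combine classifiers and policies.
    blocked = [~r/ignore (all )?previous instructions/i]

    if Enum.any?(blocked, &Regex.match?(&1, prompt)) do
      {:blocked, :injection_pattern}
    else
      :ok
    end
  end
end
```

Because `with` returns the first non-`:ok` value unchanged, callers get `{:ok, prompt}` or a tagged `{:blocked, reason}` they can log or surface.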
Explainable AI (XAI) tools for the Crucible framework
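One of the simplest explanation techniques such a toolkit might offer is leave-one-out ablation: drop each input segment in turn, re-score, and treat the score drop as that segment's importance. A hedged sketch of that general idea, not Crucible's implementation; `score_fun` (a list of segments to a number) is assumed:

```elixir
defmodule Ablation do
  # Rank input segments by how much the score falls when each is removed.
  def importances(score_fun, segments) do
    base = score_fun.(segments)

    segments
    |> Enum.with_index()
    |> Enum.map(fn {segment, i} ->
      rest = List.delete_at(segments, i)
      {segment, base - score_fun.(rest)}
    end)
    |> Enum.sort_by(fn {_segment, drop} -> drop end, :desc)
  end
end
```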
Data validation and quality library for ML pipelines in Elixir
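The core pattern in row-level validation is a list of named checks, each returning `:ok` or `{:error, reason}`, with the dataset split into clean rows and rejects. A minimal sketch; the field names (`:prompt`, `:label`) and check set are illustrative, not the library's schema:

```elixir
defmodule RowCheck do
  # Run every check on each row; collect clean rows and rejects with reasons.
  def validate(rows) do
    Enum.reduce(rows, %{ok: [], rejected: []}, fn row, acc ->
      case failed_checks(row) do
        [] -> %{acc | ok: [row | acc.ok]}
        reasons -> %{acc | rejected: [{row, reasons} | acc.rejected]}
      end
    end)
  end

  defp failed_checks(row) do
    checks = [
      prompt_present: &prompt_present/1,
      label_in_range: &label_in_range/1
    ]

    # Keep only the checks that failed, tagged with the check's name.
    for {name, check} <- checks, {:error, reason} <- [check.(row)], do: {name, reason}
  end

  defp prompt_present(%{prompt: p}) when is_binary(p) and p != "", do: :ok
  defp prompt_present(_row), do: {:error, :missing_prompt}

  defp label_in_range(%{label: l}) when is_integer(l) and l in 0..4, do: :ok
  defp label_in_range(_row), do: {:error, :bad_label}
end
```

For example, `RowCheck.validate([%{prompt: "hi", label: 2}, %{label: 9}])` keeps the first row and rejects the second with both failure reasons attached.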
Request hedging for tail latency reduction in distributed systems
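Request hedging cuts tail latency by racing a delayed duplicate against a slow primary: if the first request has not answered within a hedge delay, an identical backup is fired and whichever reply lands first wins. A minimal sketch of the technique on the BEAM, not this repo's API; `request_fun` is any zero-arity function performing the remote call:

```elixir
defmodule HedgedRequest do
  def run(request_fun, hedge_after_ms \\ 50, timeout_ms \\ 5_000)
      when is_function(request_fun, 0) do
    primary = Task.async(request_fun)

    case Task.yield(primary, hedge_after_ms) do
      # Primary came back before the hedge deadline; no extra request sent.
      {:ok, value} -> {:ok, value}
      {:exit, reason} -> {:error, reason}
      # Tail-latency case: fire a hedge and race the two tasks.
      nil ->
        hedge = Task.async(request_fun)
        await_first([primary, hedge], timeout_ms)
    end
  end

  defp await_first(tasks, timeout_ms) do
    receive do
      # Task replies arrive as {ref, result}; first one wins.
      {ref, value} when is_reference(ref) ->
        Process.demonitor(ref, [:flush])
        for t <- tasks, t.ref != ref, do: Task.shutdown(t, :brutal_kill)
        {:ok, value}
    after
      timeout_ms ->
        Enum.each(tasks, &Task.shutdown(&1, :brutal_kill))
        {:error, :timeout}
    end
  end
end
```

Usage would look like `HedgedRequest.run(fn -> MyClient.call(prompt) end)` (with `MyClient` standing in for a real client). The hedge delay is typically set near the primary's p95 latency so the duplicate only fires for genuine stragglers.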
Advanced telemetry collection and analysis for AI research
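Elixir telemetry libraries are generally built on the standard `:telemetry` package (`{:telemetry, "~> 1.0"}`), which decouples emitting measurements from consuming them. A sketch under that assumption; the event name `[:crucible, :eval, :stop]` and the measurement/metadata fields are made up for illustration:

```elixir
defmodule EvalTelemetry do
  require Logger

  # Attach a handler once at startup; the handler gets event, measurements,
  # metadata, and config.
  def setup do
    :telemetry.attach(
      "log-eval-latency",
      [:crucible, :eval, :stop],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event([:crucible, :eval, :stop], %{duration_ms: ms}, meta, _config) do
    Logger.info("model=#{meta.model} item=#{meta.item_id} took #{ms}ms")
  end

  # Call sites wrap the measured work and emit the event themselves.
  def measured(model, item_id, fun) do
    {micros, result} = :timer.tc(fun)

    :telemetry.execute(
      [:crucible, :eval, :stop],
      %{duration_ms: div(micros, 1_000)},
      %{model: model, item_id: item_id}
    )

    result
  end
end
```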
Structured causal reasoning chain logging for LLM transparency
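The essence of structured reasoning-chain logging is that each step records a claim, the steps it depends on, and the evidence offered, so a chain can be audited after the fact. A hedged sketch of that shape; the step fields are illustrative, not this repo's schema:

```elixir
defmodule ReasoningTrace do
  defstruct steps: []

  def new, do: %__MODULE__{}

  # Append a step with an explicit dependency list and optional evidence.
  def add_step(%__MODULE__{steps: steps} = trace, claim, opts \\ []) do
    step = %{
      id: length(steps) + 1,
      claim: claim,
      depends_on: Keyword.get(opts, :depends_on, []),
      evidence: Keyword.get(opts, :evidence, nil),
      at: DateTime.utc_now()
    }

    %{trace | steps: steps ++ [step]}
  end

  # Flatten the chain into one line per step for log output.
  def render(%__MODULE__{steps: steps}) do
    Enum.map_join(steps, "\n", fn s ->
      deps = Enum.join(s.depends_on, ",")
      "step=#{s.id} depends_on=[#{deps}] claim=#{inspect(s.claim)}"
    end)
  end
end
```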
Dataset management and caching for AI research benchmarks
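A common caching shape for benchmark datasets is an ETS table fronted by a GenServer: the first request loads and stores the dataset, later requests read straight from the table, and concurrent first requests are serialized so the load happens once. A minimal sketch, with the names and loader contract assumed:

```elixir
defmodule DatasetCache do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  # Fast path: read directly from ETS; fall back to the server on a miss.
  def fetch(dataset_id, loader_fun) do
    case :ets.lookup(__MODULE__, dataset_id) do
      [{^dataset_id, rows}] -> rows
      [] -> GenServer.call(__MODULE__, {:load, dataset_id, loader_fun}, :infinity)
    end
  end

  @impl true
  def init(nil) do
    :ets.new(__MODULE__, [:named_table, :set, :protected, read_concurrency: true])
    {:ok, %{}}
  end

  @impl true
  def handle_call({:load, dataset_id, loader_fun}, _from, state) do
    # Re-check inside the server so concurrent callers trigger only one load.
    rows =
      case :ets.lookup(__MODULE__, dataset_id) do
        [{^dataset_id, cached}] ->
          cached

        [] ->
          rows = loader_fun.()
          :ets.insert(__MODULE__, {dataset_id, rows})
          rows
      end

    {:reply, rows, state}
  end
end
```

After `DatasetCache.start_link([])`, a call such as `DatasetCache.fetch("mmlu-dev", fn -> load_from_disk() end)` (loader hypothetical) pays the load cost once per dataset id.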
Fairness and bias detection library for Elixir AI/ML systems
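One standard check such a library would include is demographic parity: compare the positive-outcome rate across groups and report the largest gap. A sketch of that metric only, with illustrative field names (`:group`, `:positive?`):

```elixir
defmodule ParityGap do
  # Positive-outcome rate per group, plus the spread between extremes.
  def gap(rows, group_key \\ :group, outcome_key \\ :positive?) do
    rates =
      rows
      |> Enum.group_by(& &1[group_key])
      |> Map.new(fn {group, members} ->
        positives = Enum.count(members, & &1[outcome_key])
        {group, positives / length(members)}
      end)

    {min_rate, max_rate} = Enum.min_max(Map.values(rates))
    %{rates: rates, parity_gap: max_rate - min_rate}
  end
end
```

A `parity_gap` near zero means groups receive positive outcomes at similar rates; a large gap flags the model for closer inspection.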
Adversarial testing and robustness evaluation for the Crucible framework
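A basic robustness probe in this vein re-queries the model on cheaply perturbed prompts and measures how often the answer flips. A toy sketch of the idea, not the framework's real attack suite; `model_fun` is any prompt-to-answer function:

```elixir
defmodule PerturbEval do
  # Fraction of perturbed prompts whose answer differs from the baseline.
  def stability(model_fun, prompt, n \\ 10) do
    baseline = model_fun.(prompt)

    flips =
      1..n
      |> Enum.map(fn _ -> model_fun.(perturb(prompt)) end)
      |> Enum.count(&(&1 != baseline))

    %{baseline: baseline, flip_rate: flips / n}
  end

  # Toy perturbations: random casing, doubled whitespace, word shuffle.
  defp perturb(prompt) do
    case Enum.random([:case, :space, :shuffle]) do
      :case -> String.upcase(prompt)
      :space -> String.replace(prompt, " ", "  ")
      :shuffle -> prompt |> String.split() |> Enum.shuffle() |> Enum.join(" ")
    end
  end
end
```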
CrucibleFramework: A scientific platform for LLM reliability research on the BEAM
Statistical testing and analysis framework for AI research
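A workhorse test when comparing two LLM configurations on the same benchmark is a percentile-bootstrap confidence interval for the difference in mean scores: resample each score list with replacement many times and read the interval off the sorted differences. A self-contained sketch of that standard method, independent of this repo's API:

```elixir
defmodule BootstrapCI do
  # Percentile-bootstrap CI for the difference in mean scores of two systems.
  def diff_ci(scores_a, scores_b, n_resamples \\ 2_000, alpha \\ 0.05) do
    diffs =
      1..n_resamples
      |> Enum.map(fn _ -> mean(resample(scores_a)) - mean(resample(scores_b)) end)
      |> Enum.sort()

    lo = Enum.at(diffs, floor(alpha / 2 * n_resamples))
    hi = Enum.at(diffs, floor((1 - alpha / 2) * n_resamples) - 1)

    %{observed: mean(scores_a) - mean(scores_b), ci: {lo, hi}, alpha: alpha}
  end

  # Draw a same-size sample with replacement.
  defp resample(xs), do: Enum.map(xs, fn _ -> Enum.random(xs) end)
  defp mean(xs), do: Enum.sum(xs) / length(xs)
end
```

If the interval excludes zero, the observed gap between systems is unlikely to be resampling noise at the chosen `alpha`.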
Multi-model ensemble voting strategies for LLM reliability
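The simplest of these strategies is majority voting: query several models concurrently, drop the ones that time out, and return the most frequent answer with its agreement ratio. A minimal sketch of that strategy, not this repo's implementation; `model_funs` is a list of zero-arity functions, one per model:

```elixir
defmodule MajorityVote do
  # Query all models concurrently; timed-out models are dropped from the vote.
  def vote(model_funs, timeout_ms \\ 10_000) do
    answers =
      model_funs
      |> Task.async_stream(& &1.(), timeout: timeout_ms, on_timeout: :kill_task)
      |> Enum.flat_map(fn
        {:ok, answer} -> [answer]
        {:exit, _reason} -> []
      end)

    case Enum.frequencies(answers) do
      freqs when map_size(freqs) == 0 ->
        {:error, :no_answers}

      freqs ->
        {winner, count} = Enum.max_by(freqs, fn {_answer, n} -> n end)
        {:ok, %{answer: winner, agreement: count / length(answers), votes: answers}}
    end
  end
end
```

The agreement ratio doubles as a cheap confidence signal: a 5-of-5 consensus is worth more than a 2-of-5 plurality, and callers can escalate low-agreement items to a stronger model or a human.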
Interactive Phoenix LiveView demonstrations of the Crucible Framework, showcasing ensemble voting, request hedging, statistical analysis, and more with mock LLMs