# bias-reduction

Here are 25 public repositories matching this topic...

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

  • Updated Oct 16, 2025
  • Python
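
The description above mentions fairness metrics without naming one. As a minimal, library-agnostic sketch (not the repository's own API), the following computes one widely used dataset-level metric, the demographic parity (statistical parity) difference, in plain NumPy; the function name and toy data are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    0 means both groups receive positive outcomes at the same rate;
    the sign shows which group is favored. Illustrative only, not the
    API of any particular fairness library.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate for group 1
    return rate_1 - rate_0

# Toy predictions: group 1 is approved more often than group 0.
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.5 = 0.25
```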

WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue in case you have any questions or a pull request if you want to contribute to the project!

  • Updated Jul 29, 2025
  • Python
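
WEFE standardizes word-embedding bias metrics such as WEAT. Rather than reproducing WEFE's own API here, the sketch below shows the underlying WEAT-style effect-size calculation on toy vectors; all vectors and set sizes are made up for illustration.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean cosine similarity to attribute set A minus to attribute set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size comparing the two target sets X and Y
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    pooled = np.std(s_X + s_Y, ddof=1)
    return (np.mean(s_X) - np.mean(s_Y)) / pooled

# Toy 2-d "embeddings": target sets X, Y and attribute sets A, B (hypothetical vectors).
rng = np.random.default_rng(0)
X = rng.normal([1.0, 0.0], 0.1, size=(5, 2))
Y = rng.normal([0.0, 1.0], 0.1, size=(5, 2))
A = rng.normal([1.0, 0.2], 0.1, size=(5, 2))
B = rng.normal([0.2, 1.0], 0.1, size=(5, 2))
print(weat_effect_size(X, Y, A, B))  # positive: X leans toward A, Y toward B
```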

Nexus.ai is a secure, vendor-neutral AI orchestration engine. It lets multiple LLMs and web search debate a question, then ranks and reconciles outputs to reduce bias and surface the best-supported answer (with citations/media). All I/O is encrypted (AES-256), every step is logged for auditability, and the project ships without API keys by default.

  • Updated Oct 22, 2025
  • TypeScript
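
The description outlines a debate-then-reconcile pipeline without specifying the ranking step. Purely as an illustration (not Nexus.ai's actual implementation, which is TypeScript), the Python sketch below ranks candidate answers by how strongly the rest of the panel agrees with each one, using token overlap as a stand-in for a real similarity or citation-support score; the model names and answers are hypothetical.

```python
def agreement_score(answer: str, others: list[str]) -> float:
    """Average Jaccard token overlap between one answer and every other
    candidate; a crude stand-in for a real similarity or citation-support model."""
    tokens = set(answer.lower().replace(".", " ").split())
    scores = []
    for other in others:
        other_tokens = set(other.lower().replace(".", " ").split())
        union = tokens | other_tokens
        scores.append(len(tokens & other_tokens) / len(union) if union else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

def reconcile(candidates: dict[str, str]) -> list[tuple[str, float]]:
    """Rank each model's answer by how well the rest of the panel agrees with it."""
    ranked = []
    for model, answer in candidates.items():
        others = [a for m, a in candidates.items() if m != model]
        ranked.append((model, agreement_score(answer, others)))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Hypothetical outputs from three models answering the same question.
candidates = {
    "model_a": "The capital of Australia is Canberra.",
    "model_b": "Canberra is the capital of Australia.",
    "model_c": "The capital of Australia is Sydney.",
}
print(reconcile(candidates))  # the two Canberra answers rank above the outlier
```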
