Sifaka is an open-source framework that adds reflection and reliability to large language model (LLM) applications.
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
A long-form article and practical framework for designing machine learning systems that warn instead of decide. Covers regimes vs. decimals, levers over labels, reversible alerts, anti-coercion UI patterns, auditability, and the "Warning Card" template, so that ML preserves human agency while staying useful under uncertainty.
A framework that makes AI research transparent, traceable, and independently verifiable.
Governance beneath the model. Custody before trust. Open for audit. Constitutional Grammar for Multi-Model AI Federations, Firmware Specification • Zero-Touch Alignment • Public Release v1.0
An ethics-first, transparent AI reasoning assistant. Built to be self-hosted by anyone.
Proof of Human Intent (PoHI) - Cryptographically verifiable human approval for AI-driven development
BLUX-cA — Clarity Agent core of the BLUX ecosystem. A constitutional, audit-driven AI helm that interprets intent, enforces governance, and routes tasks safely across models, tools, and agents.
A principal-level framework for governing AI-assisted decisions with accountability, auditability, and risk controls.
Self-auditing governance framework that turns contradictions into verifiable, adaptive intelligence.
Digital Native Institutions and the National Service Unit: a formal, falsifiable architecture for protocol-governed institutional facts and next-generation public administration.
Governance, architecture, and epistemic framework for the Aurora Workflow Orchestration ecosystem (AWO, CRI-CORE, and scientific case studies).
Not new AI, but accountable and auditable AI
Winmem keeps Solana projects alive without maintainers.
omphalOS turns strategic trade-and-technology analyses into tamper-evident "run packets" for inspector, counsel, and oversight review.
Reference implementation of the Spiral–HDAG–Coupling architecture. It combines a verifiable ledger, a tensor-based Hyperdimensional DAG, and Time Information Crystals to provide a new kind of memory layer for Machine Learning. With integrated Zero-Knowledge ML, the system enables trustworthy, auditable, and privacy-preserving AI pipelines.
An OSS, developer-focused Consent Management Platform sample that includes granular data types, proxy consent, age-specific flows, revocation/updates, auditability, and extensibility. Demo site included below.
A lightweight AI core combining fully local operation, visualized reasoning, and separation of powers. No network or external DB. State save/restore, 4D semantic distance (geo/strict), signed role cards, and a 𝒢 implementation.
Agentic RFP response system orchestrating Sales, Technical, and Pricing agents with human-in-the-loop governance for fast, auditable enterprise responses.
A governed system for translating applied AI research into auditable, decision-ready artifacts.