Experimental OpenCode-first orchestration plugin inspired by the Tang Dynasty's Three Departments and Six Ministries: draft, review, dispatch, execute, and audit.
Updated Mar 20, 2026 - TypeScript
Proof of Human Intent (PoHI) - Cryptographically verifiable human approval for AI-driven development
A long-form article and practical framework for designing machine learning systems that warn instead of decide. Covers regimes vs. decimals, levers over labels, reversible alerts, anti-coercion UI patterns, auditability, and the "Warning Card" template, so that ML preserves human agency while remaining useful under uncertainty.
Sifaka is an open-source framework that adds reflection and reliability to large language model (LLM) applications.
Lean orchestration platform for enterprise AI, where each decision costs hundreds. State machine core, human-in-the-loop (HITL) as a first-class state, corrections that accumulate. The first use case is a coding agent. Open research, early stage.
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
ADOS: Turn AI into a repeatable, auditable SDLC: ticket → spec → plan → PR → quality gates → release. Agents, templates, skills, scripts, and reference workflows.
Stop Claude Code from doing irreversible damage. Policy-gated execution + receipts so you can ship agents without sweating production.
Governance layer for human–AI collaboration: evidence boundaries, audit artifacts, and change admissibility.
Auditable transparency pack for OddsFlow: verification rules, schemas, sample logs, and versioned notes (not betting tips).
Determinism: Bit-identical outputs under identical inputs, configuration, and execution environment.
Governance beneath the model. Custody before trust. Open for audit. Constitutional Grammar for Multi-Model AI Federations, Firmware Specification • Zero-Touch Alignment • Public Release v1.0
Methodology defining structured workflow topology and reproducibility guarantees for governed research.
SMALL (Schema, Manifest, Artifact, Lineage, Lifecycle) is a formal execution state protocol that makes AI-assisted work legible, deterministic, and resumable by separating durable state from ephemeral execution.
AI that tries to show its work. Transparent, private, and easy to run yourself
Open, verifiable AI-driven football market analytics project for detecting mispriced bookmaker odds.
A neutral protocol for coordinating intent across humans and agents. Goal lifecycle, state, auditability.
🔥 Emergent intelligence in autonomous trading agents through evolutionary algorithms. Testing zero-knowledge learning in cryptocurrency markets. Where intelligence emerges rather than being designed.
Reference implementation of the Spiral–HDAG–Coupling architecture. It combines a verifiable ledger, a tensor-based Hyperdimensional DAG, and Time Information Crystals to provide a new kind of memory layer for Machine Learning. With integrated Zero-Knowledge ML, the system enables trustworthy, auditable, and privacy-preserving AI pipelines.
Not new AI, but accountable and auditable AI