A cloud native Identity Aware Proxy and Access Control Decision service
Python SDK for AI agent governance - audit trails, policy enforcement, quantum-safe signatures. Works with LangChain, CrewAI, MCP.
Data policy IN, dynamic view OUT: PACE is the Policy As Code Engine. It lets you programmatically create and apply a data policy to a processing platform like Databricks, Snowflake, or BigQuery (or plain ol' Postgres, even!), with definitions imported from Collibra, DataHub, ODD, and the like.
Deterministic governance for AI coding agents. Cedar-based policy engine that intercepts every agent action and evaluates it against deterministic rules before execution.
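The core idea this entry describes — intercepting every agent action and checking it against deterministic rules before execution — can be sketched as follows. This is an illustrative pattern only; the `Action` type, `RULES` list, and `authorize` function are assumptions, not this project's actual Cedar-based API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    tool: str    # e.g. "shell", "file_write"
    target: str  # e.g. a command line or file path

# Hypothetical deterministic rules: each returns True to allow the action.
RULES = [
    lambda a: a.tool != "shell" or not a.target.startswith("rm "),
    lambda a: not a.target.startswith("/etc/"),
]

def authorize(action: Action) -> bool:
    """Every rule must pass; the same input always yields the same decision."""
    return all(rule(action) for rule in RULES)

print(authorize(Action("file_write", "src/main.py")))  # allowed
print(authorize(Action("shell", "rm -rf /tmp/x")))     # blocked
```

Because the rules are pure functions of the action, decisions are reproducible and auditable, which is what "deterministic governance" buys you over an LLM-judged gate.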
Agent Identity Protocol - Zero-trust security layer for AI agents. Policy enforcement proxy for MCP with Human-in-the-Loop approval, DLP scanning, and audit logging.
Liquefy is a local-first OpenClaw vault system for packing, verifying, searching, and auditing agent runs with domain-aware compression, policy/redaction, and proof artifacts.
A CLI tool for managing GitHub Actions workflows
Headless, OpenAI-compatible AI gateway in Go. Multi-tenant auth, tracing, cost tracking, rate limits, and optional PII redaction. Single binary, self-hosted, audit-ready by design.
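Per-tenant rate limiting of the kind this gateway advertises is commonly implemented as a token bucket. A minimal sketch under that assumption (the class and parameter names are illustrative, not this project's Go code):

```python
import time

class TokenBucket:
    """Allow up to `rate` requests/second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=0.1, capacity=2)
results = [bucket.allow() for _ in range(4)]  # burst of 4 back-to-back calls
print(results)  # → [True, True, False, False]
```

A gateway would keep one bucket per tenant key and return HTTP 429 when `allow()` is False.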
Deterministic execution authorization for AI agents
Stop Claude Code from doing irreversible damage. Policy-gated execution + receipts so you can ship agents without sweating production.
System-level security for LLM agents: fine-grained policy enforcement on tool calls to defend against indirect prompt injection
Cordon provides the guardrails for corporate teams adopting code generation tools.
MoralStack is a governance and safety layer for LLM applications. It analyzes user requests before generation, evaluates risk and intent, and decides whether the AI should answer normally, answer safely, or refuse. The goal is to make AI systems more auditable, controllable, and reliable in sensitive or regulated contexts.
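The three-way decision described above (answer normally, answer safely, or refuse) maps naturally onto a pre-generation triage step. A hypothetical sketch, not MoralStack's actual implementation — the keyword lists stand in for whatever risk and intent classifiers a real system would use:

```python
def triage(request: str) -> str:
    """Classify a request before generation and pick a handling mode."""
    # Hypothetical risk signals; a real system would use trained classifiers.
    high_risk = ("synthesize a pathogen", "build a weapon")
    sensitive = ("medical", "legal", "financial")

    text = request.lower()
    if any(term in text for term in high_risk):
        return "refuse"
    if any(term in text for term in sensitive):
        return "answer_safely"  # e.g. add disclaimers, constrain the answer
    return "answer_normally"

print(triage("What is a binary search?"))   # answer_normally
print(triage("Can I get medical advice?"))  # answer_safely
```

Routing every request through a step like this is what makes the downstream behavior auditable: the decision and its inputs can be logged independently of the generated text.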
Ensuring data usage control on real-time analytics in the FIWARE context
HELM OSS — Open-source core of the HELM Autonomous OS. Policy enforcement, kernel runtime, proof graphs, and audit infrastructure.
🛡️ Community-built integrations, SDKs, and tools for APort - the neutral trust rail for AI agents. Join Hacktoberfest 2025!
AI agent security and governance platform for the full lifecycle. Scan before you ship. Govern and block at runtime. No Azure required.
Runtime governance, risk, and compliance for AI agents
Artificial Intelligence Regulation Interface & Agreements
A governed coding agent CLI built on Claude Code. Runs inside an isolated Docker container with operator-controlled policy baked into the image, per-tool-call audit logging, and QA gates enforced before every session is accepted.