Deterministic decision gate for AI/ML systems. Risk-Gate enforces strict, schema-driven admissibility boundaries between AI/LLM intent and real system actions. It provides a fixed, human-owned decision structure with deterministic allow/block outcomes, explicit audit logging, and environment-specific policy via configuration — no ML, no heuristics.

License

Notifications You must be signed in to change notification settings

fractal360/risk-gate-api

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

10 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Risk Gate API

Why This Exists

Modern systems increasingly place AI or automated components in positions where they can propose actions with real-world impact. What is often missing is a hard, deterministic authority boundary that decides whether an action is even admissible before any optimisation, policy evaluation, or learning takes place.

Risk Gate exists to demonstrate that boundary.

It is intentionally narrow in scope:

AI may propose actions.
This system decides whether those actions are allowed to exist at all.

No inference. No scoring. No optimisation. No learning. If something is not explicitly recognised and permitted, it is blocked by construction.

This project exists as a portfolio-grade demonstration of senior system ownership, governance thinking, and architectural restraint.


What This Demonstrates

This system is designed to evidence the following capabilities:

  • Clear separation of authority layers (schema → decision → policy)
  • Deterministic, code-owned decision making
  • Defensive API design using strict schema enforcement
  • Infrastructure ownership using Terraform as the sole source of truth
  • Operational clarity over feature richness
  • Explicit handling of failure modes and omissions

It deliberately avoids features that would obscure those goals.


Core Invariants (Non-Negotiable)

  • Decisions are fully deterministic
  • All admissible inputs are explicitly enumerated
  • Unknown or invalid values are rejected at the schema layer
  • The decision layer contains no heuristics, ML, or policy tuning
  • Every request produces exactly one audit log entry

These invariants are enforced in code and are not configurable at runtime.
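The shape of these invariants can be sketched in a few lines. This is illustrative only; names such as Action, TrustTier, and decide are assumptions for the sketch, not the repository's actual code:

```python
from enum import Enum

class TrustTier(str, Enum):
    TRUSTED = "trusted"
    UNTRUSTED = "untrusted"

class Action(str, Enum):
    READ = "read"
    WRITE = "write"

# All admissible combinations are explicitly enumerated; anything
# absent from this set is blocked by construction.
ADMISSIBLE = {
    (TrustTier.TRUSTED, Action.READ),
}

def decide(tier: TrustTier, action: Action) -> str:
    # A pure function of its inputs: no I/O, no randomness, no state,
    # so the same validated input always yields the same decision.
    return "ALLOW" if (tier, action) in ADMISSIBLE else "BLOCK"
```

Because the decision is a lookup into a fixed, explicit set, there is nothing to tune and nothing to drift at runtime.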


What This System Does

  • Exposes a single decision endpoint: POST /evaluate
  • Accepts structured inputs describing:
    • caller class
    • trust tier
    • action type
    • resource type
    • execution environment
  • Validates inputs using strict Pydantic enums
  • Rejects invalid inputs with HTTP 422 before decision logic runs
  • Applies deterministic decision rules
  • Returns one of:
    • ALLOW
    • BLOCK
  • Emits one structured JSON log line per request
  • Generates and returns a request_id (UUID) for traceability
  • Exposes a health endpoint for infrastructure checks
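The repository validates inputs with strict Pydantic enums; the rejection behaviour can be sketched with stdlib enums alone (a hypothetical validate helper, not the repository's actual model):

```python
from enum import Enum

class Environment(str, Enum):
    PRODUCTION = "production"
    STAGING = "staging"

class Action(str, Enum):
    READ = "read"
    WRITE = "write"

def validate(payload: dict) -> dict:
    # Enum construction raises ValueError for unknown values; the API
    # layer translates that into HTTP 422 before any decision logic runs.
    return {
        "environment": Environment(payload["environment"]),
        "action": Action(payload["action"]),
    }

try:
    validate({"environment": "laptop", "action": "read"})
except ValueError:
    print("rejected at the schema layer -> HTTP 422")
```

The key property is ordering: validation failures terminate the request before the decision layer is ever reached.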

What This System Explicitly Does Not Do

These exclusions are intentional and enforced:

  • ❌ No machine learning
  • ❌ No heuristics or scoring
  • ❌ No intent inference
  • ❌ No inspection of arbitrary payloads
  • ❌ No optimisation logic
  • ❌ No dynamic or runtime-editable policies

If an action or resource is not explicitly defined, it is blocked by default.


Architectural Principles

Determinism

Given the same validated input, the system will always return the same decision.
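This property is directly testable. A sketch, with decide as a hypothetical stand-in for the real decision function:

```python
def decide(caller_class: str, action: str) -> str:
    # Stand-in for the decision function: a pure lookup over an
    # explicit rule table, with BLOCK as the default for anything
    # not enumerated.
    rules = {("system", "read"): "ALLOW"}
    return rules.get((caller_class, action), "BLOCK")

# Repeated evaluation of the same input can only ever yield one decision.
results = {decide("system", "read") for _ in range(100)}
assert results == {"ALLOW"}
```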

Authority Separation

  • Schema layer: defines what is representable
  • Decision layer: defines what is admissible
  • Policy layer (future): may define who is allowed, never what exists

Auditability

Each request produces:

  • a unique request ID
  • a decision (ALLOW or BLOCK)
  • a machine-readable reason code
  • a human-readable reason
  • a UTC timestamp
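One way to emit such an entry as a single structured JSON line (a sketch; field names beyond those listed above are assumptions):

```python
import json
import uuid
from datetime import datetime, timezone

def audit_entry(decision: str, reason_code: str, reason: str) -> str:
    # Exactly one machine-readable log line per request.
    return json.dumps({
        "request_id": str(uuid.uuid4()),
        "decision": decision,
        "reason_code": reason_code,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

print(audit_entry("BLOCK", "UNKNOWN_RESOURCE", "Resource not enumerated"))
```

Emitting the entry as one JSON line keeps the audit trail trivially parseable by log pipelines.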

API Overview

Decision Endpoint

POST /evaluate

Behaviour

  • Invalid enum values → HTTP 422
  • Valid request → HTTP 200 with deterministic decision

Example Request

The following example shows a minimal, valid request.
It is illustrative only — correctness is established by unit tests.

curl -X POST http://<alb_dns_name>/evaluate \
  -H "Content-Type: application/json" \
  -d '{
    "caller_class": "system",
    "trust_tier": "trusted",
    "action": "read",
    "resource": "config",
    "environment": "production"
  }'

A successful request returns a deterministic decision (ALLOW or BLOCK) along with a reason code and a generated request_id.
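An illustrative response shape, built from the fields listed under Auditability (the field values here are invented for illustration; only the field names are established by this document):

```json
{
  "request_id": "generated-uuid",
  "decision": "ALLOW",
  "reason_code": "EXAMPLE_REASON_CODE",
  "reason": "human-readable explanation",
  "timestamp": "2025-01-01T00:00:00+00:00"
}
```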

Health Endpoint

GET /health

Used exclusively by the load balancer target group for health checks.


Deployment Model

  • FastAPI application, containerised
  • Runs on AWS ECS Fargate
  • Single ECS service, desired count = 1
  • Exposed only via an Application Load Balancer (HTTP)
  • No direct public access to tasks
  • Infrastructure defined and owned via Terraform

The ALB is the only external entry point.


Infrastructure Notes

  • Region: eu-west-2
  • Default VPC is used intentionally for clarity
  • Tasks currently require outbound internet access to pull images from ECR
  • assign_public_ip = true is enabled as a deliberate, documented trade-off

Planned hardening (explicitly deferred):

  • NAT gateway or VPC endpoints for ECR
  • TLS termination
  • WAF integration

Repository Structure

app/        Application logic
infra/      Terraform infrastructure
requirements.txt
README.md

Reviewer Orientation

This repository is intentionally small and opinionated.

If you are reviewing this as a hiring manager or senior engineer:

  • Start with Why This Exists and Core Invariants to understand the authority boundaries being demonstrated
  • Treat the Terraform in infra/ as the authoritative description of how the system is deployed
  • Read omitted features (TLS, autoscaling, policy layer) as explicit non-goals, not missing work

This project is designed to demonstrate deterministic control, governance, and system ownership, not feature completeness.

Status

v1 — Deterministic decision layer and infrastructure complete

Deferred by design:

  • Policy layer
  • Autoscaling
  • TLS / HTTPS
  • Network hardening beyond minimum viability
  • Monitoring and alerting

These are omitted to preserve architectural clarity for this demonstration.
