Description
Track
Reasoning Agents (Azure AI Foundry)
Project Name
terraform-guardian-agents-league
GitHub Username
fleivac101
Repository URL
Project Description
Terraform Guardian is a multi-agent AI governance system designed to enhance DevSecOps practices for Infrastructure as Code (IaC) environments.
The project addresses a critical challenge in modern cloud environments: insecure, misconfigured, or non-compliant Terraform deployments reaching production without proper governance controls.
Terraform Guardian uses a structured multi-agent reasoning architecture to simulate an enterprise-grade AI governance layer:
• Structural Analysis Agent – Extracts resources and identifies configuration patterns from Terraform files.
• Security Policy Agent – Detects misconfigurations such as disabled HTTPS, weak TLS, public exposure, and governance violations.
• Remediation Architect Agent – Automatically generates improved and secure Terraform code following best practices.
• Executive Risk Agent – Produces an executive-level risk score and governance summary for decision-makers.
The system demonstrates how AI agents can collaborate to analyze, secure, remediate, and communicate infrastructure risks in a clear and structured way.
Terraform Guardian showcases the potential of multi-agent systems for automated DevSecOps governance, bridging the gap between technical infrastructure validation and executive risk visibility.
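The agent hand-off described above can be sketched as a simple sequential pipeline. This is a minimal illustration, not the project's actual implementation: each agent is a stub standing in for an Azure OpenAI call, and the resource parsing and policy table are placeholders.

```python
# Sketch of the four-agent pipeline: structure -> security -> remediation -> executive.
# All internals are illustrative stubs; the real agents would prompt GPT-4o-mini.

def structural_analysis_agent(terraform_source: str) -> dict:
    """Extract resource types from Terraform text (naive line-based stub)."""
    resources = [line.split('"')[1]
                 for line in terraform_source.splitlines()
                 if line.strip().startswith("resource ")]
    return {"resources": resources}

def security_policy_agent(structure: dict) -> dict:
    """Flag resource types against a (hypothetical) policy table."""
    risky = {"azurerm_storage_account": "HTTPS-only traffic not enforced"}
    findings = [{"resource": r, "issue": risky[r]}
                for r in structure["resources"] if r in risky]
    return {"findings": findings}

def remediation_architect_agent(findings: dict) -> dict:
    """Propose fixes for each finding (stubbed remediation text)."""
    return {"patches": [f"enable https_traffic_only on {f['resource']}"
                        for f in findings["findings"]]}

def executive_risk_agent(findings: dict) -> dict:
    """Roll findings up into a 0-100 risk score for decision-makers."""
    score = min(100, 25 * len(findings["findings"]))
    return {"risk_score": score,
            "summary": f"{len(findings['findings'])} security finding(s) detected"}

def run_pipeline(terraform_source: str) -> dict:
    structure = structural_analysis_agent(terraform_source)
    findings = security_policy_agent(structure)
    return {"structure": structure,
            "findings": findings,
            "remediation": remediation_architect_agent(findings),
            "executive": executive_risk_agent(findings)}
```

The key design point the sketch preserves is that each agent consumes the previous agent's structured output rather than raw free text, which keeps the hand-offs machine-checkable.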
Demo Video or Screenshots
Primary Programming Language
Python
Key Technologies Used
- Python 3.11
- Azure OpenAI (GPT-4o-mini)
- Azure AI Foundry
- Multi-Agent Reasoning Architecture
- Terraform (IaC Governance)
- DevSecOps Governance Patterns
- JSON Executive Risk Scoring Engine
- GitHub Repository Integration
Submission Type
Individual
Team Members
- fleivac101 – AI & DevSecOps Architecture
Submission Requirements
- My project meets the track-specific challenge requirements
- My repository includes a comprehensive README.md with setup instructions
- My code does not contain hardcoded API keys or secrets
- I have included demo materials (video or screenshots)
- My project is my own work with proper attribution for any third-party code
- I agree to the Code of Conduct
- I have read and agree to the Disclaimer
- My submission does NOT contain any confidential, proprietary, or sensitive information
- I confirm I have the rights to submit this content and grant the necessary licenses
Quick Setup Summary
- Clone the repository
- Create a Python virtual environment (python -m venv .venv)
- Activate the virtual environment
- Install dependencies: pip install -r requirements.txt
- Configure environment variables based on .env.example
- Place your Terraform file (sample_insecure.tf) in the project root
- Run: python main.py
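The setup list references a .env.example but does not enumerate its variables. For an Azure OpenAI-backed project it would plausibly look like the fragment below; every variable name and value here is an assumption for illustration, not the repository's actual contract.

```
# Hypothetical .env.example — names and values are illustrative placeholders
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-key-here>
AZURE_OPENAI_DEPLOYMENT=gpt-4o-mini
```

Keeping these in a gitignored .env (and only the template in .env.example) is what satisfies the "no hardcoded API keys or secrets" requirement above.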
Technical Highlights
- Implemented multi-agent orchestration pattern to simulate enterprise-grade AI governance.
- Applied role specialization to reduce hallucination and improve reasoning consistency.
- Enforced structured output contracts for predictable machine-readable governance reports.
- Designed executive-level risk scoring model for non-technical stakeholders.
- Created modular Terraform ingestion logic to support scalable IaC analysis.
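The "structured output contracts" highlight can be illustrated with a small validator: before one agent's JSON report is consumed downstream, it is checked against a required shape, so a malformed LLM response fails loudly instead of propagating. The field names below are assumptions for illustration.

```python
import json

# Required shape of the executive risk report; field names are illustrative.
REQUIRED_FIELDS = {"risk_score": int, "risk_level": str, "findings": list}

def parse_risk_report(raw: str) -> dict:
    """Parse an agent's JSON output and enforce the output contract."""
    report = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in report:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(report[field], expected_type):
            raise ValueError(f"{field} must be {expected_type.__name__}")
    if not 0 <= report["risk_score"] <= 100:
        raise ValueError("risk_score must be between 0 and 100")
    return report
```

This kind of gate is one concrete way to get the "predictable machine-readable governance reports" the highlight describes.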
Challenges & Learnings
One of the main challenges was designing a multi-agent system that produces coherent and structured outputs instead of fragmented LLM responses.
I learned that defining clear agent roles and responsibility boundaries significantly improves reasoning quality and output consistency.
Another key challenge was translating technical IaC findings into executive-level risk language. This required careful prompt engineering to balance technical depth and governance clarity.
This project reinforced the importance of structured prompting, agent orchestration, and deterministic output design in enterprise AI systems.
Contact Information
Country/Region
Costa Rica