A powerful, AI-driven tool for analyzing Terraform Plan JSON files to identify security vulnerabilities and generate actionable recommendations. Built for DevSecOps workflows, it leverages Google Gemini AI to provide intelligent insights into cloud infrastructure configurations, ensuring compliance with best practices like CIS Benchmarks.
Infrastructure as Code (IaC) tools like Terraform enable rapid deployment of cloud resources, but misconfigurations can introduce critical security risks.
Cloud Security Analyzer addresses this by applying AI-driven contextual analysis to Terraform Plan (JSON) files before deployment, allowing teams to detect and remediate risks proactively.
It provides:
- Static Analysis: Scans Terraform plans for vulnerabilities without executing changes.
- AI-Powered Insights: Detects nuanced security issues beyond traditional rule-based checks.
- Comprehensive Reporting: Generates structured JSON output and executive-ready HTML reports with severity scores, risk assessments, and remediation guidance.
- CI/CD Integration: Slots into pipelines for automated infrastructure security reviews.
This tool is particularly valuable for teams deploying to AWS (with roadmap support for Azure and GCP), helping prevent breaches caused by exposed databases, overly permissive security groups, unencrypted storage, and other misconfigurations.
- Prevent security misconfigurations before deployment
- AI-enhanced contextual risk detection
- Executive-ready HTML and JSON reports
- CI/CD native integration
- Built for AWS (Azure & GCP roadmap)
- Designed for scalable DevSecOps environments
- Input: Provide a Terraform Plan JSON file, generated via:

  ```bash
  terraform plan -out=tfplan.binary && terraform show -json tfplan.binary > plan.json
  ```

- Analysis: The tool sends the plan to Google Gemini AI, guided by a specialized prompt template, to identify vulnerabilities, assign severities (CRITICAL, HIGH, MEDIUM, LOW), and suggest fixes.
- Output:
- JSON Report: Structured data for programmatic consumption or further processing.
- HTML Report: Interactive, Tailwind CSS-styled dashboard with executive summaries, vulnerability cards, code recommendations, and references to AWS documentation.
- Metadata Enrichment: Automatically captures execution context like branch, timestamp, Terraform version, and environment for traceability.
The process is fast, typically completing in seconds, and supports both local execution and containerized runs.
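The "Input" step above can be sketched in a few lines. This is an illustrative example, not the tool's actual API: the function name and filtering logic are assumptions, though `resource_changes` is the documented key in Terraform's plan JSON representation.

```python
import json
from pathlib import Path

def load_resource_changes(plan_path: str) -> list[dict]:
    """Load a plan.json produced by `terraform show -json` and return
    only the resources the plan will actually create or update."""
    plan = json.loads(Path(plan_path).read_text())
    return [
        rc for rc in plan.get("resource_changes", [])
        if {"create", "update"} & set(rc.get("change", {}).get("actions", []))
    ]
```

Filtering out no-op changes keeps the AI focused on what the plan will actually modify, which also reduces token usage.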
The project follows a modular Python architecture for maintainability and extensibility:
- `core/`: Core logic, including the `TerraformAnalyzer` class that interfaces with Google Gemini AI for analysis.
- `data/`: Data-handling modules for loading Terraform plans and prompts.
- `reports/`: Report generation, featuring Jinja2 templates for HTML rendering and utility functions for severity classification and formatting.
- `cli/`: Command-line interface for user interaction, path resolution, and orchestration.
Key dependencies include google-generativeai for AI integration, jinja2 for templating, and standard libraries for JSON handling. The design emphasizes separation of concerns, making it easy to extend for multi-cloud support or alternative AI models.
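As a rough sketch of this separation of concerns, the analyzer's core loop might look like the following. Everything here is hypothetical (the real `TerraformAnalyzer` signature may differ); the AI call is injected as a plain callable so the class can be exercised offline, with the injected function standing in for a `google-generativeai` wrapper.

```python
import json
from dataclasses import dataclass
from typing import Callable

SEVERITIES = ("CRITICAL", "HIGH", "MEDIUM", "LOW")

@dataclass
class TerraformAnalyzer:
    # In the real tool this would wrap google-generativeai; injecting it
    # keeps the sketch runnable without a Gemini key or network access.
    model_call: Callable[[str], str]
    prompt_template: str = (
        "Review these Terraform resource changes for security issues:\n{changes}"
    )

    def build_prompt(self, plan: dict) -> str:
        changes = plan.get("resource_changes", [])
        return self.prompt_template.format(changes=json.dumps(changes, indent=2))

    def analyze(self, plan: dict) -> list[dict]:
        raw = self.model_call(self.build_prompt(plan))
        findings = json.loads(raw)
        # Keep only findings on the documented severity scale
        return [f for f in findings if f.get("severity") in SEVERITIES]
```

Swapping the callable for a stub is also how such a class stays unit-testable without mocking the Gemini SDK itself.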
- Python 3.12+
- Google Gemini API Key (obtain from Google AI Studio)
- Terraform (for generating plan files)
1. Clone the repository:

   ```bash
   git clone https://github.com/wellingtoong/cloud-security-analyzer.git
   cd cloud-security-analyzer
   ```

2. Install dependencies:

   ```bash
   pip install -r src/requirements.txt
   ```

3. Set environment variables:

   ```bash
   export GEMINI_API_KEY="your-api-key-here"
   export GEMINI_MODEL="gemini-2.0-flash"   # Optional, defaults to this model
   export BRANCH_NAME="develop"
   export RUN_TIMESTAMP="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"
   ```

4. Generate a Terraform plan (example for AWS):

   ```bash
   cd terraform/examples/aws/vpc-network/env/dev
   terraform init
   terraform plan -out=tfplan.binary
   terraform show -json tfplan.binary > ../../plans/tfplan.json
   ```
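The environment variables above would typically be read by a small configuration layer (the role `config.py` plays in the layout below). A minimal sketch, assuming these defaults mirror the README; the real tool's handling may differ:

```python
import os

def load_config() -> dict:
    """Read analyzer settings from the environment.
    GEMINI_API_KEY is mandatory; everything else has a fallback."""
    api_key = os.environ.get("GEMINI_API_KEY")
    if not api_key:
        raise SystemExit("GEMINI_API_KEY is required")
    return {
        "api_key": api_key,
        "model": os.environ.get("GEMINI_MODEL", "gemini-2.0-flash"),
        "branch": os.environ.get("BRANCH_NAME", "unknown"),
        "timestamp": os.environ.get("RUN_TIMESTAMP", ""),
    }
```

Failing fast on a missing API key gives a clear error before any plan parsing or AI calls happen.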
Run the analyzer on a plan file:

```bash
python src/main.py terraform/examples/aws/vpc-network/artifacts/tfplan.json
```

Outputs will be generated in `reports_output/`:

- `terraform_security_report.json`
- `terraform_security_report.html`
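For programmatic consumption of the JSON report, a severity roll-up like the one below is a natural first step. This is a hypothetical helper in the spirit of the report utilities; the `severity` field name is an assumption about the report schema:

```python
from collections import Counter

SEVERITY_ORDER = ["CRITICAL", "HIGH", "MEDIUM", "LOW"]

def summarize(findings: list[dict]) -> dict:
    """Count findings per severity, in display order, for an executive summary."""
    counts = Counter(f.get("severity", "LOW") for f in findings)
    return {sev: counts.get(sev, 0) for sev in SEVERITY_ORDER}

findings = [
    {"id": "SG-001", "severity": "CRITICAL"},
    {"id": "S3-002", "severity": "HIGH"},
    {"id": "S3-003", "severity": "HIGH"},
]
print(summarize(findings))  # {'CRITICAL': 1, 'HIGH': 2, 'MEDIUM': 0, 'LOW': 0}
```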
1. Build the image:

   ```bash
   docker build -t cloud-security-analyzer .
   ```

2. Run the container:

   ```bash
   docker run --rm \
     -e GEMINI_API_KEY="your-api-key-here" \
     -e BRANCH_NAME \
     -e RUN_TIMESTAMP \
     -e ENVIRONMENT="dev" \
     -v "$(pwd):/app" \
     cloud-security-analyzer \
     terraform/examples/aws/vpc-network/artifacts/tfplan.json
   ```
Integrate into your workflow for automated analysis. Example `.github/workflows/security-analysis.yml` (the `TF_DIR` value below points at the example configuration from this repository; adjust it to your own Terraform directory):

```yaml
name: Security Analysis

on:
  pull_request:
    paths:
      - 'terraform/**'

jobs:
  terraform_plan:
    runs-on: ubuntu-latest
    env:
      # Directory containing the Terraform configuration to analyze
      TF_DIR: terraform/examples/aws/vpc-network/env/dev
    services:
      localstack:
        image: localstack/localstack:latest
        ports:
          - 4566:4566
        env:
          SERVICES: s3,iam,ec2,sts,logs,events,cloudwatch,lambda,apigateway,ecs,elasticloadbalancing
          DEBUG: "0"
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform init/validate/plan against LocalStack
        working-directory: ${{ env.TF_DIR }}
        env:
          AWS_ACCESS_KEY_ID: test
          AWS_SECRET_ACCESS_KEY: test
          AWS_DEFAULT_REGION: us-east-1
          AWS_ENDPOINT_URL: http://localhost:4566
          AWS_EC2_METADATA_DISABLED: "true"
        run: |
          terraform init -input=false -no-color -backend=false
          terraform validate -no-color
          terraform plan -input=false -no-color -refresh=false -out=tfplan.binary
          terraform show -json tfplan.binary > plan.json
      - name: Upload plan.json artifact
        uses: actions/upload-artifact@v4
        with:
          name: terraform-plan-json
          path: ${{ env.TF_DIR }}/plan.json
          if-no-files-found: error

  analyze_plan:
    runs-on: ubuntu-latest
    needs: terraform_plan
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: terraform-plan-json
          path: artifacts_in
      - run: mkdir -p reports_output
      - name: Prepare execution metadata
        id: metadata
        run: |
          echo "BRANCH_NAME=${GITHUB_REF_NAME}" >> "$GITHUB_ENV"
          echo "RUN_TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")" >> "$GITHUB_ENV"
      - name: Run analyzer (Docker)
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
        run: |
          docker run --rm \
            -e GEMINI_API_KEY="$GEMINI_API_KEY" \
            -e BRANCH_NAME="$BRANCH_NAME" \
            -e RUN_TIMESTAMP="$RUN_TIMESTAMP" \
            -e ENVIRONMENT="develop" \
            -v "$GITHUB_WORKSPACE:/repo" \
            -v "$GITHUB_WORKSPACE/reports_output:/app/reports_output" \
            -w /repo \
            wellingtoong/cloud-security-analyzer:1.0.1 \
            artifacts_in/plan.json
      - name: Upload HTML report artifact
        uses: actions/upload-artifact@v4
        with:
          name: terraform-security-report-html
          path: reports_output/terraform_security_report.html
          if-no-files-found: error
```

This pipeline generates plans, runs analysis, and publishes HTML reports as artifacts for review.
```
cloud-security-analyzer/
├── .github/
│   └── workflows/                  # CI/CD pipelines
├── src/
│   ├── main.py                     # Entry point
│   ├── requirements.txt            # Python dependencies
│   └── terraform_analyzer/
│       ├── cli.py                  # CLI logic
│       ├── config.py               # Configuration management
│       ├── core/
│       │   ├── execution_metadata.py  # Metadata utility
│       │   └── analyzer.py         # AI analysis engine
│       ├── data/
│       │   ├── plan_loader.py      # Terraform plan loading
│       │   └── prompt_loader.py    # Prompt template handling
│       └── reports/
│           ├── html_renderer.py    # HTML generation
│           └── utils.py            # Report utilities
├── reports/
│   ├── templates/                  # Jinja2 templates
│   └── examples/                   # Sample reports
├── terraform/
│   └── examples/                   # Terraform configurations for testing
├── prompts/                        # AI prompt templates
├── tests/                          # Unit tests
├── Dockerfile                      # Container definition
└── README.md                       # This file
```
The items below represent potential roadmap initiatives and planned evolutions of the platform.
They reflect strategic directions and may be progressively implemented in future releases.
- Support for CIS, NIST, ISO 27001 and SOC2 mappings
- Automatic control mapping by compliance framework
- Aggregated security score per environment
- SARIF export for GitHub Advanced Security
- Integration with AWS Security Hub
- Full Azure and GCP support
- Automatic provider detection from Terraform plan
- Cloud-specific AI prompt templates
- Cross-cloud severity normalization
- Smart caching for repeated analyses
- Incremental analysis (plan diff only)
- False-positive classification layer
- Explainability: "why is this a risk?"
- Full Terraform remediation snippet generation
- Official GitHub Action release
- Official Docker image release
- GitLab CI integration
- Azure DevOps integration
- Slack / Microsoft Teams webhook notifications
- Web dashboard (SaaS mode)
- Historical analysis per branch
- Baseline vs current comparison
- Security maturity metrics
- Dynamic security score badge
- Parallel processing for large plans
- Support for very large plans (>50MB)
- JSON chunking for AI token optimization
- Streaming-based AI analysis
- Multi-tenant architecture
- RBAC for report access
- Structured logs for SIEM ingestion
- Public REST API for integrations
- Custom policy-as-code validation
The limitations listed below represent current technical and architectural constraints.
Some of these constraints may be addressed and improved in future releases as the platform evolves.
- Requires a valid Google Gemini API key
- Results are probabilistic (not deterministic)
- Possible false positives and false negatives
- Requires internet connectivity for AI analysis
- Azure and GCP require valid credentials for `terraform plan`
- Fully offline mode currently supported only for AWS (via LocalStack)
- Limited support for newly released provider resources
- Focused exclusively on Terraform
- Does not support CloudFormation, Pulumi or ARM templates
- No runtime (dynamic) security analysis
- Does not replace traditional SAST/DAST scanners
- Very large plans may increase analysis latency
- AI token usage may generate cost depending on plan size
- No built-in rate limiting or quota management yet
- Does not automatically block deployments (advisory analysis only)
- No native SIEM integration yet
- No persistent historical storage outside generated reports
We welcome contributions! Please follow these steps:
- Fork the repository and create a feature branch.
- Ensure code adheres to PEP 8 and includes tests.
- Run tests:

  ```bash
  pytest tests/
  ```

- Submit a pull request with a clear description of changes.
For issues or feature requests, use the GitHub Issues page.
This project is licensed under the MIT License. See LICENSE for details.