This Terraform configuration provides a complete AWS Config setup with dynamic Lambda-based custom rules. It automatically discovers Python rule files and creates the necessary AWS resources for compliance monitoring.
The solution creates:
- AWS Config: Configuration recorder and delivery channel
- S3 Bucket: Stores configuration snapshots and history
- SNS Topic: Sends compliance notifications
- Lambda Functions: Custom Config rules (both RDK and Boto3 based)
- Lambda Layer: RDKLib dependencies for RDK-based rules
- IAM Roles: Proper permissions for all components
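The root `main.tf` wires these components together as child modules (the directory layout follows below). A minimal sketch of that wiring, assuming illustrative argument and output names rather than the modules' real interfaces:

```hcl
# Sketch only: module sources follow the modules/ directory shown below;
# every argument and output name here is an assumption, not the real interface.
module "config_bucket" {
  source      = "./modules/s3"
  bucket_name = var.bucket_name
}

module "notifications" {
  source          = "./modules/sns"
  email_addresses = var.sns_email_addresses
}

module "config" {
  source         = "./modules/config"
  s3_bucket_name = module.config_bucket.bucket_name
  sns_topic_arn  = module.notifications.topic_arn
}
```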
Project structure:

    ├── modules/
    │   ├── config/               # AWS Config setup
    │   ├── iam/                  # IAM roles and policies
    │   ├── lambda_config_rule/   # Lambda function module
    │   ├── rdklib_layer/         # Lambda layer with RDK dependencies
    │   ├── s3/                   # S3 bucket for Config
    │   └── sns/                  # SNS notifications
    ├── rules/
    │   └── s3/                   # Rule files organized by service
    │       ├── s3_rdk.py         # RDK-based rule
    │       └── s3_boto3.py       # Boto3-based rule
    ├── temp/                     # Generated zip files
    ├── locals.tf                 # Dynamic rule discovery
    ├── main.tf                   # Main configuration
    ├── variables.tf              # Input variables
    ├── terraform.tfvars          # Variable values
    └── outputs.tf                # Output values
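The dynamic rule discovery noted for `locals.tf` above could work roughly like the sketch below, scanning `rules/<service>/*.py` with `fileset()`. Local names such as `rule_files` and `discovered_rules` are assumptions for illustration, not necessarily the actual locals used:

```hcl
locals {
  # Services whose rules/<service>/ directories are scanned
  rule_directories = ["s3"]

  # One object per Python rule file found under rules/
  rule_files = flatten([
    for dir in local.rule_directories : [
      for file in fileset("${path.module}/rules/${dir}", "*.py") : {
        service   = dir
        file_name = file
        rule_name = "${dir}-${trimsuffix(file, ".py")}"
        is_rdk    = length(regexall("_rdk\\.py$", file)) > 0
      }
    ]
  ])

  # Keyed by rule name so it can drive for_each on the Lambda rule module
  discovered_rules = { for r in local.rule_files : r.rule_name => r }
}
```

Keying the map by `<service>-<file>` matches the key format used in `permission_map` and `resource_type_map` later in this README.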
- Framework: Uses the RDKLib framework for AWS Config rule development
- Layer: Automatically gets the RDKLib Lambda layer with all dependencies
- Configuration: Requires the `AssumeRoleMode = "False"` parameter
- Structure: Provides a standardized class-based approach with built-in evaluation methods
- Error Handling: Built-in exception handling and logging
- Example: `s3_rdk.py`
When to use RDK:
- ✅ Complex rules with multiple evaluation scenarios
- ✅ Standardized development following AWS best practices
- ✅ Built-in testing framework with RDKLib test utilities
- ✅ Consistent error handling and logging patterns
- ✅ Future-proof: maintained by AWS Labs
- ✅ Rich evaluation context with the client factory pattern
- ✅ Automatic compliance type handling
- Framework: Uses native boto3 SDK directly
- Layer: No additional layer required (boto3 included in Lambda runtime)
- Configuration: Direct AWS API calls with manual error handling
- Structure: Simple function-based approach
- Flexibility: Full control over AWS service interactions
- Example: `s3_boto3.py`
When to use Boto3:
- ✅ Simple rules with straightforward logic
- ✅ Minimal dependencies: no external layers needed
- ✅ Custom AWS service interactions not covered by RDKLib
- ✅ Performance-critical rules (no framework overhead)
- ✅ Legacy compatibility with existing boto3 code
- ✅ Fine-grained control over AWS API calls
- ✅ Smaller deployment package size
| Feature | RDK-Based | Boto3-Based |
|---|---|---|
| Learning Curve | Moderate | Low |
| Development Speed | Fast (after setup) | Fast (immediate) |
| Code Structure | Standardized | Flexible |
| Error Handling | Built-in | Manual |
| Testing | Framework provided | Custom |
| Dependencies | RDKLib layer | None |
| Package Size | Larger | Smaller |
| AWS Best Practices | Enforced | Manual |
| Maintenance | Framework updates | Self-maintained |
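One practical consequence of the Layer row above is that the Lambda module only needs to attach the RDKLib layer for `*_rdk.py` rules. A hedged sketch of how `modules/lambda_config_rule` might branch on that (variable names are assumptions):

```hcl
resource "aws_lambda_function" "config_rule" {
  function_name = var.rule_name
  runtime       = "python3.12"
  handler       = var.handler          # e.g. "s3_rdk.handler" (illustrative)
  filename      = var.zip_path         # zip generated under temp/
  role          = var.lambda_role_arn

  # Attach the RDKLib layer only for RDK-based rules; Boto3-based rules
  # rely on the boto3 SDK bundled with the Lambda runtime.
  layers = var.is_rdk ? [var.rdklib_layer_arn] : []
}
```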
- Terraform >= 1.0
- AWS CLI configured with appropriate permissions
- Python 3.12 (for local testing)
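The values set in `terraform.tfvars` below correspond to input variables; a minimal sketch of how they might be declared in `variables.tf` (types and descriptions are assumptions):

```hcl
variable "region" {
  description = "AWS region to deploy into"
  type        = string
}

variable "bucket_name" {
  description = "Name of the S3 bucket that stores Config snapshots and history"
  type        = string
}

variable "sns_email_addresses" {
  description = "Email address subscribed to the SNS compliance topic"
  type        = string
}
```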
Update `terraform.tfvars`:

    region              = "eu-west-2"
    bucket_name         = "your-config-bucket"
    sns_email_addresses = "your-email@domain.com"
    # ... other variables

Then initialize and deploy:

    terraform init
    terraform plan
    terraform apply

To add rules for a new service, create a directory for it:

    mkdir rules/ec2

Create Python files following the naming convention:
- `ec2_instance_check_rdk.py` (for RDK-based rules)
- `ec2_instance_check_boto3.py` (for Boto3-based rules)
Add the new service to `locals.tf`:

    rule_directories = ["s3", "iam", "ec2", "rds", "lambda"]

Add the required permissions to `permission_map` in `locals.tf`:

    permission_map = {
      "ec2-ec2_instance_check_rdk" = [
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceAttribute"
      ]
    }

Add the resource types to `resource_type_map`:

    resource_type_map = {
      "ec2-ec2_instance_check_rdk" = ["AWS::EC2::Instance"]
    }

Deploy the changes:

    terraform plan
    terraform apply
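How the discovered rules and these two maps might come together in `main.tf` is sketched below. `local.discovered_rules` refers to the discovery sketch earlier in this README, and the module input names are assumptions:

```hcl
module "config_rules" {
  source   = "./modules/lambda_config_rule"
  for_each = local.discovered_rules

  rule_name   = each.key
  source_file = "${path.module}/rules/${each.value.service}/${each.value.file_name}"
  is_rdk      = each.value.is_rdk

  # Fall back to empty defaults for rules without map entries
  extra_permissions = lookup(local.permission_map, each.key, [])
  resource_types    = lookup(local.resource_type_map, each.key, [])
}
```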

RDK-based rule template:

    import json
    import logging

    from rdklib import ConfigRule, Evaluator, Evaluation, ComplianceType


    class YourRuleClass(ConfigRule):
        def evaluate_change(self, event, client_factory, configuration_item, valid_rule_parameters):
            # Handle configuration changes
            pass

        def evaluate_periodic(self, event, client_factory, valid_rule_parameters):
            # Handle periodic evaluations
            pass


    def handler(event, context):
        rule = YourRuleClass()
        evaluator = Evaluator(rule, ["AWS::ResourceType"])
        return evaluator.handle(event, context)

Boto3-based rule template:

    import json
    import boto3
    import logging
    from datetime import datetime


    def handler(event, context):
        config = boto3.client("config")

        # Your evaluation logic here
        evaluations = []
        # Build the evaluations list

        config.put_evaluations(
            Evaluations=evaluations,
            ResultToken=event["resultToken"]
        )

        return {"evaluations": evaluations}

Security best practices:
- Use least privilege IAM policies
- Enable S3 bucket encryption (see the sketch after this list)
- Configure VPC endpoints for private communication
- Review and audit rule permissions regularly
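For the bucket encryption point above, the S3 module could enable default server-side encryption roughly as follows (a sketch; the bucket resource name is an assumption and the module may already include this):

```hcl
resource "aws_s3_bucket_server_side_encryption_configuration" "config" {
  bucket = aws_s3_bucket.config.id   # Config delivery bucket (assumed name)

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```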

Cost optimization:
- Set appropriate CloudWatch log retention (currently 14 days)
- Monitor Lambda execution costs
- Use Config rule scopes to limit evaluations (see the sketch after this list)
- Consider using Config Organization rules for multi-account setups
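On limiting evaluations with rule scopes: `aws_config_config_rule` supports a `scope` block, so the module can restrict each custom rule to the resource types from `resource_type_map`. A sketch with assumed variable names:

```hcl
resource "aws_config_config_rule" "custom" {
  name = var.rule_name

  source {
    owner             = "CUSTOM_LAMBDA"
    source_identifier = var.lambda_function_arn

    source_detail {
      message_type = "ConfigurationItemChangeNotification"
    }
  }

  # Evaluate only the resource types this rule targets
  scope {
    compliance_resource_types = var.resource_types
  }
}
```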

Monitoring:
- Set up CloudWatch alarms for Lambda errors (sketched after this list)
- Monitor Config compliance dashboards
- Configure SNS notifications for critical compliance failures
- Use AWS Config Insights for trend analysis
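A minimal sketch of the CloudWatch alarm idea: one alarm per rule function on the Lambda `Errors` metric, publishing to the existing SNS topic (function naming and variable names are assumptions):

```hcl
resource "aws_cloudwatch_metric_alarm" "rule_errors" {
  for_each = local.discovered_rules

  alarm_name          = "${each.key}-lambda-errors"
  namespace           = "AWS/Lambda"
  metric_name         = "Errors"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"

  dimensions = {
    FunctionName = each.key   # assumes the Lambda is named after the rule key
  }

  alarm_actions = [var.sns_topic_arn]
}
```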

Scalability:
- The solution automatically scales with new rules
- Lambda concurrency limits may need adjustment for large environments
- Consider Config aggregators for multi-region setups
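For the multi-region point, a Config aggregator can pull compliance data into one place; a sketch (account ID and regions are placeholders):

```hcl
resource "aws_config_configuration_aggregator" "multi_region" {
  name = "config-aggregator"

  account_aggregation_source {
    account_ids = ["123456789012"]            # placeholder account ID
    regions     = ["eu-west-2", "eu-west-1"]  # placeholder regions
  }
}
```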

Maintenance:
- Regularly update the RDKLib layer dependencies
- Test rules in development environment first
- Use version control for rule changes
- Document custom rule logic and requirements

Common issues:
- Lambda Import Errors
  - Ensure RDK rules use the `*_rdk.py` naming convention
  - Verify the RDKLib layer is attached to RDK rules only
- Permission Denied
  - Check IAM policies in `permission_map`
  - Verify the Config service role permissions
- Rules Not Triggering
  - Confirm resource types in `resource_type_map`
  - Check that the Config recorder is active
- Archive File Issues
  - Ensure rule directories exist under `rules/`
  - Verify Python files have the `.py` extension

Debugging tips:
- Check CloudWatch logs for Lambda functions
- Use AWS Config console to view rule evaluations
- Monitor SNS topic for compliance notifications
To destroy all resources:

    terraform destroy

Note: This will delete all Config history and compliance data.

When contributing:
- Follow the established naming conventions
- Test rules thoroughly before deployment
- Update documentation for new rule types
- Ensure proper error handling in custom rules

Oluwaseun Alausa, DevOps Engineer | Enabling Secure, Scalable, and Observable Infrastructure | LinkedIn | YouTube