updated cloudtrail module
Farah Hassan committed Jul 12, 2021
1 parent f763647 commit fe3d308
Showing 3 changed files with 102 additions and 3 deletions.
16 changes: 15 additions & 1 deletion modules/cloudtrail/README.md
@@ -4,4 +4,18 @@
- All management and global events are captured within CloudTrail
- CloudTrail logs are stored in a private S3 bucket
- Access to the above bucket should also be logged by CloudTrail
- Integrate CloudTrail with CloudWatch Logs
- Ensure CloudTrail logs are encrypted at rest using KMS customer managed keys (CMKs)
- Enable CloudTrail log file integrity validation

# Notes:
- Create a private S3 bucket
- logging a bucket's access logs to itself doesn't work; we can create another bucket to receive them and maybe feed those logs back into the cloudtrail bucket?
- created a KMS CMK in the console, which can be imported into our config and used to encrypt/decrypt logs
- bucket permissions may cause issues later, `private` vs `log-delivery-write`
- private = owner has FULL_CONTROL permissions
- log-delivery-write = LogDelivery group gets WRITE and READ_ACP permissions
- https://blog.runpanther.io/s3-bucket-security/
- based on this^ it seems like log-delivery-write is secure enough
- as per the gov.uk website, companies must keep records for 6 years from the end of the financial year they relate to; following the same rule, keep logs for 6 years (6 × 365 = 2190 days) after creation: `days = 2190`
- need to confirm that this counts 6 years from when each log object is written, not 6 years after bucket creation
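The retention note above can be sketched as an S3 lifecycle rule. This is a minimal illustration, assuming the pre-4.0 AWS provider block syntax used elsewhere in this module; the resource and bucket names here are placeholders, not the module's actual ones:

```hcl
# Illustrative bucket: only the lifecycle_rule is the point here.
resource "aws_s3_bucket" "retention_example" {
  bucket = "retention-example-logs"
  acl    = "log-delivery-write"

  lifecycle_rule {
    enabled = true

    # 6 years x 365 days = 2190 days; per the S3 lifecycle docs, expiration
    # is counted from each object's creation date, not the bucket's.
    expiration {
      days = 2190
    }
  }
}
```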
82 changes: 82 additions & 0 deletions modules/cloudtrail/main.tf
@@ -0,0 +1,82 @@
# for now we are importing an existing key
# this key enables encryption/decryption of logs
resource "aws_kms_key" "bucket_key" {
  description = "key for cloudtrail logs bucket"

  # policy goes here; if no policy is defined the key defaults to allowing
  # full access for the account. A policy was already defined when the key
  # was created in the console.
}

resource "aws_s3_bucket" "cloudtrail_logs" {
  # S3 bucket names must be DNS-compliant, so hyphens rather than underscores
  bucket = "cloudtrail-logs"
  acl    = "log-delivery-write"

  logging {
    # logging to own bucket doesn't work
    target_bucket = aws_s3_bucket.cloudtrail_bucket_logs.id
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.bucket_key.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  # versioning keeps every version of an object, so when the bucket receives
  # multiple writes to the same key all of them are retained; this makes
  # sense to enable for logging
  versioning {
    enabled = true
  }

  # expiration must live inside a lifecycle_rule block
  lifecycle_rule {
    enabled = true

    expiration {
      days = 2190 # 6 years
    }
  }
}

# there is probably a better way to do this
# this bucket exists so that cloudtrail bucket logs can be sent here
# the plan is to route these logs back into the cloudtrail bucket somehow
resource "aws_s3_bucket" "cloudtrail_bucket_logs" {
  bucket = "cloudtrail-bucket-logs"
  acl    = "log-delivery-write"

  logging {
    target_bucket = aws_s3_bucket.cloudtrail_logs.id
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.bucket_key.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  versioning {
    enabled = true
  }

  lifecycle_rule {
    enabled = true

    expiration {
      days = 2190
    }
  }
}

resource "aws_cloudtrail" "cloudtrail_logging" {
  name                          = "cloudtrail_logging"
  s3_bucket_name                = aws_s3_bucket.cloudtrail_logs.id
  include_global_service_events = true
  enable_log_file_validation    = true
  is_multi_region_trail         = true
  is_organization_trail         = true # the provider uses the "z" spelling
  kms_key_id                    = aws_kms_key.bucket_key.arn

  # events to track
  event_selector {
    include_management_events = true
  }
}
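
The key policy mentioned at the top of this file (defined when the key was created in the console) has to let CloudTrail use the CMK. A hedged sketch of how that policy might look if managed in Terraform instead — the account ID is a placeholder and the statements are a minimal reading of the documented CloudTrail KMS requirements, not the actual policy on the imported key:

```hcl
# Sketch only: an inline key policy letting CloudTrail encrypt log files.
# "111122223333" is a placeholder account ID.
resource "aws_kms_key" "bucket_key_with_policy" {
  description = "key for cloudtrail logs bucket"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # keep the account root as key administrator so the key
        # stays manageable
        Sid       = "AllowRootAccountAdmin"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::111122223333:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        # let the CloudTrail service generate data keys for log encryption
        Sid       = "AllowCloudTrailEncrypt"
        Effect    = "Allow"
        Principal = { Service = "cloudtrail.amazonaws.com" }
        Action    = ["kms:GenerateDataKey*", "kms:DescribeKey"]
        Resource  = "*"
      }
    ]
  })
}
```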
7 changes: 5 additions & 2 deletions modules/default_vpc/README.md
@@ -13,7 +13,7 @@ After creating a new IAM user, named 'test_2', in the AWS Console. I logged on a

**The aim of the policies in this module is that, when attached to an IAM user, they would remove the default VPC and all its associated infrastructure.**

### Thought Process:
### Notes:
- It is hard for Terraform to remove a VPC that it did not create (you have to trick it into thinking it did)
- tried importing the vpc into terraform then applying with `count = 0` to destroy, which returned success message but didn't actually delete the vpc (checked using AWS console)
- tried importing the vpc then `terraform destroy` which also returned a success message but didn't delete the vpc
@@ -25,4 +25,7 @@
- successfully deleted vpc and all associated infrastructure in one region (`us-east-2`)
- tested in second region to confirm that only the subnets, vpc, and igw are required for import.
- Moved terraform script into the default_vpc directory, create python script to loop through regions, can use `boto3` to get vpc_ids, subnet_ids, subnet_cidr and igw_id
- tried to condense subnet resources into one with `count` but didn't work because then it requires `terraform destroy` to delete the subnet.
- look up terraform `where` clause to explicitly delete subnets / vpc where it's the default, as opposed to filtering in python.
- test whether the subnet cidr needs to be correct or if a placeholder (`"0.0.0.0/0"`) works
- if it doesn't need to be correct, it makes the python script much easier
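
The planned python script is described above but not included in this commit. A minimal sketch of the `boto3` half, assuming the caller creates an EC2 client per region; the Terraform resource addresses (`aws_vpc.default`, `aws_subnet.default_N`, `aws_internet_gateway.default`) are hypothetical names, not the module's actual ones:

```python
def default_vpc_info(ec2):
    """Given a boto3 EC2 client for one region, return
    (vpc_id, subnet_ids, igw_id) for the default VPC,
    or None when the region has no default VPC."""
    vpcs = ec2.describe_vpcs(
        Filters=[{"Name": "isDefault", "Values": ["true"]}])["Vpcs"]
    if not vpcs:
        return None
    vpc_id = vpcs[0]["VpcId"]
    subnet_ids = [s["SubnetId"] for s in ec2.describe_subnets(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}])["Subnets"]]
    igws = ec2.describe_internet_gateways(
        Filters=[{"Name": "attachment.vpc-id", "Values": [vpc_id]}]
    )["InternetGateways"]
    igw_id = igws[0]["InternetGatewayId"] if igws else None
    return vpc_id, subnet_ids, igw_id


def import_commands(vpc_id, subnet_ids, igw_id):
    """Build the `terraform import` commands needed before the config can
    delete the default VPC (resource addresses here are illustrative)."""
    cmds = [f"terraform import aws_vpc.default {vpc_id}"]
    cmds += [f"terraform import aws_subnet.default_{i} {sid}"
             for i, sid in enumerate(subnet_ids)]
    if igw_id:
        cmds.append(f"terraform import aws_internet_gateway.default {igw_id}")
    return cmds
```

Looping this over every region (e.g. via `ec2.describe_regions()`) would replace the per-region manual imports described above.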
