This repo is part of a multi-part guide that shows how to configure and deploy the example.com reference architecture described in the Google Cloud security foundations guide (PDF). The following table lists the parts of the guide.
| Step | Description |
| --- | --- |
| 0-bootstrap | Bootstraps a Google Cloud organization, creating all the required resources and permissions to start using the Cloud Foundation Toolkit (CFT). This step also configures a CI/CD pipeline for foundations code in subsequent stages. |
| 1-org | Sets up top-level shared folders, monitoring and networking projects, organization-level logging, and baseline security settings through organizational policy. |
| 2-environments | Sets up development, non-production, and production environments within the Google Cloud organization that you've created. |
| 3-networks (this file) | Sets up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, Dedicated or Partner Interconnect, and baseline firewall rules for each environment. It also sets up the global DNS hub. |
| 4-projects | Sets up a folder structure, projects, and an application infrastructure pipeline for applications, which are connected as service projects to the shared VPC created in the previous stage. |
| 5-app-infra | Deploys a simple Compute Engine instance in one of the business unit projects using the infra pipeline set up in 4-projects. |
For an overview of the architecture and the parts, see the terraform-example-foundation README.
The purpose of this step is to:
- Set up the global DNS Hub.
- Set up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated or Partner Interconnect, and baseline firewall rules for each environment.
The following prerequisites must be met before starting this step:

- 0-bootstrap executed successfully.
- 1-org executed successfully.
- 2-environments executed successfully.
- Obtain the value for the `access_context_manager_policy_id` variable. It can be obtained by running:

  ```shell
  gcloud access-context-manager policies list --organization YOUR_ORGANIZATION_ID --format="value(name)"
  ```
Please refer to troubleshooting if you run into issues during this step.
You need to set the variables `enable_hub_and_spoke` and `enable_hub_and_spoke_transitivity` to `true` to be able to use the Hub-and-Spoke architecture detailed in the Networking section of the Google Cloud security foundations guide.
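A minimal sketch of those two flags, assuming you keep your variable values in `common.auto.tfvars` (adjust the file name if your setup differs):

```hcl
# Enable the hub-and-spoke network architecture (illustrative placement).
enable_hub_and_spoke              = true
enable_hub_and_spoke_transitivity = true
```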
If you provisioned the prerequisites listed in the Dedicated Interconnect README, follow these steps to enable Dedicated Interconnect to access on-premises resources.
- Rename `interconnect.tf.example` to `interconnect.tf` in each environment folder in `3-networks/envs/<ENV>`.
- Update the file `interconnect.tf` with values that are valid for your environment for the interconnects, locations, candidate subnetworks, `vlan_tag8021q`, and peer info.
- The candidate subnetworks and `vlan_tag8021q` variables can be set to `null` to allow the interconnect module to auto generate these values, as in the sketch below.
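The excerpt below is only a hypothetical sketch of such an update, showing the `null` option described above; the real module name, source path, and attribute names come from `interconnect.tf.example`, not from this sketch.

```hcl
# Hypothetical excerpt; copy the actual structure from interconnect.tf.example.
module "interconnect_attachment" {
  source = "../../modules/dedicated_interconnect" # illustrative path

  region   = "us-west1" # illustrative location
  peer_asn = "64515"    # illustrative on-premises BGP ASN

  vlan_tag8021q     = null # null lets the module auto generate the VLAN tag
  candidate_subnets = null # null lets the module auto generate the BGP subnet
}
```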
If you provisioned the prerequisites listed in the Partner Interconnect README, follow these steps to enable Partner Interconnect to access on-premises resources.
- Rename `partner_interconnect.tf.example` to `partner_interconnect.tf` and `interconnect.auto.tfvars.example` to `interconnect.auto.tfvars` in the environment folder in `3-networks/envs/<environment>` (see the sketch after these steps).
- Update the file `partner_interconnect.tf` with values that are valid for your environment for the VLAN attachments, locations, and candidate subnetworks.
- The candidate subnetworks variable can be set to `null` to allow the interconnect module to auto generate this value.
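A minimal sketch of the rename step above, using the environment folder path from these instructions:

```shell
# Rename the example files in the environment folder.
cd 3-networks/envs/<environment>
mv partner_interconnect.tf.example partner_interconnect.tf
mv interconnect.auto.tfvars.example interconnect.auto.tfvars
```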
If you are not able to use Dedicated or Partner Interconnect, you can also use an HA Cloud VPN to access on-premises resources.
- Rename `vpn.tf.example` to `vpn.tf` in each environment folder in `3-networks/envs/<ENV>`.
- Create a secret for the VPN private preshared key:

  ```shell
  echo '<YOUR-PRESHARED-KEY-SECRET>' | gcloud secrets create <VPN_PRIVATE_PSK_SECRET_NAME> --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-
  ```

- Create a secret for the VPN restricted preshared key:

  ```shell
  echo '<YOUR-PRESHARED-KEY-SECRET>' | gcloud secrets create <VPN_RESTRICTED_PSK_SECRET_NAME> --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-
  ```

- In the file `vpn.tf`, update the values for `environment`, `vpn_psk_secret_name`, `on_prem_router_ip_address1`, `on_prem_router_ip_address2`, and `bgp_peer_asn` (an illustrative sketch follows these steps).
- Verify that the other default values are valid for your environment.
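The values to update look roughly like the sketch below; whether they live in a `locals` block or elsewhere is defined by `vpn.tf.example`, and every value shown is illustrative.

```hcl
# Illustrative values only; take the real structure from vpn.tf.example.
locals {
  environment                = "production"
  vpn_psk_secret_name        = "<VPN_PRIVATE_PSK_SECRET_NAME>"
  on_prem_router_ip_address1 = "203.0.113.10" # example on-premises peer IP
  on_prem_router_ip_address2 = "203.0.113.11" # example on-premises peer IP
  bgp_peer_asn               = "64515"        # example on-premises BGP ASN
}
```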
To deploy using Cloud Build:

- Clone the `gcp-networks` repo:

  ```shell
  gcloud source repos clone gcp-networks --project=YOUR_CLOUD_BUILD_PROJECT_ID
  ```

- Change into the freshly cloned repo and switch to a non-master branch:

  ```shell
  cd gcp-networks
  git checkout -b plan
  ```

- Copy the contents of the foundation to the new repo:

  ```shell
  cp -RT ../terraform-example-foundation/3-networks/ .
  ```

- Copy the Cloud Build configuration files for Terraform:

  ```shell
  cp ../terraform-example-foundation/build/cloudbuild-tf-* .
  ```

- Copy the Terraform wrapper script to the root of your new repository:

  ```shell
  cp ../terraform-example-foundation/build/tf-wrapper.sh .
  ```

- Ensure the wrapper script can be executed:

  ```shell
  chmod 755 ./tf-wrapper.sh
  ```
- Rename `common.auto.example.tfvars` to `common.auto.tfvars` and update the file with values from your environment and bootstrap. See any of the envs folder README.md files for additional information on the values in the `common.auto.tfvars` file.
- Rename `shared.auto.example.tfvars` to `shared.auto.tfvars` and update the file with the `target_name_server_addresses` (the list of target name servers for the DNS forwarding zone in the DNS Hub).
- Rename `access_context.auto.example.tfvars` to `access_context.auto.tfvars` and update the file with the `access_context_manager_policy_id`. (An illustrative sketch of these tfvars files appears after the Cloud Build steps below.)
- Commit changes:

  ```shell
  git add .
  git commit -m 'Your message'
  ```

- You must manually plan and apply the `shared` environment (only once), since the `development`, `non-production`, and `production` environments depend on it.
  - Run `cd ./envs/shared/`.
  - Update `backend.tf` with your bucket name from the bootstrap step.
  - Run `terraform init`.
  - Run `terraform plan` and review output.
  - Run `terraform apply`.
  - If you would like the bucket to be replaced by Cloud Build at run time, change the bucket name back to `UPDATE_ME`.
- Push your plan branch to trigger a plan:

  ```shell
  git push --set-upstream origin plan
  ```

- Review the plan output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
- Merge changes to production:

  ```shell
  git checkout -b production
  git push origin production
  ```

- Review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
- After production has been applied, apply development.
- Merge changes to development:

  ```shell
  git checkout -b development
  git push origin development
  ```

- Review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
- After development has been applied, apply non-production.
- Merge changes to non-production:

  ```shell
  git checkout -b non-production
  git push origin non-production
  ```

- Review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
- You can now move to the instructions in the 4-projects step.
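For reference, the two small tfvars files mentioned in the steps above might look like the following sketch; the values are illustrative, and `common.auto.tfvars` is covered by the envs folder README.md files instead.

```hcl
# shared.auto.tfvars (illustrative values)
target_name_server_addresses = ["192.168.0.113", "192.168.0.114"]

# access_context.auto.tfvars (illustrative value)
access_context_manager_policy_id = "123456789"
```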
To deploy using Jenkins instead of Cloud Build:

- Clone the repo you created manually in 0-bootstrap:

  ```shell
  git clone <YOUR_NEW_REPO-3-networks>
  ```

- Navigate into the repo and change to a non-production branch:

  ```shell
  cd YOUR_NEW_REPO_CLONE-3-networks
  git checkout -b plan
  ```

- Copy the contents of the foundation to the new repo:

  ```shell
  cp -RT ../terraform-example-foundation/3-networks/ .
  ```

- Copy the Jenkinsfile script to the root of your new repository:

  ```shell
  cp ../terraform-example-foundation/build/Jenkinsfile .
  ```

- Update the variables located in the `environment {}` section of the `Jenkinsfile` with values from your environment (see the illustrative sketch after these steps):
  - `_TF_SA_EMAIL`
  - `_STATE_BUCKET_NAME`
  - `_PROJECT_ID` (the CICD project ID)
- Copy the Terraform wrapper script to the root of your new repository:

  ```shell
  cp ../terraform-example-foundation/build/tf-wrapper.sh .
  ```

- Ensure the wrapper script can be executed:

  ```shell
  chmod 755 ./tf-wrapper.sh
  ```
- Rename `common.auto.example.tfvars` to `common.auto.tfvars` and update the file with values from your environment and bootstrap. See any of the envs folder README.md files for additional information on the values in the `common.auto.tfvars` file.
- Rename `shared.auto.example.tfvars` to `shared.auto.tfvars` and update the file with the `target_name_server_addresses`.
- Rename `access_context.auto.example.tfvars` to `access_context.auto.tfvars` and update the file with the `access_context_manager_policy_id`.
- Commit changes:

  ```shell
  git add .
  git commit -m 'Your message'
  ```
- You must manually plan and apply the `shared` environment (only once), since the `development`, `non-production`, and `production` environments depend on it.
  - Run `cd ./envs/shared/`.
  - Update `backend.tf` with your bucket name from the bootstrap step.
  - Run `terraform init`.
  - Run `terraform plan` and review output.
  - Run `terraform apply`.
  - If you would like the bucket to be replaced by Cloud Build at run time, change the bucket name back to `UPDATE_ME`.
- Push your plan branch:

  ```shell
  git push --set-upstream origin plan
  ```

- Assuming you configured an automatic trigger in your Jenkins Master (see the Jenkins sub-module README), this will trigger a plan. You can also trigger a Jenkins job manually; given the many options to do this in Jenkins, it is out of the scope of this document, so see the Jenkins website for more details.
- Review the plan output in your Master's web UI.
- Merge changes to the production branch:

  ```shell
  git checkout -b production
  git push origin production
  ```

- Review the apply output in your Master's web UI (you might want to use the "Scan Multibranch Pipeline Now" option in your Jenkins Master UI).
- After production has been applied, apply development and non-production.
- Merge changes to development:

  ```shell
  git checkout -b development
  git push origin development
  ```

- Review the apply output in your Master's web UI (you might want to use the "Scan Multibranch Pipeline Now" option in your Jenkins Master UI).
- Merge changes to non-production:

  ```shell
  git checkout -b non-production
  git push origin non-production
  ```

- Review the apply output in your Master's web UI (you might want to use the "Scan Multibranch Pipeline Now" option in your Jenkins Master UI).
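As referenced in the Jenkinsfile step above, the `environment {}` section might look like the following sketch; every value is an illustrative placeholder.

```groovy
// Illustrative excerpt of the Jenkinsfile environment block; use your own values.
environment {
    _TF_SA_EMAIL       = 'terraform-sa@example-project.iam.gserviceaccount.com'
    _STATE_BUCKET_NAME = 'example-terraform-state-bucket'
    _PROJECT_ID        = 'example-cicd-project-id'
}
```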
To run Terraform locally (rather than through Cloud Build or Jenkins):

- Change into the 3-networks folder.
- Run `cp ../build/tf-wrapper.sh .`
- Run `chmod 755 ./tf-wrapper.sh`.
- Rename `common.auto.example.tfvars` to `common.auto.tfvars` and update the file with values from your environment and bootstrap. See any of the envs folder README.md files for additional information on the values in the `common.auto.tfvars` file.
- Rename `shared.auto.example.tfvars` to `shared.auto.tfvars` and update the file with the `target_name_server_addresses`.
- Rename `access_context.auto.example.tfvars` to `access_context.auto.tfvars` and update the file with the `access_context_manager_policy_id`.
- Update `backend.tf` with your bucket name from the bootstrap step (a sketch of the result follows these steps). You can run

  ```shell
  for i in `find -name 'backend.tf'`; do sed -i 's/UPDATE_ME/<YOUR-BUCKET-NAME>/' $i; done
  ```

  to update all the `backend.tf` files at once, and

  ```shell
  terraform output gcs_bucket_tfstate
  ```

  in the 0-bootstrap folder to obtain the bucket name.
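After that update, each `backend.tf` should contain your bucket name in a standard GCS backend block. A minimal sketch, where the bucket name comes from the `terraform output` command above and the `prefix` is hypothetical (keep whatever the file already defines):

```hcl
# Sketch of an updated backend.tf; only the bucket name changes from UPDATE_ME.
terraform {
  backend "gcs" {
    bucket = "example-terraform-state-bucket" # from: terraform output gcs_bucket_tfstate
    prefix = "terraform/networks/state"       # hypothetical; keep the existing prefix
  }
}
```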
We will now deploy each of our environments (development, production, and non-production) using this script. When using Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 3-networks step, and only the corresponding environment is applied.
To use the `validate` option of the `tf-wrapper.sh` script, please follow the instructions in the Install Terraform Validator section and install version 2021-03-22 in your system. You will also need to rename the binary from `terraform-validator-<your-platform>` to `terraform-validator`, and the `terraform-validator` binary must be in your `PATH`.
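A minimal sketch of the rename and `PATH` setup, assuming a Linux AMD64 download in the current directory (adjust the platform suffix and install location for your system):

```shell
# Rename the downloaded binary and put it on the PATH for tf-wrapper.sh to find.
mv terraform-validator-linux-amd64 terraform-validator
chmod +x terraform-validator
export PATH="$PATH:$(pwd)"
```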
- Run `./tf-wrapper.sh init shared`.
- Run `./tf-wrapper.sh plan shared` and review output.
- Run `./tf-wrapper.sh validate shared $(pwd)/../policy-library <YOUR_CLOUD_BUILD_PROJECT_ID>` and check for violations.
- Run `./tf-wrapper.sh apply shared`.
- Run `./tf-wrapper.sh init production`.
- Run `./tf-wrapper.sh plan production` and review output.
- Run `./tf-wrapper.sh validate production $(pwd)/../policy-library <YOUR_CLOUD_BUILD_PROJECT_ID>` and check for violations.
- Run `./tf-wrapper.sh apply production`.
- Run `./tf-wrapper.sh init non-production`.
- Run `./tf-wrapper.sh plan non-production` and review output.
- Run `./tf-wrapper.sh validate non-production $(pwd)/../policy-library <YOUR_CLOUD_BUILD_PROJECT_ID>` and check for violations.
- Run `./tf-wrapper.sh apply non-production`.
- Run `./tf-wrapper.sh init development`.
- Run `./tf-wrapper.sh plan development` and review output.
- Run `./tf-wrapper.sh validate development $(pwd)/../policy-library <YOUR_CLOUD_BUILD_PROJECT_ID>` and check for violations.
- Run `./tf-wrapper.sh apply development`.
If you received any errors or made any changes to the Terraform config or any `.tfvars` files, you must re-run `./tf-wrapper.sh plan <env>` before running `./tf-wrapper.sh apply <env>`.
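If you prefer, the same per-environment sequence can be run as a loop; it issues exactly the commands listed above and pauses so you can review each plan and validation output before applying.

```shell
# Convenience loop over the per-environment steps above (bash).
for env in shared production non-production development; do
  ./tf-wrapper.sh init "$env"
  ./tf-wrapper.sh plan "$env"
  ./tf-wrapper.sh validate "$env" "$(pwd)/../policy-library" <YOUR_CLOUD_BUILD_PROJECT_ID>
  read -p "Review the plan and validate output for $env, then press Enter to apply..."
  ./tf-wrapper.sh apply "$env"
done
```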