diff --git a/10-custom_data.md b/10-custom_data.md
index 37fcb72..34cdeb9 100644
--- a/10-custom_data.md
+++ b/10-custom_data.md
@@ -137,7 +137,7 @@ resource "azurerm_network_interface" "nic" {
   ip_configuration {
     name                          = "${var.prefix}NICConfg"
     subnet_id                     = azurerm_subnet.subnet.id
-    private_ip_address_allocation = "dynamic"
+    private_ip_address_allocation = "Dynamic"
     public_ip_address_id          = azurerm_public_ip.publicip.id
   }
 }
diff --git a/18-vmss.md b/18-vmss.md
index dadd48d..77db6ab 100644
--- a/18-vmss.md
+++ b/18-vmss.md
@@ -235,7 +235,7 @@ resource "azurerm_network_interface" "jumpbox" {
   ip_configuration {
     name                          = "IPConfiguration"
     subnet_id                     = azurerm_subnet.vmss.id
-    private_ip_address_allocation = "dynamic"
+    private_ip_address_allocation = "Dynamic"
     public_ip_address_id          = azurerm_public_ip.jumpbox.id
   }
diff --git a/solution/lifecycle/main.tf b/solution/lifecycle/main.tf
index e2cb003..1185dfd 100644
--- a/solution/lifecycle/main.tf
+++ b/solution/lifecycle/main.tf
@@ -35,7 +35,7 @@ resource "azurerm_network_interface" "main" {
   ip_configuration {
     name                          = "config1"
     subnet_id                     = azurerm_subnet.main.id
-    private_ip_address_allocation = "dynamic"
+    private_ip_address_allocation = "Dynamic"
     public_ip_address_id          = azurerm_public_ip.main.id
   }
 }
@@ -48,7 +48,7 @@ resource "azurerm_network_interface" "new" {
   ip_configuration {
     name                          = "config1"
     subnet_id                     = azurerm_subnet.main.id
-    private_ip_address_allocation = "dynamic"
+    private_ip_address_allocation = "Dynamic"
     public_ip_address_id          = azurerm_public_ip.new.id
   }
 }
diff --git a/terraform_advanced/02-versions.md b/terraform_advanced/02-versions.md
new file mode 100644
index 0000000..0f16b59
--- /dev/null
+++ b/terraform_advanced/02-versions.md
@@ -0,0 +1,222 @@
# Lab: Terraform Versions

## Description

In this challenge you will configure your Terraform code to control which versions of Terraform and Terraform providers the code is compatible with.

Duration: 10 minutes

- Task 1: Check Terraform version
- Task 2: Require specific versions of Terraform
- Task 3: Require specific versions of Providers
- Task 4: Format and Validate Terraform Configuration
- Task 5: Validate versions of Terraform and Required Providers
- Task 6: Update the version of the AzureRM provider

## Task 1: Check Terraform version

Check the version of Terraform you are running.

```bash
terraform version
```

```bash
Terraform v1.5.0
on linux_amd64
```

## Task 2: Require specific versions of Terraform

Create a Terraform configuration block within a `terraform.tf` in the `~/workstation/terraform/versions` directory to specify which version of Terraform is required to run this code base.

```bash
mkdir ~/workstation/terraform/versions && cd $_
touch {terraform,main}.tf
```

`terraform.tf`

```hcl
terraform {
  required_version = ">= 2.0.0"
}
```

This informs Terraform that it must be at least version 2.0.0 to run the code. If Terraform is an earlier version, it will throw an error. You can validate your configuration parameters easily.

```bash
terraform validate
```

Since we are running Terraform 1.5.0, we should see an error similar to the following:

```bash
│ Error: Unsupported Terraform Core version
│
│ on terraform.tf line 2, in terraform:
│ 2: required_version = ">= 2.0.0"
```

Change the `required_version` to `>= 1.0.0` and run `terraform validate` again. You should see the following output:

```bash
Success! The configuration is valid.
```

You might have noticed that we didn't initialize Terraform yet! That's because we're not using any providers, so Terraform doesn't need to download anything. We can still run `terraform validate` without any issues.

## Task 3: Require specific versions of Providers

Terraform Providers are plugins that implement resource types for particular clouds, platforms, and, generally speaking, any remote system with an API. Terraform configurations must declare which providers they require, so that Terraform can install and use them. Popular Terraform Providers include: AWS, Azure, Google Cloud, VMware, Kubernetes, and Oracle.

You can update the Terraform configuration block to specify a compatible Azure provider version, similar to how you did for the Terraform version. Update the `terraform.tf` with a `required_providers`:

`terraform.tf`

```hcl
terraform {
  required_version = ">= 1.0.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
}
```

Now add a simple resource to the `main.tf` file.

`main.tf`

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "training" {
  name     = "rg-versions-test"
  location = "East US"
}
```

By default Terraform will always pull the latest provider if no version is set. However, setting a version provides a way to ensure your Terraform code remains working in the event a newer version introduces a change that would not work with your existing code. To have stricter control over the version, you may want to require a specific version (e.g. `version = "= 2.0.0"`) or use the `~>` operator to only allow the right-most version number to increment.

## Task 4: Format and Validate Terraform Configuration

Initialize, Format and Validate your Terraform configuration by executing the following from the `~/workstation/terraform/versions` directory in the code terminal.

```bash
terraform init
terraform fmt
terraform validate
```

You should see Terraform download a version of the AzureRM provider in the major version 2 family and then format and validate the configuration.

Terraform will also create a `.terraform.lock.hcl` file that contains the exact version of the provider that was downloaded. This file is used to ensure that the same version of the provider is used when running `terraform plan` or `terraform apply` in the future.

```bash
cat .terraform.lock.hcl
```

```bash
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
+ +provider "registry.terraform.io/hashicorp/azurerm" { + version = "2.99.0" + constraints = "~> 2.0" + hashes = [ + "h1:FXBB5TkvZpZA+ZRtofPvp5IHZpz4Atw7w9J8GDgMhvk=", + "zh:08d81e72e97351538ab4d15548942217bf0c4d3b79ad3f4c95d8f07f902d2fa6", + "zh:11fdfa4f42d6b6f01371f336fea56f28a1db9e7b490c5ca0b352f6bbca5a27f1", + "zh:12376e2c4b56b76098d5d713d1a4e07e748a926c4d165f0bd6f52157b1f7a7e9", + "zh:31f1cb5b88ed1307625050e3ee7dd9948773f522a3f3bf179195d607de843ea3", + "zh:767971161405d38412662a73ea40a422125cdc214c72fbc569bcfbea6e66c366", + "zh:973c402c3728b68c980ea537319b703c009b902a981b0067fbc64e04a90e434c", + "zh:9ec62a4f82ec1e92bceeff80dd8783f61de0a94665c133f7c7a7a68bda9cdbd6", + "zh:bbb3b7e1229c531c4634338e4fc81b28bce58312eb843a931a4420abe42d5b7e", + "zh:cbbe02cd410d21476b3a081b5fa74b4f1b3d9d79b00214009028d60e859c19a3", + "zh:cc00ecc7617a55543b60a0da1196ea92df48c399bcadbedf04c783e3d47c6e08", + "zh:eecb9fd0e7509c7fd4763e546ef0933f125770cbab2b46152416e23d5ec9dd53", + ] +} +``` + +Terraform will use this version of the provider until you change the constraint or run `terraform init -upgrade` to upgrade to a newer version. + +## Task 5: Validate versions of Terraform and Required Providers + +To see the version of Terraform and providers installed, along with which versions are required by the current configuration you can issue the following commands: + +```bash +terraform version +terraform providers +``` + +```bash +Providers required by configuration: +. +└── provider[registry.terraform.io/hashicorp/azurerm] ~> 2.0 +``` + +## Task 6: Update the version of the AzureRM provider + +We set Terraform to use major version 2 of the AzureRM provider, but now we're ready to upgrade to version 3. First we need to change the version constraint in our `terraform.tf` file. + +`terraform.tf` + +```hcl +terraform { + required_version = ">= 1.0.0" + required_providers { + azurerm = { + source = "hashicorp/azurerm" + version = "~> 3.0" + } + } +} +``` + +If we try to run a `terraform plan` now before we update the provider plugin and lock file we will get an error: + +```bash +terraform plan +``` + +```bash +│ Error: Inconsistent dependency lock file +│ +│ The following dependency selections recorded in the lock file are inconsistent with the current configuration: +│ - provider registry.terraform.io/hashicorp/azurerm: locked version selection 2.99.0 doesn't match the updated version constraints "~> 3.0" +``` + +To upgrade the locally installed provider, we need to run `terraform init -upgrade`. + +```bash +terraform init -upgrade +``` + +```bash +... +Initializing provider plugins... +- Finding hashicorp/azurerm versions matching "~> 3.0"... +- Installing hashicorp/azurerm v3.69.0... +- Installed hashicorp/azurerm v3.69.0 (signed by HashiCorp) + +Terraform has made some changes to the provider dependency selections recorded +in the .terraform.lock.hcl file. Review those changes and commit them to your +version control system if they represent changes you intended to make. +... +``` + +Now we can run `terraform plan` and it will execute successfully. + +```bash +terraform plan +``` + +You do not need to actually deploy the configuration, unless you really want to. \ No newline at end of file diff --git a/workspaces.md b/terraform_advanced/03-workspaces.md similarity index 95% rename from workspaces.md rename to terraform_advanced/03-workspaces.md index 9f3aafe..0004d8f 100644 --- a/workspaces.md +++ b/terraform_advanced/03-workspaces.md @@ -122,6 +122,13 @@ Check out all your workspaces! 
terraform workspace list
```

+You can also view all the state files in the `terraform.tfstate.d` directory:
+
+```bash
+ls -l terraform.tfstate.d/
+ls -l terraform.tfstate.d/development/
+```
+
## Task 5: Destroy and delete the staging workspace

Try to delete the staging workspace:
diff --git a/conditional_logic.md b/terraform_advanced/04-conditional_logic.md
similarity index 99%
rename from conditional_logic.md
rename to terraform_advanced/04-conditional_logic.md
index 0e8706e..f67e74d 100644
--- a/conditional_logic.md
+++ b/terraform_advanced/04-conditional_logic.md
@@ -107,7 +107,6 @@ You can now use the network module in your root module. First create the files f
 ```bash
 cd ..
 touch {main,terraform}.tf
-touch terraform.tfvars
 ```

 Add the following to the `terraform.tf` file:
diff --git a/variable_validation.md b/terraform_advanced/05-variable_validation.md
similarity index 92%
rename from variable_validation.md
rename to terraform_advanced/05-variable_validation.md
index fc7fa06..acdeee5 100644
--- a/variable_validation.md
+++ b/terraform_advanced/05-variable_validation.md
@@ -1,24 +1,43 @@
 # Lab: Variable Validation and Suppression

+Duration: 15 minutes
+
 We may want to validate and possibly suppress any sensitive information defined within our variables.

 - Task 1: Validate variables in Terraform Configuration
 - Task 2: Suppress sensitive information
 - Task 3: View the Terraform state file

-## Task 1: Valdiate variables in Terraform Configuration
+## Task 1: Validate variables in Terraform Configuration

 ### Create the base Terraform Configuration

-Change directory into a folder specific to this challenge.
+Create the necessary folders and files for the configuration:

-For example: `cd /workstation/terraform/azure/variable-validation/`.
+```bash
+mkdir -p ~/workstation/terraform/variable-validation && cd $_
+touch {terraform,main,variables}.tf
+touch terraform.tfvars
+```

 We will start with a few of the basic resources needed.

+Add the following to the `terraform.tf` file:
+
+```hcl
+terraform {
+  required_providers {
+    azurerm = {
+      source  = "hashicorp/azurerm"
+      version = "~>3.0"
+    }
+  }
+}
+```
+
 ### Create Variables

-Create a file `variables.tf` and add the following configuration:
+In `variables.tf`, add the following configuration:

 ```hcl
 variable "resource_group_name" {}
@@ -136,7 +155,7 @@ resource "azurerm_virtual_machine" "training" {
 }
 ```

-### Create Variables TFVARS File
+### Update the Variables TFVARS File

 In `terraform.tfvars`, add the following configuration and replace the `###` with your initials.

@@ -284,7 +303,7 @@ output "phone_number" {
 terraform validate
 ```

-After validation is succesful, apply the configuration.
+After validation is successful, apply the configuration.

 ```bash
 terraform apply
diff --git a/for_each.md b/terraform_advanced/06-for_each.md
similarity index 82%
rename from for_each.md
rename to terraform_advanced/06-for_each.md
index 37a6788..570555b 100644
--- a/for_each.md
+++ b/terraform_advanced/06-for_each.md
@@ -6,27 +6,40 @@ So far, we've already used arguments to configure your resources. These argument

 The count argument does, however, have a few limitations, in that it is entirely dependent on the count index, which can be shown by performing a `terraform state list`.

-A more mature approach to create multiple instances while keeping code DRY is to leverage Terraform's `for-each`.
+A more mature approach to create multiple instances while keeping code DRY is to leverage Terraform's `for_each`.
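To see the difference at a glance before starting the tasks, here is a standalone sketch (not part of this lab's configuration) contrasting the two meta-arguments. With `count`, instances are addressed by index; with `for_each`, they are addressed by a stable key:

```hcl
# count: instances are tracked by position, e.g. azurerm_resource_group.by_count[0]
resource "azurerm_resource_group" "by_count" {
  count    = 2
  name     = "rg-demo-${count.index}"
  location = "East US"
}

# for_each: instances are tracked by key, e.g. azurerm_resource_group.by_key["web"]
resource "azurerm_resource_group" "by_key" {
  for_each = toset(["web", "app"])
  name     = "rg-demo-${each.key}"
  location = "East US"
}
```

Because `for_each` keys are stable, adding or removing one entry leaves the other instances untouched, which is the behavior we will work toward in the tasks below.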
- Task 1: Change the number of VM instances with `count`
- Task 2: Look at the number of VM instances with `terraform state list`
- Task 3: Decrease the Count and determine which instance will be destroyed.
-- Task 4: Refactor code to use Terraform `for-each`
+- Task 4: Refactor code to use Terraform `for_each`
- Task 5: Look at the number of VM instances with `terraform state list`
- Task 6: Update the output variables to pull IP and DNS addresses.
- Task 7: Update the server variables to determine which instance will be destroyed.

## Task 1: Change the number of VM instances with `count`

-Change directory into a folder specific to this challenge.
+Create the necessary folders and files for the configuration:

-For example: `cd /workstation/terraform/azure/for_each/`.
+```bash
+mkdir -p ~/workstation/terraform/azure/for_each && cd $_
+touch {terraform,main,variables,outputs}.tf
+touch terraform.tfvars
+```

-We will start with a few of the basic resources needed.
+Add the following to the `terraform.tf` file:

-Create a `variables.tf`, `main.tf`, `outputs.tf` and `terraform.tfvars` files to hold our configuration.
+```hcl
+terraform {
+  required_providers {
+    azurerm = {
+      source  = "hashicorp/azurerm"
+      version = "~>3.0"
+    }
+  }
+}
+```

-Update the root `main.tf` to utilize the `count` paramater on the VM resource. Notice the count has been variablized to specify the number of VMs.
+Populate the root `main.tf` utilizing the `count` parameter on the VM resource. Notice that the count is set from a variable to specify the number of VMs.

`main.tf`

```hcl
@@ -36,7 +49,7 @@ provider "azurerm" {
 }

 resource "azurerm_resource_group" "training" {
-  name     = "${var.prefix}-resourcegroup"
+  name     = "${var.prefix}-foreach"
   location = var.location
 }

@@ -130,7 +143,6 @@ output "public_dns" {

`variables.tf`
```hcl
variable "prefix" {
-  default     = ""
  type        = string
  description = "Prefix to append to resources"
}

@@ -167,36 +179,39 @@ admin_password = "Password1234!"
 num_vms = 2
 ```

+Now deploy the configuration:
+
+```bash
+terraform init
+terraform apply
+```
+
## Task 2: Look at the number of servers with `terraform state list`

```bash
terraform state list
+```

+```bash
azurerm_network_interface.training[0]
azurerm_network_interface.training[1]
-
+...
azurerm_public_ip.training[0]
azurerm_public_ip.training[1]
-
-
+...
azurerm_virtual_machine.training[0]
azurerm_virtual_machine.training[1]
-
```

-Notice the way resources are indexed when using meta-arguments.
+Notice the way resources are indexed when using the `count` meta-argument.

-## Task 3: Decrease the Count and determine which instance will be destroyed.
+## Task 3: Decrease the Count and determine which instance will be destroyed

Update the count from `2` to `1` by changing the `num_vms` variable in your `terraform.tfvars` file.

-Replace the `###` with your initials.
-
+`terraform.tfvars`
```hcl
-prefix         = "###"
-location       = "East US"
-admin_username = "testadmin"
-admin_password = "Password1234!"
+...
num_vms = 1
```

@@ -206,9 +221,11 @@ Run a `terraform apply` followed by a `terraform state list` to view how the ser
 terraform apply
 ```

-```
+```bash
terraform state list
+```

+```bash
azurerm_network_interface.training[0]
azurerm_public_ip.training[0]
azurerm_resource_group.training
azurerm_virtual_machine.training[0]
azurerm_virtual_network.training
```

You will see that when using the `count` parameter you have very limited control as to which server Terraform will destroy. It will always default to destroying the server with the highest index count.
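The root cause (illustrated here with a hypothetical sketch, separate from this lab's code) is that `count` identifies each instance only by its position, so removing anything but the last element renumbers the remainder:

```hcl
variable "names" {
  type    = list(string)
  default = ["alpha", "beta", "gamma"]
}

resource "azurerm_resource_group" "demo" {
  count    = length(var.names)
  name     = "rg-${var.names[count.index]}"
  location = "East US"
}

# Removing "beta" shifts "gamma" from index 2 to index 1, so Terraform plans
# to change demo[1] and destroy demo[2], even though "gamma" itself did not
# change. for_each avoids this by keying each instance on a stable string.
```

This renumbering is exactly the limitation that `for_each` addresses in the next task.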
-## Task 4: Refactor code to use Terraform `for-each`
+## Task 4: Refactor code to use Terraform `for_each`

-Refactor `main.tf` to make use of the `for-each` command rather then the count command. Replace the following in the `main.tf` and comment out the `output` blocks for now.
+Refactor `main.tf` to make use of the `for_each` meta-argument rather than the `count` meta-argument. Replace the contents of `main.tf` with the following, and comment out the `output` blocks in `outputs.tf` for now.

```hcl
locals {
@@ -333,13 +350,21 @@ resource "azurerm_virtual_machine" "training" {
 }
 ```

-If you run `terraform apply` now, you'll notice that this code will destroy the previous resource and create two new servers based on the attributes defined inside the `servers` variable, which is defined as a map of our servers.
+Run `terraform apply` now.
+
+```bash
+terraform apply
+```
+
+You'll notice that this code will destroy the previous resource and create two new servers based on the attributes defined inside the `servers` local value, which is defined as a map of our servers.

### Task 5: Look at the number of VM instances with `terraform state list`

```bash
terraform state list
+```

+```bash
azurerm_network_interface.training["server-ubuntu-16"]
azurerm_network_interface.training["server-ubuntu-18"]
azurerm_public_ip.training["server-ubuntu-16"]
azurerm_public_ip.training["server-ubuntu-18"]
azurerm_resource_group.training
azurerm_virtual_machine.training["server-ubuntu-16"]
azurerm_virtual_machine.training["server-ubuntu-18"]
azurerm_virtual_network.training
```

-Since we used _for-each_ to the azurerm_virtual_machine.training resource, it now refers to multiple resources with key references from the `servers` variable.
+Since we used `for_each` to create the azurerm_virtual_machine.training resource, it now refers to multiple resources with key references from the `servers` local value.

-### Task 6: Update the output variables to pull IP and DNS addresses.
+### Task 6: Update the output variables to pull IP and DNS addresses

-When using Terraform's `for-each` our output blocks need to be updated to utilize `for` to loop through the server names. This differs from using `count` which utilized the Terraform splat operator `*`. Uncomment and update the output block of your `main.tf`.
+When using Terraform's `for_each`, our output blocks need to be updated to utilize `for` expressions to loop through the server names. This differs from using `count`, which utilized the Terraform splat operator `*`. Update the output block of your `outputs.tf`.

```hcl
output "public_dns" {
@@ -366,7 +391,7 @@ output "public_dns" {

Format, validate and apply your configuration to see the new format of the outputs.

-```
+```bash
terraform fmt
terraform validate
terraform apply
```

```bash
public_dns = {
}
```

-## Task 7: Update the server variables to determine which instance will be destroyed.
+## Task 7: Update the server variables to determine which instance will be destroyed

-Update the `servers` local variable to remove the `server-ubuntu-16` instance by removing the following block:
+Update the `servers` local value to the following, removing the `server-ubuntu-16` key and its values:

```hcl
-  server-ubuntu-16 = {
-    identity  = "${var.prefix}-ubuntu-16"
+  servers = {
+    server-ubuntu-18 = {
+      identity  = "${var.prefix}-ubuntu-18"
      publisher = "Canonical"
      offer     = "UbuntuServer"
-     sku       = "16.04-LTS"
+     sku       = "18.04-LTS"
      version   = "latest"
    },
+  }
```

-If you run `terraform apply` now, you'll notice that this code will destroy the `server-ubuntu-16`, allowing us to target a specific instance that needs to be updated/removed.
+If you run `terraform plan` now, you'll notice that this code will destroy the `server-ubuntu-16`, allowing us to target a specific instance that needs to be updated/removed.
diff --git a/dynamic_blocks.md b/terraform_advanced/07-dynamic_blocks.md
similarity index 90%
rename from dynamic_blocks.md
rename to terraform_advanced/07-dynamic_blocks.md
index 52e7d2c..8bb488a 100644
--- a/dynamic_blocks.md
+++ b/terraform_advanced/07-dynamic_blocks.md
@@ -104,3 +104,7 @@ Take a look at the properties of the network security group to validate all the
 ```bash
 terraform state show azurerm_network_security_group.nsg
 ```
+
+## Bonus Task
+
+How could you handle rules that have different properties defined? Could you use a default value if none is defined by the local value? *Hint: the [lookup](https://www.terraform.io/docs/language/functions/lookup.html) function may be helpful.*
diff --git a/null_resource.md b/terraform_advanced/08-null_resource.md
similarity index 81%
rename from null_resource.md
rename to terraform_advanced/08-null_resource.md
index 927d925..60e9906 100644
--- a/null_resource.md
+++ b/terraform_advanced/08-null_resource.md
@@ -9,21 +9,33 @@ This lab demonstrates the use of the `null_resource`. Instances of `null_resourc

 We'll demonstrate how `null_resource` can be used to take action on a set of existing resources that are specified within the `triggers` argument

-
## Task 1: Create an Azure Virtual Machine using Terraform

### Step 1.1: Create Server instances

-Build the web servers using the Azure Virtual Machine:
+Build the web servers using the Azure Virtual Machine resource:

Create the folder structure:

```bash
mkdir ~/workstation/terraform/null_resource && cd $_
-touch {variables,main}.tf
+touch {variables,main,terraform}.tf
touch terraform.tfvars
```

+Add the following to the `terraform.tf` file:
+
+```hcl
+terraform {
+  required_providers {
+    azurerm = {
+      source  = "hashicorp/azurerm"
+      version = "~>3.0"
+    }
+  }
+}
+```
+
Update your `main.tf` with the following:

```hcl
@@ -135,7 +147,7 @@ variable "num_vms" {

Update your `terraform.tfvars` with the following and replace the `###` with your initials:

```hcl
-resource_group_name = "###-resourcegroup"
+resource_group_name = "###-nullrg"
EnvironmentTag      = "staging"
prefix              = "###"
location            = "East US"
@@ -151,7 +163,7 @@ Then perform an `init`, `plan`, and `apply`.

### Step 2.1: Use `null_resource`

-Add `null_resource` stanza to the `main.tf`. Notice that the trigger for this resource is set to monitor changes to the number of virtual machines.
+Add the `null_resource` block to the `main.tf`. Notice that the trigger for this resource is set to monitor changes to the number of virtual machines.

```hcl
resource "null_resource" "web_cluster" {
  triggers = {
    web_cluster_size = join(",",azurerm_virtual_machine.training.*.id)
  }
}
```

-Initialize the configuration with a `terraform init` followed by a `plan` and `apply`.
+The `null_resource` uses the `null` provider, so you need to initialize the configuration to download the `null` provider plugin. Then run a `terraform apply`.
+
+```bash
+terraform init
+terraform apply
+```

### Step 2.2: Re-run `plan` and `apply` to trigger `null_resource`

-After the infrastructure has completed its buildout, change your machine count (in your terraform.tfvars) and re-run a plan and apply and notice that the null resource is triggered. This is because the `web_cluster_size` changed, triggering our null_resource.
+After the infrastructure has completed its buildout, change your machine count (`num_vms` in your terraform.tfvars) and re-run a plan and apply, and notice that the null resource is triggered. This is because the `web_cluster_size` changed, triggering our null_resource.

-```shell
+```bash
terraform apply
```

-Run `apply` a few times to see the `null_resource`.
+If you run `terraform plan` again, the `null_resource` will not be triggered because the `web_cluster_size` value has not changed.

### Step 2.3: Destroy

Finally, run `destroy`.

-```shell
+```bash
terraform destroy
```
+
+## Bonus Task
+
+The `null_resource` is being deprecated in favor of the built-in `terraform_data` resource. Refactor the configuration to use the `terraform_data` resource instead of the `null_resource`.
diff --git a/azure_remote_state.md b/terraform_advanced/09-azure_remote_state.md
similarity index 82%
rename from azure_remote_state.md
rename to terraform_advanced/09-azure_remote_state.md
index 19e28c5..303e1a6 100644
--- a/azure_remote_state.md
+++ b/terraform_advanced/09-azure_remote_state.md
@@ -1,3 +1,5 @@
+# Lab: Azure Remote State
+
## Description

In this challenge you will create an Azure storage account for remote state storage and then update a configuration to use that storage account.

@@ -17,13 +19,15 @@ You will use Terraform to create the Azure storage account, a container in the s

Create the folder structure for the storage account and main configuration:

```bash
-mkdir -p ~/workstation/terraform/azure_remote_state/{storage_account,main}
+mkdir -p ~/workstation/terraform/azure_remote_state/{storage_account,vnet}
touch ~/workstation/terraform/azure_remote_state/storage_account/{terraform,main}.tf
-touch ~/workstation/terraform/azure_remote_state/main/{terraform,main}.tf
+touch ~/workstation/terraform/azure_remote_state/vnet/{terraform,main}.tf
touch ~/workstation/terraform/azure_remote_state/storage_account/terraform.tfvars
cd ~/workstation/terraform/azure_remote_state/storage_account
```
+
+First you need to deploy the storage account.
+
Add the following to the `terraform.tf` file in the `storage_account` directory:

```hcl
@@ -171,7 +175,7 @@ terraform apply
```

## Task 2: Deploy the configuration using the `local` backend

-In the `main` directory add the following to the `terraform.tf` directory:
+In the `vnet` directory, add the following to the `terraform.tf` file:

```hcl
terraform {
@@ -217,8 +221,9 @@ resource "azurerm_virtual_network" "remote_state" {

At first you are going to use the `local` backend, so the `azurerm` backend is commented out. You'll remove those comments in a moment. For now, initialize and apply the configuration:

```bash
+cd ../vnet/
terraform init
-terrform apply
+terraform apply
```

## Task 3: Update the configuration with the `azurerm` backend and migrate your state data

@@ -233,6 +238,18 @@ You are going to migrate your existing state data to the Azure storage account c

You are changing the backend for state data, so Terraform must be initialized with the new values. The `backend` block is a partial configuration. The rest of the configuration will be specified as part of the `terraform init` command. You will need that `init_string` output now to run the command.

```bash
terraform -chdir="../storage_account" output init_string
```

Do not copy the `<

> Note: You will see your github user name instead of 'azure-terraform-workshop/' since you forked this repo.

Click "Publish Module".
This will query the repository for necessary files and tags used for versioning.

Congrats, you are done!

Ok, not really...

Repeat this step for the other three modules:

- terraform-azurerm-appserver
- terraform-azurerm-dataserver
- terraform-azurerm-webserver

### Create a new GitHub repository

In GitHub, create a new public repository named "tfc-workspace-modules".

Create a single `main.tf` file with the following contents:

```hcl
variable "name" {}
variable "location" {}
variable "username" {}
variable "password" {}

provider "azurerm" {
  features {}
}

variable "vnet_address_spacing" {
  type = list
}

variable "subnet_address_prefixes" {
  type = list
}

module "networking" {
  source  = "app.terraform.io/YOUR_ORG_NAME/networking/azurerm"
  version = "0.12.0"

  name                    = var.name
  location                = var.location
  vnet_address_spacing    = var.vnet_address_spacing
  subnet_address_prefixes = var.subnet_address_prefixes
}
```

Update the source arguments to your organization by replacing "YOUR_ORG_NAME" with your TFC organization name.

Commit the file and check the code into GitHub.

### Create a workspace

Create a TFC workspace that uses the VCS connection to load this new repository.

![](img/tfe-new-workspace.png)

Select the repository and give the workspace the same name, "tfc-workspace-modules".

![](img/tfe-new-workspace-final.png)

### Configure Workspace Variables

Navigate back to your "tfc-workspace-modules" workspace.

Set the Terraform Variables:

- 'name' - A unique environment name such as `###env`
- 'location' - An Azure region such as `eastus` or `centralus`
- 'username' (sensitive) - A username for the VMs
> Note: this cannot be "admin"
- 'password' (sensitive) - A password for the VMs
> NOTE: password must be between 6-72 characters long and must satisfy at least 3 of the password complexity requirements from the following:
> 1. Contains an uppercase character
> 2. Contains a lowercase character
> 3. Contains a numeric digit
> 4. Contains a special character
- 'vnet_address_spacing' (HCL) - The Vnet Address space
  ```hcl
  ["10.0.0.0/16"]
  ```
- 'subnet_address_prefixes' (HCL) - The Subnet Address spaces representing 3 subnets
  ```hcl
  [
    "10.0.0.0/24",
    "10.0.1.0/24",
    "10.0.2.0/24"
  ]
  ```

Set Environment Variables for your Azure Service Principal (be sure to check the 'sensitive' checkbox to hide these values):

- ARM_TENANT_ID
- ARM_SUBSCRIPTION_ID
- ARM_CLIENT_ID
- ARM_CLIENT_SECRET

### Run a Plan

Click the "Queue Plan" button.

![](img/tfe-queue-plan.png)

Wait for the Plan to complete.

You should see several additions to deploy your networking.

### Apply the Plan

Approve the plan and apply it.

Watch the apply progress and complete.

Log in to the Azure Portal to see your infrastructure.

### Update a Module

In the `tfc-workspace-modules` repository, navigate to the `main.tf` file.
Add the following to deploy the rest of your application (again, be sure to update the source references):

```hcl
module "webserver" {
  source  = "app.terraform.io/YOUR_ORG_NAME/webserver/azurerm"
  version = "0.12.0"

  name      = var.name
  location  = var.location
  subnet_id = module.networking.subnet-ids[0]
  vm_count  = 1
  username  = var.username
  password  = var.password
}

module "appserver" {
  source  = "app.terraform.io/YOUR_ORG_NAME/appserver/azurerm"
  version = "0.12.0"

  name      = var.name
  location  = var.location
  subnet_id = module.networking.subnet-ids[1]
  vm_count  = 1
  username  = var.username
  password  = var.password
}

module "dataserver" {
  source  = "app.terraform.io/YOUR_ORG_NAME/dataserver/azurerm"
  version = "0.12.0"

  name      = var.name
  location  = var.location
  subnet_id = module.networking.subnet-ids[2]
  vm_count  = 1
  username  = var.username
  password  = var.password
}
```

Commit your change and see what changes show up in the plan.

If you are satisfied with the changes, apply them.

## Advanced areas to explore

1. Make a change to a module repository and tag it in such a way that the change shows in your Private Module Registry.

## Clean Up

Navigate to the workspace "Settings" -> "Destruction and Deletion".

Click "Queue Destroy Plan".

Once the plan completes, apply it to destroy your infrastructure.

## Resources

- [Private Registries](https://www.terraform.io/docs/registry/private.html)
- [Publishing Modules](https://www.terraform.io/docs/registry/modules/publish.html)
diff --git a/terraform_advanced/14-tfc_vcs_workflow.md b/terraform_advanced/14-tfc_vcs_workflow.md
new file mode 100644
index 0000000..e9e352f
--- /dev/null
+++ b/terraform_advanced/14-tfc_vcs_workflow.md
@@ -0,0 +1,39 @@
# Terraform Enterprise - VCS Connection

## Expected Outcome

In this challenge, you will connect TFE to your personal GitHub account.

## How to

### Create the VCS Connection

Log in to GitHub in one browser tab.

Log in to TFE in another browser tab.

Within TFE, navigate to the settings page:

![](img/tfe-settings.png)

Click the "VCS Providers" link:

![](img/tfe-settings-vcs.png)

Follow the instructions on the documentation page.

The process involves several back-and-forth steps and is well documented in the link.

### Verify Connection

Navigate to and click "+ New Workspace".

Click the VCS Connection in the "Source" section.

Verify you can see repositories:

![](img/tfe-vcs-verify.png)

If you can see repositories, then you are good :+1:.

In the next lab you will create a repo and workspace.
diff --git a/tfc-teams-governance.md b/terraform_advanced/15-tfc-teams-governance.md
similarity index 100%
rename from tfc-teams-governance.md
rename to terraform_advanced/15-tfc-teams-governance.md
diff --git a/vcs-code-promote.md b/terraform_advanced/16-vcs-code-promote.md
similarity index 100%
rename from vcs-code-promote.md
rename to terraform_advanced/16-vcs-code-promote.md
diff --git a/automated_testing.md b/terraform_advanced/17-automated_testing.md
similarity index 100%
rename from automated_testing.md
rename to terraform_advanced/17-automated_testing.md
diff --git a/terraform_advanced/18-tfc_sentinel_use.md b/terraform_advanced/18-tfc_sentinel_use.md
new file mode 100644
index 0000000..b40f8da
--- /dev/null
+++ b/terraform_advanced/18-tfc_sentinel_use.md
@@ -0,0 +1,285 @@
# Terraform Enterprise - Sentinel Policy Use

## Expected Outcome

In this challenge, you will see how you can apply policies around your Azure subscriptions using Sentinel Policies.

## How to

### View Policies

In the Terraform Enterprise web app, click on your organization -> Organization Settings.

![](img/sentinel-policy-add.png)

### Create Policy Set

First we need a place to store our policies, namely a Policy Set.

On the left menu, click the "Policy set" tab.

Click "Create new policy set".

![](img/sentinel-policyset-add-new.png)

Create the following policy set:

![](img/sentinel-policyset-add-new-form.png)

__Name:__ MyWorkspacePolicies

__Description:__ Policies I use for user 'INSERT USERNAME'.

__Policy Set Source__: Select Upload Via API

__Scope of Policies:__ Select -> "Policies enforced on selected workspaces"

__Policies:__ Select the Policy created above -> Click "Add"

__Workspaces:__ Select the workspace you created in the `vcs-code-promote` lab ("web-net-prod") -> Click "Add"

### Create Policy

Now let's create a Policy to enforce governance.

Click "Create new policy".

![](img/sentinel-policy-add-new.png)

Create the following policy:

__Policy Name:__ ResourceGroupRequireTag

__Description:__ Policy requiring resource group tags

__Policy Enforcement:__ advisory (logging only)

__Policy Code:__

```hcl
import "tfplan"

required_tags = [
  "owner",
  "environment",
]

getTags = func(group) {
  tags = keys(group.applied.tags)

  for required_tags as t {
    if t not in tags {
      print("Resource Missing Tag:", t)
      return false
    }
  }

  return true
}
main = rule {
  all tfplan.resources.azurerm_resource_group as _, groups {
    all groups as _, group {
      getTags(group)
    }
  }
}
```

__Policy Sets__: Select the Policy Set we just created, "MyWorkspacePolicies".

### Manually Run a Plan

> Note: be sure to discard any existing plans.

Navigate to your "web-net-prod" workspace and queue a plan.

### Review the Plan

You will see that the plan was successful but there was a policy failure; however, the option to Apply is still available. Why is that?

![](img/sentinel-advisory.png)

**Discard the plan.**

### Update the Policy

Update the Policy Enforcement to be `hard-mandatory`.

![](img/tfe-policy-hard-mandatory.png)

### Run a Plan

Queue a plan for the workspace.

### Review the Plan

This time the run fails due to the hard enforcement.
+ +![](img/tfe-policy-fail.png) + +### Sentinel - Advanced + +Create a new Sentinel Policy with following policy: + +__Policy Name:__ ResourceGroupRequireTag-Advanced + +__Description:__ Policy requiring resource group tags, advanced + +__Policy Enforcement:__ hard-mandatory + +__Policy Code:__ + +```hcl +# This policy uses the Sentinel tfplan import to require that all Azure Resource Groups have all mandatory tags. + +##### Imports ##### + +import "tfplan" +import "strings" +import "types" + +### List of mandatory tags ### +mandatory_tags = [ + "owner", + "environment", +] + +##### Functions ##### + +# Find all resources of a specific type from all modules using the tfplan import +find_resources_from_plan = func(type) { + + resources = {} + + # Iterate over all modules in the tfplan import + for tfplan.module_paths as path { + # Iterate over the named resources of desired type in the module + for tfplan.module(path).resources[type] else {} as name, instances { + # Iterate over resource instances + for instances as index, r { + + # Get the address of the instance + if length(path) == 0 { + # root module + address = type + "." + name + "[" + string(index) + "]" + } else { + # non-root module + address = "module." + strings.join(path, ".module.") + "." + + type + "." + name + "[" + string(index) + "]" + } + + # Add the instance to resources map, setting the key to the address + resources[address] = r + } + } + } + + return resources +} + +# Validate that all instances of specified type have a specified top-level +# attribute that contains all members of a given list +validate_attribute_contains_list = func(type, attribute, required_values) { + + validated = true + + # Get all resource instances of the specified type + resource_instances = find_resources_from_plan(type) + + # Loop through the resource instances + for resource_instances as address, r { + + # Skip resource instances that are being destroyed + # to avoid unnecessary policy violations. + # Used to be: if length(r.diff) == 0 + if r.destroy { + print("Skipping resource", address, "that is being destroyed.") + continue + } + + # Determine if the attribute is computed + # We check "attribute.%" and "attribute.#" because an + # attribute of type map or list won't show up in the diff + if (r.diff[attribute + ".%"].computed else false) or + (r.diff[attribute + ".#"].computed else false) { + print("Resource", address, "has attribute", attribute, + "that is computed.") + # If you want computed values to cause the policy to fail, + # uncomment the next line. 
+ # validated = false + } else { + # Validate that the attribute is a list or a map + if length(r.applied[attribute]) else 0 > 0 and + (types.type_of(r.applied[attribute]) is "list" or + types.type_of(r.applied[attribute]) is "map") { + + # Evaluate each member of required_values list + for required_values as rv { + if r.applied[attribute] not contains rv { + print("Resource", address, "has attribute", attribute, + "that is missing required value", rv, "from the list:", + required_values) + validated = false + } // end rv + } // end required_values + + } else { + print("Resource", address, "is missing attribute", attribute, + "or it is not a list or a map") + validated = false + } // end check that attribute is list or map + + } // end computed check + } // end resource instances + + return validated +} + +### Rules ### + +# Call the validation function +tags_validated = validate_attribute_contains_list("azurerm_resource_group", + "tags", mandatory_tags) + +#Main rule that evaluates results +main = rule { + tags_validated +} +``` + +__Policy Sets__: Select the Policy Set "MyWorkspacePolicies". + +### Run another plan + +We know this will fail due to our first policy, but this advanced policy provides more valuable information to the end user. + +![](img/tfe-policy-fail-advanced.png) + +### Update Workspace + +Update the workspace `main.tf` to comply with the policy failure. What change is required? + +Save and commit the code to your repository. + +### Run a Plan + +Run another plan. + +> Note: You may need to discard the last non-applied build. + +### Review the Plan + +The plan should succeed and now pass the sentinel policy check. + +## Advanced areas to explore + +1. Write another Sentinel Policy restricting VM types in Azure. + +## Resources + +- [Policy](https://app.terraform.io/app/cardinalsolutions/settings/policies) +- [Sentinel Language Spec](https://docs.hashicorp.com/sentinel/language/spec) diff --git a/terraform_fundmentals/00-instruqt.md b/terraform_fundamentals/00-instruqt.md similarity index 100% rename from terraform_fundmentals/00-instruqt.md rename to terraform_fundamentals/00-instruqt.md diff --git a/terraform_fundmentals/00-vscode.md b/terraform_fundamentals/00-vscode.md similarity index 100% rename from terraform_fundmentals/00-vscode.md rename to terraform_fundamentals/00-vscode.md diff --git a/terraform_fundmentals/01-basic_commands.md b/terraform_fundamentals/01-basic_commands.md similarity index 100% rename from terraform_fundmentals/01-basic_commands.md rename to terraform_fundamentals/01-basic_commands.md diff --git a/terraform_fundmentals/02-basic_configuration.md b/terraform_fundamentals/02-basic_configuration.md similarity index 100% rename from terraform_fundmentals/02-basic_configuration.md rename to terraform_fundamentals/02-basic_configuration.md diff --git a/terraform_fundmentals/03-virtual_machine.md b/terraform_fundamentals/03-virtual_machine.md similarity index 100% rename from terraform_fundmentals/03-virtual_machine.md rename to terraform_fundamentals/03-virtual_machine.md diff --git a/terraform_fundmentals/04-outputs.md b/terraform_fundamentals/04-outputs.md similarity index 100% rename from terraform_fundmentals/04-outputs.md rename to terraform_fundamentals/04-outputs.md diff --git a/terraform_fundmentals/05-console.md b/terraform_fundamentals/05-console.md similarity index 100% rename from terraform_fundmentals/05-console.md rename to terraform_fundamentals/05-console.md diff --git a/terraform_fundmentals/06-variables.md 
b/terraform_fundamentals/06-variables.md similarity index 100% rename from terraform_fundmentals/06-variables.md rename to terraform_fundamentals/06-variables.md diff --git a/terraform_fundmentals/07-format_validate.md b/terraform_fundamentals/07-format_validate.md similarity index 100% rename from terraform_fundmentals/07-format_validate.md rename to terraform_fundamentals/07-format_validate.md diff --git a/terraform_fundmentals/08-modules.md b/terraform_fundamentals/08-modules.md similarity index 100% rename from terraform_fundmentals/08-modules.md rename to terraform_fundamentals/08-modules.md diff --git a/terraform_fundmentals/09-provisioners.md b/terraform_fundamentals/09-provisioners.md similarity index 100% rename from terraform_fundmentals/09-provisioners.md rename to terraform_fundamentals/09-provisioners.md diff --git a/terraform_fundmentals/10-graph.md b/terraform_fundamentals/10-graph.md similarity index 100% rename from terraform_fundmentals/10-graph.md rename to terraform_fundamentals/10-graph.md diff --git a/terraform_fundmentals/11-meta-arguments.md b/terraform_fundamentals/11-meta-arguments.md similarity index 99% rename from terraform_fundmentals/11-meta-arguments.md rename to terraform_fundamentals/11-meta-arguments.md index bfa92c8..4306784 100644 --- a/terraform_fundmentals/11-meta-arguments.md +++ b/terraform_fundamentals/11-meta-arguments.md @@ -52,7 +52,7 @@ resource "azurerm_network_interface" "training" { ip_configuration { name = "azureuser${var.prefix}ip" subnet_id = azurerm_subnet.training.id - private_ip_address_allocation = "dynamic" + private_ip_address_allocation = "Dynamic" #private_ip_address = "10.0.2.5" public_ip_address_id = azurerm_public_ip.training[count.index].id } diff --git a/terraform_fundmentals/12-destroy.md b/terraform_fundamentals/12-destroy.md similarity index 100% rename from terraform_fundmentals/12-destroy.md rename to terraform_fundamentals/12-destroy.md diff --git a/terraform_fundmentals/13-azure_auth.md b/terraform_fundamentals/13-azure_auth.md similarity index 100% rename from terraform_fundmentals/13-azure_auth.md rename to terraform_fundamentals/13-azure_auth.md diff --git a/terraform_fundmentals/14-data_source.md b/terraform_fundamentals/14-data_source.md similarity index 100% rename from terraform_fundmentals/14-data_source.md rename to terraform_fundamentals/14-data_source.md diff --git a/terraform_fundmentals/15-reading_state.md b/terraform_fundamentals/15-reading_state.md similarity index 100% rename from terraform_fundmentals/15-reading_state.md rename to terraform_fundamentals/15-reading_state.md diff --git a/terraform_fundmentals/16-import.md b/terraform_fundamentals/16-import.md similarity index 100% rename from terraform_fundmentals/16-import.md rename to terraform_fundamentals/16-import.md diff --git a/terraform_fundmentals/17-store-state.md b/terraform_fundamentals/17-store-state.md similarity index 100% rename from terraform_fundmentals/17-store-state.md rename to terraform_fundamentals/17-store-state.md diff --git a/terraform_fundmentals/18-secure_variables.md b/terraform_fundamentals/18-secure_variables.md similarity index 100% rename from terraform_fundmentals/18-secure_variables.md rename to terraform_fundamentals/18-secure_variables.md diff --git a/terraform_fundmentals/19-lifecycles.md b/terraform_fundamentals/19-lifecycles.md similarity index 100% rename from terraform_fundmentals/19-lifecycles.md rename to terraform_fundamentals/19-lifecycles.md diff --git a/terraform_fundmentals/20-templatefile.md 
b/terraform_fundamentals/20-templatefile.md similarity index 100% rename from terraform_fundmentals/20-templatefile.md rename to terraform_fundamentals/20-templatefile.md diff --git a/terraform_fundmentals/21-debug.md b/terraform_fundamentals/21-debug.md similarity index 100% rename from terraform_fundmentals/21-debug.md rename to terraform_fundamentals/21-debug.md diff --git a/versions.md b/versions.md deleted file mode 100644 index b449fb4..0000000 --- a/versions.md +++ /dev/null @@ -1,130 +0,0 @@ -## Description - -In this challenge you will configure your Terraform code to control which versions of Terraform and Terraform providers that the code is compatible with. - -Duration: 10 minutes - -- Task 1: Check Terraform version -- Task 2: Require specific versions of Terraform -- Task 3: Require specific versions of Providers -- Task 4: Format and Validate Terraform Configuration -- Task 5: Validate versions of Terraform and Required Providers - -## Task 1: Check Terraform version - -Check the version of Terraform you are running. - -```bash -terraform version -``` - -```bash -Terraform v1.0.8 -on linux_amd64 -+ provider registry.terraform.io/hashicorp/aws v3.62.0 -+ provider registry.terraform.io/hashicorp/random v3.1.0 -``` - -## Task 2: Require specific versions of Terraform - -Create a Terraform configuration block within a `terraform.tf` in the `~/workstation/terraform/versions` directory to specify which version of Terraform is required to run this code base. - -```bash -mkdir ~/workstation/terraform/versions && cd $_ -touch {terraform,main}.tf -``` - -`terraform.tf` - -```hcl -terraform { - required_version = ">= 1.0.0" -} -``` - -This informs Terraform that it must be at least of version 1.0.0 to run the code. If Terraform is an earlier version it will throw an error. You can validate your configuration parameters easily. - -``` -terraform validate -``` - -```bash -Success! The configuration is valid. -``` - -## Task 3: Require specific versions of Providers - -Terraform Providers are plugins that implement resource types for particular clouds, platforms and generally speaking any remote system with an API. Terraform configurations must declare which providers they require, so that Terraform can install and use them. Popular Terraform Providers include: AWS, Azure, Google Cloud, VMware, Kubernetes and Oracle. - -You can update the terraform configuration block to specify a compatible AWS provider version similar to how you did for the Terraform version. Update the `terraform.tf` with a `required_providers`: - -```hcl -terraform { - required_version = ">= 1.0.0" - required_providers { - azurerm = { - source = "hashicorp/azurerm" - version = "~> 3.0" - } - } -} -``` - -By default Terraform will always pull the latest provider if no version is set. However setting a version provides a way to ensure your Terraform code remains working in the event a newer version introduces a change that -would not work with your existing code. To have more strict controls over the version you may want to require a specific version ( e.g. required_version = "= 1.0.0" ) or use the ~>operator to only allow the right-most version number to increment. - -## Task 4: Format and Validate Terraform Configuration - -Initialize, Format and Validate your terraform configuration by executing the following from the `~/workstation/terraform` directory in the code terminal. 
- -```bash -cd ~/workstation/terraform/versions -terraform init -upgrade -terraform fmt -recursive -terraform validate -``` - -## Task 5: Validate versions of Terraform and Required Providers - -To see the version of Terraform and providers installed, along with which versions are required by the current configuration you can issue the following commands: - -```bash -terraform version -terraform providers -``` - -```bash -Providers required by configuration: -. -└── provider[registry.terraform.io/hashicorp/azurerm] ~> 3.6.0 - -Providers required by state: - - provider[registry.terraform.io/hashicorp/azurerm] -``` - -## Task 6: Add some basic configuration objects and deploy it - -Now you can add some Azure resources to the configuration and deploy them. - -`main.tf` - -Replace the prefix with with your initials. - -```hcl -provider "azurerm" { - features {} -} - -resource "azurerm_resource_group" "training" { - name = "-resourcegroup" - location = "East US" -} -``` - -Then run the standard workflow: - -```bash -terraform plan -terraform apply -```