From 5c1cf1a5fc2aebd9c32cc4539da9b50a4dcd3d7a Mon Sep 17 00:00:00 2001 From: ned1313 Date: Wed, 16 Aug 2023 09:12:44 -0400 Subject: [PATCH] Day 3 ready --- terraform_advanced/02-versions.md | 2 + terraform_advanced/03-workspaces.md | 7 +++ terraform_advanced/06-for_each.md | 63 +++++++++++---------- terraform_advanced/07-dynamic_blocks.md | 4 ++ terraform_advanced/08-null_resource.md | 41 ++++++++++---- terraform_advanced/09-azure_remote_state.md | 59 +++++++++++++++++-- 6 files changed, 132 insertions(+), 44 deletions(-) diff --git a/terraform_advanced/02-versions.md b/terraform_advanced/02-versions.md index fe5b21a..0f16b59 100644 --- a/terraform_advanced/02-versions.md +++ b/terraform_advanced/02-versions.md @@ -218,3 +218,5 @@ Now we can run `terraform plan` and it will execute successfully. ```bash terraform plan ``` + +You do not need to actually deploy the configuration, unless you really want to. \ No newline at end of file diff --git a/terraform_advanced/03-workspaces.md b/terraform_advanced/03-workspaces.md index 9f3aafe..0004d8f 100644 --- a/terraform_advanced/03-workspaces.md +++ b/terraform_advanced/03-workspaces.md @@ -122,6 +122,13 @@ Check out all your workspaces! terraform workspace list ``` +You can also view all the state files in the `terraform.tfstate.d` directory: + +```bash +ls -l terraform.tfstate.d/ +ls -l terraform.tfstate.d/development/ +``` + ## Task 5: Destroy and delete the staging workspace Try to delete the staging workspace: diff --git a/terraform_advanced/06-for_each.md b/terraform_advanced/06-for_each.md index d132379..570555b 100644 --- a/terraform_advanced/06-for_each.md +++ b/terraform_advanced/06-for_each.md @@ -6,12 +6,12 @@ So far, we've already used arguments to configure your resources. These argument The count argument does however have a few limitations in that it is entirely dependent on the count index which can be shown by performing a `terraform state list`. -A more mature approach to create multiple instances while keeping code DRY is to leverage Terraform's `for-each`. +A more mature approach to create multiple instances while keeping code DRY is to leverage Terraform's `for_each`. - Task 1: Change the number of VM instances with `count` - Task 2: Look at the number of VM instances with `terraform state list` - Task 3: Decrease the Count and determine which instance will be destroyed. -- Task 4: Refactor code to use Terraform `for-each` +- Task 4: Refactor code to use Terraform `for_each` - Task 5: Look at the number of VM instances with `terraform state list` - Task 6: Update the output variables to pull IP and DNS addresses. - Task 7: Update the server variables to determine which instance will be destroyed. @@ -39,7 +39,7 @@ terraform { } ``` -Update the root `main.tf` to utilize the `count` parameter on the VM resource. Notice the count has been variablized to specify the number of VMs. +Populate the root `main.tf` utilizing the `count` parameter on the VM resource. Notice the count has been variablized to specify the number of VMs. `main.tf` @@ -143,7 +143,6 @@ output "public_dns" { `variables.tf` ```hcl variable "prefix" { - default = "" type = string description = "Prefix to append to resources" } @@ -196,29 +195,23 @@ terraform state list ```bash azurerm_network_interface.training[0] azurerm_network_interface.training[1] - +... azurerm_public_ip.training[0] azurerm_public_ip.training[1] - - +... 
azurerm_virtual_machine.training[0]
azurerm_virtual_machine.training[1]
-
```

-Notice the way resources are indexed when using meta-arguments.
+Notice the way resources are indexed when using the `count` meta-argument.

## Task 3: Decrease the Count and determine which instance will be destroyed

Update the count from `2` to `1` by changing the `num_vms` variable in your `terraform.tfvars` file.

-Replace the `###` with your initials.
-
+`terraform.tfvars`
```hcl
-prefix         = "###"
-location       = "East US"
-admin_username = "testadmin"
-admin_password = "Password1234!"
+...
num_vms        = 1
```

@@ -228,9 +221,11 @@ Run a `terraform apply` followed by a `terraform state list` to view how the ser
terraform apply
```

-```
+```bash
terraform state list
+```
+
+```bash
azurerm_network_interface.training[0]
azurerm_public_ip.training[0]
azurerm_resource_group.training
@@ -241,9 +236,9 @@ azurerm_virtual_network.training

You will see that when using the `count` parameter you have very limited control as to which server Terraform will destroy. It will always default to destroying the server with the highest index count.

-## Task 4: Refactor code to use Terraform `for-each`
+## Task 4: Refactor code to use Terraform `for_each`

-Refactor `main.tf` to make use of the `for-each` command rather then the count command. Replace the following in the `main.tf` and comment out the `output` blocks for now.
+Refactor `main.tf` to make use of the `for_each` meta-argument rather than the `count` meta-argument. Replace the contents of `main.tf` with the following, and comment out the `output` blocks in `outputs.tf` for now.

```hcl
locals {
@@ -355,13 +350,21 @@ resource "azurerm_virtual_machine" "training" {
}
```

-If you run `terraform apply` now, you'll notice that this code will destroy the previous resource and create two new servers based on the attributes defined inside the `servers` variable, which is defined as a map of our servers.
+Run `terraform apply` now.
+
+```bash
+terraform apply
+```
+
+You'll notice that this code will destroy the previously created resources and create two new servers based on the attributes defined in the `servers` local value, which is a map of our servers.

### Task 5: Look at the number of VM instances with `terraform state list`

```bash
terraform state list
+```

+```bash
azurerm_network_interface.training["server-ubuntu-16"]
azurerm_network_interface.training["server-ubuntu-18"]
azurerm_public_ip.training["server-ubuntu-16"]
@@ -373,11 +376,11 @@ azurerm_virtual_machine.training["server-ubuntu-18"]
azurerm_virtual_network.training
```

-Since we used _for-each_ to the azurerm_virtual_machine.training resource, it now refers to multiple resources with key references from the `servers` variable.
+Since we used `for_each` to create the azurerm_virtual_machine.training resource, it now refers to multiple resources with key references from the `servers` local value.

-### Task 6: Update the output variables to pull IP and DNS addresses.
+### Task 6: Update the output variables to pull IP and DNS addresses

-When using Terraform's `for-each` our output blocks need to be updated to utilize `for` to loop through the server names. This differs from using `count` which utilized the Terraform splat operator `*`. Uncomment and update the output block of your `main.tf`.
+When using Terraform's `for_each`, our output blocks need to be updated to use `for` expressions to loop through the server names. This differs from using `count`, which used the Terraform splat operator `*` (see the comparison sketch below). 
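+For comparison, here is a rough sketch of the two styles. It is illustrative only; it assumes the output reads the public IP's `fqdn` attribute, so the exact expression in the lab's `outputs.tf` may differ:
+
+```hcl
+# With count, a splat expression returns a list of values:
+# output "public_dns" {
+#   value = azurerm_public_ip.training[*].fqdn
+# }
+
+# With for_each, a for expression builds a map keyed by server name:
+output "public_dns" {
+  value = { for name, pip in azurerm_public_ip.training : name => pip.fqdn }
+}
+```
+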
Update the output block of your `outputs.tf`. ```hcl output "public_dns" { @@ -388,7 +391,7 @@ output "public_dns" { Format, validate and apply your configuration to now see the format of the Outputs. -``` +```bash terraform fmt terraform validate terraform apply @@ -401,18 +404,20 @@ public_dns = { } ``` -## Task 7: Update the server variables to determine which instance will be destroyed. +## Task 7: Update the server variables to determine which instance will be destroyed -Update the `servers` local variable to remove the `server-ubuntu-16` instance by removing the following block: +Update the `servers` local value to the following, removing the `server-ubuntu-16` key and its values: ```hcl - server-ubuntu-16 = { - identity = "${var.prefix}-ubuntu-16" + servers = { + server-ubuntu-18 = { + identity = "${var.prefix}-ubuntu-18" publisher = "Canonical" offer = "UbuntuServer" - sku = "16.04-LTS" + sku = "18.04-LTS" version = "latest" }, + } ``` -If you run `terraform apply` now, you'll notice that this code will destroy the `server-ubuntu-16`, allowing us to target a specific instance that needs to be updated/removed. +If you run `terraform plan` now, you'll notice that this code will destroy the `server-ubuntu-16`, allowing us to target a specific instance that needs to be updated/removed. diff --git a/terraform_advanced/07-dynamic_blocks.md b/terraform_advanced/07-dynamic_blocks.md index 52e7d2c..8bb488a 100644 --- a/terraform_advanced/07-dynamic_blocks.md +++ b/terraform_advanced/07-dynamic_blocks.md @@ -104,3 +104,7 @@ Take a look at the properties of the network security group to validate all the ```bash terraform state show azurerm_network_security_group.nsg ``` + +## Bonus Task + +How could you handle rules that have different properties defined? Could you use a default value if none is defined by the local value? *Hint: the [lookup](https://www.terraform.io/docs/language/functions/lookup.html) function may be helpful.* diff --git a/terraform_advanced/08-null_resource.md b/terraform_advanced/08-null_resource.md index 927d925..60e9906 100644 --- a/terraform_advanced/08-null_resource.md +++ b/terraform_advanced/08-null_resource.md @@ -9,21 +9,33 @@ This lab demonstrates the use of the `null_resource`. Instances of `null_resourc We'll demonstrate how `null_resource` can be used to take action on a set of existing resources that are specified within the `triggers` argument - ## Task 1: Create a Azure Virtual Machine using Terraform ### Step 1.1: Create Server instances -Build the web servers using the Azure Virtual Machine: +Build the web servers using the Azure Virtual Machine resource: Create the folder structure: ```bash mkdir ~/workstation/terraform/null_resource && cd $_ -touch {variables,main}.tf +touch {variables,main,terraform}.tf touch terraform.tfvars ``` +Add the following to the `terraform.tf` file: + +```hcl +terraform { + required_providers { + azurerm = { + source = "hashicorp/azurerm" + version = "~>3.0" + } + } +} +``` + Update your `main.tf` with the following: ```hcl @@ -135,7 +147,7 @@ variable "num_vms" { Update or your `terraform.tfvars` with the following and replace the `###` with your initials: ```hcl -resource_group_name = "###-resourcegroup" +resource_group_name = "###-nullrg" EnvironmentTag = "staging" prefix = "###" location = "East US" @@ -151,7 +163,7 @@ Then perform an `init`, `plan`, and `apply`. ### Step 2.1: Use `null_resource` -Add `null_resource` stanza to the `main.tf`. 
Notice that the trigger for this resource is set to monitor changes to the number of virtual machines.

```hcl
resource "null_resource" "web_cluster" {
@@ -173,22 +185,31 @@ resource "null_resource" "web_cluster" {
}
```

-Initialize the configuration with a `terraform init` followed by a `plan` and `apply`.
+The `null_resource` uses the `null` provider, so you need to initialize the configuration to download the `null` provider plugin. Then run a `terraform apply`.
+
+```bash
+terraform init
+terraform apply
+```

### Step 2.2: Re-run `plan` and `apply` to trigger `null_resource`

-After the infrastructure has completed its buildout, change your machine count (in your terraform.tfvars) and re-run a plan and apply and notice that the null resource is triggered. This is because the `web_cluster_size` changed, triggering our null_resource.
+After the infrastructure has completed its buildout, change your machine count (`num_vms` in your terraform.tfvars) and re-run a plan and apply. Notice that the null resource is triggered. This is because the `web_cluster_size` value changed, triggering our null_resource.

-```shell
+```bash
terraform apply
```

-Run `apply` a few times to see the `null_resource`.
+If you run `terraform plan` again, the `null_resource` will not be triggered because the `web_cluster_size` value has not changed.

### Step 2.3: Destroy

Finally, run `destroy`.

-```shell
+```bash
terraform destroy
```
+
+## Bonus Task
+
+The `null_resource` is being deprecated in favor of the built-in `terraform_data` resource. Refactor the configuration to use the `terraform_data` resource instead of the `null_resource`.
diff --git a/terraform_advanced/09-azure_remote_state.md b/terraform_advanced/09-azure_remote_state.md
index 19e28c5..303e1a6 100644
--- a/terraform_advanced/09-azure_remote_state.md
+++ b/terraform_advanced/09-azure_remote_state.md
@@ -1,3 +1,5 @@
+# Lab: Azure Remote State
+
## Description

In this challenge you will create an Azure storage account for remote state storage and then update a configuration to use that storage account.
@@ -17,13 +19,15 @@ You will use Terraform to create the Azure storage account, a container in the s
Create the folder structure for the storage account and main configuration:

```bash
-mkdir -p ~/workstation/terraform/azure_remote_state/{storage_account,main}
+mkdir -p ~/workstation/terraform/azure_remote_state/{storage_account,vnet}
touch ~/workstation/terraform/azure_remote_state/storage_account/{terraform,main}.tf
-touch ~/workstation/terraform/azure_remote_state/main/{terraform,main}.tf
+touch ~/workstation/terraform/azure_remote_state/vnet/{terraform,main}.tf
touch ~/workstation/terraform/azure_remote_state/storage_account/terraform.tfvars
cd ~/workstation/terraform/azure_remote_state/storage_account
```

+First you need to deploy the storage account.
+
Add the following to the `terraform.tf` file in the `storage_account` directory:

```hcl
@@ -171,7 +175,7 @@ terraform apply

## Task 2: Deploy the configuration using the `local` backend

-In the `main` directory add the following to the `terraform.tf` directory:
+In the `vnet` directory, add the following to the `terraform.tf` file:

```hcl
terraform {
@@ -217,8 +221,9 @@ resource "azurerm_virtual_network" "remote_state" {

At first you are going to use the `local` backend, so the `azurerm` backend is commented out. 
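+For reference, the commented-out block in `terraform.tf` is a partial `azurerm` backend configuration. The sketch below shows the general pattern; it is illustrative only, since the lab's actual `terraform.tf` contents are not part of this diff, and the real values come from the `init_string` output used in Task 3:
+
+```hcl
+terraform {
+  # (required_providers and other settings omitted from this sketch)
+
+  # Uncommented in Task 3. This is a partial configuration: the storage
+  # account, container, and access key are supplied at `terraform init`
+  # time instead of being hard-coded here.
+  # backend "azurerm" {
+  #   key = "terraform.tfstate"
+  # }
+}
+```
+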
You'll remove those comments in a moment. For now, initialize and apply the configuration: ```bash +cd ../vnet/ terraform init -terrform apply +terraform apply ``` ## Task 3: Update the configuration with the `azurerm` backend and migrate your state data @@ -233,6 +238,18 @@ You are going to migrate your existing state data to the Azure storage account c You are changing the backend for state data, so Terraform must be initialized with the new values. The `backend` block is a partial configuration. The rest of the configuration will be specified as part of the `terraform init` command. You will need that `init_string` output now to run the command. +```bash +terraform -chdir="../storage_account" output init_string +``` + +Do not copy the `<