Day 3 ready
ned1313 committed Aug 16, 2023
1 parent cae4f83 commit 5c1cf1a
Showing 6 changed files with 132 additions and 44 deletions.
2 changes: 2 additions & 0 deletions terraform_advanced/02-versions.md
@@ -218,3 +218,5 @@ Now we can run `terraform plan` and it will execute successfully.
```bash
terraform plan
```
You do not need to actually deploy the configuration, unless you really want to.
7 changes: 7 additions & 0 deletions terraform_advanced/03-workspaces.md
@@ -122,6 +122,13 @@ Check out all your workspaces!
terraform workspace list
```

You can also view all the state files in the `terraform.tfstate.d` directory:

```bash
ls -l terraform.tfstate.d/
ls -l terraform.tfstate.d/development/
```

## Task 5: Destroy and delete the staging workspace

Try to delete the staging workspace:
63 changes: 34 additions & 29 deletions terraform_advanced/06-for_each.md
@@ -6,12 +6,12 @@ So far, we've already used arguments to configure your resources. These argument

The `count` argument does, however, have a few limitations: it is entirely dependent on the count index, which can be seen by performing a `terraform state list`.

A more mature approach to create multiple instances while keeping code DRY is to leverage Terraform's `for_each`.
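As a generic illustration of the difference (a sketch, not part of the lab's configuration), `for_each` accepts a map or a set of strings and exposes `each.key` and `each.value` inside the block, so instances are tracked by key rather than by position:

```hcl
# Hypothetical example: two resource group instances keyed by map key
resource "azurerm_resource_group" "example" {
  for_each = {
    dev  = "East US"
    prod = "West US"
  }

  name     = "rg-${each.key}"
  location = each.value
}
```

These instances appear in state as `azurerm_resource_group.example["dev"]` and `azurerm_resource_group.example["prod"]`, so removing one key destroys exactly that instance.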

- Task 1: Change the number of VM instances with `count`
- Task 2: Look at the number of VM instances with `terraform state list`
- Task 3: Decrease the Count and determine which instance will be destroyed.
- Task 4: Refactor code to use Terraform `for_each`
- Task 5: Look at the number of VM instances with `terraform state list`
- Task 6: Update the output variables to pull IP and DNS addresses.
- Task 7: Update the server variables to determine which instance will be destroyed.
@@ -39,7 +39,7 @@ terraform {
}
```

Populate the root `main.tf` utilizing the `count` parameter on the VM resource. Notice the count has been variablized to specify the number of VMs.

`main.tf`

@@ -143,7 +143,6 @@ output "public_dns" {
`variables.tf`
```hcl
variable "prefix" {
type = string
description = "Prefix to append to resources"
}
@@ -196,29 +195,23 @@ terraform state list
```bash
azurerm_network_interface.training[0]
azurerm_network_interface.training[1]
...
azurerm_public_ip.training[0]
azurerm_public_ip.training[1]
...
azurerm_virtual_machine.training[0]
azurerm_virtual_machine.training[1]
```

Notice the way resources are indexed when using the `count` meta-argument.

## Task 3: Decrease the Count and determine which instance will be destroyed

Update the count from `2` to `1` by changing the `num_vms` variable in your `terraform.tfvars` file.

Replace the `###` with your initials.

`terraform.tfvars`
```hcl
prefix = "###"
location = "East US"
admin_username = "testadmin"
admin_password = "Password1234!"
...
num_vms = 1
```

@@ -228,9 +221,11 @@ Run a `terraform apply` followed by a `terraform state list` to view how the ser
```bash
terraform apply
```

```bash
terraform state list
```

```bash
azurerm_network_interface.training[0]
azurerm_public_ip.training[0]
azurerm_resource_group.training
@@ -241,9 +236,9 @@ azurerm_virtual_network.training

You will see that when using the `count` parameter you have very limited control over which server Terraform will destroy; it will always default to destroying the server with the highest index.
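To make the indexing behavior concrete, here is a minimal sketch of one of the lab's resources under `count` (attribute values are illustrative):

```hcl
# Sketch only: instances are tracked purely by position in the list
resource "azurerm_public_ip" "training" {
  count               = var.num_vms
  name                = "${var.prefix}-pip-${count.index}"
  resource_group_name = azurerm_resource_group.training.name
  location            = azurerm_resource_group.training.location
  allocation_method   = "Dynamic"
}
```

With `num_vms = 2` the state tracks `azurerm_public_ip.training[0]` and `[1]`; lowering `num_vms` to `1` always removes `[1]`, regardless of which server you intended to keep.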

## Task 4: Refactor code to use Terraform `for_each`

Refactor `main.tf` to make use of the `for_each` meta-argument rather than the `count` meta-argument. Replace the contents of `main.tf` with the following, and comment out the `output` blocks in `outputs.tf` for now.

```hcl
locals {
@@ -355,13 +350,21 @@ resource "azurerm_virtual_machine" "training" {
}
```

Run `terraform apply` now.

```bash
terraform apply
```

You'll notice that this code will destroy the previous resource and create two new servers based on the attributes defined inside the `servers` local value, which is defined as a map of our servers.

## Task 5: Look at the number of VM instances with `terraform state list`
```bash
terraform state list
```
```bash
azurerm_network_interface.training["server-ubuntu-16"]
azurerm_network_interface.training["server-ubuntu-18"]
azurerm_public_ip.training["server-ubuntu-16"]
@@ -373,11 +376,11 @@ azurerm_virtual_machine.training["server-ubuntu-18"]
azurerm_virtual_network.training
```

Since we used `for_each` to create the `azurerm_virtual_machine.training` resource, it now refers to multiple resources with key references from the `servers` map.

## Task 6: Update the output variables to pull IP and DNS addresses
When using Terraform's `for_each`, our output blocks need to be updated to use `for` expressions to loop through the server names. This differs from `count`, which used the Terraform splat operator (`*`). Update the output block in your `outputs.tf`.

```hcl
output "public_dns" {
@@ -388,7 +391,7 @@ output "public_dns" {
Format, validate, and apply your configuration to see the format of the outputs.

```bash
terraform fmt
terraform validate
terraform apply
@@ -401,18 +404,20 @@ public_dns = {
}
```
## Task 7: Update the server variables to determine which instance will be destroyed
Update the `servers` local value to the following, removing the `server-ubuntu-16` key and its values:
```hcl
servers = {
  server-ubuntu-18 = {
    identity  = "${var.prefix}-ubuntu-18"
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  },
}
```
If you run `terraform plan` now, you'll notice that this code will destroy the `server-ubuntu-16`, allowing us to target a specific instance that needs to be updated/removed.
4 changes: 4 additions & 0 deletions terraform_advanced/07-dynamic_blocks.md
@@ -104,3 +104,7 @@ Take a look at the properties of the network security group to validate all the
```bash
terraform state show azurerm_network_security_group.nsg
```

## Bonus Task

How could you handle rules that have different properties defined? Could you use a default value if none is defined by the local value? *Hint: the [lookup](https://www.terraform.io/docs/language/functions/lookup.html) function may be helpful.*
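One possible approach to the bonus task, sketched under assumptions: the `local.rules` map below is hypothetical (the lab's actual local value may differ), values are kept as strings so each rule converts cleanly to a map for `lookup`, and `lookup` falls back to a default when a rule omits a property.

```hcl
locals {
  # Hypothetical rules: "http" defines its own port, "ssh" relies on the default
  rules = {
    http = { priority = "100", destination_port_range = "80" }
    ssh  = { priority = "200" }
  }
}

resource "azurerm_network_security_group" "nsg" {
  name                = "${var.prefix}-nsg"
  location            = azurerm_resource_group.training.location
  resource_group_name = azurerm_resource_group.training.name

  dynamic "security_rule" {
    for_each = local.rules

    content {
      name                  = security_rule.key
      priority              = security_rule.value.priority
      direction             = "Inbound"
      access                = "Allow"
      protocol              = "Tcp"
      source_port_range     = "*"
      # Fall back to port 22 when the rule does not define its own port
      destination_port_range     = lookup(security_rule.value, "destination_port_range", "22")
      source_address_prefix      = "*"
      destination_address_prefix = "*"
    }
  }
}
```

An alternative to `lookup` is the `try` function, e.g. `try(security_rule.value.destination_port_range, "22")`, which also works when the values are objects with differing attributes.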
41 changes: 31 additions & 10 deletions terraform_advanced/08-null_resource.md
@@ -9,21 +9,33 @@ This lab demonstrates the use of the `null_resource`. Instances of `null_resourc

We'll demonstrate how `null_resource` can be used to take action on a set of existing resources that are specified within the `triggers` argument.


## Task 1: Create an Azure Virtual Machine using Terraform

### Step 1.1: Create Server instances

Build the web servers using the Azure Virtual Machine resource:

Create the folder structure:

```bash
mkdir ~/workstation/terraform/null_resource && cd $_
touch {variables,main,terraform}.tf
touch terraform.tfvars
```

Add the following to the `terraform.tf` file:

```hcl
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~>3.0"
}
}
}
```

Update your `main.tf` with the following:

```hcl
@@ -135,7 +147,7 @@ variable "num_vms" {
Update your `terraform.tfvars` with the following, replacing the `###` with your initials:

```hcl
resource_group_name = "###-nullrg"
EnvironmentTag = "staging"
prefix = "###"
location = "East US"
@@ -151,7 +163,7 @@ Then perform an `init`, `plan`, and `apply`.

### Step 2.1: Use `null_resource`

Add the `null_resource` block to the `main.tf`. Notice that the trigger for this resource is set to monitor changes to the number of virtual machines.

```hcl
resource "null_resource" "web_cluster" {
@@ -173,22 +185,31 @@ resource "null_resource" "web_cluster" {
}
```

The `null_resource` uses the `null` provider, so you need to initialize the configuration to download the `null` provider plugin. Then run a `terraform apply`.

```bash
terraform init
terraform apply
```

### Step 2.2: Re-run `plan` and `apply` to trigger `null_resource`

After the infrastructure has completed its buildout, change your machine count (`num_vms` in your `terraform.tfvars`) and re-run a plan and apply. Notice that the `null_resource` is triggered because the `web_cluster_size` trigger changed.

```bash
terraform apply
```

If you run `terraform plan` again, the `null_resource` will not be triggered because the `web_cluster_size` value has not changed.

### Step 2.3: Destroy

Finally, run `destroy`.

```bash
terraform destroy
```

## Bonus Task

The `null_resource` is being deprecated in favor of the built-in `terraform_data` resource. Refactor the configuration to use the `terraform_data` resource instead of the `null_resource`.
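One possible shape for that refactor, assuming Terraform 1.4 or later where `terraform_data` is built in (`triggers_replace` takes the place of the `triggers` map, and no separate provider download is required):

```hcl
# Sketch of the equivalent terraform_data resource; the echo command
# stands in for whatever action the null_resource performed
resource "terraform_data" "web_cluster" {
  # Replacement of this resource (and re-running its provisioner)
  # is triggered whenever this value changes
  triggers_replace = {
    web_cluster_size = var.num_vms
  }

  provisioner "local-exec" {
    command = "echo 'Cluster size is now ${var.num_vms}'"
  }
}
```

Because `terraform_data` ships with Terraform itself, the `null` provider disappears from `.terraform.lock.hcl` after the refactor and a re-init.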
59 changes: 54 additions & 5 deletions terraform_advanced/09-azure_remote_state.md
@@ -1,3 +1,5 @@
# Lab: Azure Remote State

## Description

In this challenge you will create an Azure storage account for remote state storage and then update a configuration to use that storage account.
@@ -17,13 +19,15 @@ You will use Terraform to create the Azure storage account, a container in the s
Create the folder structure for the storage account and main configuration:

```bash
mkdir -p ~/workstation/terraform/azure_remote_state/{storage_account,vnet}
touch ~/workstation/terraform/azure_remote_state/storage_account/{terraform,main}.tf
touch ~/workstation/terraform/azure_remote_state/vnet/{terraform,main}.tf
touch ~/workstation/terraform/azure_remote_state/storage_account/terraform.tfvars
cd ~/workstation/terraform/azure_remote_state/storage_account
```

First you need to deploy the storage account.

Add the following to the `terraform.tf` file in the `storage_account` directory:

```hcl
@@ -171,7 +175,7 @@ terraform apply

## Task 2: Deploy the configuration using the `local` backend

In the `vnet` directory, add the following to the `terraform.tf` file:

```hcl
terraform {
@@ -217,8 +221,9 @@ resource "azurerm_virtual_network" "remote_state" {
At first you are going to use the `local` backend, so the `azurerm` backend is commented out. You'll remove those comments in a moment. For now, initialize and apply the configuration:

```bash
cd ../vnet/
terraform init
terraform apply
```

## Task 3: Update the configuration with the `azurerm` backend and migrate your state data
@@ -233,6 +238,18 @@ You are going to migrate your existing state data to the Azure storage account c

You are changing the backend for state data, so Terraform must be initialized with the new values. The `backend` block is a partial configuration. The rest of the configuration will be specified as part of the `terraform init` command. You will need that `init_string` output now to run the command.

```bash
terraform -chdir="../storage_account" output init_string
```

Do not copy the `<<EOT` and `EOT` lines. Only copy the string between them. It should look something like this:

```bash
-backend-config=storage_account_name=eco98775 -backend-config=container_name=terraform-state -backend-config=sas_token="?sv=2017-07-29&ss=b&srt=sco&sp=rwdlac&se=2022-08-16T15:51:29Z&st=2022-05-18T15:51:29Z&spr=https&sig=45%2B3sGaBL%2F6Pw4YEDQG70kbKu%2FDojFlWILlyqz43mQA%3D"
```

Copy the string and paste it into the `terraform init` command.

```bash
terraform init PASTE_THE_STRING_HERE
```
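For context, a partial `azurerm` backend block that relies on those `-backend-config` values might look like this (a sketch; the lab's actual block may hard-code different attributes):

```hcl
terraform {
  backend "azurerm" {
    # storage_account_name, container_name, and sas_token are
    # intentionally omitted here; they are supplied through the
    # -backend-config arguments passed to terraform init
    key = "terraform.tfstate"
  }
}
```

Keeping credentials like the SAS token out of the configuration file means they never land in version control; only the init command needs them.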
@@ -256,4 +273,36 @@ Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
```

The local `terraform.tfstate` is now empty:

```bash
cat terraform.tfstate
```

You can now delete the local `terraform.tfstate` file and run a `terraform plan` to confirm the state data migration was successful.

The new backend configuration is held in the `.terraform/terraform.tfstate` file. You can view the contents of that file to see the new configuration:

```bash
cat .terraform/terraform.tfstate
```

```json
{
"version": 3,
"serial": 1,
"lineage": "b803be3b-bf51-0f78-858c-bd4b0e7b928d",
"backend": {
"type": "azurerm",
"config": {
"access_key": null,
"client_certificate_password": null,
"client_certificate_path": null,
"client_id": null,
"client_secret": null,
"container_name": "terraform-state",
"endpoint": null,
"environment": null,
"key": "terraform.tfstate",
...
```
