Duration: 15 minutes
This lab demonstrates the use of the `null_resource`. Instances of `null_resource` are treated like normal resources, but they don't do anything. Like any other resource, you can configure provisioners and connection details on a `null_resource`. You can also use its `triggers` argument and any meta-arguments to control exactly where in the dependency graph its provisioners will run.
- Task 1: Create an Azure Virtual Machine using Terraform
- Task 2: Use `null_resource` with a VM to take action with `triggers`.
We'll demonstrate how `null_resource` can be used to take action on a set of existing resources that are specified within the `triggers` argument.
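As a minimal sketch of the mechanism (the resource name, trigger key, and `app.conf` file here are illustrative, not part of this lab): a `null_resource` pairs a `triggers` map with one or more provisioners, and the provisioners re-run whenever any value in that map changes.

```hcl
# Illustrative only: "example", "config_hash", and app.conf are arbitrary.
resource "null_resource" "example" {
  # The resource is replaced (and its provisioners re-run)
  # whenever any value in this map changes.
  triggers = {
    config_hash = filemd5("app.conf")
  }

  provisioner "local-exec" {
    command = "echo app.conf changed, re-provisioning"
  }
}
```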
Build the web servers using Azure Virtual Machines:
Create the folder structure:
mkdir ~/workstation/terraform/null_resource && cd $_
touch {variables,main}.tf
touch terraform.tfvars
Update your `main.tf` with the following:
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "training" {
  name     = var.resource_group_name
  location = var.location
}

resource "azurerm_virtual_network" "training" {
  name                = "azureuser${var.prefix}vn"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.training.location
  resource_group_name = azurerm_resource_group.training.name
}

resource "azurerm_subnet" "training" {
  name                 = "azureuser${var.prefix}sub"
  resource_group_name  = azurerm_resource_group.training.name
  virtual_network_name = azurerm_virtual_network.training.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_public_ip" "training" {
  count                   = var.num_vms
  name                    = "azureuser${var.prefix}ip-${count.index + 1}"
  location                = azurerm_resource_group.training.location
  resource_group_name     = azurerm_resource_group.training.name
  allocation_method       = "Dynamic"
  idle_timeout_in_minutes = 30
  domain_name_label       = "azureuser${var.prefix}domain${count.index + 1}"
}

resource "azurerm_network_interface" "training" {
  count               = var.num_vms
  name                = "azureuser${var.prefix}ni-${count.index + 1}"
  location            = azurerm_resource_group.training.location
  resource_group_name = azurerm_resource_group.training.name

  ip_configuration {
    name                          = "azureuser${var.prefix}ip"
    subnet_id                     = azurerm_subnet.training.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.training[count.index].id
  }
}

resource "azurerm_virtual_machine" "training" {
  count                            = var.num_vms
  name                             = "${var.prefix}vm-${count.index + 1}"
  location                         = azurerm_resource_group.training.location
  resource_group_name              = azurerm_resource_group.training.name
  network_interface_ids            = [azurerm_network_interface.training[count.index].id]
  vm_size                          = "Standard_D2s_v4"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "${var.prefix}disk-${count.index + 1}"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = var.computer_name
    admin_username = var.admin_username
    admin_password = var.admin_password
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags = {
    environment = var.EnvironmentTag
  }
}
Update your `variables.tf` with the following:
variable "resource_group_name" {}
variable "EnvironmentTag" {}
variable "prefix" {}

variable "location" {
  default = "East US"
}

variable "computer_name" {}
variable "admin_username" {}
variable "admin_password" {}

variable "num_vms" {
  default = 2
}
Update your `terraform.tfvars` with the following, replacing the `###` with your initials:
resource_group_name = "###-resourcegroup"
EnvironmentTag      = "staging"
prefix              = "###"
location            = "East US"
computer_name       = "myserver"
admin_username      = "testadmin"
admin_password      = "Password1234!"
num_vms             = 1
Then perform an `init`, `plan`, and `apply`:
terraform init
terraform plan
terraform apply
Add a `null_resource` block to `main.tf`. Notice that the trigger for this resource is set to monitor changes to the number of virtual machines.
resource "null_resource" "web_cluster" {
  # Changes to any instance of the cluster require re-provisioning
  triggers = {
    web_cluster_size = join(",", azurerm_virtual_machine.training.*.id)
  }

  # A bootstrap script can run on any instance of the cluster,
  # so we just choose the first in this case. (Connection details
  # are only used by remote provisioners such as remote-exec;
  # they are shown here for reference.)
  connection {
    host = element(azurerm_public_ip.training.*.ip_address, 0)
  }

  provisioner "local-exec" {
    # Bootstrap script called with the public IP of each node in the cluster
    command = "echo ${join(" Cluster local IP is : ", azurerm_public_ip.training.*.ip_address)}"
  }
}
Initialize the configuration with a `terraform init`, followed by a `plan` and `apply`.
After the infrastructure has completed its buildout, change your machine count (`num_vms` in your `terraform.tfvars`) and re-run a plan and apply. Notice that the null resource is triggered: because the `web_cluster_size` trigger value changed, the `null_resource` is replaced and its provisioner runs again.
terraform apply
Run `apply` a few times to observe the `null_resource` behavior: as long as the trigger value stays the same, the `null_resource` is not replaced and its provisioner does not run again.
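If you wanted the provisioner to run on every apply rather than only when the cluster changes, a common variation (shown here as a hypothetical modification, not part of this lab's required steps) is to add a trigger whose value is different on every run, such as `timestamp()`:

```hcl
# Hypothetical variation: timestamp() produces a new value each run,
# so the null_resource would be replaced (and its provisioners re-run)
# on every terraform apply.
triggers = {
  web_cluster_size = join(",", azurerm_virtual_machine.training.*.id)
  always_run       = timestamp()
}
```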
Finally, run a `destroy`:
terraform destroy