
Remote Setup/Install VMs on Azure with Terraform

31 Aug 2017 · CPOL · 8 min read
Provisioning with Terraform on Azure

Introduction

Microsoft recently announced increased investment in integrating Terraform with Azure (Aug 2017). This continues Microsoft's push into the agnostic/multi-cloud arena, where they are doing whatever it takes to help developers succeed on the cloud. It used to be the case that Azure was only for Microsoft developers - well, not any more. The crew in Redmond and every office around the globe are really pushing the open-source and 'Azure for everyone' opportunity. Openness can only be good for everyone in the long run. You can read more details about Terraform meets Azure on the official MS Terraform page.

Anyways, back to the task at hand - progressing this DevOps/Infrastructure focused series. In the first article in this series, I introduced 'Terraform', and gave an introduction to what it is, and how to use it. The second article discussed how to use variables, interpolation and resource count to make multiple copies of resources to save duplication of config/code. This article assumes you have read the other two, and gives instruction for setting up or 'provisioning' the remote virtual machines with your particular setup once they have been deployed.

Background

When you create an instance of a virtual machine in Azure, AWS or elsewhere, you cannot be sure if it is completely up to date and patched, etc. Even if it is, you may need to configure ('provision') a setup that is quite specific to your needs and supports the solution you want to build the infrastructure for. This article shows you some of the different options for provisioning machines using Terraform on Azure.

Provisioners

When we create virtual machines, we then need to run things on them - this might be for setup, updates, security hardening, etc. 'Provisioners' are the construct we use to declare what we want to provision against a particular resource. This might be uploading files, installing software, or running some custom script.

Code Placement

You will recall from the earlier articles in this series that we put together basic building blocks that define our infrastructure in '.TF' configuration files. Building blocks represent the different parts of our infrastructure, for example virtual networks, public IP addresses, virtual network cards, disks, etc. Provisioners are generally placed inside the resource block they are targeted at. So in this case, we are going to create a provisioner block inside our virtual machine creation block.

# outer virtual machine resource definition
resource "azurerm_virtual_machine" "workerNode" {
    name                  = "workerNode"
    location              = "North Europe"
    resource_group_name   = "${azurerm_resource_group.Alias_RG.name}"
    network_interface_ids = [???]  # reference(s) to your network interface resource(s)
    vm_size               = "Standard_A0"
    # ... etc.

    # inner definition of a provisioner
    provisioner XXX TYPE XXX {
       XXX specific settings XXX
    }
}

Lifecycle Events

Terraform offers a number of specific templates we can use when provisioning that will be triggered at specific times during the provisioning life-cycle - these include create, destroy and on-fail provisioners. By default, the config is set to assume a create event, however, you can flag the provisioner to only happen when the resource it's attached to is being destroyed. Here are three examples showing default, default with create (the same) and destroy. Note the keyword 'when' that needs to be put in to tell Terraform when to do the provisioning in this block.

provisioner XXX TYPE XXX {
    XXX specific settings XXX
}

provisioner XXX TYPE XXX {
    when = "create"
}

provisioner XXX TYPE XXX {
    when = "destroy"
}

Now, one gotcha - at the time of writing (Aug 2017), there seems to be some instability in this particular setting - so if it is flaky for you, raise a ticket on GitHub!

So - we know what the basic structure looks like, but how can we actually provision things to our remote server? ... to do this, we can use one of three types of provisioning code block (that would be the 'XXX type XXX' place-holder I have above!).

File Provisioning

Uploading files is carried out using the FILE provisioning setting. We can use FILE to upload both single files, and the contents of entire folders. Terraform works out if it is a file or folder depending on what you give it as a 'source', and then copies that file or folder up to the remote system 'destination' location. The only thing you must be careful of is that you upload to a location that exists, and that you have write permissions on.

# Copies all files and folders in apps/app1 to D:/IIS/webapp1
provisioner "file" {
    source      = "apps/app1/"
    destination = "D:/IIS/webapp1"
}

# Copies a single file "MyTextFile.txt" to the d:/data folder on the remote machine
provisioner "file" {
    source      = "c:/data/MyTextFile.txt"
    destination = "d:/data/files/MyTextFile.txt"
}

You can also get Terraform to create a file, and then pass over what the contents of that file should be - this is useful if you are making dynamic file contents for example. The trick here is we don't use the 'source' attribute, but a 'content' attribute instead.

# Copies the string in "content" into d:/data/someNotes.txt
provisioner "file" {
    content     = "my notes: ${var.Notes}"
    destination = "d:/data/someNotes.txt"
}

EXEC Provisioning

Aside from uploading files and content to our newly built remote machines, most likely we also need to run scripts and other forms of setup. This can be achieved by using the EXEC commands. There are two of these, one for remote hosts, and one that can be used to execute back on the local host that Terraform is running from. As you may expect, these are remote-exec and local-exec.

A provisioner type of remote-exec takes the following format:

provisioner "remote-exec" {
    inline = [   # argument declaration
      "RunApp.exe someParam"   # 1 or more elements to execute inline
    ]
  }

Remote-exec has three different argument types:

  1. inline - This is a list of command strings. They are executed in the order they are provided. This cannot be combined with script or scripts.
  2. script - This is a path (relative or absolute) to a local script that will be copied to the remote resource and then executed. This cannot be combined with inline or scripts.
  3. scripts - This is a list of paths (relative or absolute) to local scripts that will be copied to the remote resource and then executed. They are executed in the order they are provided. This cannot be combined with inline or script.
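As a quick sketch of the 'script' variant - the path shown ('scripts/setup.sh') is just an illustrative placeholder, substitute the path to your own script:

```hcl
# Copies the named local script up to the remote machine and executes it there.
# 'scripts/setup.sh' is a hypothetical path - use your own setup script.
provisioner "remote-exec" {
    script = "scripts/setup.sh"
}
```

The 'scripts' (plural) variant looks the same, except it takes a list of paths instead of a single string.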

Local-exec gets constructed like this:

provisioner "local-exec" {
    command = "RunApp.exe someParam"  # a single command string, not a list
  }

So where remote-exec had an argument element of 'inline', local-exec uses 'command'. The command given can be provided as a relative path to the current working directory or as an absolute path. It is evaluated in a shell, and can use environment variables or Terraform variables.
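To illustrate the variable interpolation point - here is a sketch that logs each provisioned machine locally. Note that 'var.vmName' is an assumed variable for illustration only; replace it with one from your own configuration:

```hcl
# Hypothetical example: append a note to a local log file each time
# the resource is provisioned. 'var.vmName' is an assumed variable.
provisioner "local-exec" {
    command = "echo provisioned ${var.vmName} >> provision.log"
}
```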

Connections

When we run a provisioner, it is done within the context of a resource being created or destroyed. By default, the provisioner will use the connection the resource already has to the machine, however, sometimes you need to help things along. To be associated with a resource, connections need to be declared within their context - the following example demonstrates:

provisioner "remote-exec" {
    inline = [
        "sudo mkdir /tmp/staging"
    ]
    connection {
        type        = "ssh"
        user        = "testadmin"
        host        = "<<some public IP or server host name>>"
        private_key = "${file("id_rsa")}"
    }
}

You will notice in the connection I define the user and also a private key, but not a password. This is my choice - if you wish, you can configure remote hosts to use passwords. I have chosen not to, and prefer to connect securely using keys. The particular interpolation shown loads a local file called 'id_rsa' that I have placed in the same folder as terraform.exe.
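If you do go down the password route, the connection block looks much the same - here is a sketch assuming a Windows host reachable over WinRM (the user name and the 'var.admin_password' variable are placeholders for illustration):

```hcl
# Hypothetical password-based connection - credentials shown are placeholders
connection {
    type     = "winrm"
    user     = "testadmin"
    password = "${var.admin_password}"  # assumed variable holding the password
    host     = "<<some public IP or server host name>>"
}
```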

Timeouts

Sometimes, when using Terraform, you will find that your script fails with a timeout. A common problem here is that the cloud provider (AWS, Google, Azure) is just taking its jolly old time spinning up the resources you have requested, or perhaps responding down the pipe with feedback to the script. To help with this issue, we can define and lengthen the 'timeout' period on exec provisioners - all we need to do is provide the amount of time in seconds that we want to delay before calling it quits and starting again....

provisioner "remote-exec" {
    inline = [
        "sudo mkdir /tmp/staging"
    ]
    connection {
        type        = "ssh"
        user        = "testadmin"
        host        = "<<some public IP or server host name>>"
        private_key = "${file("id_rsa")}"
        timeout     = "300s"
    }
}

Null Resources

It's worth mentioning here a useful thing called a 'null resource'. This is a type of resource that you can use to wrap more generic provisioners that you wish to carry out on the overall infrastructure, not just a particular machine. Here is an example that echoes a message to the local machine you are running Terraform on, when any instance of a cluster changes:

resource "null_resource" "cluster" {
  # Changes to any instance of the cluster requires re-provisioning
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
  }

  provisioner "local-exec" {
    command = "echo cluster membership changed"
  }
}

The 'trigger' is the event that kicks off the change in the dependency graph.

When Things Go Wrong....

Despite our best efforts, sometimes things just don't work out as we would like. From a Terraform point of view, this might mean that the cloud provider does not complete a request, a request fails for some reason, etc. The way we handle these gracefully is to use the 'on_failure' setting. In the example below, let's say that we made a typo when naming the variable, and put in 'NotesN' instead of just 'Notes'. We don't have a variable called 'NotesN', and even if we did, and it pointed to a local file, let's say that didn't exist - this provisioner would fail. The default behaviour of Terraform in this case is to 'taint' the resource being created (that is, mark it as not completed/needs rebuilding), and report an error. If you set 'on_failure' to 'fail', a taint will occur (this is the default setting); if, on the other hand, it's not a problem, you can set 'on_failure' to 'continue' and the provisioning will proceed regardless.

# Copies the string in "content" into d:/data/someNotes.txt
provisioner "file" {
    content     = "my notes: ${var.NotesN}"
    destination = "d:/data/someNotes.txt"
    on_failure  = "continue" # or on_failure = "fail"
    ...
}

Wrap-Up

Ok, that's the basics of provisioning. If you need further details and more options, you can read all about it on the Terraform provisioners documentation page.

I have attached an example script so you don't have to type it all out to test it - just remember to fill in your own details!... Happy coding and don't forget to give the article a vote if it was useful!

In the next article in this series, we will take a look at the next step, setting up a Kubernetes/Docker cluster inside the virtual network of machines we have created using Terraform.

Finally, if you want to get a broader view of things, this video about Terraform on Azure is worth a look.

History

  • 31st August, 2017: Version 1

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)