Friday 9 February 2018

Terraform - Level 2



So here I'll continue on from my previous post and show how to arrange your Terraform files in a better way. Some elements of the previous build.tf file will rarely, if ever, change. The connection information might get a new password, but the rest is fairly static. You may wish to change which Datacenter, Cluster, Datastore and Network is used for each set of VMs, but the definitions themselves can remain fairly constant. You might also decide tags are important and add them to your definitions, so that everyone now has to include tag data in their build.tf file.
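If tags do become part of your standard, the vSphere provider offers vsphere_tag_category and vsphere_tag resources for exactly this. A hedged sketch (the "owner" / "team-infra" names below are my own invented examples, not from any real environment):

```hcl
# Illustrative only: a tag category and tag that build.tf authors could be
# required to attach to every VM ("owner" and "team-infra" are invented names)
resource "vsphere_tag_category" "owner" {
  name             = "owner"
  cardinality      = "SINGLE"
  associable_types = ["VirtualMachine"]
}

resource "vsphere_tag" "team" {
  name        = "team-infra"
  category_id = "${vsphere_tag_category.owner.id}"
}

# On the VM resource, this would be referenced with:
#   tags = ["${vsphere_tag.team.id}"]
```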

By the way, I first came across Terraform recently via a vBrownBag session by Colin Westwater:
http://www.vgemba.net
https://www.youtube.com/watch?v=nQ7oRSi6mBU
Great work by him on this!

I'd also recommend the following book as a general guide to this whole area:
Infrastructure as Code: Managing Servers in the Cloud - Kief Morris
https://www.amazon.co.uk/Infrastructure-Code-Managing-Servers-Cloud-ebook/dp/B01GUG9ZNU

You can check the Terraform version with the command "terraform version". They have added a lot of improvements recently, but note that your code may give errors from time to time after providers are updated. Then it's back to the documentation to see what's changed!

There are three files you should use to start with:

build.tf - tells Terraform what to build (what we used in the last post)
variables.tf - declares the variables; static, rarely changes
terraform.tfvars - supplies the values for those variables: usernames, passwords, the vCenter address, etc.

So I'd expect there to be a range of build.tf files (each with a different name, of course) written to perform a set of tasks. Developers can handle those. A senior developer might be assigned to maintain the variables and their values so these are controlled and kept sane!

Note: if using GitHub, you should exclude terraform.tfvars from being committed or you'll leak your environment credentials!
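A minimal way to do that, assuming a standard Git repository, is to add the file to .gitignore:

```shell
# Keep credentials out of version control by ignoring the tfvars file
echo "terraform.tfvars" >> .gitignore

# Confirm the entry is present
grep "terraform.tfvars" .gitignore
```

Git will then refuse to stage terraform.tfvars with a normal "git add", keeping the passwords local to each machine.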

The full range of create / clone VM options is covered in the documentation here:
https://www.terraform.io/docs/providers/vsphere/r/virtual_machine.html

So, let's create a fresh folder, put terraform.exe in it and start over. The three files we'll need to create are listed below with their contents to start you off; then we'll run the same "terraform init / plan / apply / destroy" commands as before and see what happens.

terraform.tfvars

# Top level variables that define the connection to the environment
vsphere_vcenter = "192.168.10.17"
vsphere_user = "administrator@vsphere.local"
vsphere_password = "Get Your Own Password"
vsphere_datacenter = "Labdc"

variables.tf

# Variables
variable "vsphere_vcenter" {}
variable "vsphere_user" {}
variable "vsphere_password" {}
variable "vsphere_datacenter" {}
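As an aside, variable declarations need not be empty braces: they can carry a description and a default value, in which case terraform.tfvars only has to override the ones that differ. A small sketch using one of the variables above:

```hcl
# Optional richer form of a variable declaration. With a default set,
# terraform.tfvars may omit this value entirely.
variable "vsphere_datacenter" {
  description = "Name of the target vSphere datacenter"
  default     = "Labdc"
}
```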

build.tf

# Configure the VMware vSphere Provider
provider "vsphere" {
  vsphere_server       = "${var.vsphere_vcenter}"
  user                 = "${var.vsphere_user}"
  password             = "${var.vsphere_password}"
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "dc" {
  name = "${var.vsphere_datacenter}"
}

data "vsphere_datastore" "datastore" {
  name          = "Datastore0"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_resource_pool" "pool" {
  name          = "Labcl/Resources"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "VM Network"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template" {
  name          = "CentOS"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_virtual_machine" "vm" {
  name             = "terraform-test"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"

  num_cpus = 2
  memory   = 1024
  guest_id = "${data.vsphere_virtual_machine.template.guest_id}"

  scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"

  network_interface {
    network_id   = "${data.vsphere_network.network.id}"
    adapter_type = "${data.vsphere_virtual_machine.template.network_interface_types[0]}"
  }

  disk {
    label            = "disk0"
    size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"

    customize {
      linux_options {
        host_name = "terraform-test"
        domain    = "lab.local"
      }

      network_interface { }

    }
  }
}
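Once the apply completes, it can be handy to surface details of the new VM without digging through vCenter. An output block (my own addition, not part of the three files above) will print the clone's IP address at the end of a run, assuming VMware Tools in the guest reports one:

```hcl
# Print the guest's primary IP address at the end of "terraform apply"
output "vm_ip" {
  value = "${vsphere_virtual_machine.vm.default_ip_address}"
}
```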


Now, I don't see people firing up Terraform for one-off builds. I see this tool being used as part of an automated strategy where servers are built, updated, destroyed and rebuilt automatically. Someone updates the template once per week, perhaps, and someone else adjusts the virtual hardware settings in the build.tf file; the next time the automated script runs, the environment takes on the new values. This doesn't address auto-scaling, which is another level entirely. Your inventory and monitoring solutions should handle these changes with ease.
Of course, not all applications will accept this approach, and it has to be seamless. But this is a journey, so read the book above and see how this approach could benefit your particular environment and help stabilise a more agile approach to IT.

In a later post I'll show you Azure in action, as that will help illustrate how this tool is more powerful than one that speaks only to a single environment.