
Terraform 201: What It Is, Tutorial, and More


In this article, we're going to cover HashiCorp Terraform, an infrastructure as code (IaC) tool that's accelerating DevOps and engineering teams in the world of cloud computing. Like AWS CloudFormation, Terraform enables you to define the desired state of your infrastructure using code and to deploy those changes into your AWS accounts, Google Cloud projects, and any number of other platforms (more on this later). Let's dive into some of the core use cases, concepts, and caveats to be aware of.

Terraform Tutorial

Terraform State

First off, let's talk about how Terraform keeps track of the infrastructure it manages. One might wonder, "How does Terraform know which changes need to be made?"

Typically, an Amazon S3 or Google Cloud Storage bucket is used as the source of truth for what Terraform manages. Local state is not recommended for most projects, as the state file recording your infrastructure could be lost in a disk failure.

If you choose to use Amazon S3, it's highly recommended to enable S3 object versioning on the bucket you designate as your "remote state," just in case any issues arise down the road.
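For reference, here's a minimal sketch of what an S3 backend configuration might look like; the bucket name and state key below are hypothetical:

terraform {
  backend "s3" {
    bucket = "my-terraform-state" # hypothetical bucket, with object versioning enabled
    key    = "webapp/terraform.tfstate"
    region = "us-east-2"
  }
}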

Terraform commands such as `terraform plan`, `terraform apply`, and the nerve-racking `terraform destroy` all leverage what's been recorded in remote state as Terraform reconciles the desired state (what you've written as Terraform code) against reality, from the vantage point of whichever cloud provider or SaaS API your code is interacting with.

Terraform Core Concepts

Now that we've covered Terraform remote state from a high level, let's talk about the types of resources that would live there, and how one can manipulate the state as they're making changes to the infrastructure code. 

Terraform File Extensions

Here's a quick rundown on some of the native Terraform file extensions:

*.tf
As you'll see below, files with a .tf extension are processed as declarative infrastructure files. Terraform is fairly flexible with the naming of these files, although it's advisable to follow HashiCorp's Standard Module Documentation. Doing so will ensure that new engineers onboarded to your IaC Git repository get up to speed as quickly as possible.

*.tfvars
Following suit with the standard module documentation convention, variables for a root module are defined in variables.tf, which holds the default values for the module. Let's say that by default you want RDS backup retention set to 7 days, but in production you want to hold those backups for 14 days.

Here's how you'd define the variable in variables.tf:

variable "webapp_rds_backup_retention_days" {
 description = "Number of days to retain RDS backups"
 type        = number
 default     = 7
}

To override this value, you'd create a production.tfvars or similarly named file and add a line like the below:

webapp_rds_backup_retention_days = 14
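
To have Terraform pick up the override, pass the file in at apply time:

terraform apply -var-file=production.tfvars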

*.tf.json
Albeit somewhat uncommon, you may run across this file extension at some point. While Terraform has its own native language (HCL), it also supports JSON-based configuration.
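
As a sketch, the backup retention variable from earlier could be expressed in JSON syntax like so (in a file such as variables.tf.json):

{
  "variable": {
    "webapp_rds_backup_retention_days": {
      "description": "Number of days to retain RDS backups",
      "type": "number",
      "default": 7
    }
  }
}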

*.tpl
Without going into too much detail, .tpl files are template files that can be interpolated by Terraform in an elegant and reusable way. Terraform provides capabilities to loop through a provided list and generate, for instance, an IAM policy resource block dynamically, as sketched below. Here's some further reading material on the subject, should you be interested!
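
As a rough sketch, here's how one might render an IAM policy from a template with the templatefile() built-in; the file name, variable names, and policy contents are hypothetical:

# s3_read.json.tpl (hypothetical template file)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": ${jsonencode([for arn in bucket_arns : "${arn}/*"])}
    }
  ]
}

# main.tf
resource "aws_iam_policy" "s3_read" {
  name   = "s3-read-objects"
  policy = templatefile("${path.module}/s3_read.json.tpl", {
    bucket_arns = var.bucket_arns
  })
}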

Deploying Changes

To deploy (or apply) changes in your current working directory, you'd use commands like the below. Depending on whether you're using Terraform workspaces or separate environment directories, you may also pass tfvars files into `terraform apply`.

# Pull any modules and connect to the backend remote state
terraform init

# Optional, since terraform apply plans first and prompts for approval
terraform plan

# Reconcile reality vs state vs cwd code
terraform apply

It's important to note that terraform init typically needs to be called before applying in a directory for the first time, or if a modification is made to a module your root module depends on.

If your team has multiple engineers that will be contributing to your Terraform codebase, consider using a solution like Atlantis or HashiCorp's Terraform Cloud to orchestrate your infrastructure deployments in a consistent and elegant way.

Catching Syntax Errors & Formatting

When coding in other languages, how many times have you pushed code only to see a linting check failure in CI? Oy, too many times for me!

Well, with Terraform, we can run some spot checks on our code locally to make sure it's in good shape for CI. Here are a few commands you can run to:

  1. Format your code
  2. Catch syntax errors
  3. Validate your assumptions

# Indent and sanity check the code
# Any file names written to STDOUT have been formatted
terraform fmt

# Validate Terraform syntax in cwd
terraform validate

Providers / Extensibility

You may have heard of folks managing their AWS, Google Cloud, and Azure infrastructure all with Terraform. This may immediately prompt the question: "How does Terraform keep up with all the latest API updates in a timely manner?"

At the root of Terraform is a myriad of subcomponents that handle the interactions with each specific cloud provider or SaaS API. HashiCorp calls these Terraform providers.

To highlight a few popular providers that are frequently used, along with links to the documentation for your perusal:

aws - https://registry.terraform.io/providers/hashicorp/aws/latest/docs
google - https://registry.terraform.io/providers/hashicorp/google/latest/docs
azurerm - https://registry.terraform.io/providers/hashicorp/azurerm/latest

In order to start creating resources in your AWS account, you'll need to add a provider block such as the following to providers.tf:

provider "aws" {
   version = "~>2.54.0"
   region  = "us-east-2"
}

In the above example, Terraform and the AWS SDK will work through the usual AWS credential provider chain process of finding AWS credentials to authenticate the request. In this case, we're falling back to using the [default] profile in ~/.aws/credentials.

If you need to deploy your resources to multiple accounts or regions, you can alias providers and pick out the specific Terraform resources you want to deploy within the context of those configurations. A great example use case here is creating records in a Route53 zone that is hosted in another AWS account.
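
Here's a minimal sketch of that Route53 scenario; the profile name and variables are hypothetical:

# Aliased provider pointing at the account that hosts the Route53 zone
provider "aws" {
  alias   = "dns"
  region  = "us-east-2"
  profile = "dns-account" # hypothetical named profile in ~/.aws/credentials
}

# Created via the aliased provider, i.e. in the other account
resource "aws_route53_record" "app" {
  provider = aws.dns
  zone_id  = var.dns_zone_id
  name     = "app.example.com"
  type     = "CNAME"
  ttl      = 300
  records  = [var.app_lb_dns_name]
}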

Data Sources - Think "Reader"

The concept of data sources is quite interesting and is a common subject of debate. The basic value add here is that you can have Terraform "query" the provider of your choice for a particular resource.

In a perfect world, every resource in your AWS, GCP, or Azure cloud environments would be managed in a central place, i.e. in Terraform.

In reality, and in most cases with established companies, there's a sort of hybrid infrastructure management model implemented. As an example, maybe the VPCs in your AWS account were created manually or via CloudFormation, but you want to use Terraform to deploy resources into said VPC.

With this VPC scenario in mind, you may need to identify, in Terraform, the IDs of the private subnets within that particular VPC. Sure, you could add a Terraform input variable and manually provide the list of overrides in prod.tfvars, but this new day and age is all about dynamic infrastructure!

To accomplish a dynamic lookup, we can use the `aws_subnet_ids` data source, simply passing in the ID of the VPC along with a tag that definitively identifies the private subnets. It's important to review the documentation for the data source you're considering, as there are some limits to which fields you may filter by. Here's what the aforementioned lookup might look like in HashiCorp Configuration Language (HCL):

variable "vpc_id" {}

data "aws_subnet_ids" "private" {
vpc_id = var.vpc_id

tags = {
  subnet_type = "private"
}
}
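
From there, you can reference the looked-up IDs elsewhere in your code; for instance, in a hypothetical RDS subnet group:

resource "aws_db_subnet_group" "webapp" {
  name       = "webapp-private"
  subnet_ids = data.aws_subnet_ids.private.ids
}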

For more information on this data source, as well as others, check out HashiCorp's latest documentation.

Resources - Think "Creator"

Resources, resources, resources - this is easily the bread and butter of Terraform, and the piece you'll leverage most heavily.

Want to create a new EC2 instance in your AWS account? That'd be with the aws_instance resource, like so:

resource "aws_instance" "web" {
ami                       = "ami-12e4cfd5"
    instance_type        = "t2.micro"
    tags = {
        Name = "BasicEC2InstanceExample"
    }
}

Want to create a new monitor in Datadog? Using the Datadog Terraform provider, you can deploy a `datadog_monitor` resource into your Datadog account.
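
As a quick sketch (the monitor name, query, and notification message here are hypothetical):

resource "datadog_monitor" "cpu" {
  name    = "High CPU on web hosts"
  type    = "metric alert"
  message = "CPU is running hot on {{host.name}}. Notify @slack-ops."
  query   = "avg(last_5m):avg:system.cpu.user{env:production} by {host} > 90"
}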

Want to create a new Auth0 application in your development or production tenant? No problem, there's a Terraform provider and corresponding resources to support it!

I'd opine that the extensibility of Terraform is one of its biggest selling points. It's not just an infrastructure as code solution; it's capable of administering resources in a plethora of SaaS/PaaS solutions out there! Very often, there are dependencies between these external systems and the infrastructure we host our applications on.

Modules - Classes for Terraform

So now that we've covered the building blocks of modules - data sources, resources, and input variables - let's talk about how we can keep our Terraform code DRY and reusable using Terraform modules!

First off, it's worth calling out that prior to Terraform 0.13, modules had some particularly unfortunate limitations; most notably, there was no count parameter available for modules. As a side effect, you'll find tons of modules on GitHub and the like that implement a multitude of "enablement" variables to control the creation of each individual resource within the module. I used to call these "plumbing" variables, since we're passing them through from the calling code (where you want to leverage the module). This made using modules cumbersome, as you'd have a proliferation of variables just to control basic creation behavior.

Well, if you're just getting started with Terraform, I'm happy to announce that this is no longer the case!

One can think of Terraform modules as classes for Terraform. Let's say you frequently deploy one particular architecture to your AWS environments, composed of an RDS instance and Route53 records. Instead of declaring each resource individually in every Terraform root module that needs it, we can encapsulate everything into a single module. As with root modules, we define Terraform input variables, providers and their version requirements, resources, and even data sources!
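
Calling such a module might look like the sketch below; the repository path, ref, and input names are hypothetical:

module "webapp_db" {
  source = "git::https://github.com/your-org/terraform-modules.git//rds-with-dns?ref=v1.2.0"

  environment           = var.environment
  backup_retention_days = var.webapp_rds_backup_retention_days
  dns_zone_id           = var.dns_zone_id
}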

You might be thinking, "This all sounds cool, but do I have to write everything custom myself?" 

Fortunately, as with most open-source ecosystems, there are a ton of prebuilt modules out there. Forking module repositories from GitHub into your business' organization might make sense, but there's another option: the Terraform Registry.

Before you go creating a complex module, it could be worth perusing the Terraform Registry for community-supported, pre-built modules. If you do create a custom module, consider breaking it out into its own Git repository, or at least placing it in a generic "terraform-modules" repository with an appropriately divided directory structure.

Version controlling your Terraform code and modules (what's the difference, really?) is paramount: it enables us to move quickly and to audit changes when things don't go quite according to plan. I highly recommend using semantic versioning to keep things sane. One final thought on this topic, which resonates throughout software development: consider the other engineers who might want to use your module, and avoid breaking changes in minor or patch releases. terraform-docs can also help with maintenance by automatically updating the README.md in your modules as part of CI or a pre-commit hook.

Functions - Base Library Functions

Just as with most other languages, Terraform provides a standard library of functions you can use to transform input variables and any other data you need to pass around. In this section, we'll cover a few common functions one might call on a daily basis.

Here's an example using the format() function to build a KMS key's description:

locals {
  key_description = format("Encryption key for the %s environment", var.environment)
}

variable "environment" {
  description = "The environment's name."
  type        = string
}

resource "aws_kms_key" "fintech" {
  description = local.key_description
}

Another common scenario you might encounter is when JSON is expected as a parameter to a resource. Let's take a look at how to appropriately encode the redrive policy for an SQS queue, as JSON is required there.

resource "aws_sqs_queue" "my-queue" {
 name                      = "example-tf-queue"
 message_retention_seconds = 86400
 receive_wait_time_seconds = 10
 redrive_policy = jsonencode({
     deadLetterTargetArn = var.dlq_arn
     maxReceiveCount     = 5
})
}

For more information on using the built-in functions Terraform provides, check out HashiCorp's function documentation. I'd highly recommend checking out the IP network functions, which can really come in handy when dynamically slicing your VPC's CIDR range into subnets.
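
For instance, here's a rough sketch using the built-in cidrsubnet() function to carve three /20 ranges out of a hypothetical /16 VPC CIDR:

locals {
  vpc_cidr = "10.0.0.0/16"

  # cidrsubnet(prefix, newbits, netnum) adds newbits to the prefix length,
  # so each entry below is a /20 carved out of the /16
  private_subnet_cidrs = [for i in range(3) : cidrsubnet(local.vpc_cidr, 4, i)]
  # => ["10.0.0.0/20", "10.0.16.0/20", "10.0.32.0/20"]
}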

Outputs

Sometimes, putting all of your resources in a single Terraform root module can lead to overly slow terraform applies, so many engineering teams logically divide their Terraform code into multiple directories. Consider the example of RDS instances and an ECS service's task definition, which includes environment variables. One may want these to be isolated from one another, and this is where Terraform outputs come into play.

Outputs allow you to, well... output data from one Terraform state and then import it into another. This can be handy for things like RDS hostnames, VPC objects, and more. Newer versions of Terraform even allow you to output entire objects, such as a whole module, instead of just primitive types, lists, and maps.
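
Here's a minimal sketch of the pattern; the state bucket, key layout, and resource names are hypothetical:

# In the database root module: expose the endpoint
output "db_endpoint" {
  value = aws_db_instance.webapp.endpoint
}

# In the ECS root module: read the database state, then reference
# data.terraform_remote_state.database.outputs.db_endpoint
data "terraform_remote_state" "database" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "database/terraform.tfstate"
    region = "us-east-2"
  }
}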

Conclusion

As you can see, Terraform provides a wealth of functionality to support deploying, maintaining, and interacting with your infrastructure. Think carefully about how you want to group resources into different root modules, and you'll find Terraform very easy to work with and maintain. Try to keep your Terraform code as DRY as possible, and leverage automation to handle applies when dealing with a larger group of contributors.

This concludes our 201 look into Terraform. We hope this was helpful to you - and if you have any questions or comments (or even want us to do a Terraform 301!) please reach out. Don't forget to check out Harness' robust Terraform integration by booking your demo today.

Marcus
