Terraform Modularization in Simple Steps

March 24, 2023 by Vaibhav Ambekar

This blog post explains what Terraform modules are and provides guidance on how to use them effectively.

 

In this post, you’ll learn about Terraform modules, which help simplify managing and scaling infrastructure configurations. As infrastructure grows, managing it within a single directory can become difficult, which is where modules come in. We’ll explore what modules are, how to use them, and the problems they solve.

 

What is a Terraform Module? 

 

A Terraform module is a set of configuration files in a directory that encapsulates the resources for a specific task. It reduces code repetition and improves clarity by letting you reuse existing code for similar infrastructure components. A module can consist of one or more .tf files and can be used to extend an existing Terraform configuration.

 

A typical module can look like this: 

├── main.tf 
├── variables.tf
└── outputs.tf  

 

Any Terraform configuration can be considered a module in itself, as it defines a set of resources for a specific purpose. When Terraform is run in the directory containing the configuration files, they are treated as a root module, forming the foundation of your infrastructure. This root module can then be expanded upon by adding more modules or resources.
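
For example, a root module can call a child module with a module block like the sketch below (the ./modules/vpc path and the vpc_cidr input are placeholders for illustration; a full example follows later in this post):

module "vpc" {
  source   = "./modules/vpc"  # local path to the child module (hypothetical)
  vpc_cidr = "10.0.0.0/16"    # input variable exposed by the child module
}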

 

How To Use Terraform Modules 

 

Now that you know what a Terraform module is, let's walk through the step-by-step process of creating one.

 

Step 1: Create a new directory for your Terraform module

mkdir terraform-module 

cd terraform-module

mkdir vpc 

cd vpc

Step 2: Create a main.tf file to define your VPC module

——————————————————————————————————main.tf————————————————————————————————————

# VPC

resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name = "${var.project}-${var.env}-vpc"
  }
}

# INTERNET GATEWAY

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id
  tags = {
    Name = "${var.project}-${var.env}-internet-gateway"
  }
}

# NAT GATEWAY ELASTIC IP

resource "aws_eip" "eip_natgateway" {
  vpc = true
  tags = {
    Name = "${var.project}-${var.env}-natgateway-elastic-ip"
  }
}

# Create NAT Gateway resource and attach it to the VPC

resource "aws_nat_gateway" "natgateway" {
  allocation_id = aws_eip.eip_natgateway.id
  subnet_id     = element(aws_subnet.public_subnet.*.id, 0)
  # The internet gateway must exist before the NAT gateway can be created
  depends_on    = [aws_internet_gateway.igw]
  tags = {
    Name = "${var.project}-${var.env}-nat-gateway"
  }
}

# PUBLIC SUBNET

resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.vpc.id
  count                   = length(var.public_subnet_cidr)
  cidr_block              = element(var.public_subnet_cidr, count.index)
  availability_zone       = element(var.availability_zones, count.index)
  map_public_ip_on_launch = true
  tags = {
    Name = "${var.project}-${var.env}-public-subnet"
  }
}

# PRIVATE SUBNET

resource "aws_subnet" "private_subnet" {
  vpc_id                  = aws_vpc.vpc.id
  count                   = length(var.private_subnet_cidr)
  cidr_block              = element(var.private_subnet_cidr, count.index)
  availability_zone       = element(var.availability_zones, count.index)
  map_public_ip_on_launch = false
  tags = {
    Name = "${var.project}-${var.env}-private-subnet"
  }
}

# ROUTE TABLE

#  ROUTE TABLE for Public Subnet

resource "aws_route_table" "public_routetable" {
  vpc_id = aws_vpc.vpc.id
  tags = {
    Name = "${var.project}-${var.env}-public-route-table"
  }
}

# ROUTE TABLE for Private Subnet

resource "aws_route_table" "private_routetable" {
  vpc_id = aws_vpc.vpc.id
  tags = {
    Name = "${var.project}-${var.env}-private-route-table"
  }
}

# Public route can access the internet gateway

resource "aws_route" "public_igw" {
  route_table_id         = aws_route_table.public_routetable.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw.id
}

# Private route can access the NAT gateway

resource "aws_route" "private_natgateway" {
  route_table_id         = aws_route_table.private_routetable.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.natgateway.id
}

# Public Route Table Association

resource "aws_route_table_association" "public_routetableassociation" {
  count          = length(var.public_subnet_cidr)
  subnet_id      = element(aws_subnet.public_subnet.*.id, count.index)
  route_table_id = aws_route_table.public_routetable.id
}

# Private Route Table Association

resource "aws_route_table_association" "private_routetableassociation" {
  count          = length(var.private_subnet_cidr)
  subnet_id      = element(aws_subnet.private_subnet.*.id, count.index)
  route_table_id = aws_route_table.private_routetable.id
}


Step 3: Create a variables.tf file to define the input variables for your VPC module

——————————————————————————————————variables.tf—————————————————————————————————-

 

#VARIABLES

variable "project" {
  description = "name of the project"
  default     = "project"
}

variable "env" {
  description = "Name of the environment"
  default     = "environment"
}

# VPC

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  default     = "10.0.0.0/16"
}

#REGION

variable "region" {
  description = "aws region"
  default     = "us-east-1"
}

#AZ

variable "availability_zones" {
  description = "Availability zones for the environment"
  default     = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

# PUBLIC SUBNET

variable "public_subnet_cidr" {
  description = "CIDR blocks for the public subnets"
  default     = ["10.0.0.0/20", "10.0.16.0/20", "10.0.32.0/20"]
}

# PRIVATE SUBNET

variable "private_subnet_cidr" {
  description = "CIDR blocks for the private subnets"
  default     = ["10.0.128.0/20", "10.0.144.0/20", "10.0.160.0/20"]
}

Step 4: Create an outputs.tf file to expose the module's output values for use by other configurations

output "vpc_id" {
  value = aws_vpc.vpc.id
}

output "igw_id" {
  value = aws_internet_gateway.igw.id
}

output "eip_natgateway_id" {
  value = aws_eip.eip_natgateway.id
}

output "natgateway_id" {
  value = aws_nat_gateway.natgateway.id
}

output "public_subnet_id" {
  value = aws_subnet.public_subnet.*.id
}

output "private_subnet_id" {
  value = aws_subnet.private_subnet.*.id
}

output "public_routetable_id" {
  value = aws_route_table.public_routetable.id
}

output "private_routetable_id" {
  value = aws_route_table.private_routetable.id
}

————————————————————————————————————————————————————————————————————————

Provider in Terraform Code:

In Terraform, a provider is a plugin that enables Terraform to interact with a specific cloud service or infrastructure technology. Providers allow Terraform to manage resources across multiple cloud platforms and infrastructure technologies using a consistent syntax. You specify the provider in your Terraform code to correspond with the cloud platform or infrastructure technology you are working with. Terraform includes many built-in providers for popular platforms like AWS, Google Cloud Platform, Microsoft Azure, and VMware, as well as third-party providers that extend Terraform.
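
As a minimal sketch, a provider is usually pinned and configured like this (the version constraint and region below are only examples):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"  # example version constraint
    }
  }
}

provider "aws" {
  region = "us-east-1"  # example region
}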

S3 backend:

The Terraform S3 backend is a way of storing Terraform state data in an S3 bucket. By using the S3 backend, Terraform can store the state data remotely, which allows multiple users to collaborate on infrastructure changes. To use the S3 backend, you create an S3 bucket and configure the backend in your Terraform code. Once the backend is configured, Terraform will automatically store the state data in the specified S3 bucket. The benefits of using the S3 backend include improved collaboration, increased reliability, improved security, and better recovery from infrastructure failures.
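
A minimal backend configuration looks like the sketch below; the bucket name and key are placeholders, and the optional dynamodb_table argument assumes you have already created a DynamoDB table (with a LockID partition key) for state locking:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # placeholder bucket name
    key            = "develop/terraform.tfstate"  # path of the state file inside the bucket
    region         = "us-east-1"
    encrypt        = true                         # encrypt the state object at rest
    dynamodb_table = "terraform-locks"            # optional: enables state locking (table assumed to exist)
  }
}
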
————————————————————————————————————————————————————————————————————————-
Step 1: Create a new directory for your environment (develop / production)

mkdir develop-environment
cd develop-environment

Step 2: Create a main.tf file to define your develop-environment

——————————————————————————————————main.tf————————————————————————————————————

provider "aws" {
  region = "us-east-1"
  default_tags {
    tags = {
      ProjectName = var.project
      Environment = var.env
    }
  }
}

terraform {
  backend "s3" {
    bucket = "example-bucket"
    key    = "example-bucket/example-key/"
    region = "us-east-1"
  }
}

module "vpc" {
  source              = "../terraform-module/vpc"
  project             = var.project
  env                 = var.env
  vpc_cidr            = var.vpc_cidr
  public_subnet_cidr  = var.public_subnet_cidr
  private_subnet_cidr = var.private_subnet_cidr
  availability_zones  = var.availability_zones
}

Step 3: Create a variables.tf file to define the input variables for your develop-environment

——————————————————————————————————variables.tf————————————————————————————————-

#VARIABLES

variable "project" {
  description = "name of the project"
  default     = "example"
}

variable "env" {
  description = "name of the environment"
  default     = "develop-environment"
}

#VPC

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  default     = "10.8.0.0/16"
}

#PUBLIC SUBNET

variable "public_subnet_cidr" {
  description = "CIDR blocks for the public subnets"
  default     = ["10.8.30.0/28", "10.8.31.0/28", "10.8.21.0/28"]
}

#PRIVATE SUBNET

variable "private_subnet_cidr" {
  description = "CIDR blocks for the private subnets"
  default     = ["10.8.33.0/28", "10.8.34.0/28", "10.8.35.0/28"]
}

#AZ

variable "availability_zones" {
  description = "Availability zones for the environment"
  default     = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

Step 4: Create an outputs.tf file to view the output values of your infrastructure resources

——————————————————————outputs.tf———————————————————————————————————————————-

output "vpc" {
  value = module.vpc.vpc_id
}

output "public_subnet_id" {
  value = module.vpc.public_subnet_id
}

output "private_subnet_id" {
  value = module.vpc.private_subnet_id
}

Step 5: Make sure you are in the develop-environment directory:

cd develop-environment

Then run the following commands.

Step 6: terraform init

The terraform init command prepares the working directory for Terraform to manage infrastructure by downloading the required providers and modules and setting up the backend. It needs to be run before any other Terraform command.


Step 7: terraform validate

The terraform validate command checks the Terraform configuration files for syntax errors, ensuring that the files are written correctly and can be successfully parsed by Terraform.


Step 8: terraform plan

The terraform plan command generates an execution plan for applying the Terraform configuration to the infrastructure. It analyzes the configuration files, determines the changes that need to be made, and reports those changes without actually applying them.
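
Optionally, you can save the generated plan to a file and later apply exactly that plan (the tfplan filename is just an example):

terraform plan -out=tfplan
terraform apply tfplan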

Step 9: terraform apply

The terraform apply command applies the changes specified in the Terraform configuration, creating, modifying, or deleting resources as necessary. Before running terraform apply, make sure you have run terraform init, terraform validate, and terraform plan.

Step 10: terraform destroy

The terraform destroy command removes the resources created by the Terraform configuration from your infrastructure. Use it carefully, as it can cause data loss: before running it, make sure you understand which resources will be deleted and that you have a backup of any data you need to keep.
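
If you want to preview exactly what would be removed before destroying anything, you can generate a destroy plan first:

terraform plan -destroy
terraform destroy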


What Problems Do Terraform Modules Solve?

Terraform modules are an essential tool for managing large-scale infrastructure. They help solve several common problems that arise as infrastructure becomes more complex. Here are some of the key issues that Terraform modules can address:


1) Lack of Code Clarity

Maintaining clear and readable code is essential to ensure that your infrastructure is easy to understand and maintain. Copy-pasting code can make it harder to read and increase the likelihood of errors. With Terraform modules, you can organize your infrastructure into smaller, more manageable components that are easier to read and maintain.

2) Human Error

Mistakes can happen when creating or copying code, leading to costly errors and downtime. Terraform modules can help reduce human error by enabling you to define and test a single module before deploying it in multiple places. This approach ensures that any mistakes are caught early and can be easily corrected.

3) Code Repetition

As your infrastructure grows, you will likely need to create multiple instances of the same resource. Copy-pasting code is not only tedious but also error-prone. Terraform modules enable you to define reusable components that can be shared across your infrastructure, reducing code repetition and saving time.
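
For example, the VPC module from this post can be instantiated once per environment instead of copy-pasting its resources (a minimal sketch; in practice you would also pass environment-specific CIDR ranges, as shown earlier):

module "vpc_develop" {
  source  = "../terraform-module/vpc"
  project = "example"
  env     = "develop"
}

module "vpc_production" {
  source  = "../terraform-module/vpc"
  project = "example"
  env     = "production"
}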

4) Lack of Compliance

Adhering to best practices and compliance standards is critical for maintaining a secure and reliable infrastructure. Terraform modules allow you to define configurations that follow best practices, such as encryption, redundancy, or lifecycle policies. Reusing these modules ensures that your infrastructure adheres to the same standards consistently.
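
As an illustrative sketch (a hypothetical module, not part of the VPC example above), a small S3 bucket module can bake versioning and encryption into every bucket created through it:

variable "bucket_name" {
  description = "Name of the bucket to create"
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

# Versioning is always enabled for buckets created by this module
resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Server-side encryption is always enforced, using the account's default KMS key for S3
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}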

In summary, Terraform modules provide a powerful way to manage complex infrastructure by reducing code repetition, improving code clarity, ensuring compliance, and reducing human error. However, it's essential to strike the right balance between modularization and simplicity, so use them wisely.