Automating AWS Resource Creation with Terraform

Zhimin Wen
Published in ITNEXT · 6 min read · Nov 21, 2021

While studying AWS, I was assigned a time-restricted sandbox to play around with the AWS resources. Initially, I used the AWS console to create the required resources. As the number of resources and the interactions among them grew, I began to wonder whether the creation of the resources could be automated.

This is a solved problem. The answer is to use infrastructure-as-code tooling: Terraform.

The Target AWS VPC Environment

I will create an AWS VPC environment for running a web server and a DB server in separate subnets, as shown in the diagram below.

Toolings Setup

Install Terraform and the AWS command-line tools on a Mac with Homebrew,

brew install terraform
brew install awscli

Terraform file structure

Terraform runs based on the descriptive content of multiple *.tf files. Upon running, Terraform reads all the *.tf files and assembles them in the proper order.

We start with two files, main.tf and variables.tf. The main.tf is listed below,

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.65"
    }
  }
}

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

Terraform's functionality is provided by plugins. Recent versions of Terraform allow you to specify your providers in the tf file. Here we define the required AWS provider and its version, then configure the AWS provider with the credentials. The var prefix indicates the value is read from a variable, defined in the variables.tf file with the content below,

variable "access_key" {
  description = "AWS Access key"
  default     = "A...."
}

variable "secret_key" {
  description = "AWS Secret Key"
  default     = "j...."
}

variable "region" {
  description = "AWS region for hosting our network"
  default     = "us-east-1"
}

variable "aws_ami" {
  description = "Amazon Linux"
  default     = "ami-04ad2567c9e3d7893" //x86_64
}

variable "ssh_key_name" {
  description = "ssh key"
  default     = "mykey"
}
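Rather than hardcoding real credentials as defaults (which risks committing them to version control), the values can be supplied through a terraform.tfvars file that is kept out of git. A minimal sketch, with placeholder values:

```hcl
# terraform.tfvars (add to .gitignore; values below are placeholders)
access_key   = "AKIA..."
secret_key   = "..."
region       = "us-east-1"
ssh_key_name = "mykey"
```

Terraform loads terraform.tfvars automatically, and its values override the defaults in variables.tf.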

We can now run terraform init, which downloads the defined provider accordingly,

terraform init

Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v3.65.0

Terraform has been successfully initialized!
....

Based on their different functionality, we group the AWS resources into different tf files. The format of a resource definition is straightforward.

resource "type_of_resource" "resource_name" {
  attribute = "attribute value"
  ...
}

VPC: vpc.tf

In the vpc.tf file, we first create the VPC with the CIDR set to "192.168.0.0/16",

resource "aws_vpc" "my-vpc" {
  cidr_block           = "192.168.0.0/16"
  instance_tenancy     = "default"
  enable_dns_hostnames = true
  tags = {
    Name = "my-vpc"
  }
}

Then create two subnets for the web zone (192.168.10.0/24) and the DB zone (192.168.20.0/24) respectively,

resource "aws_subnet" "web-subnet" {
  cidr_block = "192.168.10.0/24"
  vpc_id     = "${aws_vpc.my-vpc.id}"
  #availability_zone = "us-east-1a"
  map_public_ip_on_launch = true
  tags = {
    Name = "192.168.10.0"
  }
}

resource "aws_subnet" "db-subnet" {
  cidr_block = "192.168.20.0/24"
  vpc_id     = "${aws_vpc.my-vpc.id}"
  #availability_zone = "us-east-1a"
  map_public_ip_on_launch = false
  tags = {
    Name = "192.168.20.0"
  }
}

Take note of vpc_id: its value, "${aws_vpc.my-vpc.id}", references the aws_vpc resource named "my-vpc" that we just defined above and takes its id field. We don't need to copy the VPC id from the console or hard-code it.
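As a side note, since Terraform 0.12 the interpolation quotes are optional for a bare reference, so the same assignment can be written more simply:

```hcl
vpc_id = aws_vpc.my-vpc.id
```

Both forms work; the quoted "${...}" style is a holdover from Terraform 0.11.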

For instances in the web subnet, we assign a public IP by setting the field map_public_ip_on_launch to true, while for the DB subnet no public IP is assigned.

Create the internet gateway and the NAT gateway respectively,

# Defining the VPC Internet Gateway
resource "aws_internet_gateway" "my-internet-gw" {
  vpc_id = "${aws_vpc.my-vpc.id}"
  tags = {
    Name = "my-internet-gw"
  }
}

# Defining the Elastic IP Address for NAT
resource "aws_eip" "nat" {
  vpc = true
}

# Defining the VPC NAT Gateway
resource "aws_nat_gateway" "my-nat-gw" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.web-subnet.id}"
  depends_on    = [aws_internet_gateway.my-internet-gw]
  tags = {
    Name = "my nat gateway"
  }
}

The NAT gateway will be using the web subnet so that it can talk to the internet through the internet gateway.

Now we create the route tables.

# Defining the route table for web subnet
resource "aws_route_table" "webzone-rt" {
  vpc_id = "${aws_vpc.my-vpc.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.my-internet-gw.id}"
  }
  tags = {
    Name = "webzone route"
  }
}

# Associating the web subnet
resource "aws_route_table_association" "web-rt-association" {
  route_table_id = "${aws_route_table.webzone-rt.id}"
  subnet_id      = "${aws_subnet.web-subnet.id}"
}

# Defining the route table for private subnet
resource "aws_route_table" "dbzone-rt" {
  vpc_id = "${aws_vpc.my-vpc.id}"
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.my-nat-gw.id}"
  }
  tags = {
    Name = "DB zone route"
  }
}

# Associating the DB subnet to the NAT exposed route table
resource "aws_route_table_association" "db-rt-association" {
  route_table_id = "${aws_route_table.dbzone-rt.id}"
  subnet_id      = "${aws_subnet.db-subnet.id}"
}

In the web zone, traffic to the outside (0.0.0.0/0) is routed to the internet gateway. In the DB zone, that traffic is routed through the NAT gateway (note the nat_gateway_id argument), so EC2 instances in the zone can still reach the internet.

We associate each subnet to its route table explicitly by creating an aws_route_table_association resource.

Now we create the security group for the EC2 instances.

# Security Group for web subnet
resource "aws_security_group" "web-sg" {
  name        = "web-sg"
  description = "Allow HTTP/SSH Access"
  vpc_id      = "${aws_vpc.my-vpc.id}"
  tags = {
    Name = "web sg"
  }
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Security Group for DB subnet
resource "aws_security_group" "db-sg" {
  name        = "db-sg"
  description = "DB zone Access"
  vpc_id      = "${aws_vpc.my-vpc.id}"
  tags = {
    Name = "db sg"
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [aws_subnet.web-subnet.cidr_block]
  }
  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = [aws_subnet.web-subnet.cidr_block]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

For web zone EC2 instances, we allow HTTP access in addition to SSH. In the DB zone, only SSH and MySQL (3306) traffic coming from the web zone is allowed. Any outgoing traffic is allowed in both zones.
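As an alternative to the web subnet's CIDR block, the DB ingress rules could reference the web security group directly, so access follows the instances' group membership rather than the address range (this is a variation, not what the files above use):

```hcl
ingress {
  from_port       = 3306
  to_port         = 3306
  protocol        = "tcp"
  # Allow MySQL only from instances attached to web-sg
  security_groups = [aws_security_group.web-sg.id]
}
```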

The VPC settings are done. The Network ACLs are the defaults created along with the VPC.

We are ready to create our EC2 instances. But before that, let's create the SSH key pair.

SSH Key pairs: ssh-key.tf

resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
}

We call the Terraform function file to read the content of my local public SSH key, then create the key pair resource in AWS named mykey.

EC2 instances: ec2.tf

resource "aws_instance" "web-server" {
  ami           = "${var.aws_ami}"
  subnet_id     = "${aws_subnet.web-subnet.id}"
  instance_type = "t2.micro"
  key_name      = "${var.ssh_key_name}"
  #user_data = "${file("httpd.sh")}"
  vpc_security_group_ids = ["${aws_security_group.web-sg.id}"]

  tags = {
    Name = "web-server"
  }
}

resource "aws_instance" "db-server" {
  ami           = "${var.aws_ami}"
  subnet_id     = "${aws_subnet.db-subnet.id}"
  instance_type = "t2.micro"
  associate_public_ip_address = false

  key_name               = "${var.ssh_key_name}"
  vpc_security_group_ids = ["${aws_security_group.db-sg.id}"]

  tags = {
    Name = "db-server"
  }
}

We create the EC2 instances using the AMI from the variable aws_ami. For the DB server, we set associate_public_ip_address to false so that no public IP is assigned to it.
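The commented-out user_data line on the web server points at an httpd.sh bootstrap script whose contents are not shown here. A minimal sketch of what such a script might contain, inlined via a heredoc instead of file() (assumed bootstrap logic for Amazon Linux, not the article's actual script):

```hcl
user_data = <<-EOF
  #!/bin/bash
  # Install and start the Apache web server on first boot
  yum install -y httpd
  systemctl enable --now httpd
EOF
```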

Create the resources

Once all the tf files are ready, we can create the resources,

terraform plan
terraform apply -auto-approve

The plan is like a dry run. The apply will create the resources.

We can run terraform destroy to delete all the resources created.
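To avoid hunting in the console for the web server's address after apply, an outputs.tf file can expose it (a small addition beyond the files above):

```hcl
output "web_public_ip" {
  value = aws_instance.web-server.public_ip
}
```

After terraform apply, running terraform output web_public_ip prints the address.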
