Deploy Kubernetes Cluster on AWS With Terraform and KOPS

Deploy EKS with Terraform and KOPS

Introduction

This written workshop describes how to deploy a reliable, highly available, production-ready Kubernetes cluster on AWS with Terraform and KOPS.

Amazon EKS is the default go-to solution for Kubernetes on AWS. It simplifies Kubernetes cluster deployment by taking away the hassle of maintaining the master control plane. It leaves the worker node provisioning to you, which is in turn simplified by Amazon EKS pre-configured Amazon Machine Images (AMIs).

But sometimes the default settings are not enough for a particular solution.

  • For example, Amazon EKS does not allow custom settings on its control plane, so if that’s something you need you’ll have to consider a self-hosted solution for your Kubernetes cluster.
  • Another case is deploying Pods (groups of one or more containers with shared storage/network, and a specification on how to run the containers) in the master nodes. This capability is locked in Amazon EKS. For certain add-ons that can be installed in Kubernetes like KIAM (a tool to provide IAM credentials to Pods for target IAM roles), a master/worker Pod setup is necessary. If you can’t run Pods in the master nodes in Amazon EKS, you’ll need to provision some extra nodes in your cluster to simulate a master role for this type of add-on.

If you find yourself in any of the above scenarios, I recommend that you deploy a self-managed Kubernetes cluster on AWS using Terraform and kops.

In this tutorial, we will deploy the following architecture using Terraform and KOPS. You use Terraform to create and manage the shared VPC resources, the KOPS resources and the application environment (AWS ECR, AWS ACM, AWS Route 53, …). You use KOPS as the mechanism to install and manage the K8S cluster in AWS, and kubectl to test and manage the K8S application deployment.

Architecture

Deploy EKS with Terraform and KOPS: architecture

The following table maps most of the AWS services used in this tutorial to their role in the K8S cluster:

Service            Use case
VPC                Provisions a logically isolated section of the AWS Cloud
ACM                AWS Certificate Manager: provisions SSL/TLS certificates
ECR                Amazon ECR is a fully managed Docker container registry
Kubernetes Server  Bastion host used to SSH into the private K8S cluster
K8S Worker Nodes   Worker machines that are part of the Kubernetes cluster
Internet Gateway   Allows communication between VPC instances and the internet
Route 53           Amazon Route 53 is a cloud DNS service
S3                 Amazon Simple Storage Service is an object storage service

Assumptions and Prerequisites

  • You have basic knowledge of AWS
  • You have basic knowledge of Kubernetes
  • You have basic knowledge of Terraform
  • You have Terraform v0.12.x / v0.11.x installed on your machine
  • You must have kubectl and KOPS installed on your machine (a quick install sketch follows this list)
  • You must have an AWS account, with an IAM key pair that has owner access to your AWS environment
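If kubectl or kops are missing, Homebrew is one quick way to install them on macOS (a sketch; adapt to your OS and preferred package manager):

$ brew install kubectl
$ brew install kops
$ kubectl version --client
$ kops version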

Objectives

This guide walks you through the following tasks:

  1. ✅ Use Terraform and KOPS to create a Kubernetes cluster
  2. ✅ Create a bastion machine to manage your cluster masters/nodes
  3. ✅ Deploy a sample Kubernetes application in the created cluster
  4. ✅ Learn and use HCL (HashiCorp Configuration Language), Terraform and KOPS best practices

Software Dependencies

This lab relies on Terraform (v0.12.x, plus v0.11.x for the kops-generated configuration, see below), kubectl, KOPS and the AWS CLI, as listed in the prerequisites above.

What is out-of-scope

This is not a Terraform tutorial; even without knowing it you should still be able to understand most of it. You can learn the basics in my previous blog post on Azure AKS.
We will also not dive deep into Kubernetes and will limit ourselves to creating the cluster.

Before you begin

Setup the Terraform State environment

In order to deploy a Kubernetes cluster on AWS with Terraform and KOPS, we first need to create two resources that will hold the Terraform state:

  • An S3 bucket (in our tutorial it will be named terraform-eks-dev; I recommend enabling versioning)
  • A DynamoDB table (in our tutorial it will be named terraform-state-lock), used for state locking; see the CLI sketch below
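Both can be created from the AWS console or with the AWS CLI, roughly like this (a sketch using the names above; Terraform's lock table requires a string hash key named LockID):

$ aws s3api create-bucket --bucket terraform-eks-dev --region eu-central-1 \
    --create-bucket-configuration LocationConstraint=eu-central-1
$ aws s3api put-bucket-versioning --bucket terraform-eks-dev \
    --versioning-configuration Status=Enabled
$ aws dynamodb create-table --table-name terraform-state-lock \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST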

Configuring AWS

In order to follow best practices, let's create a dedicated user for Terraform. Go to your AWS console and create a terraform_user user:

Deploy EKS with Terraform and KOPS

Give it the appropriate rights. In my example, I need Terraform to be able to manage all of my AWS Cloud resources:

Deploy EKS with Terraform and KOPS terraform user access

Don't forget to store the AWS access key ID and secret access key. Next, copy them into your AWS credentials file; you can also run $ aws configure to add a new profile.

[terraform_user]
aws_access_key_id = xxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxx/xxxxxxxxxxxxx/xxxx
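Alternatively, the profile can be created interactively with the AWS CLI and then verified (the profile name below simply matches the credentials file above):

$ aws configure --profile terraform_user
$ aws sts get-caller-identity --profile terraform_user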

Technical setup of our cluster

AWS

We are going to create a Kubernetes cluster inside a private VPC (we will create it using Terraform) in the Frankfurt region (eu-central-1).
This VPC will have 3 private and 3 public subnets (one per Availability Zone).
For our private subnets we will have only 1 NAT gateway (to keep costs down).

Kubernetes

Our Kubernetes cluster will run in a private topology (i.e. in private subnets).
The Kubernetes API (running on the master nodes) will only be accessible through a load balancer (created by kops).
The nodes won't be internet-accessible by default, but we will be able to SSH into them through a bastion host.

The following setup is production-ready: we will have 3 masters (one per Availability Zone) and 2 nodes.
Kubernetes imposes the following fundamental requirements (taken from the official Kubernetes networking documentation):

  • All containers can communicate with all other containers without NAT
  • All nodes can communicate with all containers (and vice-versa) without NAT
  • The IP address that a container sees itself as is the same IP address that others see it as

So in AWS we need to choose a network plugin. Here we will use the amazon-vpc-cni-k8s plugin; it is the recommended plugin and it's maintained by AWS.

To deploy a K8S cluster with Terraform and KOPS, the first step is to obtain the source code from my GitHub repository.
The following commands clone the sample repository and make it the current directory:

$ git clone https://github.com/AymenSegni/aws-eks-cluster-tf-kops.git
$ cd aws-eks-cluster-tf-kops

At the end, our project directory tree will look like this:

deploy eks with terraform and kops project tree

Project structure

1- Terraform config directory: /terraform

a- modules: the Terraform modules of this layout (generic, re-usable functions). In this lab, we have 4 modules:
– shared_vpc: defines the shared VPC resources
– kops_resources: the AWS resources needed to run the KOPS configs
– ecr: creates an AWS ECR repository used to store the Docker images needed to deploy the Kubernetes application later
– app_env: hosts the Terraform configs necessary to create the Route 53 DNS record and the ACM SSL certificate for the Kubernetes application deployment

b- deployment: the root Terraform function of the layout, responsible for the K8S cluster deployment on AWS.
– main.tf: defines the Terraform modules already created in the /modules sub-folders, with the appropriate inputs defined in variables.tf or in a terraform.tfvars file
– provider.tf: defines the AWS provider configuration used by Terraform, including the version, the main deployment region and the AWS technical user (terraform_user)
– backend.tf: defines the S3 bucket and the DynamoDB table that manage the Terraform state file.

2- KOPS config directory: /kops

– template.yaml: the template used to generate the K8S cluster definition in AWS.
The rest of the KOPS config files are auto-generated by the KOPS CLI; only the template file is needed.

3- Kubernetes application deployment config directory: /k8s-deployment

a- /src: holds the application source code, the nginx config file and the Dockerfile
b- /deploy-app: manages the K8S application Deployment and Service definitions

As you can see, the Terraform and kops configurations are separated. This is because the kops configuration files are fully managed by kops, and modifications to them are not persisted between kops runs.

Terraform deployment setup

To deploy a Kubernetes cluster on AWS with Terraform and KOPS, we need to set up our Terraform deployment in the root function at /terraform/deployment.
At this stage, we must define the provider and the backend configs as follows:

1- provider.tf

provider "aws" {
region = "eu-central-1"
version = "~> 2.57"
profile = "terraform-user"
}

2- backend.tf

terraform {
  backend "s3" {
    region         = "eu-central-1"
    bucket         = "terraform-eks-dev"
    key            = "terraform.tfstate"
    encrypt        = "true"
    dynamodb_table = "terraform-state-lock"
  }
}

Stay tuned, in the next section, we’re going to talk about how to create shared AWS VPC resources using Terraform modules.

Shared VPC resources

We need to set up some Terraform resources that will be used by kops for deploying the K8S cluster, but that could also be used by other things.

We will use the very good terraform-aws-vpc module to avoid having to setup each resource individually.

But first, we need to define the generic TF module terraform/modules/shared_vpc that will be used throughout the whole tutorial.
Our VPC will use the 10.0.0.0/16 range with a separation of private and public subnets.

#
# VPC Resources
#  * VPC
#  * Subnets
#  * Internet Gateway
#  * Route Tables
#  * Sec Groups

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "${var.environment}-vpc"
  cidr = var.cidr

  azs             = var.azs
  private_subnets = var.private_subnets
  public_subnets  = var.public_subnets

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  tags = {
    Environment                                 = var.environment
    Application                                 = "network"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}

// variables.tf
variable "environment" {
  type        = string
  default     = "krypton"
  description = "Name prefix"
}

variable "cidr" {
  type        = string
  default     = "10.0.0.0/16"
  description = "vpc cidr"
}

variable "azs" {
  type        = list
  description = "Availability zones list"
}

variable "private_subnets" {
  type        = list
  description = "list of private subnets in the vpc"
}

variable "public_subnets" {
  type        = list
  description = "public subnets list"
}

variable "ingress_ips" {
  type        = list
  description = "List of Ingress IPs"
}

variable "cluster_name" {
  type        = string
  description = "FQDN cluster name"
}

// outputs.tf
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "vpc_cidr_block" {
  value = module.vpc.vpc_cidr_block
}

output "public_subnet_ids" {
  value = module.vpc.public_subnets
}

output "public_route_table_ids" {
  value = module.vpc.public_route_table_ids
}

output "private_subnet_ids" {
  value = module.vpc.private_subnets
}

output "private_route_table_ids" {
  value = module.vpc.private_route_table_ids
}

output "default_security_group_id" {
  value = module.vpc.default_security_group_id
}

output "nat_gateway_ids" {
  value = module.vpc.natgw_ids
}

As you can see we are applying some specific tags to our AWS subnets so that kops can recognize them.

Now let's actually apply this configuration to our AWS account.
Navigate to the root deployment folder; the vpc module deployment is defined as follows:

# VPC Module
module "vpc" {
  source = "../modules/shared_vpc"

  cidr            = var.cidr
  azs             = var.azs
  private_subnets = var.private_subnets
  public_subnets  = var.public_subnets
  environment     = "krypton"
  ingress_ips     = var.ingress_ips
  cluster_name    = var.cluster_name
}

// terraform.tfvars
cidr            = "10.0.0.0/16"
azs             = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
ingress_ips     = ["10.0.0.100/32", "10.0.0.101/32", "10.0.0.103/32"]
cluster_name    = "aymen.krypton.berlin"

// variables.tf
variable "environment" {
  type        = string
  default     = "krypton"
  description = "Name prefix"
}

variable "cidr" {
  type        = string
  description = "vpc cidr"
}

variable "azs" {
  type        = list
  description = "Availability zones list"
}

variable "private_subnets" {
  type        = list
  description = "list of private subnets in the vpc"
}

variable "public_subnets" {
  type        = list
  description = "public subnets list"
}

variable "ingress_ips" {
  type        = list
  description = "List of Ingress IPs"
}

variable "cluster_name" {
  type        = string
  description = "FQDN cluster name"
}

To run this deployment, we define some global variables in terraform.tfvars (shown in the snippet above).

KOPS AWS resources

Let's also create an S3 bucket (with versioning enabled) where kops will save the configuration of our cluster,
and a security group to whitelist the IPs allowed to access the Kubernetes API.
In our project layout the kops resources are defined in the Terraform module /terraform/modules/kops_resources.

// main.tf
resource "aws_s3_bucket" "kops_state" {
  bucket = "${var.environment}-kops-s3"
  acl    = "private"

  versioning {
    enabled = true
  }

  tags = {
    Environment = var.environment
    Application = "kops"
    Description = "S3 Bucket for KOPS state"
  }
}

resource "aws_security_group" "k8s_api_http" {
  name   = "${var.environment}-k8s-api-http"
  vpc_id = var.vpc_id

  tags = {
    environment = var.environment
    terraform   = true
  }

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = var.ingress_ips
  }

  ingress {
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    cidr_blocks = var.ingress_ips
  }
}

// variables.tf
variable "ingress_ips" {
  type        = list
  description = "List of Ingress IPs"
}

variable "environment" {
  type        = string
  default     = "krypton"
  description = "Name prefix"
}

variable "vpc_id" {
  type        = string
  description = "the shared vpc id"
}

// outputs.tf
output "k8s_api_http_security_group_id" {
  value = aws_security_group.k8s_api_http.id
}

output "kops_s3_bucket_name" {
  value = aws_s3_bucket.kops_state.bucket
}

Output

The outputs we define below will be used by kops to configure and create our cluster.

terraform/deployment/output.tf:

output "region" {
value = "eu-central-1"
}
output "vpc_id" {
value = module.vpc.vpc_id
}
output "vpc_cidr_block" {
value = module.vpc.vpc_cidr_block
}
output "public_subnet_ids" {
value = module.vpc.public_subnet_ids
}
output "public_route_table_ids" {
value = module.vpc.public_route_table_ids
}
output "private_subnet_ids" {
value = module.vpc.private_subnet_ids
}
output "private_route_table_ids" {
value = module.vpc.private_route_table_ids
}
output "default_security_group_id" {
value = module.vpc.default_security_group_id
}
output "nat_gateway_ids" {
value = module.vpc.nat_gateway_ids
}
output "availability_zones" {
value = var.azs
}
output "kops_s3_bucket_name" {
value = "krypton-kops-s3"
}
output "k8s_api_http_security_group_id" {
value = module.kops.k8s_api_http_security_group_id
}
output "cluster_name" {
value = var.cluster_name
}

Finally, we can now run the Terraform magic (if you use my code, don't forget to comment out all the other resources and keep only the vpc and kops_resources modules in main.tf):

$ cd /terraform/deployment
$ terraform init
$ terraform plan
$ terraform apply
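As an alternative to commenting out modules, the run can be scoped with Terraform's -target flag (a sketch; the module names assume the main.tf layout of the repository, where the kops_resources module instance is called "kops"):

$ terraform plan -target=module.vpc -target=module.kops
$ terraform apply -target=module.vpc -target=module.kops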

Examples of the $ terraform plan output:

Deploy EKS with Terraform and KOPS plan 1
Deploy EKS with Terraform and KOPS

Example of the $ terraform apply output

Deploy EKS with Terraform and KOPS

Bingo, our shared resources are done! ✅ We can verify in the AWS console that the krypton-vpc is created and available:

Kops: Deploy Kubernetes cluster on AWS with Terraform and KOPS

kops/template.yaml

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: {{.cluster_name.value}}
spec:
  api:
    loadBalancer:
      type: Public
      additionalSecurityGroups: ["{{.k8s_api_http_security_group_id.value}}"]
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://{{.kops_s3_bucket_name.value}}/{{.cluster_name.value}}
  # Create one etcd member per AZ
  etcdClusters:
  - etcdMembers:
{{range $i, $az := .availability_zones.value}}
    - instanceGroup: master-{{.}}
      name: {{. | replace $.region.value "" }}
{{end}}
    name: main
  - etcdMembers:
{{range $i, $az := .availability_zones.value}}
    - instanceGroup: master-{{.}}
      name: {{. | replace $.region.value "" }}
{{end}}
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubernetesVersion: 1.16.0
  masterPublicName: api.{{.cluster_name.value}}
  networkCIDR: {{.vpc_cidr_block.value}}
  kubeControllerManager:
    clusterCIDR: {{.vpc_cidr_block.value}}
  kubeProxy:
    clusterCIDR: {{.vpc_cidr_block.value}}
  networkID: {{.vpc_id.value}}
  kubelet:
    anonymousAuth: false
  networking:
    amazonvpc: {}
  nonMasqueradeCIDR: {{.vpc_cidr_block.value}}
  sshAccess:
  - 0.0.0.0/0
  subnets:
  # Public (utility) subnets, one per AZ
{{range $i, $id := .public_subnet_ids.value}}
  - id: {{.}}
    name: utility-{{index $.availability_zones.value $i}}
    type: Utility
    zone: {{index $.availability_zones.value $i}}
{{end}}
  # Private subnets, one per AZ
{{range $i, $id := .private_subnet_ids.value}}
  - id: {{.}}
    name: {{index $.availability_zones.value $i}}
    type: Private
    zone: {{index $.availability_zones.value $i}}
    egress: {{index $.nat_gateway_ids.value 0}}
{{end}}
  topology:
    bastion:
      bastionPublicName: bastion.{{.cluster_name.value}}
    dns:
      type: Public
    masters: private
    nodes: private
# Create one master per AZ
{{range .availability_zones.value}}
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{$.cluster_name.value}}
  name: master-{{.}}
spec:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  role: Master
  nodeLabels:
    kops.k8s.io/instancegroup: master-{{.}}
  subnets:
  - {{.}}
{{end}}
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{.cluster_name.value}}
  name: nodes
spec:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: t2.small
  maxSize: 2
  minSize: 2
  role: Node
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  subnets:
{{range .availability_zones.value}}
  - {{.}}
{{end}}
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{.cluster_name.value}}
  name: bastions
spec:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: t2.micro
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  subnets:
{{range .availability_zones.value}}
  - utility-{{.}}
{{end}}

The above template will be used by the kops templating tool to create a cluster, with:

  • 3 masters, each in a different availability zone
  • 2 nodes
  • 1 bastion to have SSH access to any node of our cluster (master and nodes)

Using it: the KOPS magic

We are going to use our previous Terraform outputs as values for the template (run this in the kops/ directory).

$ TF_OUTPUT=$(cd ../terraform/deployment && terraform output -json)
$ CLUSTER_NAME="$(echo ${TF_OUTPUT} | jq -r .cluster_name.value)"


$ kops toolbox template --name ${CLUSTER_NAME} --values <( echo ${TF_OUTPUT}) --template template.yaml --format-yaml > cluster.yaml

Now cluster.yaml contains the real cluster definition. We are going to put it in the kops state S3 bucket.

$ STATE="s3://$(echo ${TF_OUTPUT} | jq -r .kops_s3_bucket_name.value)"

$ kops replace -f cluster.yaml --state ${STATE} --name ${CLUSTER_NAME} --force

$ kops create secret --name ${CLUSTER_NAME} --state ${STATE} sshpublickey admin -i ~/.ssh/id_rsa.pub

The last command will use your public key in ~/.ssh/id_rsa.pub to allow you to access the bastion host.
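If you don't have a key pair at that path yet, you can generate one first with a standard OpenSSH command:

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa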

Now that the kops state has been updated, we can use it to create the Terraform files that will represent our cluster.

$ kops update cluster \
--out=. \
--target=terraform \
--state ${STATE} \
--name ${CLUSTER_NAME}

And let’s deploy it on AWS 😃
Oops 😬, one thing is missing: KOPS is not yet compatible with Terraform 0.12.x. In fact, you must downgrade to Terraform v0.11.x in your terminal before continuing (pro tip: you can use the Terraform Switcher tool 😊).
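With the Terraform Switcher installed, the downgrade is a one-liner (the exact 0.11.x patch version is up to you):

$ tfswitch 0.11.14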

$ terraform init
$ terraform plan
$ terraform apply

Congratulations! 🎉 Our cluster is deployed on AWS with the bastion server, the load balancer and all the desired resources.

EKS deployed

Wrapping up

You should now have a cluster with multiple nodes and multiple masters, running on a VPC you control outside of kops.
This cluster uses the AWS VPC CNI plugin (amazon-vpc-cni-k8s), so pod networking uses the native network of AWS.

You should be able to see all your nodes by running the following (don't forget to add your public IP to the cluster security group):

$ kubectl get nodes
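If kubectl cannot reach the cluster yet, you may first need to export the kubeconfig from the kops state; kops can also validate the cluster for you (run from the kops/ directory where STATE and CLUSTER_NAME are still set):

$ kops export kubecfg --name ${CLUSTER_NAME} --state ${STATE}
$ kops validate cluster --name ${CLUSTER_NAME} --state ${STATE}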

You also have a bastion host to connect to your cluster VMs 😄
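Because the template sets bastionPublicName to bastion.<cluster_name>, SSH access to a private node typically goes through that host with agent forwarding. A sketch (the admin user matches the Debian-based kope.io images; get the private node IP from kubectl get nodes -o wide):

$ ssh -A admin@bastion.aymen.krypton.berlin
$ ssh admin@<private-node-ip>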

Deploy a Kubernetes Application to the cluster

Now it's time to deploy a simple application to our cluster. It's just going to be a simple nginx server serving an index.html. We will create an ECR repository through Terraform, create a container image serving the index.html (based on a standard nginx container image), build it and push it to the newly created repository.

Create ECR repository with Terraform

As usual, we follow Terraform best practices and use Terraform modules to provision our Cloud resources; the Terraform configuration of the ECR service is defined in /terraform/modules/ecr.

ecr.tf module example:

# * main.tf
// Create ECR Repo
resource "aws_ecr_repository" "krypton" {
  name                 = var.image_name
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

# * variables.tf
variable "image_name" {
  default     = "krypton"
  description = "ECR Repo name"
  type        = string
}

Let's deploy the AWS ECR resource using the root deployment folder /terraform/deployment (you can simply uncomment the code section):

# .... the rest of the modules deployments
# Create ECR Repo and push the app image
# * main.tf
module "ecr" {
  source     = "../modules/ecr"
  image_name = var.image_name
}

# * variables.tf
# .... the rest of the modules variables
variable "image_name" {
  type        = string
  default     = "aymen-krypton"
  description = "App Docker image name"
}

Now let's let Terraform do the rest (don't forget to upgrade back to Terraform v0.12.x 😉):

$ terraform init
$ terraform plan
$ terraform apply

Example of the Terraform apply execution output:

Deploy EKS with Terraform and KOPS

Build and Push The Docker Image

The source code already contains the Dockerfile needed for the application. Build, tag and push the image using the commands below (you can also find the push commands in the ECR service console section):

$ cd /k8s-deployment/src && docker build -t <image_name> .

$ docker tag <image_name>:latest <ecr_uri>/<image_name>:latest

$ docker push <ecr_uri>/<image_name>:latest
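Before pushing, Docker needs to authenticate against the registry; with AWS CLI v2 that looks roughly like this (replace the account id with your own):

$ aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.eu-central-1.amazonaws.com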

Deploy and expose the application in K8S cluster

After that, we will deploy an example application to our Kubernetes cluster and expose the website to the outside world. We will use a public load balancer for that, and the result should be reachable from the internet via hello.aymen.krypton.berlin (feel free to use your own domain name).
Since we want to expose our website securely, we need to get a valid SSL certificate from ACM (using Terraform) and attach it to the load balancer.

The following steps show you how to create a sample application and then expose it with a Kubernetes Service of type LoadBalancer:

Create a sample application

1.    To create a sample NGINX deployment, run the following commands:

$ cd k8s-deployment/deploy-app
$ kubectl apply -f deployment.yaml
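The repository already ships this deployment.yaml in /k8s-deployment/deploy-app; if you are rebuilding it from scratch, a minimal sketch could look like the following (the name and label match the Service shown below, and the image URI is the one pushed to ECR):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aymen-krypton
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aymen-krypton
  template:
    metadata:
      labels:
        app: aymen-krypton
    spec:
      containers:
        - name: aymen-krypton
          image: <ecr_uri>/<image_name>:latest
          ports:
            - containerPort: 80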

Create a LoadBalancer service

1.    To create a LoadBalancer service, we created a file called service.yaml, and then set type to LoadBalancer. See the following example:

apiVersion: v1
kind: Service
metadata:
  name: aymen-krypton
spec:
  type: LoadBalancer
  selector:
    app: aymen-krypton
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

To apply the loadbalancer service, run the following command:

$ kubectl create -f service.yaml
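Once the ACM certificate from the next section exists, a common way to terminate TLS at the ELB is through Service annotations (a sketch, not part of the repository's service.yaml; the certificate ARN is a placeholder you would take from Terraform/ACM):

metadata:
  name: aymen-krypton
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <acm_certificate_arn>
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"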

Verify the Deployment

To verify the application deployment, you can run the following kubectl commands:

|--⫸  kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
aymen-krypton-7dc69c7d7d-5bp4w   1/1     Running   0          25h
aymen-krypton-7dc69c7d7d-gx87l   1/1     Running   0          25h
aymen-krypton-7dc69c7d7d-mvrbx   1/1     Running   0          25h
|--⫸  kubectl get services 
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP                                                                  PORT(S)        AGE
aymen-krypton   LoadBalancer   10.0.3.6     afe022044489a44d8ae4a47c6f43c44c-2036026668.eu-central-1.elb.amazonaws.com   80:30770/TCP   25h
kubernetes      ClusterIP      10.0.0.1     <none>                                                                       443/TCP        40h

Create a DNS record and generate a valid SSL certificate

In order to finalise the deployment of the Kubernetes cluster on AWS using Terraform and KOPS, we should create a DNS record for our deployed application using AWS Route 53, then generate a valid SSL certificate and attach it to the application load balancer. To do all of that we will use Terraform of course 😍

As with the other modules, we define the Terraform configuration of the application environment resources in /terraform/modules/app_env as follows:

# * acm.tf
# Create an AWS certificate for hello.aymen.krypton.berlin
resource "aws_acm_certificate" "cert" {
  domain_name       = aws_route53_record.hello.name
  validation_method = "DNS"

  tags = {
    Environment = "Krypton"
    Terraform   = "true"
  }

  lifecycle {
    create_before_destroy = true
  }
}

# * dns.tf
# Data source: dns zone
data "aws_route53_zone" "zone" {
  name = var.zone_name
}

# The application public LB created by the K8S deployment in /k8s-deployment
data "aws_elb" "lb" {
  name = var.k8s_app_lb_name
}

# Create the hello.aymen.krypton.berlin Route 53 record
resource "aws_route53_record" "hello" {
  zone_id = data.aws_route53_zone.zone.zone_id
  name    = "hello.${data.aws_route53_zone.zone.name}"
  type    = "CNAME"
  ttl     = "300"
  records = [data.aws_elb.lb.dns_name]
}

# * variables.tf
variable "k8s_app_lb_name" {
  type        = string
  description = "the K8S app public LB"
}

variable "zone_name" {
  type        = string
  default     = "aymen.krypton.berlin."
  description = "Main zone name"
}

Now, let’s deploy the AWS ACM and Route 53 resource using the root deployment folder: /terraform/deployment.

# * deployment/main.tf
# .... the rest of the modules deployments
module "app_env" {
  source          = "../modules/app_env"
  k8s_app_lb_name = var.k8s_app_lb_name
  zone_name       = var.zone_name
}

# * deployment/variables.tf
# .... the rest of the modules variables
variable "k8s_app_lb_name" {
  type        = string
  description = "the K8S app public LB"
}

variable "zone_name" {
  type        = string
  default     = "aymen.krypton.berlin."
  description = "Main zone name"
}

Now let's let Terraform do the rest:

$ terraform init
$ terraform plan
$ terraform apply

Explore the Application

Excited to see the results of this long journey 😄? So am I.
Let's open a web browser and navigate to hello.aymen.krypton.berlin
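You can also check from the terminal that the DNS record resolves and that nginx answers:

$ dig +short hello.aymen.krypton.berlin
$ curl -I http://hello.aymen.krypton.berlin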

Bingo 🥳 Congratulations! Our application has been successfully deployed on our Kubernetes cluster!

Updates and Clean Up

If you make changes to your code, running the plan and apply commands again will let Terraform use its knowledge of the deployed resources (.tfstate) to calculate what changes need to be made, whether building or destroying. Finally, when you want to bring down your infrastructure, simply issue a $ terraform destroy command and down it comes.
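Keep in mind that the cluster itself was generated by kops, so a full clean-up typically means destroying the kops-generated Terraform, deleting the kops state and then destroying the shared resources. A sketch of the order (STATE and CLUSTER_NAME are the variables set earlier):

$ cd kops && terraform destroy
$ kops delete cluster --name ${CLUSTER_NAME} --state ${STATE} --yes
$ cd ../terraform/deployment && terraform destroy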

That’s all folks!

That’s all for this lab, thanks for reading 🙏
Later posts may cover how to deploy a configured cluster with hundreds of microservices deployed in one click!
