<aside> ℹ️
Official documentation of the AWS LBC installation guide: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.16/deploy/installation/
</aside>
Configure the IAM role and k8s ServiceAccount for the controller:
Set up an IAM OpenID Connect provider for the cluster:
/* modules/**eks**/main.tf */
# Load Balancer Controller
## OIDC Provider
data "tls_certificate" "eks_cluster" {
  url = aws_eks_cluster.main.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks_cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks_cluster.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.main.identity[0].oidc[0].issuer

  tags = {
    Name = "${var.project_name}-eks-cluster-oidc"
  }
}
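If you want to confirm that the provider Terraform registers matches the cluster's issuer, both can be inspected with the AWS CLI (the cluster name eks-demo-cluster is the example used later in this guide):
# Issuer URL reported by the cluster (assumes the cluster is named eks-demo-cluster)
aws eks describe-cluster --name eks-demo-cluster --query "cluster.identity.oidc.issuer" --output text
# IAM OIDC providers; one entry should match the issuer above
aws iam list-open-id-connect-providers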
Create the AWSLoadBalancerControllerIAMPolicy policy as instructed by the official guide (https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.16.0/docs/install/iam_policy.json), along with the IAM role for the AWS LBC:
## LBC's IAM Policy & Role
resource "aws_iam_policy" "aws_lbc_policy" {
  name   = "${var.project_name}-lbc-policy"
  policy = <<EOT
# copy the whole policy in JSON format here
EOT
}

resource "aws_iam_role" "aws_lbc_role" {
  name = "${var.project_name}-lbc-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity"
        Effect = "Allow"
        Principal = {
          Federated = aws_iam_openid_connect_provider.eks_cluster.arn
        }
        Condition = {
          StringEquals = {
            "${replace(aws_iam_openid_connect_provider.eks_cluster.url, "https://", "")}:sub" = "system:serviceaccount:kube-system:aws-load-balancer-controller"
            "${replace(aws_iam_openid_connect_provider.eks_cluster.url, "https://", "")}:aud" = "sts.amazonaws.com"
          }
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "aws_lbc" {
  role       = aws_iam_role.aws_lbc_role.name
  policy_arn = aws_iam_policy.aws_lbc_policy.arn
}
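The policy document itself can be downloaded from the URL above and pasted into the heredoc (or kept as a local JSON file):
# Fetch the official LBC IAM policy (v2.16.0) referenced above
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.16.0/docs/install/iam_policy.json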
Output the ARN of the AWS LBC role so we can annotate the k8s ServiceAccount with it later (for IRSA purposes):
/* modules/**eks**/outputs.tf */
output "aws_lbc_role_arn" {
  value = aws_iam_role.aws_lbc_role.arn
}

/* outputs.tf (root) */
output "aws_lbc_role_arn" {
  value = module.eks.aws_lbc_role_arn
}
terraform validate && terraform fmt
terraform plan -out tf.plan
terraform apply "tf.plan"
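Once applied, the role ARN needed for the ServiceAccount annotation can be read from the root output:
# Prints the ARN to paste into the eks.amazonaws.com/role-arn annotation later on
terraform output aws_lbc_role_arn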
We will create the k8s ServiceAccount for the AWS LBC later, by modifying the manifest file from the official guide.
Install cert-manager (the AWS LBC manifest relies on it to provision TLS certificates for its admission webhook):
<aside> ☝
Since the EKS cluster is private and has no access to the Internet, you cannot simply apply the manifest file like this:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml
The images in the manifest file are not from your private AWS ECR repositories, for example quay.io/jetstack/cert-manager-cainjector:v1.12.3.
The same applies to all the other helper manifests used in this project (including those in the following steps).
</aside>
Download the manifest file: https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml
Create the necessary private ECR repositories (as named in the manifest file) through the console or CLI (see the CLI sketch after the Terraform snippet below), then reference them in Terraform where needed:
jetstack/cert-manager-webhook (in cert-manager.yaml)
jetstack/cert-manager-cainjector (in cert-manager.yaml)
jetstack/cert-manager-controller (in cert-manager.yaml)
/* modules/**ecr**/main.tf */
# Helper repositories
## cert-manager
data "aws_ecr_repository" "cert_manager_webhook" {
  name = "jetstack/cert-manager-webhook"
}

data "aws_ecr_repository" "cert_manager_cainjector" {
  name = "jetstack/cert-manager-cainjector"
}

data "aws_ecr_repository" "cert_manager_controller" {
  name = "jetstack/cert-manager-controller"
}
/* modules/**ecr**/outputs.tf */
output "helper_urls" {
  value = {
    cert_manager_webhook    = data.aws_ecr_repository.cert_manager_webhook.repository_url
    cert_manager_cainjector = data.aws_ecr_repository.cert_manager_cainjector.repository_url
    cert_manager_controller = data.aws_ecr_repository.cert_manager_controller.repository_url
  }
}

/* outputs.tf (root) */
output "ecr_helper_urls" {
  value = module.ecr.helper_urls
}
terraform validate && terraform fmt
terraform plan -out tf.plan
terraform apply "tf.plan"
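Note that the data sources above only look the repositories up; they must already exist when the plan runs. If you create them with the AWS CLI rather than the console, a minimal sketch (assuming region us-east-1, as used for the registry URLs below) is:
# Create the private repositories required by the cert-manager manifest
aws ecr create-repository --repository-name jetstack/cert-manager-webhook --region us-east-1
aws ecr create-repository --repository-name jetstack/cert-manager-cainjector --region us-east-1
aws ecr create-repository --repository-name jetstack/cert-manager-controller --region us-east-1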
From the local machine, pull the images referenced in the manifest file, then push them to your own private repositories:
docker pull --platform linux/arm64 quay.io/jetstack/cert-manager-webhook:v1.12.3
docker tag quay.io/jetstack/cert-manager-webhook:v1.12.3 **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/jetstack/cert-manager-webhook:v1.12.3
docker push **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/jetstack/cert-manager-webhook:v1.12.3
# Do the same for the other images (according to the manifest):
## quay.io/jetstack/cert-manager-cainjector:v1.12.3
## quay.io/jetstack/cert-manager-controller:v1.12.3
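Pushing to the private registry requires an authenticated Docker session; if you have not logged in yet, authenticate first (same region and registry as above):
# Authenticate Docker against the private ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com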
Modify the cert-manager.yaml manifest file:
# cert-manager.yaml
## Change the original images to the private ECR images
image: **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/jetstack/cert-manager-webhook:v1.12.3
image: **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/jetstack/cert-manager-cainjector:v1.12.3
image: **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/jetstack/cert-manager-controller:v1.12.3
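Since all three images only change their registry prefix, the substitution can also be done in one pass with sed (GNU sed shown; adapt for macOS/BSD sed):
# Rewrite every quay.io/jetstack image reference to the private ECR registry
sed -i 's|quay.io/jetstack|**<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/jetstack|g' cert-manager.yaml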
Then apply the cert-manager to the cluster:
# From bastion host
mkdir helpers
mkdir helpers/cert-manager
# From local machine, copy the file through SCP to the bastion host
scp -i <**path_to_access_key**> ./cert-manager.yaml ec2-user@<**bastion_eks_instance_id**>:/home/ec2-user/helpers/cert-manager
# From bastion host
kubectl apply -f ./helpers/cert-manager/cert-manager.yaml
# Verify the cert-manager deployments
kubectl get deployment -n cert-manager
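Before moving on, it can help to wait until all cert-manager deployments report as available, since the LBC manifest depends on the cert-manager webhook:
# Block until the cert-manager, cainjector and webhook deployments are available
kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=120s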
Install the Load Balancer Controller:
Download the necessary manifest files for the controller (v2_16_0_full.yaml) and the IngressClass (v2_16_0_ingclass.yaml):
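Assuming the standard GitHub release-asset URLs for v2.16.0 (verify them against the official guide), the download could look like:
# Controller manifest and IngressClass manifest from the v2.16.0 release
curl -LO https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.16.0/v2_16_0_full.yaml
curl -LO https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.16.0/v2_16_0_ingclass.yaml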
Create the necessary private ECR repository (as named in the manifest file) through the console or CLI, then reference it in Terraform where needed:
eks/aws-load-balancer-controller (in v2_16_0_full.yaml)
/* modules/**ecr**/main.tf */
## AWS LBC
data "aws_ecr_repository" "aws_lbc" {
name = "eks/aws-load-balancer-controller"
}
/* modules/**ecr**/outputs.tf */
output "helper_urls" {
  value = {
    # ...
    aws_lbc = data.aws_ecr_repository.aws_lbc.repository_url
  }
}
terraform validate && terraform fmt
terraform plan -out tf.plan
terraform apply "tf.plan"
From the local machine, pull the image referenced in the manifest file, then push it to your own private repository:
docker pull --platform linux/arm64 public.ecr.aws/eks/aws-load-balancer-controller:v2.16.0
docker tag public.ecr.aws/eks/aws-load-balancer-controller:v2.16.0 **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/eks/aws-load-balancer-controller:v2.16.0
docker push **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/eks/aws-load-balancer-controller:v2.16.0
Modify the v2_16_0_full.yaml manifest file:
# v2_16_0_full.yaml
## Change the cluster name and the image to the private ECR image
containers:
- args:
  - --cluster-name=**eks-demo-cluster**
  ...
  image: **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/eks/aws-load-balancer-controller:v2.16.0
## Add annotation to the ServiceAccount
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  annotations:
    eks.amazonaws.com/role-arn: **<aws_lbc_role_arn>**
---
Then apply the LBC to the cluster:
# From the bastion host
mkdir helpers/aws-lbc
# From local machine, copy the file through SCP to the bastion host
scp -i <**path_to_access_key**> ./v2_16_0_*.yaml ec2-user@<**bastion_eks_instance_id**>:/home/ec2-user/helpers/aws-lbc
# From bastion host
kubectl apply -f ./helpers/aws-lbc/v2_16_0_full.yaml
## Wait a few seconds for the first manifest to be applied, then apply the IngressClass manifest
kubectl apply -f ./helpers/aws-lbc/v2_16_0_ingclass.yaml
# Verify the LBC
kubectl get deployment -n kube-system aws-load-balancer-controller
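If the controller deployment never becomes ready, its logs usually explain why (for example, timeouts reaching the ELB API from a fully private cluster, which the VPC endpoint below addresses):
# Inspect the controller logs for errors such as ELB API timeouts
kubectl logs -n kube-system deployment/aws-load-balancer-controller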
Create a VPC Endpoint for the elasticloadbalancing service, so the private cluster can reach the AWS ELB API:
/* modules/**eks**/main.tf */
resource "aws_vpc_endpoint" "elb" {
  vpc_id            = var.vpc_id
  service_name      = "com.amazonaws.${var.region_primary}.elasticloadbalancing"
  vpc_endpoint_type = "Interface"
  subnet_ids = [
    var.subnet_ids.eks1,
    var.subnet_ids.eks2
  ]
  security_group_ids  = [aws_eks_cluster.main.vpc_config[0].cluster_security_group_id]
  private_dns_enabled = true

  tags = {
    Name = "${var.project_name}-endpoint-elb"
  }
}
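To confirm the endpoint was created and is available, something like the following works (region us-east-1 assumed):
# List the state of the ELB interface endpoint created above
aws ec2 describe-vpc-endpoints --region us-east-1 --filters "Name=service-name,Values=com.amazonaws.us-east-1.elasticloadbalancing" --query "VpcEndpoints[].State"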
Create an ACM certificate so the load balancer can serve HTTPS (443) and redirect incoming HTTP (80) traffic to it (SSL redirect), for the domain below:
eks-demo.bachhv.com
/* modules/**eks**/locals.tf */
locals {
  domain_name = "eks-demo.bachhv.com"
}

## ACM Certificate
resource "aws_acm_certificate" "cert" {
  domain_name       = local.domain_name
  validation_method = "DNS"

  tags = {
    Name = "${var.project_name}-cert"
  }

  lifecycle {
    create_before_destroy = true
  }
}
/* modules/**eks**/outputs.tf */
output "acm_cert_validation_record" {
  value = {
    name  = tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_name
    value = tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_value
    type  = tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_type
  }
}

output "acm_cert_arn" {
  value = aws_acm_certificate.cert.arn
}

/* outputs.tf (root) */
output "acm_cert_validation_record" {
  value = module.eks.acm_cert_validation_record
}

output "acm_cert_arn" {
  value = module.eks.acm_cert_arn
}
terraform validate && terraform fmt
terraform plan -out tf.plan
terraform apply "tf.plan"
After the request, ACM provides a CNAME record that must be validated. Go to your domain's DNS management console (e.g. Route 53, Cloudflare), and add the CNAME record based on the acm_cert_validation_record output.
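You can print the record to add with terraform output, and later confirm that ACM has finished validating the certificate:
# Show the CNAME record to create at your DNS provider
terraform output acm_cert_validation_record
# Once the record propagates, the status should change to ISSUED (region us-east-1 assumed)
aws acm describe-certificate --certificate-arn **<acm_cert_arn>** --query "Certificate.Status" --region us-east-1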