<aside> 🤔
In AWS Secrets Manager, there should already be a secret for each RDS instance.
If your apps require more key/value secrets, create them here before deploying the apps to EKS. (In this example, my backend apps require one more secret to store JWT secret values.)
</aside>
Set up the secret module; we will need outputs from other modules (eks, rds):
/* modules/**eks**/outputs.tf */
output "oidc_eks_cluster_arn" {
value = aws_iam_openid_connect_provider.eks_cluster.arn
}
output "oidc_eks_cluster_url" {
value = aws_iam_openid_connect_provider.eks_cluster.url
}
### CREATE A NEW MODULE: **secret** ###
/* modules/**secret**/variables.tf */
# Referencing from root
variable "project_name" {
type = string
}
variable "oidc_eks_cluster_arn" {
type = string
}
variable "oidc_eks_cluster_url" {
type = string
}
variable "rds_db_customer_secret_arn" {
type = string
}
variable "rds_db_shopping_secret_arn" {
type = string
}
/* main.tf (root) */
module "secret" {
source = "./modules/secret"
project_name = local.project_name
oidc_eks_cluster_arn = module.eks.oidc_eks_cluster_arn
oidc_eks_cluster_url = module.eks.oidc_eks_cluster_url
rds_db_customer_secret_arn = module.rds.rds_db_customer_secret_arn
rds_db_shopping_secret_arn = module.rds.rds_db_shopping_secret_arn
}
<aside> 🤔
For this project, my 2 backend apps need to access the same JWT access & refresh token secret values → I want to create one AWS Secrets Manager secret resource to store those values.
</aside>
(Optional) Create the secret resources and output their ARNs as needed:
/* modules/**secret**/main.tf */
# Additional Secrets
resource "aws_secretsmanager_secret" "token_secret" {
name = "${var.project_name}-token-secret"
}
/* modules/**secret**/outputs.tf */
output "token_secret_arn" {
value = aws_secretsmanager_secret.token_secret.arn
}
/* outputs.tf (root) */
output "token_secret_arn" {
value = module.secret.token_secret_arn
}
# Needed as we added a new module "secret"
terraform init
terraform validate && terraform fmt
terraform plan -out tf.plan
terraform apply "tf.plan"
After the secrets have been created, go to the AWS console and fill in the secret values you need:
//TODO: Screenshot (make sure to censor)
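If you prefer the CLI over the console, something like this also works (a sketch: the JSON keys are just the ones my apps expect, adjust them to yours):
aws secretsmanager put-secret-value \
  --secret-id **<project_name>**-token-secret \
  --secret-string '{"ACCESS_TOKEN_SECRET":"<access_token_secret_value>","REFRESH_TOKEN_SECRET":"<refresh_token_secret_value>"}'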
Create a VPC Endpoint for the AWS Secrets Manager service (secretsmanager) so the private cluster can reach it:
/* modules/**eks**/main.tf */
resource "aws_vpc_endpoint" "secrets" {
vpc_id = var.vpc_id
service_name = "com.amazonaws.${var.region_primary}.secretsmanager"
vpc_endpoint_type = "Interface"
subnet_ids = [
var.subnet_ids.eks1,
var.subnet_ids.eks2
]
security_group_ids = [aws_eks_cluster.main.vpc_config[0].cluster_security_group_id]
private_dns_enabled = true
tags = {
Name = "${var.project_name}-endpoint-secrets"
}
}
Install the Secrets Store CSI Driver + AWS Secrets Manager provider:
<aside> ℹ️
The official installation guide: https://secrets-store-csi-driver.sigs.k8s.io/getting-started/installation
The AWS provider for the driver: https://github.com/aws/secrets-store-csi-driver-provider-aws </aside>
Download the manifest files:
<aside> 📝
We will need:
- The 5 main manifest files for the SSCSI Driver
- The optional rbac-secretprovidersyncing.yaml, to sync the mounted secrets as k8s Secrets (used as environment variables for Deployments)
- The AWS provider for the SSCSI Driver </aside>
https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/download/v1.5.4/rbac-secretproviderclass.yaml
https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/download/v1.5.4/csidriver.yaml
https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/download/v1.5.4/secrets-store.csi.x-k8s.io_secretproviderclasses.yaml
https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/download/v1.5.4/secrets-store.csi.x-k8s.io_secretproviderclasspodstatuses.yaml
https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/download/v1.5.4/secrets-store-csi-driver.yaml
https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/download/v1.5.4/rbac-secretprovidersyncing.yaml
https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
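If you want to script the downloads from the local machine, here is a sketch with curl (any download method works; the file names simply mirror the URLs above):
# Secrets Store CSI Driver manifests (v1.5.4), including the optional syncing RBAC
for f in rbac-secretproviderclass.yaml csidriver.yaml \
  secrets-store.csi.x-k8s.io_secretproviderclasses.yaml \
  secrets-store.csi.x-k8s.io_secretproviderclasspodstatuses.yaml \
  secrets-store-csi-driver.yaml rbac-secretprovidersyncing.yaml; do
  curl -LO "https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/download/v1.5.4/$f"
done
# AWS provider installer
curl -LO https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml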
Create the necessary private ECR repositories for the images referenced in the manifest files, through the console or CLI, then reference them in Terraform as needed:
- sig-storage/csi-node-driver-registrar (in secrets-store-csi-driver.yaml)
- csi-secrets-store/driver (in secrets-store-csi-driver.yaml)
- sig-storage/livenessprobe (in secrets-store-csi-driver.yaml)
- aws-secrets-manager/secrets-store-csi-driver-provider-aws (in aws-provider-installer.yaml)
/* modules/**ecr**/main.tf */
## Secrets Store CSI Driver
data "aws_ecr_repository" "sscsi_node_driver_registrar" {
name = "sig-storage/csi-node-driver-registrar"
}
data "aws_ecr_repository" "sscsi_driver" {
name = "csi-secrets-store/driver"
}
data "aws_ecr_repository" "sscsi_livenessprobe" {
name = "sig-storage/livenessprobe"
}
data "aws_ecr_repository" "sscsi_aws_provider" {
name = "aws-secrets-manager/secrets-store-csi-driver-provider-aws"
}
/* modules/**ecr**/outputs.tf */
output "helper_urls" {
value = {
# ...
sscsi_node_driver_registrar = data.aws_ecr_repository.sscsi_node_driver_registrar.repository_url
sscsi_driver = data.aws_ecr_repository.sscsi_driver.repository_url
sscsi_livenessprobe = data.aws_ecr_repository.sscsi_livenessprobe.repository_url
sscsi_aws_provider = data.aws_ecr_repository.sscsi_aws_provider.repository_url
}
}
terraform validate && terraform fmt
terraform plan -out tf.plan
terraform apply "tf.plan"
From the local machine, pull the images referenced in the manifest files, then push them to your own private repositories:
docker pull --platform linux/arm64 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
docker tag registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0 **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/sig-storage/csi-node-driver-registrar:v2.13.0
docker push **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/sig-storage/csi-node-driver-registrar:v2.13.0
# Do the same for the other images (according to the manifests):
## registry.k8s.io/csi-secrets-store/driver:v1.5.4
## registry.k8s.io/sig-storage/livenessprobe:v2.15.0
## public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:2.1.0
Modify the manifest files:
# secrets-store-csi-driver.yaml
image: **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/sig-storage/csi-node-driver-registrar:v2.13.0
image: **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/csi-secrets-store/driver:v1.5.4
image: **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/sig-storage/livenessprobe:v2.15.0
# aws-provider-installer.yaml
image: **<aws_account_id>**.dkr.ecr.us-east-1.amazonaws.com/aws-secrets-manager/secrets-store-csi-driver-provider-aws:2.1.0
Then apply the Secrets Store CSI driver to the cluster:
# From the bastion host
mkdir -p helpers/sscsi-driver
# From the local machine, copy the files through SCP to the bastion host
scp -i <**path_to_access_key**> \
  ./rbac-secretproviderclass.yaml \
  ./csidriver.yaml \
  ./secrets-store.csi.x-k8s.io_secretproviderclasses.yaml \
  ./secrets-store.csi.x-k8s.io_secretproviderclasspodstatuses.yaml \
  ./secrets-store-csi-driver.yaml \
  ./rbac-secretprovidersyncing.yaml \
  ./aws-provider-installer.yaml \
ec2-user@<**bastion_eks_instance_id**>:/home/ec2-user/helpers/sscsi-driver
# From bastion host
kubectl apply -f helpers/sscsi-driver/rbac-secretproviderclass.yaml
kubectl apply -f helpers/sscsi-driver/csidriver.yaml
kubectl apply -f helpers/sscsi-driver/secrets-store.csi.x-k8s.io_secretproviderclasses.yaml
kubectl apply -f helpers/sscsi-driver/secrets-store.csi.x-k8s.io_secretproviderclasspodstatuses.yaml
kubectl apply -f helpers/sscsi-driver/secrets-store-csi-driver.yaml
kubectl apply -f helpers/sscsi-driver/rbac-secretprovidersyncing.yaml
kubectl apply -f helpers/sscsi-driver/aws-provider-installer.yaml
# Verify the driver
kubectl get ds -n kube-system
Configure an IAM role and a k8s Service Account for each secret provider (one secret provider per backend app):
<aside> ℹ️
We have already set up an IAM OpenID Connect provider for the cluster in step 6.1. Now we only need to create the IAM roles and policies.
</aside>
<aside> 🤔
Depending on your stack, you might have to create an IAM role for each (backend) app.
For this project, I have 2 backend apps; each uses its own RDS database (separate DB secret) plus the shared custom token secret. I want to set up an IAM role with an inline policy for each backend, granting access only to the secrets that backend needs.
In my project, I will create two roles:
*eks-demo-sscsi-customer-role*: access to the Customer RDS DB secret and the token secret (service account: aws-secrets-manager-sscsi-customer)
*eks-demo-sscsi-shopping-role*: access to the Shopping RDS DB secret and the token secret (service account: aws-secrets-manager-sscsi-shopping)
</aside>
Create the role with inline policy for each secret provider:
/* modules/**secret**/main.tf */
# Secrets Store CSI
## IAM Role for Customer App
resource "aws_iam_role" "sscsi_customer_role" {
name = "${var.project_name}-sscsi-customer-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRoleWithWebIdentity"
Effect = "Allow"
Principal = {
Federated = var.oidc_eks_cluster_arn
}
Condition = {
StringEquals = {
"${replace(var.oidc_eks_cluster_url, "https://", "")}:sub" = "system:serviceaccount:**default**:**aws-secrets-manager-sscsi-customer**"
"${replace(var.oidc_eks_cluster_url, "https://", "")}:aud" = "sts.amazonaws.com"
}
}
}
]
})
}
resource "aws_iam_role_policy" "sscsi_customer_policy" {
name = "SSCSI-${var.project_name}-Customer"
role = aws_iam_role.sscsi_customer_role.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
]
Resource = [
var.rds_db_customer_secret_arn,
aws_secretsmanager_secret.token_secret.arn
]
}
]
})
}
## (Do the same for other backend apps if needed)
## (In my case, I need to add another role & inline policy for the Shopping App)
Output the ARN of each role so we can use it to annotate the corresponding k8s service account later (for IRSA purposes):
/* modules/**secret**/outputs.tf */
output "sscsi_customer_role_arn" {
value = aws_iam_role.sscsi_customer_role.arn
}
output "sscsi_shopping_role_arn" {
value = aws_iam_role.sscsi_shopping_role.arn
}
/* outputs.tf (root) */
output "sscsi_customer_role_arn" {
value = module.secret.sscsi_customer_role_arn
}
output "sscsi_shopping_role_arn" {
value = module.secret.sscsi_shopping_role_arn
}
terraform validate && terraform fmt
terraform plan -out tf.plan
terraform apply "tf.plan"
From the bastion host, create a custom service account in the cluster for each secret provider, either with a manifest file or by other means:
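A minimal sketch of such a manifest: the role ARNs come from the Terraform outputs above, and the names/namespace must match the :sub condition in each role's trust policy (the file name and placeholders are mine):
# sa-sscsi.yaml (hypothetical file name)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-secrets-manager-sscsi-customer   # must match the :sub condition in the trust policy
  namespace: default
  annotations:
    # Value of the sscsi_customer_role_arn output
    eks.amazonaws.com/role-arn: arn:aws:iam::**<aws_account_id>**:role/**<project_name>**-sscsi-customer-role
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-secrets-manager-sscsi-shopping
  namespace: default
  annotations:
    # Value of the sscsi_shopping_role_arn output
    eks.amazonaws.com/role-arn: arn:aws:iam::**<aws_account_id>**:role/**<project_name>**-sscsi-shopping-role
Copy the file over with scp and apply it with kubectl apply -f, just like the driver manifests above.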
Create the ConfigMaps for non-sensitive values such as URLs:
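For example (a sketch: the names, keys, and values here are placeholders for my apps, not anything required by the driver):
# configmap-customer.yaml (hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  name: customer-config
  namespace: default
data:
  # Non-sensitive configuration only; anything sensitive stays in AWS Secrets Manager
  PORT: "8001"
  SHOPPING_SERVICE_URL: "http://shopping-service.default.svc.cluster.local:8002"
Deployments can then consume these through envFrom or configMapKeyRef as usual.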
Using a SecretProviderClass, map the secrets from AWS Secrets Manager into the cluster:
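A sketch for the Customer app. Assumptions: the objectName values (your RDS secret's actual name or ARN and the token secret name), the JSON keys inside the token secret, and the alias/Secret names are mine; username/password are the standard keys in an RDS-managed secret. The secretObjects section only works because we applied rbac-secretprovidersyncing.yaml:
# secretproviderclass-customer.yaml (hypothetical file name)
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets-customer
  namespace: default
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "**<customer_rds_secret_name_or_arn>**"
        objectType: "secretsmanager"
        jmesPath:
          - path: username
            objectAlias: DB_USERNAME
          - path: password
            objectAlias: DB_PASSWORD
      - objectName: "**<project_name>**-token-secret"
        objectType: "secretsmanager"
        jmesPath:
          - path: ACCESS_TOKEN_SECRET
            objectAlias: ACCESS_TOKEN_SECRET
          - path: REFRESH_TOKEN_SECRET
            objectAlias: REFRESH_TOKEN_SECRET
  # Sync the mounted values into a regular k8s Secret (needs rbac-secretprovidersyncing.yaml)
  secretObjects:
    - secretName: customer-secrets
      type: Opaque
      data:
        - objectName: DB_USERNAME
          key: DB_USERNAME
        - objectName: DB_PASSWORD
          key: DB_PASSWORD
        - objectName: ACCESS_TOKEN_SECRET
          key: ACCESS_TOKEN_SECRET
        - objectName: REFRESH_TOKEN_SECRET
          key: REFRESH_TOKEN_SECRET
The synced customer-secrets Secret is only created while a pod mounts this class through a secrets-store.csi.k8s.io CSI volume (volumeAttributes.secretProviderClass: aws-secrets-customer) and runs under serviceAccountName: aws-secrets-manager-sscsi-customer. Repeat the same pattern for the Shopping app.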