eksctl bastion host to access and manage a private EKS cluster

Create an IAM role for the bastion host, with the following policies:
- eksctl minimum IAM policies: https://eksctl.io/usage/minimum-iam-policies/
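A minimal sketch of doing this with the AWS CLI is below. The role, profile, and policy names are placeholders, and attaching AmazonSSMManagedInstanceCore is an assumption made here because the host will be reached over Session Manager:
# Trust policy that lets EC2 assume the role (placeholder file name)
cat > bastion-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF
aws iam create-role --role-name eks-bastion-role --assume-role-policy-document file://bastion-trust-policy.json
# Attach the eksctl minimum policies created from the link above (ARN is a placeholder)
aws iam attach-role-policy --role-name eks-bastion-role --policy-arn arn:aws:iam::<account_id>:policy/<eksctl_minimum_policy>
# SSM agent permissions, assumed here so Session Manager can reach the instance
aws iam attach-role-policy --role-name eks-bastion-role --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
# Wrap the role in an instance profile so it can be attached to the EC2 instance
aws iam create-instance-profile --instance-profile-name eks-bastion-profile
aws iam add-role-to-instance-profile --instance-profile-name eks-bastion-profile --role-name eks-bastion-role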
Create a security group for the bastion host:
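As a sketch, creating the security group with the AWS CLI might look like this (group name and VPC ID are placeholders); assuming access goes only through SSM, no inbound rules are strictly required:
aws ec2 create-security-group \
  --group-name eks-bastion-sg \
  --description "Bastion host for the private EKS cluster" \
  --vpc-id <vpc_id>
# No inbound rules are needed for SSM; the default allow-all outbound rule covers HTTPS to the SSM and EKS endpoints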
Create the EC2 access instance:
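A hedged example of launching the instance with the AWS CLI; every identifier below is a placeholder, and the arm64 instance type is assumed only because the later steps install arm64 binaries:
aws ec2 run-instances \
  --image-id <arm64_amazon_linux_ami_id> \
  --instance-type t4g.small \
  --key-name <ec2_access_key_name> \
  --subnet-id <private_subnet_id> \
  --security-group-ids <bastion_sg_id> \
  --iam-instance-profile Name=<bastion_instance_profile_name> \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=eks-bastion}]'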
Before creating a session from the local machine to the bastion host:
- Install the SSM plugin for the AWS CLI on the local machine: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html
- Enable SSH connections over SSM, following the guide here: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html
# Add this to SSH config file
# Linux / macOS: ~/.ssh/config
host i-* mi-*
ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
# Windows: C:\Users\<username>\.ssh\config
host i-* mi-*
ProxyCommand C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters portNumber=%p"
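Optionally, before wiring SSH into SSM, you can check that the plugin works with a plain Session Manager session (the instance ID is a placeholder):
# Should drop you into a shell on the bastion host without SSH
aws ssm start-session --target <instance_id>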
Also make sure the downloaded EC2 access key is not publicly viewable:
# Linux / macOS
chmod 400 <path_to_access_key>
# Windows
icacls.exe <path_to_access_key> /reset
icacls.exe <path_to_access_key> /GRANT:R "$($env:USERNAME):(R)"
icacls.exe <path_to_access_key> /inheritance:r
Then, using the EC2 access key, SSH over SSM to the bastion host using its instance ID:
ssh -i <path_to_access_key> ec2-user@<instance_id>
On the bastion host, install kubectl and eksctl:
- kubectl: https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
- eksctl: https://eksctl.io/installation/

# Create a binary directory in home, then register it in PATH
mkdir -p $HOME/bin
export PATH=$HOME/bin:$PATH && echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
## Installing **kubectl** (Linux arm64 version) ##
# Download the binary
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.31.0/2024-09-12/bin/linux/arm64/kubectl
# Apply execute permission
chmod +x ./kubectl
# Move the binary to a folder in PATH
mv ./kubectl $HOME/bin/kubectl
# Verify kubectl
kubectl version --client
## Installing **eksctl** (Linux arm64 version) ##
# Download the gzip
curl -LO https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_arm64.tar.gz
# Extract the gzip, and move the binary to PATH
tar -xzf ./eksctl_Linux_arm64.tar.gz
mv ./eksctl $HOME/bin/eksctl
rm ./eksctl_Linux_arm64.tar.gz
# Verify eksctl and IAM role
eksctl info
aws sts get-caller-identity
Create a cluster config file for eksctl; here is an example:
# cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-demo-cluster
  region: us-east-1
  version: "1.31"

privateCluster:
  enabled: true

vpc:
  subnets:
    private:
      us-east-1a:
        id: subnet-080267c9513e12135
      us-east-1b:
        id: subnet-067af50edb4e14159

managedNodeGroups:
  - name: general
    instanceType: t4g.medium
    minSize: 2
    maxSize: 6
    desiredCapacity: 2
    privateNetworking: true
    volumeSize: 20
    version: "1.31"
Notes on the config:
- version: "1.31" - Make sure the cluster has the same version as the kubectl installed on the bastion host
- privateCluster: enabled: true - Makes the cluster fully-private
- vpc & subnets - Use the existing VPC and subnets configured in step 2
- managedNodeGroups - Create the node groups; specify the instance type and desired capacity (minimum 2, to span the 2 AZs)
- privateNetworking - Must be set to true (nodes are in private subnets)
- volumeSize: 20 - Size of the EBS volume attached to each node = 20 GB

Use eksctl to create the EKS cluster:
# From local machine, copy the file through SCP to the bastion host
scp -i <path_to_access_key> ./cluster.yaml ec2-user@<instance_id>:/home/ec2-user
# From bastion host
eksctl create cluster -f ./cluster.yaml
eksctl will start creating the cluster by running two CloudFormation stacks: the cluster stack and the node group stack. This process should take around 20 minutes.

<aside> ☝
Right after the cluster stack starts running, do this!
This stack will create two security groups: ControlPlaneSecurityGroup and ClusterSharedNodeSecurityGroup. As soon as the two security groups are created, add this inbound rule to each of them, before the cluster stack finishes (a CLI sketch follows this note):
HTTPS (port 443) - Source: the eksctl bastion host security group created in step 3.1
</aside>
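If you prefer the CLI over the console for that step, the rule can be added like this (run once for each of the two security groups; the group IDs are placeholders):
# Allow HTTPS from the bastion host security group (repeat for ControlPlaneSecurityGroup and ClusterSharedNodeSecurityGroup)
aws ec2 authorize-security-group-ingress \
  --group-id <control_plane_or_shared_node_sg_id> \
  --protocol tcp --port 443 \
  --source-group <bastion_sg_id>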
<aside> ❓
Why add this inbound rule immediately?
This is because eksctl will add VPC endpoints to the ClusterSharedNodeSecurityGroup, causing the bastion host to ignore the NAT gateway and use the VPC endpoints instead → this makes the rest of eksctl's cluster creation process fail, as the old security group doesn't allow traffic from the bastion host.
Currently, in eksctl version 0.194.0, there is no way to assign extra inbound rules to ControlPlaneSecurityGroup and ClusterSharedNodeSecurityGroup through the cluster config file. There are ways to completely replace these two security groups, but I think it is better to use the default rules created by eksctl, then add new rules later.
What adding the rule to the two security groups does:
- ControlPlaneSecurityGroup: the rule allows the bastion host to communicate with the control plane through kubectl
- ClusterSharedNodeSecurityGroup: the rule allows the bastion host to communicate with the newly created VPC endpoints through eksctl
</aside>
The cluster stack mainly includes:
The node group stack mainly includes:
During the process, eksctl will also add EKS add-ons and update the kubeconfig, so we can manage the cluster through kubectl.
Test connectivity to the private cluster after creation:
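For example, from the bastion host (cluster name and region as in the sample config above):
# Basic connectivity checks against the private API endpoint
kubectl get svc                        # should return the default "kubernetes" ClusterIP service
kubectl get nodes -o wide              # should list the nodes from the managed node group
eksctl get cluster --region us-east-1  # should list eks-demo-cluster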
After the cluster has been created, you might want to grant the account you use in the AWS console an admin cluster role, through the Access tab on the cluster's console page.
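The same can be done from the CLI with EKS access entries; this is a sketch, assuming the console identity is an IAM user (the principal ARN is a placeholder):
# Create an access entry for the console identity, then attach the cluster-admin access policy
aws eks create-access-entry \
  --cluster-name eks-demo-cluster \
  --principal-arn arn:aws:iam::<account_id>:user/<console_user>
aws eks associate-access-policy \
  --cluster-name eks-demo-cluster \
  --principal-arn arn:aws:iam::<account_id>:user/<console_user> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster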