In this article, I will use the shorthand kubernetes + cluster = kluster.
1. To start with EKS, you need to fulfil the following prerequisites:
a. Own an AWS account (the free tier will work)
b. Create a VPC (a virtual private network that will not affect other components in your account)
c. Create an IAM role with a security group (an AWS identity with the list of permissions needed to set up EKS)
2. To create a kluster control plane, you should have the following in place:
a. Kluster name and Kubernetes version
b. Region & VPC for the kluster
c. Security for the kluster
3. Create worker nodes for your kluster (a set of EC2 instances):
a. Create them as a node group (autoscaling enabled)
b. Choose the kluster they will attach to
c. Define the security group, select the instance type and resources
d. Define the max & min number of nodes (steps 2 & 3 are sketched with the AWS CLI right after this list).
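For reference, steps 2 and 3 can also be done directly with the AWS CLI before we get to eksctl. A minimal sketch, assuming the VPC subnets, the cluster IAM role, and the node IAM role already exist (all ARNs and IDs below are placeholders):
# create the kluster control plane (step 2); role ARN and subnet/SG IDs are placeholders
$ aws eks create-cluster \
--name first-kluster \
--kubernetes-version 1.20 \
--role-arn arn:aws:iam::111122223333:role/eksClusterRole \
--resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc
# create the worker nodes as a managed node group (step 3)
$ aws eks create-nodegroup \
--cluster-name first-kluster \
--nodegroup-name linux-worker-nodes \
--subnets subnet-aaaa subnet-bbbb \
--node-role arn:aws:iam::111122223333:role/eksNodeRole \
--instance-types t2.micro \
--scaling-config minSize=2,maxSize=4,desiredSize=2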
We can then use kubectl from our local machine to access the kluster and deploy resources on it.
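For example, once kubectl is configured against the kluster, a quick smoke test could look like this (the hello-nginx deployment name is just an illustration):
# deploy a sample app and check that its pods land on the worker nodes
$ kubectl create deployment hello-nginx --image=nginx
$ kubectl get pods -o wide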
Alternatively, we can use eksctl, the official CLI tool for creating & managing klusters on EKS. It is written in Go and uses CloudFormation under the hood to set up EKS quickly and effectively.
# the command below does all of the above work at runtime, using default values
$ eksctl create cluster
To use this utility, first install eksctl by following the instructions here.
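For reference, on Linux the install usually boils down to downloading the release binary; a sketch (check the eksctl releases page for the current URL):
# download the latest eksctl release and move it onto the PATH
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version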
NOTE: Before creating a cluster with the eksctl utility, it is important to connect and authenticate to your AWS account; follow the instructions below to connect.
$ aws configure # this will prompt you to provide the following
AWS Access Key ID [*********SIXH]: AKIAUYDUMMY73NXMF
AWS Secret Access Key [*********6oB6]: mbJBDUMMYfm09oqONa
Default region name [ap-south-1]:
Default output format [None]: json
# this user will be able to access the cluster
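Before creating anything, you can sanity-check that the credentials actually work:
# confirm the CLI is authenticated as the expected user and account
$ aws sts get-caller-identity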
$ eksctl create cluster \
--name first-kluster \
--version 1.20 \
--region ap-south-1 \
--nodegroup-name linux-worker-nodes \
--node-type t2.micro \
--nodes 2
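Cluster creation typically takes 15-20 minutes; once it finishes, you can confirm the kluster exists:
# verify the kluster was created
$ eksctl get cluster --region ap-south-1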
# create public node group
$ eksctl create nodegroup --cluster=first-kluster \
--region=ap-south-1 \
--name=node-group-name \
--node-type=t2.micro \
--nodes=2 \
--nodes-min=2 \
--nodes-max=4 \
--node-volume-size=20 \
--ssh-access \
--ssh-public-key=ssh-key-pair-name \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access
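Because the node group was created with autoscaling bounds, it can later be resized within those bounds, e.g.:
# scale the node group to 3 nodes (must stay within nodes-min/nodes-max)
$ eksctl scale nodegroup --cluster=first-kluster --name=node-group-name --nodes=3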
# List NodeGroups
$ eksctl get nodegroup --cluster=<clusterName>
# List Nodes
$ kubectl get nodes -o wide
# if kubectl fails to list the nodes, point it at the right kubeconfig and AWS profile:
export KUBECONFIG=$LOCATIONofKUBECONFIG/kubeconfig_myEKSCluster
export AWS_DEFAULT_REGION=eu-west-2
export AWS_DEFAULT_PROFILE=dev
$ aws eks update-kubeconfig --name myEKSCluster --region=eu-west-2
Added new context arn:aws:eks:eu-west-2:295XXXX62576:cluster/myEKSCluster to $PWD/kubeconfig_myEKSCluster
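You can then confirm that kubectl is now pointing at the new context:
# verify the active context targets the EKS kluster
$ kubectl config current-context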
Instead of passing everything as flags, eksctl also accepts a declarative config file (e.g. cluster.yaml) describing the kluster and its node groups:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: first-kluster
  region: ap-south-1
nodeGroups:
  - name: linux-worker-nodes-1
    instanceType: t2.micro
    desiredCapacity: 5
  - name: linux-worker-nodes-2
    instanceType: m5.large
    desiredCapacity: 2
$ eksctl create cluster -f cluster.yaml
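The same config file can later drive node-group changes as well; for example, after adding a new entry under nodeGroups, the command below creates only the node groups that don't exist yet:
# create any node groups from the config file that are missing from the kluster
$ eksctl create nodegroup --config-file=cluster.yaml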
# In order to delete the cluster
$ eksctl delete cluster --name first-kluster