  • AWS supports up to three versions of Kubernetes at once.
  • AWS aims to stay in line with the Kubernetes community.
  • AWS integrates Kubernetes RBAC with AWS IAM.
  • AWS automatically handles the built-in Kubernetes CA system.
  • There is an EKS-optimized Ubuntu image.
  • AWS provisions ENIs in the VPC that are specific to the EKS cluster.
  • EKS does not use an overlay network like flannel; it uses VPC networking directly.
  • The AWS VPC CNI plugin assigns a unique VPC IP to every pod that runs.
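Because the VPC CNI draws pod IPs from ENI secondary addresses, the instance type caps how many pods a node can run. A quick sketch of the published formula (the ENI limits below are for a t3.small and are an assumption to verify against the AWS ENI limits table):

```shell
# Max pods per node under the AWS VPC CNI:
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# ENI limits below are assumed values for a t3.small.
enis=3
ips_per_eni=4
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "t3.small max pods: ${max_pods}"   # prints 11
```

This is why small instance types fill up on pods long before they run out of CPU or memory.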


  • Runs one cluster per region
  • Uses Envoy Proxy


Pricing

  • Worker nodes: EC2 pricing
  • EKS control plane: $0.20/hr

Period     Cost
Per Hour   $0.20
Per Day    $4.80
Per Week   $33.60
Per Month  $146.00
Per Year   $1,752.00

  • Presuming that we compose a control plane of three (x3) t3.large instances, it will cost us $0.2496/hr.
  • Presuming that we compose a control plane of five (x5) t3.medium instances, it will cost us $0.208/hr.
  • Based on the cost of hosting the Kubernetes control plane ourselves, there is a savings from using the EKS-managed control plane rather than rolling our own instances, to say nothing of the manpower saved.
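The comparison above is simple arithmetic; a sketch using the per-hour figures from the bullets (t3 on-demand prices as quoted above):

```shell
# Hourly control-plane cost comparison, prices taken from the bullets above.
eks_hourly=0.20
t3_large_x3=$(awk 'BEGIN { printf "%.4f", 3 * 0.0832 }')
t3_medium_x5=$(awk 'BEGIN { printf "%.4f", 5 * 0.0416 }')
echo "EKS control plane: \$${eks_hourly}/hr"
echo "3x t3.large:       \$${t3_large_x3}/hr"    # prints 0.2496
echo "5x t3.medium:      \$${t3_medium_x5}/hr"   # prints 0.2080
```

Either self-hosted option lands above the $0.20/hr EKS price before counting operational effort.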

Security Certifications

  • HIPAA-eligible
  • PCI-DSS (coming soon)

Load Balancing

Support for all three load balancer types:

  • CLB (Classic Load Balancer)
  • NLB (Network Load Balancer)
  • ALB (Application Load Balancer)

Install tools for working with EKS

Tool                 Description
AWS CLI (aws)        AWS Command Line Interface.
eksctl               CLI for Amazon EKS; uses AWS CloudFormation to provision an EKS cluster.
kubectl              Kubernetes command-line tool; controls the Kubernetes cluster once provisioned. Used for provisioning pods, services, etc.


AWS CLI

pip3 install awscli --upgrade --user

EKS CTL (EKS Cuttle)

  • A simple command line utility for creating and managing Kubernetes clusters on Amazon EKS.
  • https://github.com/weaveworks/eksctl
wget "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_darwin_amd64.tar.gz"
tar xf eksctl_darwin_amd64.tar.gz
cp eksctl ~/bin/
chmod 0700 ~/bin/eksctl

KubeCTL (Cube Cuttle)

  • kubectl can be downloaded from the Kubernetes project; there are no differences between it and the version on the AWS website.
  • Amazon distributes the kubectl version that matches the EKS version.
  • https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
wget https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/darwin/amd64/kubectl
cp kubectl ~/bin/
chmod 0700 ~/bin/kubectl


AWS IAM Authenticator (aws-iam-authenticator)

  • A plugin for kubectl.
  • Used for authenticating kubectl connections to EKS.
wget https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/darwin/amd64/aws-iam-authenticator
cp aws-iam-authenticator ~/bin/
chmod 0700 ~/bin/aws-iam-authenticator
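For reference, kubectl invokes the authenticator through an exec stanza in ~/.kube/config. A sketch of what that entry looks like (the user name is illustrative, and the apiVersion reflects the v1alpha1 era of this plugin):

```yaml
# Illustrative kubeconfig "users" entry; eksctl writes the real one.
users:
- name: test1   # illustrative
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args: ["token", "-i", "test1"]
```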

Create a Cluster using eksctl

  • eksctl creates a CloudFormation stack to create the new cluster.
export CLUSTER_NAME="test1"
export KUBERNETES_VERSION="1.13"
export NODE_INSTANCE_TYPE="t3.small"
export NODES=1
export NODE_MIN=1
export NODE_MAX=4
export AWS_REGION="us-east-1"

eksctl create cluster \
  --name ${CLUSTER_NAME} \
  --version ${KUBERNETES_VERSION} \
  --nodegroup-name standard-workers \
  --node-type ${NODE_INSTANCE_TYPE} \
  --nodes ${NODES} \
  --nodes-min ${NODE_MIN} \
  --nodes-max ${NODE_MAX} \
  --node-ami auto \
  --alb-ingress-access \
  --appmesh-access \
  --full-ecr-access \
  --ssh-public-key "~/.ssh/id_rsa.pub"

eksctl utils describe-stacks --region=${AWS_REGION} --name=${CLUSTER_NAME}

Troubleshooting: CREATE_FAILED - the targeted availability zone does not currently have sufficient capacity to support the cluster

[✖]  AWS::EKS::Cluster/ControlPlane: CREATE_FAILED – "Cannot create cluster 'test1' because us-east-1e, 
the targeted availability zone, does not currently have sufficient capacity to support the cluster. 
Retry and choose from these availability zones:
us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1f 

(Service: AmazonEKS; Status Code: 400;
 Error Code: UnsupportedAvailabilityZoneException;
 Request ID: 5fe332bb-a335-11e9-993a-278300533054)"

Explicitly specify the target availability zones using the --zones switch:

eksctl delete cluster --region=us-east-1 --name=test1

eksctl create cluster \
  --name test1 \
  --version 1.13 \
  --region us-east-1 \
  --zones "us-east-1a,us-east-1b" \
  --nodegroup-name standard-workers \
  --node-type t3.small \
  --nodes 1 \
  --nodes-min 1 \
  --nodes-max 4 \
  --node-ami auto \
  --alb-ingress-access \
  --appmesh-access \
  --full-ecr-access \
  --ssh-public-key "~/.ssh/id_rsa.pub"

Create a kubeconfig for Amazon EKS

  • Amazon EKS uses the aws eks get-token command with kubectl for cluster authentication.
  • This will create: ~/.kube/config
export AWS_REGION="us-east-1"
export EKS_CLUSTER_NAME="test1"

aws sts get-caller-identity
aws eks --region ${AWS_REGION} update-kubeconfig --name ${EKS_CLUSTER_NAME}
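update-kubeconfig writes an exec credential-plugin entry that shells out to aws eks get-token. A sketch of what lands in ~/.kube/config (the account ID in the ARN is illustrative):

```yaml
users:
- name: arn:aws:eks:us-east-1:123456789012:cluster/test1   # illustrative ARN
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: ["--region", "us-east-1", "eks", "get-token", "--cluster-name", "test1"]
```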

kubectl get svc
kubectl get nodes
kubectl get pods

ALB Ingress Controller

  • https://github.com/kubernetes-sigs/aws-alb-ingress-controller

To use an internal load balancer or an NLB, annotate the Service:

  • Internal: service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  • NLB: service.beta.kubernetes.io/aws-load-balancer-type: nlb
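As a sketch, attaching the internal and NLB annotations to a Service looks like this (the name and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-internal   # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  selector:
    app: echo-pod
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
```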

cat <<EOF> deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo-pod
  template:
    metadata:
      labels:
        app: echo-pod
    spec:
      containers:
      - name: echoheaders
        image: k8s.gcr.io/echoserver:1.10
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
EOF
kubectl create -f deployment.yaml

kubectl get pods

kubectl create -f service.yaml

kubectl get service

kubectl describe svc echo-service

kubectl delete -f service.yaml

export AWS_REGION="us-east-1"
export AWS_USER_ID="664214954715"
export ACM_CERTIFICATE_ID="ad0a12e7-70a3-42a0-be84-c0370cc775cc"

## This service definition will create a Classic Load Balancer (ELB)

cat <<EOF> service.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:${AWS_REGION}:${AWS_USER_ID}:certificate/${ACM_CERTIFICATE_ID}
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  selector:
    app: echo-pod
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
  type: LoadBalancer
EOF

kubectl create -f service.yaml

kubectl delete -f service.yaml
kubectl delete -f deployment.yaml
kubectl get nodes

eksctl delete cluster --region us-east-1 --name test1
categories: AWS | EKS | EC2 | Kubernetes |