# Kubernetes Cluster Setup on AWS EC2

## Prerequisites

- AWS account with appropriate permissions
- 3 EC2 instances running Ubuntu 18.04 or later:
  - Master node: t3.medium (minimum 2 CPUs, 2 GB RAM)
  - Worker nodes: t3.micro or larger (minimum 1 CPU, 1 GB RAM)
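If you prefer to provision the instances from the command line, the sketch below launches one master and two workers with the AWS CLI. The AMI ID, key pair name, and security group ID are placeholders rather than real values; substitute your own.

```bash
# Hypothetical IDs throughout: replace ami-xxxxxxxx, my-key, and sg-xxxxxxxx
# with values from your own account and region.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t3.medium \
  --count 1 \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-master}]'

aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t3.micro \
  --count 2 \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-worker}]'
```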
## Step 1: Update System Packages

Run on all nodes:

```bash
sudo apt-get update
```
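One requirement the packages don't handle for you: kubeadm's preflight checks refuse to initialize a node with swap enabled. Most Ubuntu EC2 AMIs ship without swap, but it's worth disabling it explicitly on every node:

```bash
# Disable swap now, and comment out any swap entry so it stays off after reboot
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```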
## Step 2: Install Docker

Run on all nodes:

```bash
sudo apt-get install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker
```
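The docker.io package defaults to the cgroupfs cgroup driver, while kubeadm expects the container runtime and kubelet to agree on the systemd driver. If kubeadm later warns about mismatched cgroup drivers, pointing Docker at systemd usually resolves it; a minimal /etc/docker/daemon.json (the file does not exist by default, so this creates it):

```bash
# Switch Docker to the systemd cgroup driver, restart, and verify
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
sudo docker info | grep -i cgroup   # should report: Cgroup Driver: systemd
```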
## Step 3: Install Kubernetes Components

Run on all nodes. Note the trailing `-` on `apt-key add`, which tells it to read the key from stdin:

```bash
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb https://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update
sudo apt-get install -y kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl
```

The `apt-mark hold` keeps apt from upgrading the Kubernetes packages out from under a running cluster.
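To confirm the components installed correctly and the versions are pinned:

```bash
kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold   # should list kubeadm, kubelet, and kubectl
```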
## Step 4: Initialize the Master Node

Run on the master only. The `10.244.0.0/16` CIDR matches flannel's default pod network, installed in step 6:

```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

On success, `kubeadm init` prints a complete `kubeadm join` command; save it for step 7.
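The token embedded in that join command expires after 24 hours by default. If you lose the output or the token expires, you can regenerate a complete join command on the master:

```bash
# Creates a fresh token and prints the full kubeadm join command
sudo kubeadm token create --print-join-command
```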
## Step 5: Configure kubectl

Still on the master, copy the admin kubeconfig into your user's home directory:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
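A quick check that kubectl can now reach the API server:

```bash
kubectl cluster-info
kubectl get nodes   # the master shows NotReady until the pod network is installed in step 6
```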
## Step 6: Install the Flannel Pod Network

```bash
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```
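Give the flannel DaemonSet a minute to roll out, then confirm its pods are running (depending on the manifest version, they land in either the kube-flannel or the kube-system namespace):

```bash
kubectl get pods --all-namespaces | grep flannel
```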

## Step 7: Join Worker Nodes

On each worker node, run the `kubeadm join` command printed by `kubeadm init` in step 4:

```bash
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
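If you have a valid token but not the hash, the discovery token CA cert hash can be recomputed on the master from the cluster CA certificate; this openssl pipeline is the recipe given in the kubeadm documentation:

```bash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```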

## Step 8: Verify the Cluster

Check cluster status from the master:

```bash
kubectl get nodes
kubectl get pods --all-namespaces
```

All nodes should show `Ready` status and all system pods should be `Running`.
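As an optional smoke test, deploy something and confirm it schedules onto a worker. This sketch uses a throwaway nginx deployment:

```bash
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide          # the pod should land on a worker and reach Running
kubectl delete deployment nginx   # clean up
```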