
How to Set Up a Kubernetes Cluster Using Kubeadm on EC2 Instances

This tip provides a step-by-step guide to setting up a Kubernetes cluster (one master and one worker node) on EC2 instances using kubeadm, for development and testing purposes.

Introduction

In this blog post, I cover a step-by-step guide to setting up a Kubernetes cluster using kubeadm, with one master and one worker node. Kubeadm is a tool for creating a production-like cluster locally, for example on an Oracle VirtualBox VM or on cloud VMs such as EC2 instances. It is lightweight and easy to get started with, which makes it a good option for experimenting with cluster components or testing utilities that are part of cluster administration. The resulting cluster can be used for development and testing in a cost-effective way. Kubeadm is also part of the CKA and CKS exams, so installing a Kubernetes cluster with kubeadm is a good way to practice for the CKA, CKAD, or CKS exams.

Kubeadm: Behind the Scenes

Let's understand how kubeadm works in the background to setup the kubernetes cluster.

  1. First, when kubeadm init runs, it performs all the preflight checks to validate the system state and downloads the required cluster container images from the registry.k8s.io container registry.
  2. Then, it generates the required TLS certificates and stores them in the /etc/kubernetes/pki folder.
  3. It then generates all the kubeconfig files in the /etc/kubernetes folder.
  4. Next, it starts the kubelet service and generates all the static pod manifests in the /etc/kubernetes/manifests folder.
  5. Then, it starts all the control plane components from the static pod manifests.
  6. After that, it installs the CoreDNS and kube-proxy components.
  7. Finally, it generates the node bootstrap token.
  8. Worker nodes use this token to join the control plane (the commands below show how to inspect these generated artifacts).
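
If you want to see these artifacts for yourself once the cluster is up, the following commands (run on the master node) show where kubeadm places the certificates, kubeconfig files, and static pod manifests, and how to list the bootstrap tokens. This is only an inspection aid; the exact file list can vary slightly between kubeadm versions.

Shell
# TLS certificates and keys generated by kubeadm
sudo ls /etc/kubernetes/pki

# kubeconfig files for the admin, kubelet, controller-manager and scheduler
sudo ls /etc/kubernetes/*.conf

# Static pod manifests for etcd and the control plane components
sudo ls /etc/kubernetes/manifests

# Bootstrap tokens that worker nodes can use to join
sudo kubeadm token list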

Prerequisites

  1. Two Ubuntu EC2 instances [one master and one worker node]. (OS flavor: Ubuntu 22.04 LTS)
  2. The master node must have a minimum of 2 vCPUs and 2 GB RAM. (instance type: t2.medium)
  3. Worker nodes must have a minimum of 1 vCPU and 2 GB RAM. (instance type: t2.medium)
  4. The network range with static IPs for the master and worker nodes is 10.X.X.X/X. We will be using the Calico network plugin for this demo, with the 192.x.x.x series as the pod network range. The node IP range and the pod IP range must not overlap.
  5. The control plane node must allow TCP ports 6443*, 2379-2380, and 10250-10252, and the worker node must allow TCP ports 10250 and 30000-32767 (a security group sketch for these ports follows this list).
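
If you manage the EC2 security groups from the AWS CLI, the port rules above can be opened roughly as follows. This is a minimal sketch: the security group IDs (sg-master, sg-worker) and the 10.0.0.0/16 VPC CIDR are placeholders for your own values, and you may want to restrict the API server port (6443) further than shown here.

Shell
# Control plane node (replace sg-master and the CIDR with your own values)
aws ec2 authorize-security-group-ingress --group-id sg-master --protocol tcp --port 6443        --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-master --protocol tcp --port 2379-2380   --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-master --protocol tcp --port 10250-10252 --cidr 10.0.0.0/16

# Worker node (replace sg-worker and the CIDR with your own values)
aws ec2 authorize-security-group-ingress --group-id sg-worker --protocol tcp --port 10250       --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-worker --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0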

Step 1

Run the script below on all the nodes to prepare both the master and the worker nodes to run the Kubernetes cluster.

Shell

set -euxo pipefail

# disable swap
sudo swapoff -a

# keep swap disabled across reboots
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab - || true
sudo apt-get update -y
sudo apt-get update -y

# Let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay 
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_22.04/ /
EOF
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:1.26.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.26/xUbuntu_22.04/ /
EOF

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.26/xUbuntu_22.04/Release.key \
    | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_22.04/Release.key \
    | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
sudo apt-get update

# Install Docker
sudo apt-get install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker

#Install CRI-O runtime
sudo apt-get install cri-o cri-o-runc -y

sudo systemctl daemon-reload
sudo systemctl enable crio --now
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl

# Retrieve the key for the Kubernetes repo and add it to your key manager
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

# Add the Kubernetes repo to your system
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

# Install kubelet, kubectl and Kubeadm
sudo apt-get update -y
sudo apt-get install -y kubelet=1.26.3-00 kubectl=1.26.3-00 kubeadm=1.26.3-00
sudo apt-get update -y
sudo apt-get install -y jq

# Add the node IP to KUBELET_EXTRA_ARGS
# Note: the primary interface may be eth0 or ens5 on some EC2 instances; adjust the ifname check below if needed
local_ip="$(ip --json addr show | jq -r '.[] | if .ifname == "eth1" then .addr_info[] | if .family == "inet" then .local else empty end else empty end')"
cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=$local_ip
EOF

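Before moving on, it is worth confirming on every node that the container runtime and the Kubernetes packages are in place. Assuming the script above completed without errors, a quick check could look like this:

Shell
# CRI-O should be active and enabled
sudo systemctl status crio --no-pager

# kubelet, kubeadm and kubectl should all report version 1.26.3
kubelet --version
kubeadm version
kubectl version --client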

Step 2

Run the commands below on the master node to bootstrap the control plane.

Shell
# Pull required images

sudo kubeadm config images pull

# Initialize kubeadm with the node's public IP as the control plane endpoint

NODENAME=$(hostname -s)
MASTER_PUBLIC_IP=$(curl -s ifconfig.me)
sudo kubeadm init --control-plane-endpoint="$MASTER_PUBLIC_IP" \
    --apiserver-cert-extra-sans="$MASTER_PUBLIC_IP" --pod-network-cidr=192.168.0.0/16 \
    --node-name "$NODENAME" --ignore-preflight-errors Swap

# Configure kubeconfig

mkdir -p "$HOME"/.kube
sudo cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config
sudo chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config

# Install the Calico network plugin

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O

kubectl apply -f calico.yaml
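
At this point, the control plane should come up within a few minutes. You can verify it on the master node with something like the following:

Shell
# The master node should eventually report Ready once Calico is running
kubectl get nodes -o wide

# All kube-system pods (including calico-node and CoreDNS) should reach Running
kubectl get pods -n kube-system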

Step 3

On successful kubeadm initialization, the output includes a join command with a token. Please make a note of it; it is required for the worker node to join the cluster. Run the command below on each worker node to join the cluster (substitute the endpoint, token, and hash printed by your own kubeadm init).

Shell
sudo kubeadm join 44.201.158.244:6443 --token b4j75o.x2wkanhs0erwf67o \
    --discovery-token-ca-cert-hash sha256:11d7edf331d778eff532e486ae23017873861cb23bdc96e2cf859b64951f7dba

# If the token is lost, please run the command below on the master node to get a new join command
kubeadm token create --print-join-command
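
After the join command succeeds, you can confirm that the worker has registered and eventually turns Ready:

Shell
# On the master node: the worker should appear and move from NotReady to Ready
kubectl get nodes

# On the worker node: the kubelet should be active after the join
sudo systemctl status kubelet --no-pager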

Step 4

Run the commands below to set up the worker node to run kubectl commands. If admin.conf is not present on the worker node, copy it from the master node location /etc/kubernetes/admin.conf.

Shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo service kubelet restart
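
As a final smoke test, you can deploy a small workload and confirm it is reachable through a NodePort. This is just one way to verify the cluster; the nginx-test deployment and service names here are arbitrary examples.

Shell
# Deploy a test nginx pod and expose it on a NodePort
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort

# Find the assigned NodePort, then curl it from either node
kubectl get svc nginx-test
# curl http://<node-ip>:<node-port>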

History

  • 27th June, 2023: Initial version
  • 4th July, 2023: 2nd version

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)