
Building a Kubernetes Cluster from Scratch: A Detailed Guide

23 Aug 2024 · CPOL · 4 min read
Kubernetes (K8s) has become the standard for container orchestration, enabling developers to manage and scale containerized applications efficiently. In this guide, we'll walk through the process of building a Kubernetes cluster from scratch, including examples, demos, and results.

1. Prerequisites

Before we dive into the process, ensure you have the following prerequisites:

  • Linux servers: At least three servers (1 master node, 2 worker nodes) running Ubuntu 20.04 LTS.
  • Access to the internet: The nodes will need internet access to download and install packages.

2. Environment Setup

Make sure these three virtual or physical machines can reach each other over the network. If you don't have public static IPs for them, placing them on the same LAN with fixed private addresses works fine, for example (a sample /etc/hosts setup follows the list):

  • Master Node: master-node (IP: 192.168.1.100)
  • Worker Node 1: worker-node-1 (IP: 192.168.1.101)
  • Worker Node 2: worker-node-2 (IP: 192.168.1.102)
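If you aren't running DNS for these hosts, adding them to /etc/hosts on every node keeps the later steps simple. A minimal sketch, assuming the hostnames and IPs listed above (adjust them to your environment):

Shell
# Run on all three nodes.
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.1.100 master-node
192.168.1.101 worker-node-1
192.168.1.102 worker-node-2
EOF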

2.1 Update and Upgrade the Servers

Start by updating and upgrading all your servers:

Shell
sudo apt-get update
sudo apt-get upgrade -y

2.2 Install Docker

Kubernetes needs a container runtime. Installing Docker also pulls in containerd, which is the runtime that recent Kubernetes releases actually talk to through the CRI. Install Docker on all three nodes:

Shell
sudo apt-get install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker
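On systemd-based distributions such as Ubuntu 20.04, kubeadm expects the container runtime to use the systemd cgroup driver. Some containerd versions leave it off by default, which later shows up as a crash-looping kubelet. The following is an optional precaution, a minimal sketch rather than a required step:

Shell
# Regenerate the default containerd config and switch its cgroup driver to systemd.
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd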

2.3 Install Kubernetes Components

We need to install kubeadm, kubelet, and kubectl on all nodes. Note that the legacy apt.kubernetes.io repository used in many older guides has been deprecated and frozen; the commands below use the current pkgs.k8s.io community repository (v1.30 is only an example, substitute the minor release you want to install):

Shell
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
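Before moving on, it's worth confirming that the tools are installed and that all three nodes ended up with the same version:

Shell
kubeadm version
kubectl version --client
kubelet --version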

3. Initialize the Kubernetes Master Node

On the master node, we'll initialize the cluster. This process sets up the control plane:

Shell
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Once the initialization is complete, you'll see a command to join the worker nodes to the cluster. Save this command as we'll need it later.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

Shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

Shell
export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/networking/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
      --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef

When I set up the initial master node, I encountered several errors.

container runtime is not running: output: level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"

This happens when containerd's default configuration disables its CRI plugin. To fix it, I reinstalled containerd and removed the stale config so that it regenerates with CRI enabled:

Shell
sudo apt remove containerd
sudo apt update
sudo apt install containerd.io
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
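Before re-running kubeadm init, it's worth confirming that the containerd service is healthy and answering on its socket:

Shell
sudo systemctl status containerd --no-pager
sudo ctr version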

The next error appeared when the kubelet tried to start:

E0616 21:27:18.529602   14900 run.go:74] "command failed" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.

The kubelet refuses to run while swap is enabled, because Kubernetes does not support swap by default.

To fix it, disable swap permanently and restart the kubelet:

Shell
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
sudo systemctl restart kubelet
sudo systemctl status kubelet

If kubeadm init fails because its ports are already in use (typically after a previous, partially completed attempt), free the ports and remove the stale control-plane manifests and etcd data before re-running it. Note that this wipes any existing cluster state on the node:

Shell
sudo fuser -k 6443/tcp
sudo fuser -k 10259/tcp
sudo fuser -k 10257/tcp
sudo fuser -k 10250/tcp
sudo fuser -k 2379/tcp
sudo fuser -k 2380/tcp

sudo rm /etc/kubernetes/manifests/kube-apiserver.yaml
sudo rm /etc/kubernetes/manifests/kube-controller-manager.yaml
sudo rm /etc/kubernetes/manifests/kube-scheduler.yaml
sudo rm /etc/kubernetes/manifests/etcd.yaml
sudo rm -r /var/lib/etcd
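Deleting the manifests and etcd data by hand works, but if you simply want to retry a failed kubeadm init, the cleaner route is usually kubeadm's own reset command, which tears down the partially created control plane for you:

Shell
sudo kubeadm reset -f
# Optionally clear any leftover CNI configuration before re-running kubeadm init.
sudo rm -rf /etc/cni/net.d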

3.1 Configure kubectl

To start using your cluster, configure kubectl for the current user:

Shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
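A quick sanity check that kubectl is talking to the new control plane:

kubectl
kubectl cluster-info
kubectl get nodes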

3.2 Deploy a Pod Network

Kubernetes requires a Pod network to allow communication between nodes. We'll use Flannel:

kubectl
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
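Give Flannel a minute to roll out, then confirm its pods are running. Depending on the manifest version, they land in either the kube-flannel or the kube-system namespace:

kubectl
kubectl get pods -n kube-flannel
kubectl get pods -n kube-system -l app=flannel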

4. Join Worker Nodes to the Cluster

On each worker node, run the command provided by the master node after the initialization. It should look something like this:

Shell
sudo kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
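The bootstrap token in that command expires (after 24 hours by default). If you lost the output or the token has expired, you can generate a fresh join command on the master node:

Shell
sudo kubeadm token create --print-join-command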

5. Verify the Cluster Setup

To verify that all nodes have joined the cluster, run:

kubectl
kubectl get nodes -o wide

You should see all three nodes (1 master and 2 workers) in the Ready state; it can take a minute or two after the Pod network starts for every node to become Ready.
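The exact ages, versions, and the extra columns added by -o wide will differ; trimmed to the main columns, the output looks roughly like this:

NAME            STATUS   ROLES           AGE   VERSION
master-node     Ready    control-plane   15m   v1.30.x
worker-node-1   Ready    <none>          6m    v1.30.x
worker-node-2   Ready    <none>          5m    v1.30.x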

6. Deploying a Sample Application

Let's deploy a simple Nginx application to verify that the cluster is working correctly.

Create a Deployment

We'll create an Nginx deployment:

kubectl
kubectl create deployment nginx --image=nginx

Expose the Nginx deployment as a service:

kubectl
kubectl expose deployment nginx --port=80 --type=NodePort

Verify the Deployment

kubectl
kubectl get pods

The output should show a single Nginx pod in the Running state.

Find the NodePort assigned to the Nginx service:

kubectl
kubectl get service nginx

Access the Nginx service at http://<node-ip>:<node-port> in your browser. You should see the Nginx welcome page.
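You can also test it from the command line. The port below is a hypothetical NodePort; substitute the one reported by kubectl get service nginx:

Shell
# Any node IP works, because a NodePort service is exposed on every node.
curl http://192.168.1.100:30080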

7. Scaling the Application

Let's scale the Nginx deployment to 3 replicas to see how Kubernetes manages multiple instances:

kubectl
kubectl scale deployment nginx --replicas=3

Verify the scaling:

kubectl
kubectl get pods

You should see three Nginx pods running.
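The deployment itself should now report three ready replicas, and the wide pod listing shows how they were spread across the worker nodes:

kubectl
kubectl get deployment nginx
kubectl get pods -o wide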

8. Cleaning Up

To clean up the deployment and service:

kubectl
kubectl delete service nginx
kubectl delete deployment nginx

9. Typical installation problems

9.1 Forbidden errors from kubectl

If kubectl responds with a Forbidden error, make sure you are using the cluster's admin kubeconfig, then retry the query:

Shell
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pv
kubectl get deploy

9.2 Couldn't get current server API

E0703 22:36:32.702012    9430 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0703 22:36:32.702614    9430 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0703 22:36:32.705110    9430 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0703 22:36:32.705619    9430 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0703 22:36:32.707190    9430 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Solution:

Step 1: On the worker node, check whether the file /etc/kubernetes/admin.conf exists. If it does not, copy it over (for example with scp) from the master node.

Step 2: On the worker node, create a .kube directory in the user's home directory (it's fine if it already exists) and move into it:

Shell
mkdir -p $HOME/.kube
cd ~/.kube

Step 3: Create a file called config and paste into it the contents of /etc/kubernetes/admin.conf:

vi config

Note: while copying the content into the config file, make sure the server: address points at the master node's IP. It is sometimes set to localhost; if so, change it to the master's IP.
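Putting those steps together, this is roughly what it looks like on a worker node, assuming the master IP from this guide and that you can log in to the master as root (admin.conf is readable only by root):

Shell
mkdir -p $HOME/.kube
scp root@192.168.1.100:/etc/kubernetes/admin.conf $HOME/.kube/config
# Only needed if the server: line points at localhost instead of the master:
sed -i 's|https://127.0.0.1:6443|https://192.168.1.100:6443|' $HOME/.kube/config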

10. Conclusion

In this guide, we've successfully set up a Kubernetes cluster from scratch, deployed a sample application, and verified its functionality. Building a Kubernetes cluster manually gives you a deeper understanding of the underlying infrastructure and how Kubernetes operates. This knowledge is essential for troubleshooting and optimizing your cluster in a production environment.

By following this guide, you're now equipped to create and manage your Kubernetes clusters, deploy applications, and scale them according to your needs. Happy clustering!

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

