Deploying Kubernetes Cluster


While a managed Kubernetes service like EKS or GKE provides simplicity, knowing how to build a Kubernetes cluster from scratch is valuable for truly understanding how Kubernetes works under the hood. In this blog post, I'll walk step by step through manually deploying a Kubernetes cluster without any abstraction or automation.

We’ll start from bare Linux servers and get hands-on experience with:

  • Installing the container runtime (containerd), kubelet, and kubeadm
  • Initializing the control plane using kubeadm init
  • Joining worker nodes to the cluster
  • Deploying a sample application on the cluster
  • Exploring core Kubernetes components like etcd, the controller manager, and the scheduler
  • Troubleshooting common issues and validating the deployment

Doing a manual install requires more effort than automated tools, but gives you operational experience with Kubernetes internals. You’ll gain insights into how the components fit together, networking requirements, and validating health/functionality.

The goal is to take generic servers and transform them into a fully operational Kubernetes cluster ready to run containerized applications. No shortcuts — just good old Linux, networking, Docker, and Kubernetes knowledge.

Follow along as I share my journey and learnings building Kubernetes from the ground up. We’ll celebrate at the end by running a containerized app on the finished cluster!

There will be 4 machines in our cluster-

1 control plane node and 3 worker nodes.

All nodes run the same Kali Linux distribution.

Prerequisites:

  1. Disable swap on all nodes.
Comment out the swap entry in /etc/fstab and turn swap off, as shown in the sketch below.
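A minimal sketch of that step, assuming the swap entry lives in /etc/fstab:

# Turn swap off immediately
sudo swapoff -a
# Comment out the swap entry in /etc/fstab so it stays disabled after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab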

2. Configure static IP addresses for all nodes, making sure each node's network interface has a unique MAC address-

We will be using VMware Workstation's NAT network for our cluster-

Gateway IP- 172.16.94.2

Subnet- 172.16.94.0/24

Configure the static IPs in the /etc/network/interfaces file on each node.

c1-cp1(172.16.94.10):

# eth0
auto eth0
iface eth0 inet static
address 172.16.94.10
netmask 255.255.255.0
gateway 172.16.94.2

c1-node1(172.16.94.11):

# eth0
auto eth0
iface eth0 inet static
address 172.16.94.11
netmask 255.255.255.0
gateway 172.16.94.2

c1-node2(172.16.94.12):

# eth0
auto eth0
iface eth0 inet static
address 172.16.94.12
netmask 255.255.255.0
gateway 172.16.94.2

c1-node3(172.16.94.13):

# eth0
auto eth0
iface eth0 inet static
address 172.16.94.13
netmask 255.255.255.0
gateway 172.16.94.2
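After editing /etc/network/interfaces, the new address only takes effect once networking is restarted (or the node is rebooted). A quick sketch, assuming the classic Debian networking service that Kali uses for /etc/network/interfaces:

# Apply the static IP configuration
sudo systemctl restart networking
# Confirm eth0 now has the expected address
ip addr show eth0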

3. Map hostnames to IP addresses in the /etc/hosts file for name resolution-

172.16.94.10    c1-cp1
172.16.94.11    c1-node1
172.16.94.12    c1-node2
172.16.94.13    c1-node3
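A quick way to verify the host entries work is to ping another node by name from each machine, for example:

# Should resolve via /etc/hosts and get replies
ping -c 2 c1-cp1
ping -c 2 c1-node1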

Installing Kubernetes-

Note: Steps marked (ALL NODES) must be performed on every node!

  1. (ALL NODES) Enabling “overlay” and “br_netfilter” kernel modules-
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF


sudo modprobe overlay
sudo modprobe br_netfilter

These commands load the kernel modules needed for a Kubernetes deployment. Here's a quick explanation:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf — This redirects the following text into a new file called /etc/modules-load.d/k8s.conf. The sudo tee command writes the text to the file with root permissions.

overlay — The overlay module provides overlay filesystem support, which containerd uses for layered container image filesystems.
br_netfilter — This module enables bridge netfilter support in the Linux kernel, which is required for Kubernetes networking and policy.

modprobe overlay — Loads the overlay kernel module into the running kernel.
modprobe br_netfilter — Loads the br_netfilter module into the running kernel.
So in summary, this command ensures that the required kernel modules for Kubernetes are loaded and available at boot time by writing them into a config file in /etc/modules-load.d. The modprobe commands immediately load the modules into the running kernel as well. This setup ensures Kubernetes will have the necessary kernel support for its networking and storage.

Verify that the br_netfilter and overlay modules are loaded by running the following commands:

lsmod | grep br_netfilter
lsmod | grep overlay

2. (ALL NODES) Configuring required sysctl parameters-

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

This command is configuring some sysctl kernel parameters that are required for Kubernetes networking to function properly. Here’s what it’s doing:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf — Creates a new sysctl config file called /etc/sysctl.d/k8s.conf and writes the following parameters into it:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

The first two parameters enable bridged IPv4 and IPv6 traffic to be passed to iptables chains. This is required for Kubernetes networking policies and traffic routing to work.
net.ipv4.ip_forward = 1 enables IP forwarding in the kernel, which is required for packet routing between pods in Kubernetes.
sudo sysctl --system — Applies the sysctl parameters from the new /etc/sysctl.d/k8s.conf file to the running kernel. This enables the settings without requiring a reboot.

In summary, this command configures three key sysctl parameters needed for Kubernetes networking and traffic policies and loads them into the running kernel so they are active immediately. The /etc/sysctl.d/k8s.conf file will persist these settings across reboots as well.
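If you want to double-check that the parameters are active, you can read them back directly (a verification step, not part of the original walkthrough):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# All three should report = 1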

3. (ALL NODES) Installing packages on all nodes-

a. Containerd container runtime.

sudo apt update
sudo apt install -y containerd

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
  • apt install -y containerd

This installs containerd, which is a container runtime that Kubernetes can use to run containers. The -y flag tells apt to automatically answer yes to the installation prompt.

  • sudo mkdir -p /etc/containerd

This creates the /etc/containerd directory to hold the containerd configuration file. The -p flag tells mkdir to create any parent directories that don’t exist.

  • sudo containerd config default

This generates a default config for containerd and outputs it to stdout.

  • sudo containerd config default | sudo tee /etc/containerd/config.toml

The default config output is piped to tee which writes the output to the file /etc/containerd/config.toml. sudo is used so it has permissions to write to that path.

This gives containerd a default configuration file located at /etc/containerd/config.toml which it will use for settings like where to store container images, logging, etc. This config file can be customized as needed.

So in summary, it installs containerd, creates the config directory, generates a default config, and writes it to the config file that containerd will read when starting up. This prepares containerd to run Kubernetes workloads.

The version of containerd installed here was 1.6.20. Enable the systemd cgroup driver in the generated config:
sudo sed -i 's/            SystemdCgroup = false/            SystemdCgroup = true/' /etc/containerd/config.toml

It finds the SystemdCgroup line, changes false to true, and saves the file.

Enabling SystemdCgroup allows containerd to integrate with the systemd init system for cgroup management. This is required for containerd to work properly with Kubernetes.
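Since containerd only reads its config at startup, the service needs a restart for the SystemdCgroup change to take effect. A small sketch:

# Restart containerd to pick up the SystemdCgroup change
sudo systemctl restart containerd
# Verify the setting stuck
grep SystemdCgroup /etc/containerd/config.toml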

b. Kubernetes

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update

apt-cache policy kubelet | head -n 20

This set of commands adds the official Kubernetes apt repository to the system's package sources, refreshes the package index, and lists the available kubelet versions; the actual kubelet, kubeadm and kubectl installation comes in the next step. Here's a detailed explanation:

This first command downloads the GPG public key for the Kubernetes apt repo hosted by Google and saves it as /etc/apt/keyrings/kubernetes-archive-keyring.gpg. The key is required to verify the packages from the repo.

  • echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

This adds the Kubernetes apt repository to the /etc/apt/sources.list.d/kubernetes.list file. The [signed-by] parameter points to the key downloaded in the previous step to verify the repo.

VERSION=1.27.4-00
sudo apt install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
sudo apt-mark hold kubelet kubeadm kubectl containerd

This command installs specific versions of kubelet, kubeadm and kubectl, and holds them (along with containerd) at their installed versions.

Here’s a breakdown:

  • apt install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION

This installs the kubelet, kubeadm and kubectl packages at a specific $VERSION. The $VERSION would be substituted with the actual Kubernetes version number to install.

  • sudo apt-mark hold kubelet kubeadm kubectl containerd

This uses apt-mark to put a “hold” on the kubelet, kubeadm, kubectl and containerd packages. A hold prevents these packages from being automatically upgraded when running apt upgrade or apt dist-upgrade.

By combining specific version installation and package holds, this ensures that the Kubernetes packages stay fixed at the desired version even during OS upgrades. The kubelet, kubeadm, kubectl and containerd binaries are tightly coupled, so locking their versions ensures compatibility.

In summary, this command installs Kubernetes at a specific release version and prevents accidental upgrades that could break the cluster. It gives fine grained control over the Kubernetes package versions on the system.
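To confirm the holds and the installed versions took effect, the following can be run on any node (verification only):

# Packages pinned against automatic upgrades
apt-mark showhold
# Installed Kubernetes tooling versions
kubeadm version
kubectl version --client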

sudo systemctl enable kubelet.service
sudo systemctl enable containerd.service

These commands enable the kubelet and containerd services on the system to start automatically on boot.

The kubelet is the primary Kubernetes agent that runs on each node in the cluster. It registers the node with the master, starts/stops pods, performs health checks, etc. Enabling kubelet.service ensures the kubelet will start up whenever the server reboots.

containerd is the container runtime that the kubelet can be configured to use. Enabling containerd.service makes sure containerd is up and running so that the kubelet can start and stop containers as needed.

By enabling both services to auto-start, it ensures that the base Kubernetes infrastructure will be active when the server boots up. This allows Kubernetes pods and workloads to be scheduled and run on the node without needing any extra manual intervention after a reboot.

So in summary:

sudo systemctl enable kubelet.service — Auto start kubelet on reboot

sudo systemctl enable containerd.service — Auto start containerd on reboot

This sets up kubelet and containerd to function reliably as critical Kubernetes node components.
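You can confirm both units are enabled with systemctl. Note that until kubeadm init (or join) runs, the kubelet will restart in a loop because it has no cluster configuration yet; that is expected at this stage:

systemctl is-enabled kubelet containerd
systemctl status kubelet --no-pager
# "activating (auto-restart)" for kubelet is normal before the cluster is bootstrapped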

4. (Control Plane node) Creating the control plane node-

Note: run these commands only on the control plane node (c1-cp1 in our case).

###IMPORTANT###
#If you are using containerd, make sure docker isn't installed.
#kubeadm init will try to auto-detect the container runtime and, at the moment,
#if both are installed it will pick docker first.


#0 - Creating a Cluster
#Create our kubernetes cluster, specify a pod network range matching that in calico.yaml!
#Only on the Control Plane Node, download the yaml files for the pod network.
wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml


#Look inside calico.yaml and find the setting for Pod Network IP address range CALICO_IPV4POOL_CIDR,
#adjust if needed for your infrastructure to ensure that the Pod network IP
#range doesn't overlap with other networks in our infrastructure.
vi calico.yaml


#You can now just use kubeadm init to bootstrap the cluster
sudo kubeadm init --kubernetes-version v1.27.4

#sudo kubeadm init #remove the kubernetes-version parameter if you want to use the latest.


#Before moving on review the output of the cluster creation process including the kubeadm init phases,
#the admin.conf setup and the node join command


#Configure our account on the Control Plane Node to have admin access to the API server from a non-privileged account.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


#1 - Creating a Pod Network
#Deploy yaml file for your pod network.
kubectl apply -f calico.yaml


#Look for all the system pods and calico pods to change to Running.
#The DNS pod won't start (pending) until the Pod network is deployed and Running.
kubectl get pods --all-namespaces


#Gives you output over time, rather than repainting the screen on each iteration.
kubectl get pods --all-namespaces --watch


#All system pods should be Running
kubectl get pods --all-namespaces


#Get a list of our current nodes, just the Control Plane Node...should be Ready.
kubectl get nodes



#2 - systemd Units...again!
#Check out the systemd unit...it's no longer crashlooping because it has static pods to start
#Remember the kubelet starts the static pods, and thus the control plane pods
sudo systemctl status kubelet.service


#3 - Static Pod manifests
#Let's check out the static pod manifests on the Control Plane Node
ls /etc/kubernetes/manifests


#And look more closely at API server and etcd's manifest.
sudo more /etc/kubernetes/manifests/etcd.yaml
sudo more /etc/kubernetes/manifests/kube-apiserver.yaml


#Check out the directory where the kubeconfig files live for each of the control plane pods.
ls /etc/kubernetes

5. Joining nodes to the cluster-

#On the control plane node, generate the join command
kubeadm token create --print-join-command
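The command above prints a ready-to-use kubeadm join line containing a fresh bootstrap token and the CA certificate hash. Run that printed command with sudo on each worker node (c1-node1, c1-node2, c1-node3); its general shape looks like this, with <token> and <hash> standing in for the real values from the output:

#On each worker node, using the values printed by the control plane
sudo kubeadm join 172.16.94.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Once all three workers have joined, kubectl get nodes on c1-cp1 should eventually show all four nodes in the Ready state.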

After Deployment-

kubectl get nodes -o wide
kubectl get pods --namespace kube-system -o wide
kubectl get all --all-namespaces -o wide
#API objects
kubectl api-resources -o wide | more
#kubectl help/manual
kubectl explain node | less
kubectl explain node.kind | less
kubectl explain node --recursive | less
kubectl describe node c1-cp1
#kubectl auto completion
source <(kubectl completion zsh)

Thanks for reading!

— — — — — — — — — — — — — — -Work in progress — — — — — — — — — — — — -

Creating Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: prabh-test-namespace

prabh_test_namespace.yml

Creating namespace/project

kubectl create -f ./prabh_test_namespace.yml
#OR
kubectl create namespace <name of namespace>
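To confirm the namespace was created and to target it with later commands:

kubectl get namespace prabh-test-namespace
#Scope any command to it with -n / --namespace
kubectl get pods -n prabh-test-namespace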
