PART 4
In this article, we will configure the Ubuntu base-OS image created in Part 3 with the Kubernetes components and tools required for setting up the Kubernetes cluster.
We will install the following Kubernetes tools:
Kubeadm: kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines.
Kubectl: The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
Kubelet: The kubelet is the primary “node agent” that runs on each node. It can register the node with the API server using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
Step 1: Setting up the system kernel modules
Ensure that the swap memory is OFF.
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
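As an optional sanity check (not part of the original steps), you can verify that swap really is off before continuing:

```shell
# The Swap line should show 0B for total, used and free.
free -h | grep -i swap
# swapon prints nothing when no swap devices are active.
swapon --show
```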
To avoid errors while installing the kernel modules and Kubernetes components below, first make sure the necessary utilities and packages are installed.
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
For the next steps, log in as root for the kernel module configuration.
Load the required “containerd” modules. To do that, open the configuration file for “containerd” using the “vi” editor or any other text editor:
# vi /etc/modules-load.d/containerd.conf
Add the following two lines in the above file.
overlay
br_netfilter
Save and exit the file. Next, run the below commands as “root” user.
# modprobe overlay
# modprobe br_netfilter
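To confirm that both modules loaded successfully, you can optionally check the loaded-module list (an extra verification step, assuming a standard Ubuntu install):

```shell
# Both overlay and br_netfilter should appear in the output.
lsmod | grep -E '^(overlay|br_netfilter)'
```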
Below are system-level settings to ensure that the Kubernetes network functions properly.
# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
To ensure that the above configuration takes effect, run the below command:
# sysctl --system
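As an optional check that is not in the original steps, you can read the three values back; each should report 1 once br_netfilter is loaded:

```shell
# All three settings should print "= 1".
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```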
Step 2: Installing and configuring “containerd”
“containerd” is a container runtime environment. We need to install “containerd” for Kubernetes and configure it.
Log in as user “jenkins” for the below installation.
$ sudo apt-get update && sudo apt-get install -y containerd
Next, configure the “containerd” config file located in the “/etc/containerd/config.toml“. Use “vi” editor or any other editor to make the changes.
Comment out the below line in the “config.toml” file:
#disabled_plugins = ["cri"]
Add the below lines in the “config.toml” file:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
Save the file and exit. For these changes to take effect, “containerd” needs to be restarted:
$ sudo systemctl restart containerd
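After the restart, you can optionally confirm that the service came back up cleanly (a verification step added here, not part of the original instructions):

```shell
# containerd should report "active (running)".
sudo systemctl status containerd --no-pager
# Or print just the state:
systemctl is-active containerd
```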
Step 3: Installing and configuring Kubernetes components
Note: “kubelet” and “kubeadm” must be installed on all nodes (Master & Worker), while “kubectl” is optional. We will include it in this image anyway, so that a single AMI of the base-OS image can be used for both the Master and Worker nodes.
To install Kubernetes, we first need to add the Kubernetes package repository.
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update
You can use a later Kubernetes version if required, but for this deployment we will be using Kubernetes version 'v1.24.3'.
Now we will install the Kubernetes components from the repository.
$ sudo apt-get install -y kubelet=1.24.3-00 kubeadm=1.24.3-00 kubectl=1.24.3-00
To ensure that during an upgrade or update, the versions of Kubernetes DON’T get upgraded, run the below command:
$ sudo apt-mark hold kubelet kubeadm kubectl
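You can optionally verify the hold took effect (an added check, not part of the original steps):

```shell
# All three packages should be listed as held back from upgrades.
apt-mark showhold
```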
To check if the Kubernetes components are successfully installed, use the below commands:
$ kubeadm version
$ kubectl version
$ kubelet --version
This completes the Docker and Kubernetes installation.
IMPORTANT: We will now create an AMI (Amazon Machine Image) of this VM. This image can then be used for either a Master or Worker node
You can give this AMI any name, but it is good practice to choose one that is easily identifiable. Example: “ubuntu-docker-kubernetes-base-os”
Terminate the AWS EC2 instance once the AMI and snapshot are created.
In the next step, we will configure the Master & Worker nodes for the Kubernetes cluster deployment. We will use the above-created AMI for setting up the Master and Worker nodes.
Step 4: Configure specific ports on Master and Worker nodes
Launch separate EC2 instances, 1 for the Master and 1 for the Worker node.
Give appropriate names (you can give any name) so that it is easy to identify the Master and Worker nodes from the name.
We will use the above-created AMI for both nodes, as all the required components (docker and Kubernetes) have been installed on this AMI and the operating system has been configured for deploying a Kubernetes cluster.
We will use an EC2 instance type of “t3a.small” with 2 vCPU and 2 GB RAM, for the master and worker nodes.
NOTE: EC2 instance type “t3a.small” is sufficient for setting up the Kubernetes cluster. However, when you deploy any pods on this cluster, the minimum requirement should be “t3a.medium” with 2 vCPUs and 4 GB RAM.
Master Node
Log in to the Master node (EC2 instance) using the public IP via SSH or PuTTY as the user “jenkins”.
Once logged in to the master node, run the below commands to open the specific ports for Kubernetes operation & communication between the nodes and services.
$ sudo ufw allow in ssh
$ sudo ufw allow in 6443/tcp
$ sudo ufw allow from 172.31.0.0/16 ## This is the IPv4 cidr of the default VPC provided by AWS ##
$ sudo ufw default allow outgoing
$ sudo ufw default deny incoming
$ sudo ufw enable
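Once the firewall is enabled, it is worth reviewing the resulting rule set (an optional check added here for verification):

```shell
# 22/tcp (SSH), 6443/tcp (the API server) and the VPC CIDR
# 172.31.0.0/16 should all be listed as allowed.
sudo ufw status verbose
```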
Worker Node
Log in to the Worker node (EC2 instance) using the public IP via SSH or PuTTY as the user “jenkins”.
Once logged in to the worker node, run the below commands to open the specific ports for Kubernetes operation & communication between the nodes and services.
$ sudo ufw allow in ssh
$ sudo ufw allow from 172.31.0.0/16
$ sudo ufw allow in 30000:32767/tcp
$ sudo ufw allow in 30000:32767/udp
$ sudo ufw default allow outgoing
$ sudo ufw default allow routed
$ sudo ufw default deny incoming
$ sudo ufw --force enable
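As on the Master node, you can optionally review the Worker's rule set after enabling the firewall (a verification step, not part of the original instructions):

```shell
# 22/tcp, the NodePort range 30000:32767 (tcp and udp) and the
# VPC CIDR 172.31.0.0/16 should all be listed as allowed.
sudo ufw status numbered
```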
This completes the installation and configuration of all the Kubernetes components on the Ubuntu Master and Worker systems.
Part 3 –> Installing and configuring Docker
Part 5 –> Creating a Kubernetes cluster on Ubuntu.