Part 4

 

 

In Part 3 of this 5-part blog, we installed the Docker components on our AWS EC2 instance.

In this article, we will configure the previously created AMI with the Kubernetes components and tools required for setting up the Kubernetes cluster.

By the end of this article, we will have created a new base image containing the CentOS 7 operating system, Docker and the Kubernetes components. This image can be used for either the control plane (master node) or the data plane (worker nodes).

We will install the following Kubernetes tools:

Kubeadm: kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines.

Kubectl: The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

Kubelet: The kubelet is the primary “node agent” that runs on each node. It can register the node with the API server using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
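
Once these tools are installed (in Step 2 below), a quick sanity check is to print their versions; the exact output will depend on the versions you install:

    • $ kubeadm version                # prints the kubeadm version

    • $ kubectl version --client       # prints the kubectl client version

    • $ kubelet --version              # prints the kubelet version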

 

Installing kubeadm, kubectl and kubelet on EC2 instances (Master and Worker nodes)

 
 

 

Step 1: Configure the Kubernetes Repository

 

 

Kubernetes packages are not available in the official CentOS 7 repositories. This step needs to be performed on the Master Node and on each Worker Node you plan on using for your container setup. Enter the following commands to add the Kubernetes repository.

    • $ sudo su            # login as root to perform the next set of actions

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

EOF
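
To confirm that the repository has been added correctly, you can optionally list the repositories known to yum and check that the Kubernetes entry appears:

# yum repolist | grep -i kubernetes        # should show the [kubernetes] repository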

 
 

 

Step 2: Install kubelet, kubeadm, and kubectl

 

 

These three basic packages are required to be able to use Kubernetes. Install the following packages (together with kubernetes-cni) on each node:

    • $ sudo yum install -y kubelet-1.24.3 kubectl-1.24.3 kubeadm-1.24.3 kubernetes-cni-0.6.1

    • $ sudo systemctl enable kubelet

    • $ sudo systemctl start kubelet
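
Note that until the cluster is initialised with kubeadm (covered in Part 5), the kubelet service will keep restarting while it waits for its configuration; this is expected. You can check the service state with:

    • $ sudo systemctl status kubelet        # "activating (auto-restart)" is expected before kubeadm init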

 

Next, update the iptables settings so that bridged traffic is visible to iptables and IP forwarding is enabled:

$ sudo su                           # Login as root

# cat <<EOF > /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

EOF

# sysctl --system
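
If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is most likely not loaded yet. Loading the module and re-applying the settings should resolve it:

# modprobe br_netfilter                  # load the bridge netfilter module

# sysctl --system                        # re-apply all sysctl settings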

 

Next, ensure that SELinux is either disabled or set to permissive mode. This allows containers to access the host filesystem, which is needed by pod networks, for example.

    • $ sudo setenforce 0

    • $ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

NOTE: If the above sed command fails, edit /etc/selinux/config manually and change the line to SELINUX=permissive
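
You can verify the change by checking the current SELinux mode; it should report "Permissive" (or "Disabled"):

    • $ getenforce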

 
This completes the necessary Kubernetes component installation and operating system configuration. 

 

Reboot the system prior to any further configuration.

    • $ sudo su                           # login as root/super user

    • # reboot                            # reboot the system

 
 
IMPORTANT: Once the system reboots, create an AMI (Amazon Machine Image) of the instance. This is the AMI that will be used for all future Master and Worker nodes of the Kubernetes cluster.
 
 
How to create an AMI is described in this blog.
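
If you prefer the AWS CLI to the console, the image can also be created along the following lines. The instance ID, image name and description below are placeholders for illustration only; substitute your own values:

    • $ aws ec2 create-image --instance-id i-0123456789abcdef0 --name "centos7-docker-k8s-base" --description "CentOS 7 + Docker + Kubernetes base image"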
 
 

Step 3: Configure specific ports on Master and Worker nodes

 

 

All the configuration so far has been done on a single VM.

From this point onwards, we will need to create two EC2 instances:

one EC2 instance for the Master node, and

one EC2 instance for the Worker node.

 

We will use the same AMI we created at the end of Step 2 for both EC2 instances. 

We will use EC2 instances of type "t3a.small", which has 2 vCPUs and 2 GB of RAM.

 

Important points to consider:

 

When creating the EC2 instances, give each instance a unique name, for example, master-node and worker-node.

You can use any names you like, as long as you can clearly differentiate between the instances.

Specific operating system ports need to be opened for Kubernetes; the required port numbers differ between the Master and Worker nodes.

You can log in to each node using SSH or PuTTY. All the operations below are performed as the "jenkins" user.
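
If you are using the AWS CLI, the two instances can be launched from the AMI along these lines. The AMI ID, key pair and security group below are placeholders; substitute your own values, and repeat the command with Value=worker-node for the second instance:

    • $ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3a.small --key-name my-keypair --security-group-ids sg-0123456789abcdef0 --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=master-node}]'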

 

MASTER NODE

 

On the Master Node (the EC2 instance started with the AMI created in Step 2), log in as the "jenkins" user and run the below commands.

    • $ sudo firewall-cmd --permanent --add-port=6443/tcp

    • $ sudo firewall-cmd --permanent --add-port=2379-2380/tcp

    • $ sudo firewall-cmd --permanent --add-port=10250/tcp

    • $ sudo firewall-cmd --permanent --add-port=10251/tcp

    • $ sudo firewall-cmd --permanent --add-port=10252/tcp

    • $ sudo firewall-cmd --permanent --add-port=10255/tcp

    • $ sudo firewall-cmd --permanent --add-port=8285/tcp

    • $ sudo firewall-cmd --permanent --add-port=8472/udp

    • $ sudo firewall-cmd --add-masquerade --permanent

    • $ sudo firewall-cmd --permanent --add-port=30000-32767/tcp

    • $ sudo firewall-cmd --reload
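
After the reload, you can optionally confirm that the rules are in place by listing the ports opened through firewalld:

    • $ sudo firewall-cmd --list-ports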

 

WORKER NODE

On the Worker Node (the EC2 instance started with the AMI created in Step 2), log in as the "jenkins" user via SSH or PuTTY and run the below commands.

    • $ sudo firewall-cmd --permanent --add-port=10250/tcp

    • $ sudo firewall-cmd --permanent --add-port=10251/tcp

    • $ sudo firewall-cmd --permanent --add-port=10255/tcp

    • $ sudo firewall-cmd --permanent --add-port=8472/udp

    • $ sudo firewall-cmd --permanent --add-port=30000-32767/tcp

    • $ sudo firewall-cmd --add-masquerade --permanent

    • $ sudo firewall-cmd --permanent --add-port=8285/tcp

    • $ sudo firewall-cmd --reload
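
Once both nodes are configured, you can optionally check from the Worker node that the Master's API server port (6443) is reachable, assuming the nc (nmap-ncat) utility is available; replace <master-private-ip> with the private IP of your Master node. A "connection refused" response is fine at this stage, since the API server only starts after kubeadm init in Part 5, whereas a timeout usually points to a firewall or security group issue:

    • $ nc -zv <master-private-ip> 6443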

 

 

This completes the installation and configuration of all the Kubernetes components on the CentOS 7 Master and Worker systems. 

 

 

Part 3 -> Installing the Docker components

 

Part 5 -> Creating the Kubernetes cluster.