Part 5

In this article, we will create a Kubernetes cluster using the master and worker nodes built in the previous articles, which can be viewed here: (Part 1, Part 2, Part 3, Part 4)

 

Cluster Networking

Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work. There are four distinct networking problems to address:

    • Highly-coupled container-to-container communications: this is solved by Pods and localhost communications.
    • Pod-to-Pod communications: this is the primary focus of this document.
    • Pod-to-Service communications: this is covered by Services.
    • External-to-Service communications: this is also covered by Services.
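For example, once the cluster built in this article is up, pod-to-service communication can be observed directly: every Service gets a stable DNS name and virtual IP that pods resolve through CoreDNS. A minimal sketch (the throwaway busybox pod below is only an illustration):

$ kubectl run dns-test -it --rm --restart=Never --image=busybox -- nslookup kubernetes.default   ### resolves the built-in API-server Service from inside a pod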
 

 

The Kubernetes network model

Every Pod in a cluster gets its own unique cluster-wide IP address. This means you do not need to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports.

This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

    • pods can communicate with all other pods on any other node without NAT
    • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
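Once the cluster is running, this flat, NAT-free pod network can be observed directly; each pod's unique cluster-wide IP appears in the IP column:

$ kubectl get pods --all-namespaces -o wide   ### the IP column shows each pod's cluster-wide address, the NODE column shows where it runs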
 

 

How to implement the Kubernetes network model

The network model is implemented by the container runtime on each node. The most common container runtimes use Container Network Interface (CNI) plugins to manage their network and security capabilities. Many different CNI plugins exist from many different vendors.

Some of the popular CNI add-ons supported by Kubernetes are Flannel, Calico, Weave Net, etc.

 
For this deployment we will be using the “flannel” CNI. 

 

 

Step 1: Create Cluster with kubeadm

Important:

When logging in to the EC2 instance using SSH or PuTTY, we use the public IP of the AWS EC2 instance.

However, when we use IPs in the Kubernetes cluster, we use ONLY the private IPs of the EC2 instances. These IPs cannot be accessed directly from the outside world; they are reachable only within the AWS VPC and are used for internal communication between the AWS services.
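If you are unsure of an instance's private IP, you can read it from inside the instance itself. The metadata query below assumes the instance has IMDSv1 enabled (IMDSv2-only instances require a session token):

$ hostname -I                                                  ### lists the instance's private IP(s)
$ curl http://169.254.169.254/latest/meta-data/local-ipv4      ### queries the EC2 instance metadata service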

Assumptions:

 

172.31.23.213 is the PRIVATE IP of the master node. (The private IP of your EC2 instance may be different)

172.31.28.251 is the PRIVATE IP of the worker node. (The private IP of your EC2 instance may be different)

We will use the "kubeadm" and "kubectl" utilities for Kubernetes cluster creation, administration and operation.

 

Log in to the Master node server.

 

Initialize a cluster by executing the following command:

The below command is to be run ONLY on the master node.

The command below uses the "flannel" virtual network add-on. The 10.244.0.0/16 network value reflects the configuration of the kube-flannel.yml file.

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.31.23.213

 

In the above command:

--pod-network-cidr=10.244.0.0/16 -> the IP range that the flannel CNI uses. All pods deployed in the cluster will get IPs within this range.

--apiserver-advertise-address=172.31.23.213 -> the internal (private) IP address of the Master node, where all the Kubernetes control-plane components (apiserver, etcd, etc.) will be running.

Once the above command completes successfully, we should get the below message:

 

kubeadm join 172.31.23.213:6443 --token c1hwhp.u07hh6yurd4t2xb4 \
--discovery-token-ca-cert-hash sha256:152ba5c43eb52cfaae044fd409747e7eafdf121b47de9b8f89101d60c20785a3

 

Save this output, as it will be used for joining WORKER nodes.
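If you lose this output, a fresh join command can be printed at any time on the master node (see also Step 7):

$ kubeadm token create --print-join-command   ### generates a new token and prints the full join command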

 

 

Step 2: Manage Cluster as a regular user

Once the Kubernetes cluster is created, to start using it you need to run the below commands on the Master node as the regular Kubernetes user ("jenkins"):

  • $ mkdir -p $HOME/.kube

  • $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  • $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
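These commands copy the cluster's admin kubeconfig into the user's home directory so that kubectl can authenticate without sudo. You can confirm that kubectl can reach the cluster with:

$ kubectl cluster-info   ### prints the control-plane and CoreDNS endpoints if the kubeconfig is working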

 

 

Step 3: Set Up Pod Network

A Pod Network allows pods on different nodes within the cluster to communicate. There are several available Kubernetes networking options.

IMPORTANT:

DO NOT RUN THIS AS THE ROOT USER OR WITH SUDO. (If you get a "localhost:8080" connection error, it usually means kubectl is not reading the admin kubeconfig set up in Step 2; re-run as the regular user, and also ensure the required ports are open in the Security Group.)

Run this on the MASTER NODE ONLY.

Use the following command to install the flannel pod network add-on. It applies the kube-flannel.yml manifest mentioned in Step 1 (the URL below is the commonly used manifest location; it may change across flannel releases):

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If successful, you should get the below output:

 

podsecuritypolicy.policy/psp.flannel.unprivileged created

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.apps/kube-flannel-ds created
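To verify that flannel has rolled out, you can check the DaemonSet created above (assuming the manifest's default kube-system namespace and app=flannel pod label):

$ kubectl get daemonset kube-flannel-ds -n kube-system   ### DESIRED/READY counts should match the number of nodes
$ kubectl get pods -n kube-system -l app=flannel         ### one flannel pod should be Running per node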

 

 

Step 4: Check the Status of the Cluster

Check the status of the nodes by entering the following command on the master server:

    • $ kubectl get nodes

Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pods are running:

    • $ kubectl get pods --all-namespaces
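To look at CoreDNS specifically, you can filter on its k8s-app=kube-dns label (a label CoreDNS keeps for compatibility with the older kube-dns):

$ kubectl get pods -n kube-system -l k8s-app=kube-dns   ### the CoreDNS replicas should show STATUS Running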

 

 

Step 5: Join Worker Node to Cluster (Run commands on Worker node)

IMPORTANT:

The below commands are to be run on the WORKER NODE.

As indicated earlier, we will use the "kubeadm join" command on each worker node to connect it to the cluster.

 

    • $ sudo kubeadm join 172.31.23.213:6443 --token c1hwhp.u07hh6yurd4t2xb4 --discovery-token-ca-cert-hash sha256:152ba5c43eb52cfaae044fd409747e7eafdf121b47de9b8f89101d60c20785a3

 

If everything is OK, then after the pre-flight checks complete, the node will join the cluster by registering with the control-plane (the Master node).

 

 

Step 6: Monitor the cluster (Run commands on MASTER node)

Check if the nodes are added to the cluster:

$ kubectl get nodes

You should get an output similar to the below:

NAME               STATUS   ROLES           AGE   VERSION
ip-172-31-23-213   Ready    control-plane   74m   v1.24.3
ip-172-31-28-251   Ready    <none>          53s   v1.24.3

 

IMPORTANT: The "tokens" for joining the kube cluster are valid for 24 hrs only. In case a new worker node joins the kube cluster after 24 hrs, new tokens have to be generated on the kube-master node.
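If needed, kubeadm can also mint a token with a custom lifetime, or one that never expires (less secure); a quick sketch:

$ kubeadm token create --ttl 48h   ### token valid for 48 hours instead of the default 24
$ kubeadm token create --ttl 0     ### token that never expires; use with caution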

 

THIS COMPLETES THE KUBERNETES CLUSTER CONFIGURATION & DEPLOYMENT.

 
 
CONGRATULATIONS !!! Your Kubernetes cluster is UP and RUNNING. 

 

 

Additional cluster administration

 

The steps below are for adding a new worker node (or nodes) more than 24 hrs after creating the Kubernetes cluster.

 

 

Step 7: Adding new worker nodes after 24 hrs (Run commands on MASTER node)

Important: Tokens created at the time of creating a Kubernetes cluster are valid only for 24 hrs. 
 
On the Kube-Master node:

    • $ kubeadm token list        ### to list the existing tokens

    • $ kubeadm token create --print-join-command    ### prints a new join command with a fresh token

    • $ kubeadm token list        ### to confirm the new token is active
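If you only need the CA certificate hash, it can also be recomputed by hand from the cluster CA, as documented for kubeadm:

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'   ### prints the sha256 value used with --discovery-token-ca-cert-hash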

 
On the Worker-node:

    • $ sudo kubeadm reset          ### removes any prior configuration so the new node can join the cluster cleanly

    • $ sudo kubeadm join 172.31.23.213:6443 --token 8dho7i.g9dc5orpy0tbufrv --discovery-token-ca-cert-hash sha256:580208ab04079054900b47737254c519e03051c1d31c99ee807c21fd3994c653

The above command is to be run on the NEW worker node to join the cluster.
 
On the Kube-Master node, list all the active nodes in the cluster:

    • $ kubectl get nodes

You should get an output similar to the below:

NAME               STATUS   ROLES           AGE   VERSION

ip-172-31-23-213   Ready    control-plane   74m   v1.24.3

ip-172-31-28-251   Ready    <none>          53s   v1.24.3

 

 

This completes the Kubernetes installation, cluster creation, and the joining of worker nodes to the cluster.

 

 

Now you can start deploying applications on your cluster. 
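As a quick smoke test (the nginx deployment name below is just an example, and it assumes the nodes can pull public images), you can deploy and expose a pod and confirm it lands on the worker node:

$ kubectl create deployment nginx --image=nginx               ### creates a single-replica deployment
$ kubectl expose deployment nginx --port=80 --type=NodePort   ### exposes it on a NodePort on every node
$ kubectl get pods -o wide                                    ### the NODE column should show the worker node
$ kubectl get service nginx                                   ### shows the assigned NodePort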

 

 

Part 4 -> Installing and configuring Kubernetes