PART 5

 

 

In this article, we will create a Kubernetes cluster using the master and worker nodes set up in the previous articles (Part 1, Part 2, Part 3, Part 4).

 

 

Cluster Networking

 

 

Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work. There are four distinct networking problems to address:

    • Highly-coupled container-to-container communications: this is solved by Pods and localhost communications (see the sketch after this list).
    • Pod-to-Pod communications: this is the primary focus of this document.
    • Pod-to-Service communications: this is covered by Services.
    • External-to-Service communications: this is also covered by Services.
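As a quick illustration of the first point, containers in the same Pod share a network namespace and can reach each other over localhost. The sketch below (pod and container names are hypothetical, and it can only be run once the cluster built later in this article is up) starts an nginx container and a busybox sidecar in one Pod, then fetches the nginx welcome page from the sidecar over localhost:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: two-containers            # hypothetical name, for illustration only
spec:
  containers:
  - name: web
    image: nginx                  # serves on port 80 inside the pod's network namespace
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF

$ kubectl exec two-containers -c sidecar -- wget -qO- http://localhost:80    ### nginx answers over localhost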
 

 

The Kubernetes network model

 

 

Every Pod in a cluster gets its own unique cluster-wide IP address. This means you do not need to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports.

This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

  • pods can communicate with all other pods on any other node without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
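Once the cluster is up, both properties are easy to observe. In the sketch below, the pod name and target IP are placeholders to fill in from your own cluster:

$ kubectl get pods -o wide                           ### shows each pod's cluster-wide IP and the node it runs on
$ kubectl exec <pod-a> -- ping -c 3 <pod-b-ip>       ### pods on different nodes reach each other directly, without NAT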
 

 

How to implement the Kubernetes network model

 

The network model is implemented by the container runtime on each node. The most common container runtimes use Container Network Interface (CNI) plugins to manage their network and security capabilities. Many different CNI plugins exist from many different vendors.

Some of the popular CNI add-ons supported by Kubernetes are “Flannel”, “Calico”, “Weave Net”, etc.

For this deployment we will be using the “flannel” CNI. 
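For reference, once flannel is installed (Step 3 below) you can see where the container runtime picks up its CNI configuration on each node. The paths below are the common defaults and may vary with the runtime and flannel version:

$ ls /etc/cni/net.d/              ### CNI config directory read by the runtime (flannel drops 10-flannel.conflist here)
$ cat /run/flannel/subnet.env     ### the per-node pod subnet that flannel has allocated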

 

 

Step 1: Create Cluster with kubeadm

 
 
Important:
 

When logging into the EC2 instances using SSH or PuTTY, we use the public IP.

However, for the IPs used inside the Kubernetes cluster we use ONLY the private IPs of the EC2 instances. These IPs cannot be accessed directly from the outside world; they are reachable only within the AWS VPC and are used for internal communication between the AWS services.

 
Assumptions:

172.31.21.21 is the PRIVATE IP of the master node. (The private IP of your EC2 instance may be different)

172.31.21.51 is the PRIVATE IP of the worker node. (The private IP of your EC2 instance may be different)
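If you are unsure of an instance's private IP, you can look it up from the instance itself; both commands below are standard on EC2:

$ hostname -I | awk '{print $1}'                                 ### the first address is normally the VPC private IP
$ curl -s http://169.254.169.254/latest/meta-data/local-ipv4     ### EC2 instance metadata service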

We will use the “kubeadm” and “kubectl” utilities for Kubernetes cluster creation, administration, and operation.

 

Initialize a cluster by executing the following command:

The “kubeadm” command to create the cluster, mentioned below, is to be run ONLY on the master node and as a superuser. 

The command below uses the “flannel” virtual network add-on; the 10.244.0.0/16 network value matches the configuration in the kube-flannel.yml file.

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.31.21.21

 

In the above command:

--pod-network-cidr=10.244.0.0/16 --> the IP range used by the flannel CNI. All pods deployed in the cluster will get IPs within this range.

--apiserver-advertise-address=172.31.21.21 --> the IP address of the Master node, where all the Kubernetes control-plane components (apiserver, etcd, etc.) will be running.
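Note: if you want to validate a node before (re)running the full init, kubeadm can run just the pre-flight checks and pre-pull the control-plane images; both are standard kubeadm subcommands:

$ sudo kubeadm init phase preflight       ### run only the pre-flight checks
$ sudo kubeadm config images pull         ### pre-pull the control-plane images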

Once the above command completes successfully, we should get a message like the one below:

kubeadm join 172.31.21.21:6443 --token 13m47y.mbovw7ixuz5erz28 \
    --discovery-token-ca-cert-hash sha256:457e8898f9141f0e1e2eafcb19440391d30974e6db3958a2965413ecf013a914

Save this output, as this will be used for joining WORKER nodes. 

 

 

Step 2: Manage Cluster as Regular User  

 

 

Once the Kubernetes cluster is created, to start using it you need to run the below commands on the Master node as the regular (non-root) Kubernetes user, here “jenkins”:

    • $ mkdir -p $HOME/.kube
    • $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    • $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
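To confirm that kubectl is now talking to the cluster as this user, a quick check:

$ kubectl cluster-info              ### should print the control plane endpoint, https://172.31.21.21:6443
$ kubectl config current-context    ### should show kubernetes-admin@kubernetes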
 

 

 

Step 3: Set Up Pod Network

 

 

A Pod network allows the pods on different nodes within the cluster to communicate. There are several available Kubernetes networking options.

IMPORTANT:

Do not run this as the root user or with sudo. If you get an error mentioning port 8080 (e.g. “The connection to the server localhost:8080 was refused”), it usually means kubectl cannot find the cluster configuration created in Step 2.

Run this on the MASTER NODE ONLY.

Use the following command to install the flannel pod network add-on:

    • $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

 

If successful, we should get the below output:

podsecuritypolicy.policy/psp.flannel.unprivileged created

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.apps/kube-flannel-ds created
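You can also verify that the flannel DaemonSet has started one pod per node. The namespace below matches the manifest used above; newer flannel releases deploy into a separate kube-flannel namespace instead:

$ kubectl get daemonset kube-flannel-ds -n kube-system       ### DESIRED/READY should equal the number of nodes
$ kubectl get pods -n kube-system -o wide | grep flannel     ### one kube-flannel-ds pod per node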

 

 

Step 4: Check the status of the Kubernetes Cluster

 

 

Check the status of the nodes by entering the following command on the master node:

    • $ kubectl get nodes

Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pods are running by typing:

    • $ kubectl get pods --all-namespaces
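The CoreDNS pods carry the k8s-app=kube-dns label, so they can also be checked directly:

$ kubectl get pods -n kube-system -l k8s-app=kube-dns        ### the coredns pods should be Running once the pod network is up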

 

 

Step 5: Join Worker Node to Cluster 

 

 

 
IMPORTANT:

 

The below commands are to be run on the WORKER NODE.

As indicated earlier, we will use the “kubeadm” join command on each worker node to connect it to the cluster. The below command has to be RUN ON THE WORKER NODES ONLY.

    • $ sudo kubeadm join 172.31.21.21:6443 --token 13m47y.mbovw7ixuz5erz28 --discovery-token-ca-cert-hash sha256:457e8898f9141f0e1e2eafcb19440391d30974e6db3958a2965413ecf013a914

 

If everything is OK, then after the pre-flight checks pass, the worker node should join the cluster via the Master node (control-plane).
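If the join hangs or fails, the kubelet on the worker is the first place to look:

$ sudo systemctl status kubelet                    ### should be active (running) after a successful join
$ sudo journalctl -u kubelet --no-pager | tail     ### recent kubelet log lines for troubleshooting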

 

 

Step 6: Monitor the cluster (Run commands on MASTER node)

 

 

Check if the nodes are added to the cluster:

$ kubectl get nodes

 

You should get an output similar to below:

NAME           STATUS   ROLES                  AGE    VERSION

master-node    Ready    control-plane,master   130m   v1.20.2

worker-node1   Ready    <none>                 2m4s   v1.20.2

 

IMPORTANT: The “tokens” for joining the kube cluster are valid for only 24 hours. In case a new worker node joins the kube cluster after 24 hours, a new token has to be generated on the kube-master node.
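For lab or test clusters you can instead mint a token that never expires; this avoids regenerating tokens but is less secure, so it is not recommended for production:

$ kubeadm token create --ttl 0 --print-join-command     ### --ttl 0 creates a token with no expiry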

 

THIS COMPLETES THE KUBERNETES CLUSTER CONFIGURATION & DEPLOYMENT.

 

CONGRATULATIONS !!! Your Kubernetes cluster is UP and RUNNING. 

 

 

Additional cluster administration.

 

The below steps are for adding new worker nodes after the initial 24 hours.

 

 

Step 7: Adding new worker nodes after 24 hrs (Run commands on MASTER node)

 

 

Important: Tokens created at the time of creating a Kubernetes cluster are valid for only 24 hours.

 
On Kube-Master node:

 

    • $ kubeadm token list        ### list the existing tokens

    • $ kubeadm token create --print-join-command    ### create a new token and print the full join command for it

    • $ kubeadm token list        ### verify that the new token is active
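If you ever need the --discovery-token-ca-cert-hash value on its own (for example, to pair it with an existing token), it can be recomputed from the cluster CA certificate on the master node; this is the standard recipe from the kubeadm documentation:

    • $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'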

 
 
On the Worker-node:

 

    • $ sudo kubeadm reset          ### removes any prior kubeadm configuration so the node can join the cluster cleanly

    • $ sudo kubeadm join 172.31.21.21:6443 --token 8dho7i.g9dc5orpy0tbufrv --discovery-token-ca-cert-hash sha256:580208ab04079054900b47737254c519e03051c1d31c99ee807c21fd3994c653


The above command should be run on the NEW worker node to join the cluster.
 
 
On Kube-Master node:

 

    • $ kubectl get nodes         ### list all the active nodes in the cluster

 

You should get an output similar to below:

 

NAME           STATUS   ROLES                  AGE     VERSION

master-node    Ready    control-plane,master   15d     v1.20.2
worker-node1   Ready    <none>                 14d     v1.20.2
worker-node2   Ready    <none>                 14d     v1.20.2
worker-node3   Ready    <none>                 9m53s   v1.20.4

 

This completes the Kubernetes installation, cluster creation, and joining of the worker nodes to the cluster.

 

Now you can start deploying applications on your cluster. 
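As a quick smoke test (the deployment name “hello” is just an example), you can deploy nginx, expose it as a NodePort Service, and reach it from within the VPC:

$ kubectl create deployment hello --image=nginx
$ kubectl expose deployment hello --port=80 --type=NodePort
$ kubectl get svc hello        ### note the assigned NodePort, then curl http://<worker-private-IP>:<NodePort> from inside the VPC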

 

Part 4 --> Installing Kubernetes components.