DEPLOYING A MULTI-NODE KUBERNETES CLUSTER ON AWS

A COMPREHENSIVE GUIDE

 

 

 

Introduction

 

In this article, we’ll create a Kubernetes cluster using the master and worker nodes we set up in the previous articles (Part 1, Part 2, Part 3, Part 4).

 

 

KUBERNETES CLUSTER NETWORKING

 

Networking is a critical aspect of Kubernetes, although it can be complex to grasp. Kubernetes has four distinct networking problems to solve:

1. Highly-coupled container-to-container communications: addressed by Pods and local communication via localhost.

2. Pod-to-Pod communications: the primary focus of this document.

3. Pod-to-Service communications: covered by Kubernetes Services.

4. External-to-Service communications: also handled by Services.

 

 

The Kubernetes Network Model

 

  • Every Pod in a cluster has a unique cluster-wide IP address.
  • You don’t need to explicitly create links between Pods, and mapping container ports to host ports is rarely necessary.
  • Pods can be treated like VMs or physical hosts in terms of port allocation, naming, service discovery, load balancing, configuration, and migration.
  • Kubernetes imposes two fundamental networking requirements (both can be checked with the snippet after this list):
    • Pods can communicate with all other Pods on any node without NAT.
    • Agents on a node (e.g., system daemons, the kubelet) can communicate with all Pods on that node.
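
Once the cluster is up (after Step 6 below), you can observe this flat network model directly. A minimal sketch — the Pod IP below is a placeholder; substitute one from your own cluster:

$ kubectl get pods --all-namespaces -o wide    # shows each Pod's cluster-wide IP and the node it runs on
$ kubectl run nettest --image=busybox --restart=Never --rm -it -- ping -c 3 10.244.1.5    # reach another Pod directly by IP, no NAT involved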

 

 

Implementing the Kubernetes Network Model

 

  • The network model is implemented by the container runtime on each node.
  • Common container runtimes use Container Network Interface (CNI) plugins to manage network and security capabilities.
  • Various CNI plugins exist from different vendors.
  • Popular CNI add-ons supported by Kubernetes include “Flannel,” “Calico,” and “Weave Net.”
  • For this deployment, we’ll use the “Flannel” CNI (a quick way to see which CNI is active follows this list).
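
As a point of reference, once a CNI add-on is installed, each node carries its configuration in the conventional CNI directories; listing them is a quick way to confirm which plugin is active:

$ ls /etc/cni/net.d/    # CNI configuration files written by the installed add-on
$ ls /opt/cni/bin/      # CNI plugin binaries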

 

 

 

PART 5

Setting Up Kubernetes Cluster Networking

 

 

 

Step 1: Creating a Kubernetes Cluster with kubeadm

 

Important Points:

  • When logging into the EC2 instances via SSH or PuTTY, we use their public IPs.
  • Within the Kubernetes cluster, however, we use ONLY the private IPs of the EC2 instances. These private IPs are not reachable from the outside world; they are used for internal communication within the AWS VPC. (The snippet after this list shows how to look up an instance’s private IP.)
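
If you’re unsure of an instance’s private IP, you can look it up from inside the instance itself. 169.254.169.254 is the standard EC2 instance metadata endpoint (on instances that enforce IMDSv2, a session token must be requested first):

$ curl http://169.254.169.254/latest/meta-data/local-ipv4    # private IP from EC2 instance metadata
$ hostname -I | awk '{print $1}'                             # or read it from the primary network interface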

 

Assumptions:

  • Master Node Private IP: 172.31.21.21 (The private IP of your EC2 instance may differ.)
  • Worker Node Private IP: 172.31.21.51 (The private IP of your EC2 instance may differ.)
  • We’ll use the “kubeadm” and “kubectl” utilities for creating, administering, and operating the Kubernetes cluster.

 

Procedure:

1. Login to the Master Node:

  • Connect to the master node via SSH or PuTTY.

 

2. Initializing the Cluster:

  • Execute the following command only on the master node as a superuser:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.31.21.21
  • Explanation:
    • --pod-network-cidr=10.244.0.0/16: Sets the IP range used by the Flannel CNI (Container Network Interface); every Pod deployed in the cluster gets an IP from this range. 10.244.0.0/16 is Flannel’s default, so using it avoids extra Flannel configuration.
    • --apiserver-advertise-address=172.31.21.21: The address the API server advertises to other members of the cluster. Set it to the master node’s private IP, since that is where the control-plane components (apiserver, etcd, etc.) run.

 

3. Save the output:

Once the command completes successfully, note the output:

kubeadm join 172.31.21.21:6443 --token 13m47y.mbovw7ixuz5erz28 --discovery-token-ca-cert-hash sha256:457e8898f9141f0e1e2eafcb19440391d30974e6db3958a2965413ecf013a914

Save this output; you’ll need it for joining worker nodes.
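
Before moving on, a quick sanity check (not part of the official procedure) confirms the control plane actually came up:

$ sudo ss -lntp | grep 6443                    # the kube-apiserver should now be listening on its default port
$ sudo systemctl status kubelet --no-pager     # the kubelet should be active (running)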

 

 

 

Step 2: Managing the Kubernetes Cluster as a Regular User

 

Now that the Kubernetes cluster is created, follow these steps on the master node using the Kubernetes user “jenkins”:

1. Create the necessary directory:

$ mkdir -p $HOME/.kube

2. Copy the Kubernetes configuration file to the appropriate location:

$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

3. Set ownership of the configuration file:

$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

You’re all set! You can now manage your Kubernetes cluster using the configured credentials. 
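
As a quick check that the copied credentials work:

$ kubectl cluster-info    # should print the control-plane endpoint, https://172.31.21.21:6443
$ kubectl get nodes       # the master appears here, likely NotReady until the Pod network is installed in Step 3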

 

 

 
 

Step 3: Setting Up the Pod Network (Flannel) on the Master Node

 

A Pod network enables Pods on different nodes to communicate with one another. Several Kubernetes networking add-ons are available; as noted earlier, we’ll use Flannel.

IMPORTANT:

Do not run this as the root user or with sudo.

If you see an error mentioning localhost:8080, kubectl cannot find its configuration; make sure you completed Step 2 as the same (non-root) user before retrying.

Run this only on the master node.

1. Install Flannel:

  • Use the following command to install the Flannel pod network add-on:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • If successful, you should see the following output:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
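
Note that the Flannel project has since moved to the flannel-io GitHub organization, so if the coreos URL above stops resolving, fetch kube-flannel.yml from the flannel-io/flannel repository instead (newer manifests deploy into a dedicated kube-flannel namespace rather than kube-system). To watch the Flannel DaemonSet come up — the app=flannel label matches the manifest version shown here:

$ kubectl get pods -n kube-system -l app=flannel    # one kube-flannel-ds Pod per node should reach Running
$ kubectl get nodes                                 # the master should flip to Ready once Flannel is up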

 

 

Step 4: Checking Kubernetes Cluster Status

 

To verify the status of your Kubernetes cluster, follow these steps on the master node:

1. Run the following command to get an overview of the nodes in your cluster:

$ kubectl get nodes

2. Confirm that all the nodes you expect to see are present and that they are all in the “Ready” state.

3. Additionally, check that the CoreDNS pod is running by typing:

$ kubectl get pods --all-namespaces

This will help ensure that your cluster is up and running correctly. 
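
To narrow the check to DNS specifically: the CoreDNS Pods carry the k8s-app=kube-dns label (kept for compatibility with the older kube-dns component):

$ kubectl get pods -n kube-system -l k8s-app=kube-dns    # the CoreDNS replicas should be Running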

 

 

 

Step 5: Joining a Worker Node to the Cluster

 

1. IMPORTANT:

  • The following commands are to be run on the WORKER NODE.
  • As previously indicated, we’ll use the “kubeadm join” command on each worker node to connect it to the cluster. Execute the command ONLY ON THE WORKER NODES.

2. Run the following command on the worker node:

sudo kubeadm join 172.31.21.21:6443 --token 13m47y.mbovw7ixuz5erz28 --discovery-token-ca-cert-hash sha256:457e8898f9141f0e1e2eafcb19440391d30974e6db3958a2965413ecf013a914

 

If everything is in order, the pre-flight checks will pass and the worker node will join the cluster, registering with the control plane on the master.
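
The worker node has no kubeconfig by default, so cluster-level verification happens on the master (Step 6). Locally, though, you can confirm the join took effect using the standard systemd and kubeadm paths:

$ sudo systemctl status kubelet --no-pager    # should be active (running) after a successful join
$ ls /etc/kubernetes/kubelet.conf             # written by kubeadm join when the node registers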

 

 

 

Step 6: Monitoring the Kubernetes Cluster (Master Node)

 

To monitor your Kubernetes cluster, follow these steps on the master node:

1. Check if the nodes have been successfully added to the cluster:

$ kubectl get nodes

2. You should see an output similar to the following:

NAME           STATUS   ROLES                  AGE     VERSION
master-node    Ready    control-plane,master   130m    v1.20.2
worker-node1   Ready    <none>                 2m4s    v1.20.2

 

3. Important Note: The tokens used for joining the cluster are valid for 24 hours only. If a new worker node needs to join after 24 hours, a new token must be generated on the master node (see Step 7, or pre-create a longer-lived token as shown below).
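
If you already know more workers are coming later, you can mint a longer-lived token up front; --ttl is a standard kubeadm flag (a TTL of 0 never expires, which is convenient but less secure):

$ kubeadm token create --ttl 48h --print-join-command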

 
 

THIS COMPLETES THE KUBERNETES CLUSTER CONFIGURATION & DEPLOYMENT.

 

 

 

Congratulations!

Your Kubernetes cluster is up and running. 

 

 

Next, we’ll look at additional cluster administration: adding new worker nodes after the initial join token has expired.

 

 

 

Step 7: Adding New Worker Nodes after 24 hours 

 

Important: Tokens created during the initial Kubernetes cluster setup are valid for 24 hours only.

 

On the Kube-Master Node:

 

1. List the existing tokens (tokens past their TTL no longer appear):

$ kubeadm token list

2. Create a new token and print the complete join command (the discovery hash can also be recomputed separately; see the recipe after this list):

$ kubeadm token create --print-join-command

3. List the tokens again to confirm the new one is active:

$ kubeadm token list
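
If you ever need the discovery hash by itself rather than the full join command, it can be recomputed from the cluster CA certificate on the master; this is the standard recipe from the kubeadm documentation:

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'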

 

On the Worker Node:

 

1. Reset any prior configuration to ensure a clean join:

$ sudo kubeadm reset

2. Run the following command on the new worker node to join the cluster:

$ sudo kubeadm join 172.31.21.21:6443 --token 8dho7i.g9dc5orpy0tbufrv --discovery-token-ca-cert-hash sha256:580208ab04079054900b47737254c519e03051c1d31c99ee807c21fd3994c653

 

 

Back on the Kube-Master Node:

 

1. List all active nodes in the cluster:

$ kubectl get nodes

2. You should see an output similar to the following:

NAME           STATUS   ROLES                  AGE     VERSION
master-node    Ready    control-plane,master   15d     v1.20.2
worker-node1   Ready    <none>                 14d     v1.20.2
worker-node2   Ready    <none>                 14d     v1.20.2
worker-node3   Ready    <none>                 9m53s   v1.20.4

 

 

Congratulations!

Your Kubernetes cluster is now ready for deploying applications.