SETTING UP A KUBERNETES CLUSTER ON AWS

KUBERNETES FUNDAMENTALS

DEPLOYING A KUBERNETES CLUSTER ON AWS

A COMPREHENSIVE GUIDE

Introduction

 

This article is part of a series on Kubernetes cluster setup. For context, please refer to the previous instalments: [Part 1], [Part 2], [Part 3], and [Part 4].

 

 

Kubernetes Cluster Networking

 

Networking is a critical aspect of Kubernetes, although it can be complex to grasp. Kubernetes addresses four distinct networking problems:

1. Highly Coupled Container-to-Container Communications:

  • This is addressed by using Pods and enabling localhost communications.

 

2. Pod-to-Pod Communications:

  • Our primary concern in this article.

 

3. Pod-to-Service Communications:

  • Covered by Kubernetes Services.

 

4. External-to-Service Communications:

  • Also handled by Kubernetes Services.

 

 

The Kubernetes Network Model

 

  • Each Pod in the cluster has a unique cluster-wide IP address.
  • Explicitly creating links between Pods is unnecessary.
  • Container ports do not need to be mapped to host ports.
  • Pods can be treated like VMs or physical hosts for port allocation, naming, service discovery, load balancing, configuration, and migration.

 

 

Fundamental Networking Requirements

 

Kubernetes imposes the following requirements on any networking implementation (unless intentional network segmentation policies apply):

1. Pods Can Communicate Across Nodes Without NAT:

  • Pods on any node can communicate directly with other Pods.

 

2. Agents on a Node Can Communicate with All Pods on That Node:

  • System daemons (e.g., kubelet) can reach all Pods on the same node.

 

 

Implementing the Kubernetes Network Model

 

  • The container runtime on each node handles the network model.
  • Common container runtimes use Container Network Interface (CNI) plugins for network and security management.
  • Various CNI plugins exist (e.g., “Flannel,” “Calico,” “Weave Net”).
  • For our deployment, we’ll use the “Flannel” CNI.
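As an optional sanity check later in this guide, once the cluster has been initialized you can confirm that a CNI configuration is present on a node and see which pod CIDR was assigned to it. The exact file names under /etc/cni/net.d depend on the CNI plugin and its version, and the node name below is the example master used later in this article:

$ ls /etc/cni/net.d/
$ kubectl describe node ip-172-31-23-213 | grep -i podcidr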

 

 

 

Part 5

Creating a Kubernetes Cluster

 

 

 

 

Step 1: Create a Kubernetes Cluster with kubeadm

 

Important Notes:

  • When logging into the EC2 instance using SSH or PuTTY, use the public IP of the AWS EC2 instance.
  • However, within the Kubernetes cluster, use ONLY the private IPs of the EC2 instances. These private IPs are reachable only within the AWS VPC and are used for internal communication between the cluster nodes.
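For example, a typical login to the Master node from your workstation might look like the following (the key file name is a placeholder, and the default username depends on your AMI, e.g. ubuntu or ec2-user):

$ ssh -i my-key.pem ubuntu@<EC2-public-IP>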

 

Assumptions:

  • Master Node Private IP: 172.31.23.213 (Your EC2 instance’s private IP may differ)
  • Worker Node Private IP: 172.31.28.251 (Your EC2 instance’s private IP may differ)
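If you are unsure of an instance's private IP, you can read it from the instance metadata service while logged into that instance (note that if the instance enforces IMDSv2, you must first request a session token):

$ curl -s http://169.254.169.254/latest/meta-data/local-ipv4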

 

Procedure:

1. Login to the Master Node:

  • Connect to the Master node server via SSH or PuTTY.

 

2. Initialize the Cluster:

  • Run the following command ONLY on the Master node:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.31.23.213
  • Explanation:
    • --pod-network-cidr=10.244.0.0/16: Specifies the IP range used by the Flannel CNI for pod IPs.
    • --apiserver-advertise-address=172.31.23.213: Advertises the API server on the Master node’s private IP so that joining nodes and other cluster components reach it over the VPC network.

 

3. Save the Output:

  • Upon successful completion, you’ll receive output similar to the following:
kubeadm join 172.31.23.213:6443 --token c1hwhp.u07hh6yurd4t2xb4 --discovery-token-ca-cert-hash sha256:152ba5c43eb52cfaae044fd409747e7eafdf121b47de9b8f89101d60c20785a3
  • Save this output; it will be used for joining the Worker nodes.

 

 

 

Step 2: Manage the Cluster as a Regular User

 

Here are the instructions for managing the Kubernetes cluster as a regular user (“jenkins”):

1. Create the Required Directory:

  • Run the following command on the Master node to create the necessary directory:
$ mkdir -p $HOME/.kube

 

2. Copy the Configuration File:

  • Copy the Kubernetes configuration file to the appropriate location:
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

 

3. Adjust Ownership:

  • Ensure the ownership of the configuration file is correct:
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
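As a quick sanity check, confirm that kubectl now works for the regular user. At this stage the Master node typically reports a “NotReady” status, because the pod network add-on has not been installed yet (that happens in Step 3):

$ kubectl get nodes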

 

 

 

Step 3: Set up the Pod Network

Let’s set up the Pod Network using the Flannel add-on for your Kubernetes cluster. Follow these steps on the Master node:

1. Install Flannel:

  • Run the following command (DO NOT use root or sudo):
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • Ensure that the Security Group allows the traffic Flannel needs between the nodes (with the default VXLAN backend, UDP port 8472).

 

2. Expected Output:

  • If successful, you’ll see output similar to this:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
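You can confirm that the Flannel DaemonSet pods have started. Depending on the manifest version they run in either the kube-system or the kube-flannel namespace, so the simplest check is across all namespaces:

$ kubectl get pods --all-namespaces | grep flannel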

 

 

Step 4: Verify the status of the cluster

 

Let’s verify the status of the Kubernetes cluster:

1. Node Status:

  • Run the following command on the master server:
$ kubectl get nodes
  • Ensure that all the nodes you expect to see are present and that they are all in the “Ready” state.

 

2. Pod Network Verification:

  • Confirm that the pod network is functioning:
$ kubectl get pods --all-namespaces
  • Check that the CoreDNS pods are in the “Running” state (they remain Pending until a pod network add-on is installed).
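To look at CoreDNS specifically, you can filter by its label (the CoreDNS pods carry the k8s-app=kube-dns label for compatibility reasons):

$ kubectl get pods -n kube-system -l k8s-app=kube-dns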

 

 

 

Step 5: Join the worker node to the cluster

 

Let’s proceed with joining the Worker node to the cluster. Follow these steps on the Worker node:

1. Important Note:

  • The following commands are specific to the WORKER NODE.
  • We’ll use the “kubeadm join” command to connect the Worker node to the cluster.

 

2. Join Command:

  • Execute the following command (on the Worker node ONLY):
$ sudo kubeadm join 172.31.23.213:6443 --token c1hwhp.u07hh6yurd4t2xb4 --discovery-token-ca-cert-hash sha256:152ba5c43eb52cfaae044fd409747e7eafdf121b47de9b8f89101d60c20785a3
  • This command performs pre-flight checks and joins the Worker node to the control plane (Master node).
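Before running the join command, it can help to verify that the Worker node can reach the API server port on the Master’s private IP (netcat may need to be installed first, depending on your AMI):

$ nc -zv 172.31.23.213 6443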
 

 

 

Step 6: Monitor the cluster

 

Let’s verify the status of your Kubernetes cluster:

1. Node Status:

  • Run the following command on the master server:
$ kubectl get nodes
  • Ensure that all the nodes you expect to see are present and that they are all in the “Ready” state.
NAME               STATUS   ROLES           AGE   VERSION
ip-172-31-23-213   Ready    control-plane   74m   v1.24.3
ip-172-31-28-251   Ready    <none>          53s   v1.24.3

 

2. Completion Message:

  • You’ve successfully completed the Kubernetes cluster configuration and deployment!

 

3. Important Note:

  • The tokens used for joining the Kubernetes cluster are valid for 24 hours only.
  • If a new worker node needs to join the cluster after this period, new tokens must be generated on the kube-master node.
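In the node listing above, the Worker node’s ROLES column shows <none>. This is purely cosmetic, but if you prefer a descriptive role you can add the conventional label yourself (the node name below is from our example):

$ kubectl label node ip-172-31-28-251 node-role.kubernetes.io/worker=worker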

 

 
 

CONGRATULATIONS!

 

Your Kubernetes cluster is UP and RUNNING. 

 

 

 

 

Additional cluster administration.

 

 

 

Step 7: Adding New Worker Nodes After 24 Hours

 

Important Note:

  • Tokens generated during the initial Kubernetes cluster setup are valid for 24 hours.

 

 

On the Kube-Master Node:

 

1. View Existing Tokens:

  • Check the existing tokens:
$ sudo kubeadm token list

2. Generate New Token:

  • Create a new token for joining the cluster:
$ sudo kubeadm token create --print-join-command

3. List Active Tokens:

  • Verify the active tokens:
$ sudo kubeadm token list
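Tokens created this way also expire after 24 hours by default. If you need a longer-lived token, kubeadm token create accepts a --ttl flag (for example 48h; a TTL of 0 creates a token that never expires, which is convenient for a lab but not recommended otherwise):

$ sudo kubeadm token create --ttl 48h --print-join-command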

 

 

On the Worker Node:

 

1. Reset Configuration:

  • Run the following command on the NEW worker node to ensure any prior configuration is removed:
$ sudo kubeadm reset

2. Join the Cluster:

  • Execute the join command provided by the Kube-Master node (replace with your specific token):
$ sudo kubeadm join 172.31.23.213:6443 --token 8dho7i.g9dc5orpy0tbufrv --discovery-token-ca-cert-hash sha256:580208ab04079054900b47737254c519e03051c1d31c99ee807c21fd3994c653

 

Back on the Kube-Master Node:

 

1. Verify Nodes:

  • List all active nodes in the cluster:
$ kubectl get nodes
  • You should see an output similar to this:
NAME               STATUS   ROLES           AGE   VERSION
ip-172-31-23-213   Ready    control-plane   74m   v1.24.3
ip-172-31-28-251   Ready    <none>          53s   v1.24.3

 

2. Deployment Ready:

  • With the new worker node successfully added, the cluster is ready for workloads.

 

 

Now you can start deploying applications on your cluster.
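As a simple smoke test (the image and port below are just an illustration), you can create a Deployment, expose it through a NodePort Service, and confirm that its pods are scheduled onto the Worker node:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods -o wide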