Kubernetes Fundamentals

DEPLOYING A SINGLE-NODE KUBERNETES CLUSTER ON AWS

A COMPREHENSIVE GUIDE

Introduction

 

In the second part of our 5-part blog series, we installed the Ubuntu operating system and created an AMI. In this section, we will install Docker and all the other OS components and tools required for the Kubernetes cluster. Before we dive in, however, note that we’ll need an EC2 instance with more vCPU and RAM capacity than the one used so far.

 

The minimum requirements for Docker and Kubernetes are 2 vCPUs and 2 GB RAM. To achieve this, we’ll utilize the previously created AMI named “ubuntu-base-image” and launch an instance using the “t3a.small” instance type. While other instance types with more resources are an option, opting for “t3a.small” will help keep costs minimal during the initial configuration.
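If you prefer the AWS CLI over the console for this step, the launch can be scripted roughly as follows. This is only a minimal sketch: the AMI ID, key pair, security group, subnet, and Name tag are placeholders for values from your own account, with the AMI ID being that of the “ubuntu-base-image” AMI.

$ aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3a.small \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-single-node}]'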

Part 3

Step 1: Install and Configure Ubuntu Packages and Utilities

 

1. After launching a new instance from the “ubuntu-base-image” AMI with the “t3a.small” instance type, log in as the “jenkins” user.

2. To ensure the Kubernetes cluster functions properly, disable swap memory at the operating system level (the kubelet does not run with swap enabled by default):

$ sudo swapoff -a

If there’s an entry for “swap” in the /etc/fstab file, comment it out with the following sed command so that swap stays disabled across reboots:

$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
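To confirm that swap is really off, you can run a quick check: swapon should print nothing, and free should report 0B of swap.

$ sudo swapon --show
$ free -h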

3. Update system packages and repository index:

$ sudo apt update

4. Install AWS CLI:

$ sudo apt install awscli -y

5. Configure “awscli” with your AWS account details:

$ aws configure
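The command walks you through four prompts. A typical session looks roughly like the following; the access keys are placeholders, and the region and output format are only examples, so substitute your own credentials and preferred defaults.

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json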

6. Install essential Linux utilities:

$ sudo apt install unzip net-tools jq -y

7. Install the AWS SSM Agent (useful later for managing multi-master Kubernetes clusters):

$ sudo snap install amazon-ssm-agent --classic
$ sudo systemctl enable snap.amazon-ssm-agent.amazon-ssm-agent.service
$ sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service
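To verify that the agent came up correctly, check its service status; it should be reported as active (running):

$ sudo systemctl status snap.amazon-ssm-agent.amazon-ssm-agent.service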

Step 2: Install and Configure Docker Engine and Components

 

1. Install Docker on the EC2 instance. Since Docker is required on both Master and Worker nodes, we’ll install it on the base image, which will serve both roles.

2. Log in as the “jenkins” user and install the packages required to add Docker’s apt repository:

$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg lsb-release

3. Add Docker’s official GPG key:

$ sudo mkdir -m 0755 -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

4. Set up the Docker repository:

$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

5. Update the package index:

$ sudo apt-get update

6. Install Docker:

$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

7. Verify the installation:

$ sudo docker version

8. Add your user to the “docker” group:

$ sudo usermod -aG docker jenkins

9. If you encounter a “permission denied” error when connecting to the Docker daemon socket, log out and log back in as the “jenkins” user. Even though the user has been added to the “docker” group, the new group membership only takes effect in a new login session.
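Once logged back in, a quick sanity check confirms the group membership has taken effect: “id -nG” should list “docker” among your groups, and “docker ps” should run without sudo and without a permission error.

$ id -nG
$ docker ps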

10. Configure Docker to start on boot:

$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service

11. Confirm the installed Docker versions:

$ docker version
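Optionally, you can also confirm that the Compose plugin is available and that the engine can actually run a container (hello-world is a small public test image):

$ docker compose version
$ docker run hello-world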

This completes the Docker installation on the Ubuntu base operating system. You can now create an AMI of this system before proceeding with the Kubernetes components; the resulting Docker-enabled image can also be reused for other Docker workloads.
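As with the base image in the previous part, the AMI can be created from the console or with the AWS CLI; a rough sketch of the CLI approach is shown below. The instance ID is a placeholder and “ubuntu-docker-base-image” is just an example name; the --no-reboot flag simply avoids restarting the instance while the image is taken.

$ aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "ubuntu-docker-base-image" \
    --description "Ubuntu base image with Docker Engine installed" \
    --no-reboot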

Part 4