Introduction


In our previous tutorials on deploying WordPress or Jenkins applications on a Kubernetes cluster, we pulled the WordPress and Jenkins container images from Docker Hub and configured the deployments and services so that the underlying containers could communicate with each other and give us the desired output.

In this 5-part tutorial, we will create our own docker image to perform a specific activity on the containers deployed on our Kubernetes cluster.

In Part 1, we will set up the CentOS-7 system and install the necessary components and tools to create the docker image.

In Part 2, we will create the image and push it to AWS ECR and Docker Hub.

In Part 3, we will deploy the remote-host pod on the Kubernetes cluster and take a manual backup to an AWS S3 bucket to ensure that the end-to-end system works fine.

In Part 4, we will integrate the remote-host pod deployed on our Kubernetes cluster with our Jenkins pod and configure the end-to-end system.

In Part 5, we will automate the entire backup and test it end-to-end. 


Part 1


To create our own customized docker images, we need to build a separate docker creator system.

We can do this on the same system where our Kubernetes cluster is deployed. However, it is not advisable, as we don’t want to perform any R&D activities on the system that runs the Kubernetes cluster. Moreover, the docker creator system will ONLY be used and powered on when we need to create a new docker image or modify an existing one. Once we complete the configuration, we can take an AMI of the system and shut it down / power it off.

Once we create our customized image, we will push it to 2 locations: AWS ECR (Elastic Container Registry) and Docker Hub. Storing container images in cloud registries keeps them available to any system that needs to pull them.

While storing the container images, it is important that you understand the basic concepts of a “registry” and a “repository”.

Docker Registry

A Docker Registry is a service that stores your docker images. It can be hosted by a third party and can even be private if you need it to be. A couple of examples are:

Amazon Elastic Container Registry

Docker Hub

Azure Container Registry


Docker Repository

 

A Docker Repository is a collection of related images with the same name but different tags. Tags are alphanumeric identifiers attached to images within a repository (e.g., 1.1 or latest).
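For example, the same repository can hold several tagged images, and the registry appears as a prefix in the full image reference. The commands below are purely illustrative; the ECR account ID and region are placeholders, not values from this tutorial:

    • $ docker pull mysql:5.7      ### repository "mysql" on Docker Hub, tag "5.7"

    • $ docker pull mysql:latest   ### same repository, different tag

    • $ docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/remote-host:1.1   ### repository "remote-host" in a private ECR registry, tag "1.1"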

In addition to understanding the difference between a registry and a repository, you should also know what “docker-compose” and a “Dockerfile” are.

 

Docker Compose

 

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services.
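For example, a minimal docker-compose.yml (purely illustrative; we will not use this file anywhere in this tutorial) could look like this:

version: "3"
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"     # expose WordPress on port 8080 of the host
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # example value only

Running “docker-compose up -d” in the directory containing this file would start both containers with a single command.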

We will not be using docker-compose to bring up containers, but it is always a good practice to have this tool installed on your “docker-creator” system.

 

Dockerfile

 

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

We will be using the Dockerfile to build our customized images.  

 

To create our customized image, we will use the AMI (Amazon Machine Image) we created in this blog (Centos-7 Part3). This AMI contains the necessary SSH keys, a configured CentOS-7 operating system with an authorized user to perform all the actions, and the installed docker components. You can use either an Ubuntu or a CentOS-7 operating system as your docker creator system. The Ubuntu system with the necessary configuration is described here (Ubuntu Part3).

 

For our setup, we will use the CentOS-7 AMI for the docker creator system.

 

An EC2 instance of type “t3a.small” has sufficient hardware to create a docker image. You can use an instance type of “t3a.medium” (2 vCPU and 4 GB RAM) or higher if required. However, “t3a.small” instances are cheaper and are sufficient to create new docker images and push them to cloud registries. Once we complete the docker creation, we will take an AMI of the docker creator instance and re-use it for creating customized images as and when required.

Important: For the end-to-end system to work, you need to ensure that in addition to creating and deploying the “remote-host” pod, WordPress and Jenkins pods are deployed on the Kubernetes cluster. 
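If you followed the earlier tutorials, a quick way to confirm this is to run the commands below on the Kubernetes cluster (the exact pod names will depend on your deployments):

    • $ kubectl get deployments

    • $ kubectl get pods   ### the WordPress and Jenkins pods should be in the "Running" state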


Step 1: Launch the EC2 instance

 

Using the CentOS-7 AMI, launch an EC2 instance of type “t3a.small”. This instance type has the following hardware specifications:

2 vCPU

2 GB RAM

If you have not created or saved an AMI, you can follow the steps mentioned in these blogs, where we created EC2 instances on AWS based on CentOS-7 (blog 1, blog 2, blog 3) or Ubuntu (blog 1, blog 2, blog 3), and then proceed with the steps below.
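If you prefer the AWS CLI to the console, an instance can also be launched along the following lines. This is only a sketch: the AMI ID, key pair name, security group ID and subnet ID are placeholders for your own values.

    • $ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3a.small --key-name my-key-pair --security-group-ids sg-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0 --count 1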

Once the system boots up, log in as the user “jenkins”. This is the user we will use for all the configuration. If you used the AMI you saved earlier or created a new EC2 instance from the blogs mentioned above, this user will already exist on the system.
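For example, from your workstation (the key file name and public IP below are placeholders; replace them with your own values):

    • $ ssh -i <your-private-key.pem> jenkins@<EC2-public-IP>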


Step 2: Install docker-compose


Install docker-compose on the system.

In the AMIs that we saved and the blogs mentioned above, we did not install docker-compose, so we will install it on the docker-creator system now.
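One common way to install it is to download the binary from the docker/compose releases on GitHub (the version 1.29.2 shown here is only an example; use whichever release you prefer):

    • # sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

    • # sudo chmod +x /usr/local/bin/docker-compose

    • # docker-compose --version   ### verify the installation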

This will install docker-compose on the system.

In addition to the above components, we will also install some additional Linux utilities. 

    • # sudo yum install -y unzip

    • # sudo yum install -y bind-utils

Ensure that “aws-cli” is already installed on the system. If it is not installed, install it using the below commands:
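One way to do this is with the AWS CLI v2 bundled installer (this is also why we installed “unzip” above); the download URL below is for x86_64 Linux:

    • # curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

    • # unzip awscliv2.zip

    • # sudo ./aws/install

    • # aws --version   ### verify the installation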

This will install the aws-cli (AWS Command Line Interface) on the “docker-creator” system.
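If this instance does not already have AWS credentials (for example, through an IAM instance role), you will also want to configure them before pushing to ECR in Part 2. The values you enter are your own access key, secret key, default region and output format:

    • $ aws configure

    • $ aws sts get-caller-identity   ### quick check that the credentials are picked up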


Step 3: Create the Dockerfile and keys.

Important points to keep in mind:
  1. This remote host is a docker image that will be used to take MySQL database backups of the WordPress and/or other deployed containers.
  2. Once this image is created, it will be pushed to the ECR registry and/or Docker Hub. 
  3. We can pull it from ECR or Docker Hub. (We define this pull mechanism in the Kubernetes deployment file)
  4. While creating the MySQL backup container, we will name it “remote-host” for the sake of convenience. (You can give it any name)
  5. We will create a new user “remote_user” in the “remote-host” container and that user will perform all the actions from inside the container.

Log in as the “jenkins” user in the CentOS-7 system. 

    • $ cd ~

    • $ mkdir centos7

    • $ cd centos7

    • $ ssh-keygen -f remote-key    ### This will create 2 key files: remote-key (private) and remote-key.pub (public)

    • $ vi Dockerfile    ### the contents of the Dockerfile are shown below

 
Dockerfile
FROM centos:7
RUN yum -y install openssh-server
RUN useradd remote_user && \
   echo "1234" | passwd remote_user  --stdin && \
   mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh

COPY remote-key.pub /home/remote_user/.ssh/authorized_keys

RUN chown remote_user:remote_user   -R /home/remote_user/ && \
    chmod 400 /home/remote_user/.ssh/authorized_keys

RUN /usr/sbin/sshd-keygen -A

RUN yum -y install mysql

RUN curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && \
    python get-pip.py && \
    pip install awscli --upgrade

CMD /usr/sbin/sshd -D

 
 
IMPORTANT: Understanding the contents of the Dockerfile

As mentioned above, a “Dockerfile” is a set of text instructions/commands that tell Docker how to build the container image. Below is a brief description of the commands in our Dockerfile.

FROM centos:7   –> Here we define the base OS image from which the docker image is built.

RUN yum -y install openssh-server   –> We install the OpenSSH server in the container.

RUN useradd remote_user && \

    echo "1234" | passwd remote_user --stdin && \

    mkdir /home/remote_user/.ssh && \

    chmod 700 /home/remote_user/.ssh

–> We create a new user “remote_user”, set its password to “1234”, create a “.ssh” directory in the home directory of “remote_user”, and restrict the permissions of the “.ssh” directory.

COPY remote-key.pub /home/remote_user/.ssh/authorized_keys

–> Copy the public key created by the ssh-keygen command into the user’s authorized_keys file.

RUN chown remote_user:remote_user   -R /home/remote_user/ && \

    chmod 400 /home/remote_user/.ssh/authorized_keys

–> Assign ownership of the home directory to “remote_user” and set restrictive permissions on the authorized_keys file.

RUN /usr/sbin/sshd-keygen -A

–> Generate the SSH server host keys.

RUN yum -y install mysql

–> Install MySQL client in the “remote-host” container.

RUN curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && \

    python get-pip.py && \

    pip install awscli --upgrade

–> Install Python pip and use it to install the awscli inside the container.

CMD /usr/sbin/sshd -D

–> Command to start the SSH server inside our “remote-host” container.

Note: All the above commands/processes apply only to the “remote-host” container; they will not impact or affect any other pod in the Kubernetes cluster.

 
This completes the set-up of the CentOS-7 system, the installation of the required components, and the configuration of the Dockerfile.


In the next section, we will create our Docker image, using the Dockerfile. 

 

Part 2 –> Creating the docker image.