Part 3

In Part 3 of this 5-part tutorial, we will deploy the customized “remote-host” container that we created in Part 2. We will deploy the “remote-host” pod on a Kubernetes cluster.

On this Kubernetes cluster, you should already have WordPress and MySQL pods deployed. The method of deploying WordPress and MySQL is covered in earlier tutorials (WordPress and Jenkins). You can have any other deployment instead of WordPress, but it should have a MySQL pod running in the Kubernetes cluster. In Part 2 of this tutorial, we configured the “remote-host” pod with the MySQL client, so this version of the “remote-host” pod is set up to take backups of a MySQL database. If you want to take backups of a different database engine, install the appropriate client in the “remote-host” container, as described in Part 2.

In this tutorial, we will introduce a new Kubernetes concept: the “Namespace”.

Whenever we deploy an application on a Kubernetes cluster, it gets deployed in the “default” namespace. Our WordPress deployments in the previous tutorials (Part 1 and Part 2) were in the “default” namespace. When we install a Kubernetes cluster, Kubernetes creates the below namespaces by default:

NAME              STATUS   AGE
default           Active   1d
kube-node-lease   Active   1d
kube-public       Active   1d
kube-system       Active   1d

When we create a deployment without specifying a namespace, Kubernetes deploys the pods in the “default” namespace. The other 3 namespaces, “kube-node-lease”, “kube-public”, and “kube-system”, are used by Kubernetes to deploy Kubernetes-specific pods required for the maintenance of the cluster.

You can create a new namespace and deploy your applications on that namespace. If you don’t specify a namespace, then the pods get deployed in the “default” namespace.

For deploying the “remote-host” pods, we will create a namespace “remote-host” and deploy the remote host pod in that namespace. (You can give any name to your namespace, just ensure that you edit the Kubernetes “yaml” files accordingly.)

 

 

Step 1: Configuring and deploying the remote-host pod

In this deployment, we will continue to use AWS EFS. We will mount one EFS file system on the Master node for storing the “remote-host” config files, and one EFS file system on the Worker node for storing the remote-host data and backup files.

 

On the Master node.

 

Login to master node as user “jenkins”.

  1. Configure the remote-host yaml files. The “yaml” files can be cloned from the github location here.
  2. Ensure that the correct path is set for the hostPath in the “remote-host-pv.yaml” file.
  3. The remote-host pod will run in its own namespace, “remote-host”. The same pod can be used to take backups of multiple databases running in different namespaces.
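For orientation, below is a minimal sketch of what the hostPath section of “remote-host-pv.yaml” might look like. The capacity and access mode here are assumptions; use the values from your cloned files, and make sure the path matches the data-files directory you create on the Worker node in Step 2.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: remote-host-pv        # assumed name; match your cloned file
spec:
  capacity:
    storage: 1Gi              # assumed size
  accessModes:
    - ReadWriteOnce
  hostPath:
    # Path on the Worker node where the EFS for data files is mounted
    path: /home/jenkins/remote-host-data-files
```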
 
IMPORTANT: To ensure that the remote host always runs on a static cluster IP, make sure that the service of the remote-host is configured as ClusterIP. Any free IP in the cluster’s Service CIDR (“10.96.0.0/12” by default with kubeadm) can be provided. Below is a sample service “yaml” file.

remote-host service “yaml” file

apiVersion: v1
kind: Service
metadata:
  name: remote-host
  namespace: remote-host
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      name: remote-host
      port: 22
      targetPort: 22
  selector:
    app: remote-host
  clusterIP: 10.97.129.185

In the above “yaml” file, we have hard-coded the ClusterIP of the “remote-host”. This is a free IP in the Kubernetes cluster that falls within the Service CIDR (“10.96.0.0/12” by default).
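If you are unsure whether the IP you picked falls inside your cluster’s Service CIDR, a quick check can be done with Python’s standard ipaddress module. The CIDR below is the kubeadm default and is an assumption; verify the range configured on your own cluster.

```python
import ipaddress

# Candidate static ClusterIP from the service yaml above
candidate = ipaddress.ip_address("10.97.129.185")

# kubeadm's default Service CIDR; an assumption - verify your cluster's range
service_cidr = ipaddress.ip_network("10.96.0.0/12")

print(candidate in service_cidr)  # True -> usable (as long as it is not already taken)
```

Note that being inside the CIDR is necessary but not sufficient: the IP must also not already be assigned to another service.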

 

If you want to use the same EFS mount point for all the Kubernetes configuration files, then you can skip the below mounting step. We have used the same mount point as for WordPress and Jenkins to store our “remote-host” config files; we are ONLY using a different directory.

    • $ cd ~

    • $ sudo yum install -y git

    • $ mkdir simple-remote-host

However, if you want to create a separate mount point for each set of configuration files, then perform the below step.

 

Mount the EFS created for the config files on this directory. The command to mount the directory is mentioned in the AWS EFS console and has been explained in the EFS blog.

    • $ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-011a40c8f24e5b94d.efs.ap-south-1.amazonaws.com:/ /home/jenkins/simple-remote-host

“/home/jenkins/simple-remote-host” is the complete path of the directory we have created for the “remote-host” config files.

    • $ df -k                    (should show the simple-remote-host directory mounted on the EFS)

    • $ cd simple-remote-host

 

Download the “remote-host” yaml files from the github location.

This will download the required remote-host “yaml” files for deploying on the Kubernetes cluster into the current “simple-remote-host” directory.

Now let’s set up the Worker node for the “remote-host” data files.

 

Step 2: On the Worker node

 

 

Login to the Worker Node as user “jenkins”.

    • $ cd ~

    • $ mkdir remote-host-data-files           (you can give any directory name you like)

    • $ cd remote-host-data-files

    • $ mkdir tmp     (We will use the “tmp” directory in the later part of this tutorial. But for the time being, we have to create this directory)

If you want to use the same EFS mount point for all the Kubernetes data files, then you can skip the below mounting step. We have used the same mount point as for WordPress and Jenkins to store our “remote-host” data files; we are ONLY using a different directory.

 

Now mount the EFS created for the remote-host data files on this directory. The command to mount the directory is mentioned in the AWS EFS console and has been explained in this article.

    • $ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-09b70377fe373b9f6.efs.ap-south-1.amazonaws.com:/ /home/jenkins/remote-host-data-files

“/home/jenkins/remote-host-data-files” is the complete path of the directory we have created for the “remote-host” data files.

    • $ df -k                    (should show the remote-host-data-files directory mounted on the EFS)

 
 
THE DIRECTORY SETUP ON BOTH THE MASTER NODE AND WORKER NODE IS COMPLETE.

 

 

 

Step 3: Create the remote-host deployment on the Master node

 

 

Create the “remote-host” namespace and update all the remote-host yaml files.

    • $ cd ~

    • $ vi remote-host-namespace.yaml

“remote-host-namespace.yaml”

apiVersion: v1
kind: Namespace
metadata:
  name: remote-host
    • $ kubectl create -f remote-host-namespace.yaml

This will create the namespace “remote-host”.

 

Next, we will deploy the “remote-host” using the below command.

    • $ kubectl create -f simple-remote-host/

This will create the remote-host deployment (pod and service). If you view the “remote-host-deploy.yaml” file, we are pulling the “remote-host” container from the AWS ECR repository. 
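For orientation, below is a minimal sketch of what “remote-host-deploy.yaml” might contain. The image URI, labels, and replica count are assumptions; use the values from your cloned files and your own ECR repository.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: remote-host
  namespace: remote-host
spec:
  replicas: 1
  selector:
    matchLabels:
      app: remote-host
  template:
    metadata:
      labels:
        app: remote-host      # must match the selector in the service yaml
    spec:
      containers:
        - name: remote-host
          # Assumed ECR image URI - replace with your own repository
          image: <aws-account-id>.dkr.ecr.<region>.amazonaws.com/remote-host:latest
          ports:
            - containerPort: 22   # sshd installed in Part 2
```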

 

To check if “remote-host” is successfully deployed, use the below command:

    • $ kubectl get all -n remote-host

“-n” = namespace (In this case “remote-host“)

 

Once the deployment is created, test that the connections are working by logging into the remote-host pod.

    • $ kubectl exec -ti <remote-host-pod-name> -n remote-host -- /bin/bash     (From the Jenkins pod we should be able to ssh to the remote-host)

<remote-host-pod-name> = name of the deployed pod

“-n” = namespace

/bin/bash = the command to run inside the pod; this gives us an interactive shell.

 

Once you are logged into the pod, you should be able to SSH from the “remote-host” pod, as the ssh server was installed when we created the container in Part 2 of this tutorial.

    • # ssh remote_user@<cluster-ip-remote-host> -p 22     (use the cluster IP instead of hostname, as the pods are running in different namespaces)

Provide the remote_user password when prompted.

If the connection is successful, we should be logged into the remote host.

 

 

Step 4: Take a MANUAL BACKUP

 

 

Now that the setup is complete, we have to test whether the remote host is working end-to-end: whether we can connect to the WordPress MySQL database and take a backup of the MySQL database to S3.

Create an S3 bucket using the AWS console or the AWS CLI. We discuss this in detail in a separate blog. However, if you need help, you can search on the internet for how to create an S3 bucket in AWS.

NOTE: S3 bucket names are unique across AWS, so give a name which is unique and meaningful to you. Example: mywp-backup-10apr23. (You can give any name that you like and that is available in AWS.)

 

Login to the remote-host pod on the Master Node:

    • $ kubectl exec -ti <remote-host-pod-name> -n remote-host -- /bin/bash

 

Run the below command inside the remote-host pod to take a backup of the MySQL DB:

    • # mysqldump -u root -h 10.96.0.101 -p wordpress > /tmp/wordpress_db.sql     (10.96.0.101 = ClusterIP of the WordPress MySQL service. “wordpress” is the db that has to be backed up)

Provide the DB password when prompted.

This will back up the DB to /tmp directory inside the “remote-host” pod.

 

Next, we will manually move the database “SQL” file to the AWS S3 bucket using the “awscli” commands. We had already configured the “awscli” with the necessary inputs in Part 2 of this tutorial. The below command copies the file from the container to our S3 bucket.

    • # aws s3 cp /tmp/wordpress_db.sql s3://mywp-backup-10apr23/wordpress_db.sql
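Since Part 4 will automate these two manual steps, it can help to see them as a single parameterized operation. The Python sketch below is purely illustrative (the host, database name, and bucket are the values used above); it only builds the command strings we just ran by hand, which is the shape the automated job will take.

```python
def dump_command(host: str, db: str, out_file: str, user: str = "root") -> str:
    """Build the mysqldump command line. -p with no value makes
    mysqldump prompt for the password interactively."""
    return f"mysqldump -u {user} -h {host} -p {db} > {out_file}"

def upload_command(local_file: str, bucket: str, key: str) -> str:
    """Build the aws s3 cp command line for the backup file."""
    return f"aws s3 cp {local_file} s3://{bucket}/{key}"

# The values used in the manual steps above
print(dump_command("10.96.0.101", "wordpress", "/tmp/wordpress_db.sql"))
print(upload_command("/tmp/wordpress_db.sql", "mywp-backup-10apr23", "wordpress_db.sql"))
```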

 

If everything is OK, you will see an SQL file in your S3 directory.

This completes the manual integration of the “remote-host” pod with the MySQL database of WordPress. This shows that the end-to-end system is OK and the communication between the “remote-host” pod and WordPress MySQL database is working. 

In the next tutorial, we will automate the entire process of taking the backup of the MySQL database, by integrating the “remote-host” pod with Jenkins.

 

Part 2 –> Creating the customized container.

Part 4 –> Integrating “remote-host” with Jenkins.