Building Custom Docker Images - Part 1
Kubernetes Fundamentals
Introduction
In this comprehensive tutorial series, we’ll delve into the process of creating custom Docker images specifically tailored for our Kubernetes environment. Whether you’re enhancing existing containers or building entirely new ones, this step-by-step guide will equip you with the necessary knowledge and practical insights.
Building Custom Docker Images for Kubernetes: A 5-Part Tutorial
In our previous tutorials, we explored deploying WordPress and Jenkins applications on a Kubernetes cluster by leveraging existing Docker containers from the Docker Hub. However, in this comprehensive 5-part tutorial, we’ll take a different approach: creating our own custom Docker image tailored to a specific task within our Kubernetes environment.
Tutorial Overview:
1. Part 1: Setting Up CentOS-7:
- We’ll begin by preparing a CentOS-7 system and installing the necessary components and tools required for creating our custom Docker image.
2. Part 2: Creating and Pushing the Image:
- In this step, we’ll build our Docker image and push it to both AWS Elastic Container Registry (ECR) and Docker Hub.
3. Part 3: Deploying the Remote-Host Pod:
- We’ll deploy a remote-host pod on our Kubernetes cluster and manually back up data to an AWS S3 bucket to verify end-to-end functionality.
4. Part 4: Integrating with Jenkins:
- We’ll integrate the remote-host pod deployed in our Kubernetes cluster with our Jenkins pod, configuring a seamless end-to-end system.
5. Part 5: Full Automation and Testing:
- Finally, we’ll automate the entire backup process and thoroughly test the end-to-end solution.
Stay tuned for detailed instructions and practical insights in each part of this tutorial!
Part 1
Overview and Concepts
Setting Up the Docker Creator System
- To create customized Docker images, we’ll establish a dedicated “docker-creator” system.
- While it’s possible to perform these tasks on the same system as your Kubernetes cluster, it’s advisable to keep them separate. The “docker-creator” system will only be powered on when creating or modifying Docker images.
- Once the configuration is complete, we’ll create an Amazon Machine Image (AMI) and shut down the system.
- Our goal is to push the resulting custom images to both AWS Elastic Container Registry (ECR) and Docker Hub.
Understanding Key Concepts:
- Docker Registry:
- A service that stores Docker images, which can be hosted by third parties or kept private.
- Examples include Amazon Elastic Container Registry, Docker Hub, and Azure Container Registry.
- Docker Repository:
- A collection of related images with the same name but different tags (e.g., 1.1 or latest).
- Tags serve as alphanumeric identifiers within a repository.
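To make the registry/repository/tag distinction concrete, here is a small sketch of how a single local image can be tagged for two different registries. The repository name "remote-host", the Docker Hub username, and the ECR account/region shown are placeholders, not values from this tutorial:
docker tag remote-host:latest mydockerhubuser/remote-host:1.1                                    # Docker Hub repository, tag "1.1"
docker tag remote-host:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/remote-host:latest    # ECR repository, tag "latest"
The registry is the service that stores the image, the repository is the image name within it, and the tag identifies a specific version inside that repository.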
Essential Tools:
- Docker Compose:
- Although we won’t use it for container deployment, having Docker Compose installed on your “docker-creator” system is good practice.
- Dockerfile:
- A text document containing instructions for building Docker images automatically.
- We’ll rely on Dockerfiles to create our customized images.
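As a minimal illustration of the format (not the Dockerfile we will actually use, which comes in Step 3), a Dockerfile is simply an ordered list of build instructions:
# Base image pulled from a registry (Docker Hub here)
FROM centos:7
# Command run while the image is being built
RUN yum -y install openssh-server
# Default command executed when a container starts from the image
CMD /usr/sbin/sshd -D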
Choosing the Base System:
- We’ll use the CentOS-7 AMI (Amazon Machine Image) created in a previous blog post (CentOS-7 Part 3).
- This AMI includes necessary SSH keys, a configured CentOS-7 OS, and an authorized user with Docker components installed.
- Alternatively, you can use an Ubuntu system with similar configuration (as mentioned in Ubuntu Part 3).
Hardware Considerations:
- An EC2 instance of type “t3a.small” provides sufficient resources for image creation.
- If needed, consider using a higher instance type (e.g., “t3a.medium” with 2 vCPUs and 4 GB RAM).
- Remember that “t3a.small” instances are cost-effective and suitable for our purposes.
- After creating the Docker image, take an AMI of the “docker-creator” instance for future use.
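If you prefer the command line over the AWS console for this, an AMI can be created with a command along these lines; the instance ID, AMI name, and description are placeholders:
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "docker-creator-ami" \
    --description "docker-creator system with Docker tooling installed"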
Important Note:
- To ensure end-to-end functionality, deploy the “remote-host” pod alongside WordPress and Jenkins pods on your Kubernetes cluster.
Step 1: Launching the EC2 Instance
Follow these steps to launch an EC2 instance using the CentOS-7 AMI:
1. Instance Type:
- Choose the “t3a.small” instance type, which provides the following hardware specifications:
- 2 vCPUs
- 2 GB RAM
2. AMI Selection:
- If you haven’t already created or saved an AMI, refer to the relevant blog posts (e.g., CentOS-7 or Ubuntu) for detailed instructions on creating EC2 instances.
- Once the system boots up, proceed to the next step.
3. User Login:
- Log in to the system using the user “jenkins.”
- This user will be used for all subsequent configuration tasks.
- If you’ve used a previously saved AMI or created a new EC2 instance based on the mentioned blogs, the “jenkins” user should already exist.
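If you prefer scripting the launch instead of using the console, a command along the following lines works; the AMI ID, key pair name, and security group are placeholders for the values from your own CentOS-7 AMI setup:
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3a.small \
    --key-name my-centos7-key \
    --security-group-ids sg-0123456789abcdef0 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=docker-creator}]'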
Stay tuned for the next steps in our Docker image creation process!
Step 2: Installing Docker Compose and Additional Utilities
To proceed with creating our custom Docker image, follow these steps to install necessary components and utilities on the “docker-creator” system:
1. Install Docker Compose:
- Execute the following commands:
sudo su
curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
usermod -aG docker jenkins
- This makes Docker Compose available on the system and adds the “jenkins” user to the “docker” group so it can run Docker commands without sudo.
2. Install Additional Linux Utilities:
- Run the following commands:
sudo yum install -y unzip
sudo yum install -y bind-utils
- “unzip” is needed to extract the AWS CLI installer in the next step, and “bind-utils” provides DNS troubleshooting tools such as nslookup and dig.
3. Check for AWS CLI:
- Verify if the “aws-cli” (Amazon Command Line Interface) is already installed.
- If not, use the following commands to install it:
cd ~
sudo curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo unzip awscliv2.zip
sudo ./aws/install
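Before moving on, it is worth verifying that each tool is in place and that the “jenkins” user can talk to the Docker daemon (log out and back in first so the group change takes effect):
docker-compose --version   # should report 1.22.0
aws --version              # should report the installed AWS CLI version
docker ps                  # should succeed for the jenkins user without sudo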
Now our “docker-creator” system is equipped with the necessary tools. Feel free to proceed to the next steps!
Step 3: Creating the Dockerfile and SSH Keys
In this step, we’ll create the Dockerfile for our “remote-host” container, which will be responsible for MySQL database backups of deployed containers. Follow these instructions:
1. Log In as “jenkins”:
- Log in to the CentOS-7 system using the “jenkins” user.
2. Create the Dockerfile:
- Navigate to the home directory:
cd ~
- Create a directory named “centos7”:
mkdir centos7
cd centos7
- Generate SSH keys (private and public):
ssh-keygen -f remote-key
- Create a Dockerfile (use vi or any text editor):
FROM centos:7

RUN yum -y install openssh-server

RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh

COPY remote-key.pub /home/remote_user/.ssh/authorized_keys

RUN chown remote_user:remote_user -R /home/remote_user/ && \
    chmod 400 /home/remote_user/.ssh/authorized_keys

RUN /usr/sbin/sshd-keygen -A

RUN yum -y install mysql

RUN curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && \
    python get-pip.py && \
    pip install awscli --upgrade

CMD /usr/sbin/sshd -D
- Let’s understand the contents of the Dockerfile:
- FROM centos:7: Specifies the base OS for the Docker image.
- RUN yum -y install openssh-server: Installs the OpenSSH server in the container.
- User-related commands:
  - Creates a new user named “remote_user” with the password “1234”.
  - Sets up the “.ssh” directory for that user.
- Copies the public key from remote-key.pub to the authorized keys file.
- Adjusts ownership and permissions for security.
- Generates the SSH server host keys.
- Installs the MySQL client.
- Installs Python pip and the AWS CLI (awscli).
- Starts the SSH server inside the “remote-host” container.
Important Note:
These commands are specific to the “remote-host” container and won’t impact other pods in the Kubernetes cluster.
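As an optional local sanity check (the actual build and push to ECR and Docker Hub are covered in Part 2), you can build the image and confirm that SSH access with the generated key works; the image and container names used here are placeholders:
docker build -t remote-host .                                          # build from the Dockerfile in ~/centos7
docker run -d --name remote-host-test remote-host                      # start a throwaway container
docker inspect -f '{{.NetworkSettings.IPAddress}}' remote-host-test    # print the container's bridge IP
ssh -i remote-key remote_user@<container-ip>                           # should log in without a password prompt
docker rm -f remote-host-test                                          # clean up the test container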
Now our Dockerfile is ready, and we’ve configured the necessary components. Feel free to proceed to the next steps!
This concludes the setup of the CentOS-7 system, including the installation of necessary components and the configuration of the Dockerfile.
In the next section, we will create our Docker image using the Dockerfile.
Part 2