A Simple Guide to Dockerizing Your Python Application

In the dynamic world of software development, deploying and scaling applications can be a daunting task. However, with the advent of containerization technologies like Docker, this once arduous process has become remarkably simplified and efficient. Dockerization is the process of encapsulating applications within lightweight, portable containers, each with its own isolated environment, ensuring seamless deployment across various platforms. When it comes to Python applications, Dockerization offers a plethora of benefits that not only streamline deployment but also pave the way for scalable and resilient solutions.

In this blog post, we will delve into the fundamentals of Dockerization and shed light on the numerous advantages it brings to Python applications. By understanding the importance of containerization in achieving easier deployment and effortless scaling, you will gain valuable insights into how Docker can revolutionize the way you develop and manage Python projects. Let's embark on this journey to unlock the true potential of Docker and discover the transformative power it holds for Python developers worldwide.

If you want to follow along, I have pushed the Python Flask example code, which can be found here.

Set Up Docker Environment

Let’s get started with an overview of what we will do to set up the Docker environment.

These are the steps to set up Docker on Ubuntu:

  • Update the package list
  • Install gnupg to verify the Docker packages
  • Import the Docker keyrings
  • Add the Docker repository
  • Install Docker and associated tools
  • Test the installation

The first step is to update the package lists. Before installing Docker, it's essential to update them to ensure that you are installing the latest versions of the required dependencies. To be able to validate the Docker packages, we will also install gnupg so that we can import the repository's GPG keys. First, open a terminal and execute the following commands to update the Ubuntu package repository:

sudo apt-get update
sudo apt-get install ca-certificates curl gnupg

Once we finish the above tasks, we move on to the next step: creating a directory for the keyrings and importing the Docker GPG key to ensure secure package verification.

sudo install -m 0755 -d /etc/apt/keyrings    
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg   
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Now we add the Docker repository, signed by the imported GPG key, to the apt sources:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Once all of the above is set up and the keys are imported, we can install the Docker packages:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Testing the Installation

After the installation is complete, it is a good idea to test that everything is installed properly. Open a terminal and run the following command to verify the Docker version:

docker --version

This command will display the installed Docker version if everything is set up correctly.

Congratulations! You've successfully installed Docker on your Ubuntu Linux system. Now you can start creating and managing containers for various applications.

Please note that if you're using Windows or macOS, you have the option to install Docker Desktop, which provides a user-friendly graphical interface for managing Docker containers. However, I won't be going into detail about those installations in this guide. If you want to install Docker Desktop you can follow the instructions here.

With Docker now installed on your system, you're one step closer to experiencing the power of containerization and enjoying the seamless deployment and portability it offers for various applications.

Organize Your Python Application

To achieve seamless containerization with Docker, it's essential to organize your Python application in a structured and modular manner. Proper organization not only simplifies the Dockerization process but also enhances maintainability and scalability. In this section, we'll explore two key practices that will help you organize your Python application effectively for Docker containerization.

1. Structure into Modular Components

Dividing your Python application into modular components is a fundamental step towards achieving containerization success. Each component should have a well-defined responsibility and interact with others through clearly defined interfaces. This approach not only enhances the reusability of code but also allows you to encapsulate each component within separate Docker containers.

Consider breaking down your Python application into the following components:

  • Main Application: This is the core component that orchestrates the overall functionality of your application. It should be kept as lightweight as possible and focus primarily on handling interactions between other components.
  • API Endpoints or Services: If your application serves as an API or provides multiple services, consider separating them into individual components. Each service can then be containerized independently, making it easier to scale and maintain.
  • Database and Data Storage: If your application interacts with databases or requires data storage, treat them as separate components. This approach enables you to utilize specialized database containers, improving data management and isolation.
  • Worker Processes or Background Jobs: If your application involves asynchronous processing or background tasks, isolate these processes into separate components. This ensures efficient resource allocation and scalability.

2. Separate Dependencies and Configurations

In a well-organized Python application, it's essential to decouple dependencies and configurations from the core application logic. This separation allows for a cleaner Docker image and promotes flexibility in deployment.

  • Virtual Environments: Utilize Python's virtual environments to manage dependencies for each component separately. Docker can then install these dependencies within the container without interference from the host system's Python environment.
  • Configuration Files: Externalize configuration settings from your codebase and load them dynamically during runtime. This practice enables easy modification of configurations without altering the code, making your application more versatile across various deployment environments.
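As a minimal sketch of this idea (the file name and keys here are hypothetical, not part of the example app), settings can live in an external JSON file that is read at startup, with sensible defaults when the file is absent:

```python
import json
import os

def load_config(path="config.json"):
    """Load settings from an external JSON file, falling back to
    defaults when the file is absent, so the code itself needs no
    changes across environments."""
    defaults = {"host": "0.0.0.0", "port": 8080, "debug": False}
    if os.path.exists(path):
        with open(path) as f:
            # Values in the file override the built-in defaults.
            defaults.update(json.load(f))
    return defaults
```

A different config.json can then be mounted into the container for each environment, for example with docker run -v, without changing any code.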

By adopting these practices, you'll create a Python application that is not only well-organized but also tailor-made for seamless Docker containerization. Modular components promote encapsulation, scalability, and reusability, while the separation of dependencies and configurations ensures a cleaner and more manageable Docker image.

In the next section, we will walk through setting up a Python virtual environment to prepare for building a Docker image from our Python application.

Setting up Python Virtual Environment

Setting up the application environment is best approached by utilizing a virtual environment, as it offers valuable benefits in terms of application organization and dependency isolation from the main system.

To begin, install the Python venv package by executing the following command (the package name here matches Python 3.8; adjust it for your installed Python version):

sudo apt install python3.8-venv

Proceed to create the virtual environment named "testapp" with the following command:

$ python3 -m venv testapp/

After the environment is created, activate it by employing the following command:

$ source testapp/bin/activate
(testapp) user@host:~/docker_programs$

Once activated, navigate to the application directory:

$ cd testapp/
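As an optional sanity check (the helper name below is my own, not part of the example code), you can confirm from Python itself that the virtual environment is active, since a venv's sys.prefix differs from the base interpreter's prefix:

```python
import sys

def in_virtualenv() -> bool:
    """Return True when running inside a virtual environment, where
    sys.prefix points at the venv rather than the base interpreter."""
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print("venv active" if in_virtualenv() else "venv not active")
```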

By adopting these steps, you ensure a well-structured and isolated application environment, facilitating smooth development and efficient management of dependencies for your project.

Creating a Dockerfile for Your Python Application

A Dockerfile serves as a blueprint for building a Docker image, which is the basis for a Docker container. It defines the environment and instructions to set up the necessary dependencies, configurations, and application code within the container. Let's go through the essential components of a Dockerfile and understand each instruction.

Purpose of a Dockerfile

The Dockerfile is a crucial element in the Dockerization process, as it allows you to define the exact specifications for your container's image. It provides a clear and reproducible way to package your Python application, ensuring consistency across different environments and simplifying the deployment process. By creating a well-structured Dockerfile, you can automate the image-building process, making it easier to distribute and share your application with others.

Basic Dockerfile Template

# Set the base image
FROM python:3.8-slim-buster

# Set some environment variables needed for the application
ENV PYTHONUNBUFFERED=1
ENV HA_SERVER=<SERVER Address>
ENV HA_SERVER_PORT=8123
ENV HA_TOKEN="<TOKEN>"

# Set the working directory inside the container
WORKDIR /app

# Copy the requirements.txt file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Expose the ports needed to access the application
EXPOSE 8080

# Specify the command to run your Python application
CMD ["python", "main.py"]

Dockerfile Instructions Explained

  1. FROM: This instruction sets the base image for your container. In this example, we use the official Python 3.8 slim-buster image as the starting point. The slim-buster image contains essential Python libraries and a minimal Debian-based operating system.
  2. ENV: The ENV instruction sets an environment variable inside the container, making the value available to the application at runtime.
  3. WORKDIR: The WORKDIR instruction sets the working directory inside the container, where all subsequent commands will run. In this case, we set it to /app to maintain a clean directory structure.
  4. COPY: The COPY instruction copies files or directories from the host machine to the container. Here, we copy the requirements.txt file into the container's /app directory.
  5. RUN: The RUN instruction is used to execute commands during the image build process. In this case, we run pip install to install the Python dependencies listed in the requirements.txt file.
  6. COPY (again): After installing the dependencies, we copy the rest of the application code into the container's /app directory.
  7. EXPOSE: The EXPOSE instruction documents the port the application listens on inside the container. It does not publish the port by itself; you still map it to a host port with the --publish (-p) flag when running the container.
  8. CMD: The CMD instruction specifies the default command to run when the container starts. Here, we run python main.py to start our Python application.

With this basic Dockerfile template, you can build an image that contains your Python application along with its dependencies, configurations, and code. As you progress with more complex applications, you can extend the Dockerfile to suit your specific needs.
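For reference, a minimal main.py consistent with this Dockerfile's CMD and EXPOSE lines might look like the following. This is a sketch rather than the actual example application; one important detail is that the app must listen on 0.0.0.0 (not Flask's default of 127.0.0.1) to be reachable through a published port:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A trivial endpoint so we can verify the container responds.
    return "Hello from the container!"

if __name__ == "__main__":
    # Bind to all interfaces on port 8080, matching the EXPOSE line,
    # so the app is reachable through a published port.
    app.run(host="0.0.0.0", port=8080)
```

Running docker run -p 8081:8080 against an image built from this sketch would make the endpoint available at http://localhost:8081/.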

In the next section, we'll explore how to install the necessary dependencies and configure your Python application within the Docker container, leading us closer to achieving a fully Dockerized solution. Let's continue our Docker journey!

Install Dependencies

When building an application there are always dependencies needed for it to run properly. In our case we are using Python as the language and the Flask package for our application.

To do this we will create a requirements.txt file, which will be copied into the container first so that all the dependencies can be installed as part of the build process.

Begin by creating a new file called requirements.txt with your favorite editor, such as Vim or Nano.

$ vi requirements.txt

Our application will need the following Python dependencies; add each dependency to the file and save it.

  • flask

Once we have created requirements.txt, it needs to be processed during the Docker build. The following instructions from the Dockerfile above copy the requirements.txt file into the container and run pip to install all the dependencies:

COPY requirements.txt .  
RUN pip install --no-cache-dir -r requirements.txt

Copy Application Code

To be able to run our application code from the container, the code needs to be copied into the container as part of the build process. In the Dockerfile above, the following instruction performs the copy from the host system to the container's filesystem:

COPY . .

Expose Ports (if applicable)

When building an application, you will often need to expose ports outside of the container, such as when you are developing an API or a web application. To do this, add another line to the Dockerfile. The application we developed listens on port 8080, so we add the following line:

EXPOSE 8080

Environment Variables

There will also be times when you want to pass information to the app, such as tokens or other configuration items. To do this you can set environment variables that store this information and make it available to your application. Our example application doesn't use any variables, but we could pass a token and some server information to it through variables using the ENV Dockerfile instruction:

ENV SERVER=<SERVER Address>
ENV SERVER_PORT=8123
ENV SERVER_TOKEN="<TOKEN>"
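On the application side, these values can be read with the standard library's os module. A small sketch (the fallback defaults here are hypothetical):

```python
import os

# Read configuration from environment variables, with fallback
# defaults so the app still starts when a variable is not set.
SERVER = os.environ.get("SERVER", "localhost")
SERVER_PORT = int(os.environ.get("SERVER_PORT", "8123"))
SERVER_TOKEN = os.environ.get("SERVER_TOKEN", "")

def server_url() -> str:
    """Build the base URL for the configured server."""
    return f"http://{SERVER}:{SERVER_PORT}"
```

The same variables can also be set or overridden at run time with docker run -e SERVER=..., which takes precedence over values baked in with ENV.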

Build the Docker Image

Up to this point we have been setting up the application and its structure and writing the Dockerfile needed to build our Docker image. Now we can issue the docker build command to create the image from the Dockerfile. Make sure you are in the same directory as the Dockerfile, then run docker build:

$ sudo docker build --tag testapp .
[+] Building 16.1s (11/11) FINISHED                                                                      docker:default
 => [internal] load build definition from Dockerfile 0.8s
 => => transferring dockerfile: 479B                 0.0s
 => [internal] load .dockerignore                    1.2s
 => => transferring context: 2B                      0.0s
 => [internal] load metadata for docker.io/library/python:3.8-slim-buster                                          1.5s
 => [auth] library/python:pull token for registry-1.docker.io                                                      0.0s
 => [1/5] FROM docker.io/library/python:3.8-slim-buster@sha256:8799b0564103a9f36cfb8a8e1c562e11a9a6f2e3bb214e2adc  0.0s
 => [internal] load build context                    0.7s
 => => transferring context: 6.57MB                  0.2s
 => CACHED [2/5] WORKDIR /app                        0.0s
 => [3/5] COPY requirements.txt .                    1.3s
 => [4/5] RUN pip install --no-cache-dir -r requirements.txt                                                       7.8s
 => [5/5] COPY . .                                   1.8s
 => exporting to image                               1.0s
 => => exporting layers                              0.9s
 => => writing image sha256:2cca051fb934dacaecaf8745b99a542dee46fed8707d63b3c08812ea71bcce36                                  0.0s
 => => naming to docker.io/library/testapp           0.1s

Once the docker build command completes, verify the image was built by using the docker images command to list all of the images; look for the image you created in the list.

$ sudo docker images
REPOSITORY                              TAG       IMAGE ID       CREATED         SIZE
testapp                                 latest    2cca051fb934   4 minutes ago   135MB

Run the Docker Container

The Docker image is built and you have verified it with the images command; now it’s time to run the application for the first time.

Before running the image for the first time, copy the IMAGE ID from the above command and use it to tag the image so that we know what it is and its version.

$ sudo docker tag 2cca051fb934 thetechnerd/testapp:latest

Run your new Docker image; we want to make sure the container starts properly before we move on to testing the application.

$ sudo docker run --publish 8081:8080 testapp
 * Serving Flask app 'main'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:8080
Press CTRL+C to quit

Test Your Dockerized Application

It is always a good idea to thoroughly test your application before pushing it to any repository or using it in a production environment. Since we published the container's internal port 8080 to port 8081 on the host, the application is reached at http://localhost:8081; note that the Flask app must listen on 0.0.0.0 inside the container to be reachable through the published port. First, verify that the container started and is running using the container list command.

$ docker container list
CONTAINER ID   IMAGE   COMMAND  CREATED  STATUS
5121a791341f   testapp "python main.py"  About a minute ago
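Beyond confirming the container is listed, it's worth checking that the application actually answers over the published port (8081 on the host in our run command). Here is a small hypothetical helper, using only the Python standard library, that polls a URL until it responds:

```python
import time
import urllib.error
import urllib.request

def wait_for_http(url, timeout=10.0):
    """Poll a URL until it returns a response or the timeout expires.
    Returns True if the service answered, False otherwise."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                # Treat any 2xx/3xx response as the app being up.
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            # Not listening yet; wait briefly and retry.
            time.sleep(0.5)
    return False
```

After starting the container, wait_for_http("http://localhost:8081/") should return True once the Flask app is up.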

Push the Docker Image (Optional)

Now that we have run and tested our new application, it would be nice to make it easy to deploy and share with others who might benefit from it. To do so, we push the container image to a registry such as Docker Hub.

Before we can push the image to Docker Hub, we need to create an account and the repository where we will store the image.

Navigate to the Docker Hub signup page.

Once your new account is created and you have verified your email address, log in and create a new repository for your application.

Now fill in the information for the repository and click the Create button. We now have a new repository to push our Docker image to.

With the repository created, we can push the Docker image to it using the following commands.

Before pushing, you will need to authenticate by issuing the docker login command; Docker Hub requires you to be logged in to push images, even to public repositories.

Example login

$ sudo docker login --username thetechnerd

Push the image to the repository:

$ sudo docker push thetechnerd/testapp:latest
The push refers to repository [docker.io/thetechnerd/testapp]
0547f4935fab: Pushed
8bc0c83aa49f: Pushed
d7a93087a8d1: Pushed

When the image has been pushed you will see it in the repository.

Congratulations! You've reached the end of our journey through Dockerization for Python applications. By following the step-by-step guide, you've learned how to Dockerize your Python app effectively, turning it into a self-contained, portable, and scalable container.

We began by understanding the essence of Dockerization and its advantages for Python applications. The power of containerization lies in its ability to provide a consistent and isolated environment for your application, ensuring smooth deployment and scaling. Emphasizing the importance of containerization, we explored how organizing your Python app into modular components and separating dependencies from the core logic pave the way for seamless Docker integration.

The heart of the process was the Dockerfile, where we carefully defined the environment, dependencies, and configurations for the container. Copying only essential files into the container helped create lean and efficient Docker images, which are vital for faster builds and deployments.

Testing your Dockerized Python app was another critical phase, where we verified that the container starts correctly and the application responds on its published port, catching common issues before they reach production.

Pushing your Docker image to Docker Hub marked the pinnacle of our journey. With a Docker Hub account and a new repository, you joined a vibrant community of developers, making your containerized app accessible and shareable with the world. The centralized image repository streamlined collaboration and version control, facilitating a smooth CI/CD pipeline integration.

As you embrace the benefits of Dockerization, I encourage you to explore more advanced Docker features and best practices. Dive into Docker Compose for multi-container setups, learn about container orchestration with Kubernetes, and keep discovering new ways to optimize and secure your Dockerized Python applications.

Remember, learning Docker is a continuous process. Embrace the challenges, experiment with various Docker features, and integrate containerization into your development workflow to unleash its full potential.

So go forth, confidently Dockerize your Python applications, and let the world witness the magic of containerization. With Docker, you have the tools to build, ship, and run your applications in any environment seamlessly.

Happy coding, and may your Dockerized Python applications conquer the realms of modern software development!
