
How to Optimize Docker Performance

Docker containers have revolutionized the cloud industry. While Docker already presents remarkable benefits over other virtualization methods, there are significant performance gains that developers can squeeze out of the technology to get the most out of it.

This guide will cover different methods of optimizing Docker performance and answer some frequently asked questions about the technology.

How Docker Works

Before we get down to optimizing Docker performance, it is essential to take a moment and run through the technology once again. Simply put, Docker is a top industry standard for containerization – one that helps package and distribute applications in the most hassle-free way possible.

Containers: The Origins of Docker

Containers are a way of shipping software to multiple environments easily. They help you package your code along with your preferred environment settings and other platform-dependent configurations — so that it can easily be instantiated on other machines without any setup overhead. This avoids the difficulties of setting things up on varying hardware and software specifications of computing systems throughout an organization’s infrastructure.

To put it simply, Docker is an open-source solution that helps in managing these containers that we just discussed. Containers are platform-independent, and so is Docker – as it offers support for both Windows and Linux-based platforms.

The Docker Workflow

Let’s take a moment to understand the various components of Docker:

  • Docker Daemon: The daemon is a continually running background process that manages the multiple objects associated with a Docker instance. The daemon starts and runs the container instances and listens to instructions via the CLI and the REST API.
  • Docker CLI: The Docker CLI is a client used to interact with the Docker daemon. It makes running commands, manipulating configurations, and carrying out other relevant tasks super simple.
  • Docker REST API: The REST API serves as an HTTP-based point of contact between the Docker daemon and remote clients.
  • Docker Image: A blueprint that essentially contains instructions on setting up a container on the Docker platform. These images comprise multiple layers, each of which stores information used for building Docker containers. This provides a convenient way to package applications and prepare server environments to use or share with other users.
  • Networking: The Docker host is a full-fledged server with its own networking capabilities. It allows us to connect multiple Docker containers and services and even put them together with non-Docker workloads.

The typical Docker workflow goes like this:

A developer uses the Docker client to instruct the host to build, run, and distribute Docker images as containers. The daemon then picks up these requests and provisions container instances to run the application. The developer can then use the CLI again to stop, delete, or restart the containers. Using the REST API, developers can do the same work from a remote system, making remote work that much easier.

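To make this concrete, here's a minimal sketch of that workflow on the command line; the image name `my-app` and the port mapping are assumptions for illustration:

```sh
# Build an image from the Dockerfile in the current directory.
docker build -t my-app .

# Run a container from that image in the background, mapping a port.
docker run -d --name my-app-instance -p 8080:8080 my-app

# Inspect, stop, and remove the container.
docker ps
docker stop my-app-instance
docker rm my-app-instance
```
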
Performance Optimization

Now let’s move to the primary topic of the post and discuss ways of quantifying and optimizing the performance of Dockerized applications –

How to Measure Performance: Important Metrics

Before setting down to improve something, it is always helpful to quantify your performance and expectations. Several dedicated metrics can help you monitor and gauge the performance of your Docker containers. Here are some of the most popular ones –

Host Metrics

As the name suggests, host metrics measure the host machine’s performance and not that of the containers. Because a slow host can become a potential bottleneck for the container’s performance, these metrics can be super helpful in evaluating your application’s health. Here are some of the most prominent host metrics used across the industry:

  • Host CPU: Measuring the host’s CPU usage helps you get a fair idea of the host’s performance. Low CPU utilization indicates under-use of the available resources, so you can consider scaling down the host’s CPU capacity to save on costs. On the other hand, unreasonably high CPU utilization could indicate a resource leak or a shortage of computing resources.
  • Host Memory: Like the CPU, memory can be another bottleneck for a host system. Knowing the memory usage and the maximum capacity is vital in ensuring that no container runs out of memory.
  • Disk Space: Disk space is an aspect that usually does not fluctuate as much as CPU utilization and memory do. This is because there’s usually a cap on the storage that Docker containers can occupy on the disk. A container with a running image takes up 150-700 MB of disk space on average. However, if the container utilizes Persistent Docker Volumes, disk usage is likely to go up. It is important to identify if this is a requirement for your container or not. If not, watching the disk space closely and cleaning it up frequently can help save you a lot of money.

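As a starting point, a few standard commands surface these host-level numbers; this is a quick spot-check rather than a full monitoring setup:

```sh
# Host CPU snapshot and memory usage.
top -bn1 | head -n 5
free -h

# Host disk usage, plus Docker's own breakdown of the space used by
# images, containers, and volumes.
df -h
docker system df
```
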
Docker Container Metrics

Container metrics are used to measure the performance of the running containers and are not concerned with the underlying hosts. These are usually limited to identifying issues in individual containers instead of the whole application. Here are some of the most popular container metrics used for Docker –

  • Container CPU: CPU utilization is a valuable metric for identifying containers using more than the expected CPU time. By default, Docker only enforces CPU shares when there is contention for CPU time; in all other cases, containers are free to use spare cycles.
  • Container Memory: Memory utilization is similar to CPU utilization. A container that leaks memory can take the entire system down, so it is essential to monitor each of your containers’ memory utilization and cap their limits accordingly (see the example after this list).
  • Container Swap: A container can swap its memory to the storage disk. While this can help conserve memory, it puts extra load on the disk. Metrics like swap rate and swap frequency help determine how often your container swaps its memory to disk. It is considered good practice to disable swapping unless you really need it.
  • Container Disk I/O: Disk I/O metrics help you understand how frequently your containers access disk storage. Disk I/O operations are heavy and slow, so it is advisable to keep them in check. Even though the disk can be used for routine jobs like data manipulation and other batch operations, it is best reserved for more essential functions like storing application data and serving the application to end-users.
  • Container Network Metrics: Network metrics consist of several data points that help understand how well your containers can carry out network-related operations. Metrics like dropped packets, network errors, network traffic, etc., can help you become aware of network issues your users could face and identify the parts of your application that might be experiencing overload or failure.

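For example, here's a minimal sketch of capping a container's resources at start time; the image name `my-app` is an assumption. Setting `--memory-swap` equal to `--memory` effectively disables swapping:

```sh
# Cap memory at 512 MB and CPU at 1.5 cores; making the swap limit
# equal to the memory limit prevents the container from swapping to disk.
docker run -d \
  --memory=512m \
  --memory-swap=512m \
  --cpus=1.5 \
  my-app
```
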
Container Count in Clusters

When working with large clusters of containers, even a tiny error in your setup can bring about a significant change in the number of provisioned containers, drastically impacting your budget. Therefore, it is crucial to monitor and track container counts closely. A good option is to set alerts based on anomalies in the trend; a sharp rise or fall in a short window of time usually indicates something wrong with the Docker setup.

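On a single host, a quick way to spot-check this count (a cluster-wide view needs your orchestrator or monitoring tool):

```sh
# Count running containers; add -a to include stopped ones.
docker ps -q | wc -l
```
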
Measuring With Docker Stats

Docker also provides users with a command called `docker stats` to get quick information about the running containers. Here’s what its output looks like –

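The container names and figures below are made up for illustration – the exact numbers on your machine will differ:

```
$ docker stats --no-stream
CONTAINER ID   NAME     CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O     PIDS
f2b2f11303ad   web      2.35%   125.5MiB / 1.944GiB   6.30%   1.45MB / 648kB   8.2MB / 0B    12
9c76f1b53b0c   worker   0.12%   64.2MiB / 1.944GiB    3.23%   110kB / 85kB     0B / 4.1MB    5
```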

Here is a quick guide to what some of these numbers mean –

  • CPU: This percentage reflects the CPU utilization. If this number is extremely low, it indicates an excess of CPU resources, and so you might want to reduce the number of running containers.
  • MEM USAGE / LIMIT: This shows the container’s memory consumption and the maximum amount of memory it is allowed to use. If you notice the usage creeping dangerously close to the limit, you may want to check your application for memory leaks or consider raising the container’s memory limit.
  • MEM %: This represents the percentage of memory consumption for easier understanding and comparisons.
  • NET I/O: This metric indicates the amount of data being sent and received via Docker’s network.
  • BLOCK I/O: This is the amount of data being read from or written to the block devices of the host. A high number here indicates that the application could be over-utilizing the disk. In that case, you might want to consider offloading storage to other solutions, such as an in-memory cache, a remote database like DynamoDB, or a remote file storage service like AWS S3.
  • PIDS: This indicates the total number of processes or threads created by the running container. If the number is unusually high, you may want to consider breaking the container down into multiple smaller containers, forming a microservice architecture.

Optimizing Container Performance

Now let’s look at some pointers that you can use to improve the efficiency and performance of your Docker-based applications.

Cut Down on Docker Image Size

Optimizing your Docker images can help you optimize your entire Docker application. This is because lighter Docker images can accelerate the building and deployment of your containers.

Instructions like RUN, COPY, and ADD in your Dockerfile each provision a new layer in your image, leading to a heavier file. Therefore, keeping your Dockerfile (and consequently the image) as minimal as possible is essential. There are a bunch of ways to reduce the size of your Docker image:

  • Using a .dockerignore file to exclude unnecessary files from being included.
  • Using the multi-stage build feature to avoid unnecessary layers (see the sketch after this list).
  • Combining multiple RUN commands into one using the && operator.
  • Using tools like distroless to have images only contain the application and the minimum runtime dependencies (without needless package managers, shells, and other binaries).
  • Using a smaller base image (discussed next).
  • Cleaning up interdependencies after installing packages (discussed later).

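As an illustration, here's a minimal multi-stage Dockerfile that combines several of these ideas: a small base image, a single RUN per stage, and a final stage that carries only the compiled artifact. The Go application and paths are hypothetical:

```dockerfile
# Stage 1: build with the full toolchain (discarded from the final image).
FROM golang:1.21-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Stage 2: ship only the compiled binary on a minimal base image.
FROM alpine:3.19
COPY --from=builder /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```
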
Choose a Lightweight Base OS

There is a wide range of base images available for Docker. The more pre-installed libraries and dependencies an image ships with, the bulkier it is and the more resources it is likely to consume. Using a minimal base image like Alpine Linux (or, at the host level, a container-focused OS like RancherOS) helps trim unnecessary fat and reduce container and image size. At the same time, you should ensure that the base image you choose is secure and actively maintained.

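To see the difference in practice, you can compare image sizes directly; the figures below are approximate and vary by tag and architecture:

```sh
$ docker pull ubuntu:22.04 && docker pull alpine:3.19
$ docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}"
ubuntu:22.04 77.9MB
alpine:3.19 7.38MB
```
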
Clean Up Interdependencies

There is also some room to optimize Docker images on the dependency level. Debian-based images, like Ubuntu, can accumulate several additional binaries and files while installing dependencies. This is because several libraries often have their own set of dependencies that need installation. Many of these interdependencies (or sub-dependencies) are not required later and, therefore, can be cleaned up.

Because the system does not automatically remove these interdependencies, they might consume a lot of space, leading to increased image size and build time. Cleaning these up manually can therefore result in improved performance. You can use the following commands to do so:

  • apt-get clean: Clears the downloaded package files left in /var/cache/apt/archives.
  • apt-get autoclean: Clears obsolete package files that can no longer be downloaded and are virtually useless.
  • apt-get autoremove: Removes dependencies that were installed automatically alongside other packages and are no longer needed.

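In a Dockerfile, these cleanup steps are typically chained into the same RUN instruction as the install so that the removed files never land in a layer; the packages here are placeholders:

```dockerfile
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates \
    && apt-get clean \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
```
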
Use Caching to Speed Up Builds

You can also let Docker reuse existing layers from its cache when rebuilding images. This significantly improves build speeds and is more efficient.

Docker always checks whether any layers with the same signature and history already exist in your cache when building an image. If a match is found, the cached layers are used directly without being rebuilt, which saves CPU cycles and makes the build process faster. If you make changes to your Dockerfile, the contents of the affected layers change and have to be rebuilt. However, there is still room for optimization here.

As you might know, Dockerfiles are processed sequentially – from top to bottom. This means that a change to an instruction near the top requires all instructions below it to be rerun. Therefore, it is advisable to place the instructions most likely to change as low as possible in the file, decreasing the number of instructions that need to be rerun and layers that need to be rebuilt. The unchanged instructions above them can be served straight from the cache, speeding up the build process.

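For instance, in a hypothetical Node.js project, copying the dependency manifests before the rest of the source means routine code edits only invalidate the final layers:

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Dependency manifests change rarely, so these layers usually come
# straight from the cache.
COPY package.json package-lock.json ./
RUN npm ci

# Application source changes often; keeping it last means code edits
# do not invalidate the cached dependency layers above.
COPY . .
CMD ["node", "server.js"]
```
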
Run Docker on Bare-Metal

Virtualization is a great technology that helps create isolated environments within one another, and many teams host their Docker applications inside virtual machines running on physical servers. This can be helpful for security and efficient resource allocation, but not for boosting your application’s performance.

Running Docker directly on a bare-metal server generally performs better than inside a virtual environment because virtual environments require additional resources for emulation. You can avoid this by running your Docker engine directly on a physical machine or consider opting for a system container hypervisor, such as LXD or OpenVZ. System containers create an abstraction layer between the guest environment and the host operating system without compromising performance.

Planning Resource Allocation

Container orchestration refers to handling and managing large groups of containers in large, dynamic environments. Planning is therefore crucial for these systems. To maintain a cluster of containers for your application, you must plan to allocate your available resources appropriately. This is because the efficiency of the orchestration process directly affects the distribution of your application and its performance.

Planning appears simple in theory but can get complex in practice. Understanding the current capacity requirements is an excellent first step to take. This entails planning the infrastructure of your server, storage, network, etc., and should include your short-term and long-term visions. The crux is to correctly understand the current capacity requirements and have a reasonable estimate of how this may change in the future.

Microservices

One of the best things you can do for your application is to compose it as a collection of microservices. Traditionally, many applications are deployed as monolithic projects inside containers, defeating the purpose of using containers to some extent. The microservice architecture breaks down an extensive, monolithic application into smaller, independent units that connect loosely and achieve the same results in a more modular and manageable setting. This helps create a plug-and-play system where you can monitor and control each software component independently.

Each microservice runs in an independent container. This isolates the containers, and as a result, any fault in one container does not affect the uptime of other microservices. Adopting such a modular paradigm is highly effective for debugging, troubleshooting, maintenance, and ensuring high uptime.

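As a minimal sketch, a Compose file makes this one-container-per-service layout explicit; the service names and build paths below are hypothetical:

```yaml
# docker-compose.yml
version: "3.8"
services:
  web:
    build: ./web
    ports:
      - "8080:8080"
    depends_on:
      - api
  api:
    build: ./api
```

Each service gets its own image and container, so you can rebuild, scale, or restart one without touching the others.
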
Application Performance Monitoring

Effective monitoring and alerting are foundational pillars of a reliable system. In systems involving simultaneous operations of multiple containers, things can go wrong in the blink of an eye. Therefore, it is essential to have active monitoring systems to report new issues promptly.

The data accumulated by monitoring tools helps predict issues and suggest solutions before they impact your users. It can help you measure your application’s performance quickly and automate much of the analysis process. You can imagine how laborious and cumbersome it would be to manually monitor the performance of each of your containers 24/7. This is why automated monitoring tools like ScoutAPM can help you track the necessary metrics easily, alert you about potential issues in your application, and present actionable insights.

Frequently Asked Questions

Now that we have looked at various ways of optimizing Docker container performance, let’s take a look at some of the most common questions that arise when working with Docker and containers –

Why is Docker so fast?

Docker is very different from conventional virtual machines in terms of structure and workflow. A Docker instance does not require you to assign system memory or storage space upfront; the Docker engine manages this directly. It reserves a certain threshold of resources from the host operating system and allocates them to the running container instances as and when required. This gives Docker containers a significant boost in performance and resource efficiency.

Conversely, a conventional virtual machine actively requires you to manage and allocate resources. Even if you have a single application running inside a VM, you need to run all the system processes and handlers to facilitate the running environment for the application. For containers, this is super simple: each container is only concerned with running a single application instance, and the engine manages resources for all of its containers in one place, allocating only the bare minimum needed for a particular container. This frees up resources for other tasks, thereby improving the entire system’s performance.

How much RAM do I need for Docker?

The amount of RAM needed for Docker depends on your usage. If you are looking to run standard images, you can expect your container’s RAM requirement to be nearly the same as that required for running the application directly on your host machine. Docker has a very low overhead, which means that you can simply add up the individual memory usages of the applications you plan to run inside Docker to calculate your memory usage. 

For some context, a machine with 16GB of RAM is likely to be sufficient for running a few standard-sized containers. If you plan to keep it for a long time or your workload is higher than usual, you can consider buying a machine with 32GB of RAM. If you find yourself looking for anything higher than that, you should perhaps consider renting a virtual machine instance in the cloud, which often works out much cheaper as a monthly cost.

What are the disadvantages of Docker?

While Docker is considered a strong contender in the league of virtualization, it has a certain set of disadvantages. Here are some of them:

  • Docker containers are not the fastest: While Docker is faster than virtual machines, it is still not as fast as an actual, bare-metal machine. There are various performance overheads associated with containers, primarily due to networking, communication between containers and host systems, and more.
  • Managing persistent data: By default, files created inside your Docker container are saved on a writable container layer, i.e., all of the data inside a container is lost when the container is removed. It can also be challenging to retrieve the data if another process needs it, and it is not easy to move the data to external storage while the container is running because of the tight coupling between the writable layer and the host machine. Named volumes (shown after this list) are the usual workaround.
  • GUI and containers don’t go well with each other: Containerization was introduced for server-side applications that did not require a graphical interface for access. Therefore there is no native support in Docker for running applications with a graphical interface. Although you can use some creative ideas to brute-force your way, such methods are complex to implement in standard enterprise situations.
  • Not every application gets a performance boost with containers: Applications designed to run as a set of independent microservices are the ones that can reap the most benefits out of containers. Containerized applications improve efficiency by providing isolated, standalone, lightweight runtime environments for running your applications and their services.

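For the persistent-data problem specifically, named volumes keep data outside the container's writable layer; the volume and image names here are illustrative:

```sh
# Create a named volume and mount it into a container; the data
# survives even after the container is removed.
docker volume create app-data
docker run -d --name app -v app-data:/var/lib/app my-app
```
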
How does Docker relate to Kubernetes?

Docker, as we discussed, is a container platform used to build, manage, and run container images. On the other hand, Kubernetes is a container orchestration solution – used to create and manage large groups of containers (called clusters) for handling massive application traffic in dynamic environments.

A common analogy portrays Docker (the container) as an airplane and Kubernetes as the airport. Kubernetes uses a container platform internally to load and start the container instances. Since its beginning, Kubernetes has utilized Docker for this purpose. But as of the v1.20 release, Kubernetes has deprecated Docker as a container runtime. This is primarily because Docker is not compliant with the Container Runtime Interface (CRI) and therefore requires an abstraction layer (the dockershim) between Kubernetes and Docker. However, this doesn’t change things much, because Docker images can still be built and run with other runtimes, as they always have.

Optimizing Docker is Critical for Overall Performance

Optimizing your Docker applications is one of the most vital steps to improving your business as a cloud-based software solutions provider. More and more applications are switching to containerization, and it is crucial to keep up with the best practices. Otherwise, you are likely to lose out on some of the significant benefits Docker offers over other methods of software distribution, which would, in turn, nullify the purpose of having opted for Docker containers in the first place.

Looking back at the article, we began our discussion by briefly covering some of the basics of Docker containers. We then discussed some of the areas in which you can easily optimize your Docker-based application’s performance. Finally, we addressed some of the most commonly asked questions on Docker and containerization.

For more in-depth content around web development and a reliable tool for optimizing your application’s performance, navigate our blog and feel free to explore ScoutAPM with a free 14-day trial!

Ready to Optimize Your App?

Join engineering teams who trust Scout Monitoring for hassle-free performance monitoring. With our 3-step setup, powerful tooling, and responsive support, you can quickly identify and fix performance issues before they impact your users.