The Problem
“Did you get the application I sent you?”
“Yeah, it doesn’t work.”
“It runs fine on my machine.”
I am sure we have all had multiple conversations that can be summed up with these three lines. When running an application in different environments, there are bound to be differences in configuration and dependencies that cause a program to work in one environment but not in another. Problems tend to appear when your application requires a certain configuration or file that is missing or mismatched in the new environment. Obviously, when you move your application from your development environment to staging, and from staging to production, you want those transitions to occur without hiccups caused by differing language versions, differing configurations, or any other dependencies your application may have.
Great, we have identified the problem; now how do we go about solving it? The answer is to use Docker to create Docker images that have everything necessary for the application to run properly. This includes the code, libraries, and any other dependencies the application needs to execute. We can then take these Docker images and deploy them to other environments without worrying about the application's dependencies causing problems, since all those dependencies are packaged up and included within the Docker image.
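As a concrete sketch, here is what a minimal Dockerfile for a hypothetical Node.js application might look like. The base image, file names, and port are illustrative assumptions, not from a real project:

```dockerfile
# Start from an official Node.js base image (the version tag is just an example)
FROM node:18-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency manifests and install dependencies first,
# so this layer is cached when only application code changes
COPY package*.json ./
RUN npm install

# Copy the rest of the application code into the image
COPY . .

# Document the port the application listens on and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```

Every dependency the application needs is baked into the image at build time, which is exactly what makes the image portable between environments.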
All of that may have sounded a little confusing. Containers? Docker? Docker images? If these words are new to you, that is completely fine. Let’s go over some terminology so we can wrap our heads around what was just said and what it can mean for you.
Containers
When you hear the word “container”, you probably think of something like the Tupperware container you put your lunch in before going to work. Hopefully, when you put your burrito in that Tupperware container and take it to work, it does not go flying out somewhere along the way. That would make everything outside of the container messy, which would be a problem and defeat the purpose of using the container. The burrito should be isolated in its container and have everything in it that is necessary to fulfill your hunger come lunchtime, and that container should accomplish its task whether it is at home, at work, or any other place you can think of.
In programming, containers are very much like the Tupperware container that our burrito is stored in. Usually, nothing should be able to get out of the container and ruin the environment outside of it, and within the container there should be an environment that the burrito is happy to be in. If our burrito is a web application, it may need plugins, a certain version of PHP that can run those plugins, and any number of other things necessary to make the web application run the same way it ran in development. These can all be included in the container. Packaging our applications with all the code and dependencies they need to run smoothly in any environment makes it easy to develop, move, and deploy them.
Docker Images
A Docker image is something that we build and then deploy as containers. Inside the Docker image, we have the application, the dependencies required to run it, and so on. Put simply, the Docker image is an inactive version of a container: it is analogous to a class in Java, while containers are analogous to the objects instantiated from that class.
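To make the analogy concrete: you build an image once, then run as many containers from it as you like. The image and container names below are hypothetical examples:

```shell
# Build an image (the "class") from the Dockerfile in the current directory
docker build -t my-app:1.0 .

# Run two independent containers (the "objects") from the same image
docker run -d --name my-app-1 my-app:1.0
docker run -d --name my-app-2 my-app:1.0
```

Both containers start from identical contents, but each has its own running state, just as two objects share a class definition but not their fields.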
Docker images are uploaded to hosting services called registries, which store and distribute images. A registry contains image repositories, and you push new versions of an image to a repository. Registries can be hosted publicly, like Docker Hub, or privately on your own servers on-prem or in the cloud. In practice, this means you can pull current or previous versions of a Docker image from a repository. If I wanted an image with PHP, for example, I could go to a registry's PHP repository and pull whatever version of PHP I need to make my application work.
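Continuing the PHP example, pulling a specific version from Docker Hub's public `php` repository looks like this (the tag shown is just one example of pinning a version):

```shell
# Pull a specific tagged version of the official PHP image from Docker Hub
docker pull php:8.2-apache

# List the local PHP images to confirm the download
docker images php
```

The part after the colon is the tag, which is how repositories distinguish versions of the same image; omitting it pulls the `latest` tag by default.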
Docker
Docker is the program that allows you to build and share your images and then run your containers. It does this through the Docker daemon, which talks to the kernel of your host OS and is what actually creates and manages your Docker images and containers. The other main piece of Docker is the Docker client, which is where you execute the commands that communicate with the daemon.
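You can see this client/daemon split directly from the command line; every `docker` command you type is the client sending a request to the daemon:

```shell
# Prints separate "Client" and "Server" (daemon) sections,
# each with its own version information
docker version

# Asks the daemon for system-wide details, such as how many
# containers and images it is currently managing
docker info
```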
Hopefully these explanations cleared up some questions you may have had with Docker and the terminology used in the example solution. If there are still questions about anything, feel free to leave a comment and we will make sure to clear up any confusion.
What Are the Advantages of Using Docker Over Virtual Machines?
Now that you know the purpose of containers and Docker, a good question to ask is why you would use containers rather than VMs to isolate applications, since in the big picture they play similar roles.
Containers are much more lightweight than virtual machines. If you think about it, this makes a lot of sense. When you run virtual machines, you have the host OS, a hypervisor, and a full guest OS for every VM, which, as you can imagine, takes up a lot of memory even if you are not using everything that comes with a whole operating system. That is a lot of wasted memory that could be used to run more applications and processes.
When you run a Docker container, you rid yourself of the hypervisor and replace it with the Docker daemon, and all your containers run on top of the host OS, sharing its kernel. A container carries only the files necessary to execute the application properly, as opposed to the whole operating system a VM runs. The OS on a VM can be gigabytes in size, while containers are often around 100 MB or less. This is what makes containers so much more lightweight than VMs, allowing you to run many more containers on one machine than you could VMs, since they take up comparatively fewer resources.
Additionally, because a VM has to boot a full operating system, it can take minutes to start and execute the application, while a container is up and running in seconds. This prevents a lot of waiting around and allows developers to test everything much more quickly. All of this is to say that the overhead for a VM is much higher than the overhead for a container, which makes things slower and takes up more space; using containers and Docker solves a lot of these problems. This is not to say that containers should replace VMs; containers and VMs are often used together and have different use cases. That, however, is for another blog post.
It is also important to note that while there are many advantages to using Docker containers over VMs, there are also disadvantages. The main disadvantage is the security concern that arises from containers sharing the kernel of the host OS. If multiple containers are running on one system and the host kernel is compromised, it could mean bad news for every container running on that kernel.
What Are the Benefits of Using Docker in a Broad Sense?
- Low Overhead
As mentioned before, containers are only as big as they need to be, which keeps overhead low. Using a VM can be like taking a bus to cross the street: we load an entire OS to execute the same processes a container can handle. With containers, you can be sure you are only using the resources necessary to run your processes.
- If it works on my machine, it will work on yours
As long as a machine has Docker installed, your images and containers will work just fine, since the dependencies are held inside the container. Our earlier example of an application working in a development environment but not in staging or production should no longer happen. This is a huge benefit: you no longer need to figure out which dependencies are causing problems between environments every time you move the application.
- Works with multi-cloud platforms like AWS and GCP
The fact that Docker containers are supported by multi-cloud platforms is vital because it increases the portability and scalability of applications. If these platforms did not support containers, many of the benefits of using them would be lost; luckily, that is not the case, as most if not all major cloud platforms, including AWS and GCP, support Docker and containers.
- Supports CI/CD implementation
With Docker images, it is easy to build your SDLC around your CI/CD platform (assuming it supports Docker, which it should). You no longer have to worry about the build and deployment environments being incompatible with your application, because every stage of the pipeline runs the same Docker image.
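As a sketch of what that can look like, here is a minimal GitHub Actions workflow that builds an image on every push. The workflow name, branch, and image name are illustrative assumptions; a real pipeline would also log in to a registry and push the image:

```yaml
# .github/workflows/build.yml (illustrative example)
name: build-image

on:
  push:
    branches: [main]

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so the Dockerfile is available
      - uses: actions/checkout@v4

      # Build the image from the repository's Dockerfile,
      # tagging it with the commit SHA for traceability
      - name: Build image
        run: docker build -t my-app:${{ github.sha }} .

      # A real pipeline would authenticate and push the image here,
      # e.g. docker push <registry>/my-app:<tag>
```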
In Summary
With Docker, you can create containers that have everything you need to execute your application, making it easier to move your product between environments. It can also save you money, since you can cut down on your VM usage and reclaim a lot of memory. Docker containers are supported by multi-cloud services and fit naturally into CI/CD implementations, making it easier to adopt containers with your current cloud services and CI/CD tools. Hopefully this post gave you a basic understanding of what containers are, what Docker is, and how they can benefit you. If you have any unanswered questions, please leave them in the comments so we can clear up any confusion.