Containers – The Essentials
09 September 2021
by Eric Goebelbecker
Let’s talk about container essentials. Over the past few years, containers have transitioned from the hottest new trend to essential IT architecture. But are they a good fit for you? Are you wondering whether or not you’re using them effectively? Or have you been afraid to pull the trigger and add containers to your IT portfolio?
Maybe you’re not clear on how containers differ from virtual machines (VMs). What’s the difference? Why would you use one instead of the other?
Containers help you use your hardware more efficiently. They give you a way to fit more applications into a single system safely. They’re also a powerful packaging mechanism for moving applications from one system to another easily. Unlike the mythical boast of some programming languages, containers truly allow you to write once and run anywhere.
In this article, we’ll cover what containers are, what they’re not, and how you can use them to build a clean, efficient, and easy-to-maintain IT infrastructure.
Containers Are Not Virtual Machines
Containers and virtual machines are not the same thing. They share some similarities, especially when you look at them from a distance, but the differences can’t be overstated. Containers provide applications with an isolated environment. Virtual machines emulate complete computer systems that usually run more than one application.
What’s the Difference?
Servers running containers have a single operating system. The containers share that server’s kernel and operating system resources. The shared portions are read-only (with copy-on-write semantics where necessary) and, depending on how the containers are configured, have shared access to the server’s networking interfaces. Meanwhile, the applications run just as they would on any other computer.
Servers that run VMs run a hypervisor that supports the operating system running inside each VM. The virtual machines are well isolated from each other, but the applications inside a given VM are not isolated from one another. As with containers, though, the applications still run as they would on a physical computer.
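You can see the shared-kernel model for yourself. Here’s a minimal sketch using the Docker SDK for Python (pip install docker), assuming Docker is running on a Linux host: the kernel version reported inside a container matches the host’s, because the container never boots a kernel of its own.

    import platform
    import docker

    client = docker.from_env()

    # Containers share the host kernel, so the kernel version reported inside
    # a container matches the host's (on a Linux host).
    container_kernel = (
        client.containers.run("alpine", "uname -r", remove=True).decode().strip()
    )
    print("Host kernel:     ", platform.release())
    print("Container kernel:", container_kernel)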
The key difference is that containers are very lightweight when compared to virtual machines.
Starting a container is simply starting an application in an isolated environment. Starting a virtual machine, on the other hand, is booting an entire operating system.
Moving or copying a container from one system to another means moving the application and the libraries needed to support its environment, and those components are bundled in a single package. A virtual machine is, again, an entire operating system. You measure containers in megabytes and virtual machines in gigabytes. VMs are usually contained in a single package too, but they are significantly larger than containers.
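To put a rough number on “megabytes,” here’s a small sketch, again using the Docker SDK for Python, that pulls a minimal image and reports its size. The alpine image is just one example of how small a complete container package can be; a typical VM disk image runs to gigabytes.

    import docker

    client = docker.from_env()

    # Pull a minimal image and report its size in megabytes.
    image = client.images.pull("alpine", tag="3.18")
    print(f"alpine:3.18 is roughly {image.attrs['Size'] / (1024 * 1024):.1f} MB")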
Are Containers Better?
Are containers better? It depends on what you’re trying to accomplish.
Because containers only contain what they need to support a single application, they’re smaller, require less memory, and can be stopped and started very quickly.
Virtual machines come with all of the overhead required to support a complete operating system. They need more memory and take up more space, and while you can often start and stop a VM faster than the same operating system on commodity hardware, they’re still slower than a container.
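As a rough, unscientific illustration of the start-up difference, the sketch below (Docker SDK for Python) starts a throwaway container, waits for it to exit, and times the round trip. On most machines this completes in around a second, while booting a full VM is typically measured in tens of seconds or minutes.

    import time
    import docker

    client = docker.from_env()
    client.images.pull("alpine", tag="latest")  # exclude the image download from the timing

    # Start a throwaway container, let it exit immediately, and time the round trip.
    start = time.time()
    container = client.containers.run("alpine", "true", detach=True)
    container.wait()
    container.remove()
    print(f"Container started and exited in {time.time() - start:.2f} seconds")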
Do these differences make containers a better choice? Only if your goal is to run individual applications. Sometimes you need the support of a complete operating system, or you need to run several apps together on the same system. If that’s the case, a VM makes more sense.
Both containers and virtual machines have come a long way in terms of portability. While there are only a few container implementations, the most popular, Docker, supports Windows, macOS, and all major Linux distributions. VMs have the Open Virtualization Format (OVF). This format allows you to move VMs between hypervisors, with some limitations.
That said, containers make it possible to package an application that only supports an older operating system release and run it on a newer one. So, for example, you can containerize a legacy application and run it on a newer version of your operating system.
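As an illustrative sketch (not a recipe for any particular application), the snippet below builds and runs a hypothetical legacy app packaged against an older Ubuntu userland, using the Docker SDK for Python. The ./legacy-app directory, its Dockerfile, and run.sh are assumptions made up for the example.

    import docker

    client = docker.from_env()

    # Hypothetical layout: ./legacy-app contains the application plus a Dockerfile
    # along the lines of:
    #
    #   FROM ubuntu:16.04
    #   COPY . /opt/legacy-app
    #   CMD ["/opt/legacy-app/run.sh"]
    #
    # The image carries the older userland the app expects, while the (much newer)
    # host only supplies the kernel.
    image, build_logs = client.images.build(path="./legacy-app", tag="legacy-app:1.0")
    client.containers.run("legacy-app:1.0", detach=True, name="legacy-app")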
Why Use Containers?
Containers run applications in isolation. While containers running on the same host still share operating system resources, the operating system keeps them isolated from each other. This provides some important benefits.
Containers Are Portable
Containers can run Windows, Linux, FreeBSD, and Solaris applications. Docker itself runs on Windows, Linux, and macOS (the macOS version actually runs Linux inside a VM, so it isn’t as robust as on the other two operating systems). This means you can use Docker to run applications across platforms without managing a virtual machine yourself.
But this is only the beginning of the portability containers have to offer.
Containers can also run applications from different versions of operating systems on the same host. So, if you need to build or test code for several different versions of a Linux distribution or even different distributions, you can set up your CI/CD pipeline with build containers instead of a set of build servers or VMs.
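For example, here’s a minimal sketch (Docker SDK for Python) of the build-container idea: the same step runs against several distribution images without provisioning a VM for each. The image tags and the placeholder command are illustrative; a real pipeline would run your actual build script.

    import docker

    client = docker.from_env()

    # Run the same step against several distributions; swap in your real build
    # command in place of the placeholder below.
    for image in ["ubuntu:20.04", "ubuntu:22.04", "debian:bullseye"]:
        output = client.containers.run(image, "cat /etc/os-release", remove=True)
        print(image, "->", output.decode().splitlines()[0])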
If you need to run an older version of an application in a new environment, a container is the way to go.
Containers Are Efficient
When you set up a virtual machine, you have to allocate memory and disk in advance. Both of those resources are permanently associated with that VM. In some circumstances, you can get away with a “sparse” disk that doesn’t use all of the space right away, but there’s a performance penalty for that. Memory, however, is a fixed resource. VMs can’t share it. When you set up a VM with 16 gigabytes of memory, you’ve used that memory, whether the VM needs it all the time or not.
Containers, however, don’t have this limitation. You can set a memory limit for a container, but that’s only a maximum. Containers share host memory just like other applications. They can also share a disk. You can set aside volumes for them if you want, but that’s up to you.
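To make that concrete, here’s a small sketch with the Docker SDK for Python. The memory value is a ceiling rather than a reservation, and the volume mapping is entirely optional; the host path used here is a made-up example.

    import docker

    client = docker.from_env()

    # The memory limit is a cap, not a pre-allocation; the container only uses
    # what it actually needs. The host path below is a hypothetical example.
    container = client.containers.run(
        "alpine",
        "sleep 300",
        detach=True,
        mem_limit="512m",
        volumes={"/srv/app-data": {"bind": "/data", "mode": "rw"}},
    )
    print(container.name, container.status)
    container.stop()
    container.remove()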
So, containers only consume resources as they need them. They’re also easier to move between systems since they don’t require dedicated resources. The onus is on you to make sure they have what they need, of course. But their flexibility and portability make that easy. You can also use orchestration systems like Kubernetes, and they’ll manage the resources for you.
Why Not Use Containers?
Containers are a powerful tool, but they’re not the solution to every problem. There are plenty of situations where a VM is the better option. The obvious case is when you need to virtualize an entire system.
For example, many companies have moved to virtual desktop infrastructure (VDI) as a cost-effective and secure way to provide workstations to their employees. Containers are not a replacement for VDI. Desktop users need an entire operating system and the services it provides.
If you run an application that requires significant resources, it may work best when you allocate them in advance. In that case, a VM is the better option. Containers are flexible and efficient, but sometimes that flexibility isn’t what you need, and the relative rigidity of VMs is an asset.
Time to Look at Containers
We’ve taken a brief look at container essentials. Their flexibility and efficiency make them a powerful tool that you can use to save time, effort, and money. Can you add containers to your test environment? Do you have legacy applications that need to move to updated systems? It’s time to see how containers can help you upgrade your infrastructure.
Post Author
This post was written by Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and Financial Information eXchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).