Containers – The Essentials
September 9, 2021, by Eric Goebelbecker
Let’s talk about container essentials. Over the past few years, containers have transitioned from the hottest new trend to essential IT architecture. But are they a good fit for you? Are you wondering whether you’re using them effectively? Or have you been afraid to pull the trigger and add containers to your IT portfolio?
Maybe you’re not clear on how containers differ from virtual machines (VMs). What’s the difference? Why would you use one instead of the other?
Containers help you use your hardware more efficiently. They give you a way to fit more applications into a single system safely. They’re also a powerful packaging mechanism for moving applications from one system to another easily. Unlike the mythical boast of some programming languages, containers truly allow you to write once and run anywhere.
In this article, we’ll cover what containers are, what they’re not, and how you can use them to build a clean, efficient, and easy-to-maintain IT infrastructure.
Containers Are Not Virtual Machines
Containers and virtual machines are not the same thing. They share some similarities, especially when you look at them from a distance, but the differences can’t be overemphasized. Containers provide applications with an isolated environment. Virtual machines emulate complete computer systems that usually run more than one application.
What’s the Difference?
Servers running containers have a single operating system. The containers share that server’s kernel and operating system resources. The shared portions are read-only (with copy-on-write semantics where necessary) and, depending on how the containers are configured, have shared access to the server’s networking interfaces. Meanwhile, the applications run just as they would on any other computer.
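You can see that shared kernel directly. Here’s a minimal sketch using Docker’s SDK for Python (the docker package; the image tags are illustrative choices): containers built from different distributions all report the same host kernel, because only the userspace in each image differs.

```python
import docker

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Both containers report the same kernel release: they share the
# host's kernel, and only the userspace in each image differs.
for image in ("alpine:3.18", "debian:11"):
    kernel = client.containers.run(image, "uname -r", remove=True)
    print(image, "->", kernel.decode().strip())
```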
Servers that run VMs run a hypervisor that supports the operating system running in each VM. The virtual machines are well isolated from each other, but the applications running inside a single VM are not isolated from one another. Similar to containers, though, the applications still run as they would on a physical computer.
The key difference is that containers are very lightweight when compared to virtual machines.
Starting a container is simply starting an application in an isolated environment. Starting a virtual machine, on the other hand, is booting an entire operating system.
Moving or copying a container from one system to another means moving the application and the libraries needed to support its environment, all bundled in a single package. A virtual machine is, again, an entire operating system. You measure containers in megabytes and virtual machines in gigabytes. VMs are usually contained in a single package too, but they are significantly larger than containers.
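To make that concrete, here’s a minimal sketch, again using Docker’s SDK for Python, that starts a container and then saves its image as a single tarball you could copy to another host. The image tag and output path are placeholder assumptions:

```python
import docker

client = docker.from_env()

# Starting a container is just starting a process in an isolated
# environment; with detach=True this returns immediately.
container = client.containers.run(
    "alpine:3.18",                      # placeholder base image
    "echo hello from a container",
    detach=True,
)
container.wait()                        # let the command finish
print(container.logs().decode())
container.remove()

# Moving the application to another system means moving its image:
# one tarball, typically measured in megabytes, not gigabytes.
image = client.images.get("alpine:3.18")
with open("alpine.tar", "wb") as tarball:
    for chunk in image.save():
        tarball.write(chunk)
```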
Are Containers Better?
Are containers better? It depends on what you’re trying to accomplish.
Because containers only contain what they need to support a single application, they’re smaller, require less memory, and can be stopped and started very quickly.
Virtual machines come with all of the overhead required to support a complete operating system. They need more memory and take up more space, and while you can often start and stop a VM faster than the same operating system on commodity hardware, they’re still slower than a container.
Do these differences make containers a better choice? Only if your goal is to run individual applications. Sometimes you need the support of a complete operating system, or you need to run several apps together on the same system. If that’s the case, a VM makes more sense.
Both containers and virtual machines have come a long way in terms of portability. While there are only a few container implementations, the most popular, Docker, supports Windows, macOS, and all major Linux distributions. VMs have the Open Virtualization Format (OVF). This format allows you to move VMs between hypervisors, with some limitations.
That said, containers make it possible to package an application built for one version of an operating system and run it on another. So, for example, you can containerize a legacy application and run it on a newer version of your operating system.
Why Use Containers?
Containers run applications in isolation. While containers running on the same host still share operating system resources, the operating system keeps them isolated from each other. This provides some important benefits.
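A quick way to see that isolation, sketched with the same Python SDK (the image tag is again a placeholder): a process list inside a container shows only the container’s own processes, because each container gets its own PID namespace.

```python
import docker

client = docker.from_env()

# Inside the container, the command itself is PID 1; the host's
# processes are invisible thanks to PID namespace isolation.
processes = client.containers.run("alpine:3.18", "ps", remove=True)
print(processes.decode())
```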
Containers Are Portable
Containers can run Windows, Linux, FreeBSD, and Solaris applications. Docker itself runs on Windows, Linux, and macOS (the macOS version actually runs Linux containers in a lightweight VM, so it’s not as robust as the other two platforms). This means you can use Docker to run applications across platforms without managing a virtual machine yourself.
But this is only the beginning of the portability containers have to offer.
Containers can also run applications from different versions of operating systems on the same host. So, if you need to build or test code for several different versions of a Linux distribution or even different distributions, you can set up your CI/CD pipeline with build containers instead of a set of build servers or VMs.
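As a sketch of that idea (using the same Python SDK; the image tags and the command stand in for your real build or test step), a single pipeline stage can target several distributions by looping over base images instead of maintaining a build server for each:

```python
import docker

client = docker.from_env()

# Hypothetical build matrix; swap in the distributions and
# versions your project actually targets.
BASE_IMAGES = ["ubuntu:20.04", "ubuntu:22.04", "debian:11"]

for image in BASE_IMAGES:
    # In a real pipeline you would mount your source tree and run
    # the build or test suite; this placeholder command just
    # identifies the environment.
    output = client.containers.run(image, "cat /etc/os-release", remove=True)
    print(f"--- {image} ---")
    print(output.decode())
```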
If you need to run an older version of an application in a new environment, a container is the way to go.
Containers Are Efficient
When you set up a virtual machine, you have to allocate memory and disk in advance. Both of those resources are permanently associated with that VM. In some circumstances, you can get away with a “sparse” disk that doesn’t use all of the space right away, but there’s a performance penalty for that. Memory, however, is a fixed resource. VMs can’t share it. When you set up a VM with 16 gigabytes of memory, you’ve used that memory, whether the VM needs it all the time or not.
Containers, however, don’t have this limitation. You can set a memory limit for a container, but that’s only a maximum. Containers share host memory just like other applications. They can also share disk, and you can set aside dedicated volumes for them if you want.
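For example, with the Python SDK you can cap a container’s memory without reserving it up front (the image and the 256 MB figure are placeholder choices):

```python
import docker

client = docker.from_env()

# The limit is a ceiling, not a reservation: the container only
# consumes host memory as it actually uses it, up to 256 MB here.
container = client.containers.run(
    "nginx:1.25",           # placeholder image
    detach=True,
    mem_limit="256m",
)

# The configured limit is recorded in the container's HostConfig
# (reported in bytes).
print(container.attrs["HostConfig"]["Memory"])

container.stop()
container.remove()
```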
So, containers only consume resources as they need them. They’re also easier to move between systems since they don’t require dedicated resources. The onus is on you to make sure they have what they need, of course. But their flexibility and portability make that easy. You can also use orchestration systems like Kubernetes, and they’ll manage the resources for you.
Why Not Use Containers?
Containers are a powerful tool, but they’re not the solution to every problem. There are plenty of situations where a VM is the better option. The obvious case is when you need to virtualize an entire system.
For example, many companies have moved to virtual desktop infrastructure (VDI) as a cost-effective and secure way to provide workstations to their employees. Containers are not a replacement for VDI. Desktop users need an entire operating system and the services that it provides.
If you run an application that requires significant resources, it may work best when you allocate them in advance. In that case, a VM is the better option. Containers are flexible and efficient, but sometimes that flexibility isn’t what you need, and the relative rigidity of VMs is an asset.
Time to Look at Containers
We’ve taken a brief look at container essentials. Their flexibility and efficiency make them a powerful tool that you can use to save time, effort, and money. Can you add containers to your test environment? Do you have legacy applications that need to move to updated systems? It’s time to see how containers can help you upgrade your infrastructure.
Post Author
This post was written by Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and Financial Information eXchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).