What is a Container? Definition, Benefits, and Use Cases

Containers have seen a massive increase in popularity over the last decade. So much so, that you might even be investigating containers as a solution for improving the development lifecycle of your own applications. But what is a container? And how can it benefit your work? This post will cover everything you need to know about containers, their benefits, and how they came to be.

  • What is a container?
  • Benefits of containers
  • Why are containers used?
  • Containers vs. VMs
  • Types of containers

What is a container?

Simply put, a container is everything that you need to run an application packaged into its own little bundle of data. A container pulls in the application code, its libraries and dependencies, any configuration files, and any additional system tools it relies on. There are several types of containers, and they are used everywhere!
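To make that concrete, here is a minimal, hypothetical Dockerfile. The file names (`requirements.txt`, `app.py`, `config.yaml`) are illustrative, but the sketch shows how code, dependencies, and configuration all get packaged into a single image:

```dockerfile
# Start from a small base image that provides the runtime
FROM python:3.12-slim

WORKDIR /app

# Bake the application's dependencies into the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and its configuration file
COPY app.py config.yaml ./

# The command the container runs when it starts
CMD ["python", "app.py"]
```

Everything the application needs travels inside the image, which is exactly what makes the resulting container portable.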

Benefits of containers

There are many benefits of containerization.

First and foremost, using containers creates portability across environments. When the dependencies that are required to run an application coexist with the application itself, you can run that container almost anywhere. For example, say you started an application in a development environment. After the application is proven to work as expected, it would be promoted to a test environment and then finally exposed to users in production. All of these transitions can be burdensome, especially when the owners of each environment use different dependency versions to configure and run the application. These mismatches can also introduce failures that cause applications to break, resulting in a poor user experience. Application containers alleviate this pain by allowing for an easy flow between environments. Portability also enables migration to the cloud. Going from on-prem containers to cloud containers is relatively simple compared to moving a full application to the cloud.

Note: As background, containers require compatibility with the CPU architecture they are running on to work properly. Luckily, many tools such as Docker’s Buildx allow for multi-architecture build compatibility.
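As a sketch of what that looks like in practice, a single Buildx invocation can produce an image for more than one architecture at once. The image name and platform list below are illustrative:

```shell
# Build and push one image that runs on both x86-64 and ARM64 hosts
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:1.0 \
  --push .
```

The registry then serves the variant that matches whatever machine pulls the image.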

Application containers are also incredibly flexible. Maintaining infrastructure can be a demanding task for any development team. When creating and maintaining containers, the focus is solely on the application and how it gets built. As technologies advance and new requirements develop, you are not beholden to any one solution. If you want to get out of a data center, containers can help you do that. The entire application is ready to move at a moment's notice.

Finally, containerization empowers developers to create better applications. Within the last decade we have been transitioning away from the idea of creating one central application to run everything. This “monolithic” architecture creates unneeded technical debt that can be costly for developers and organizations trying to future-proof themselves. Separating applications into smaller parts that can be developed in parallel, without risk of impacting each other, is now the standard for modern applications. Due to the small, compartmental nature of containers, this new architectural pattern isn’t just possible with containers; it’s actually easier to implement. Each service can be isolated into its own containerized application and worked on independently without disruption to the rest of the services.

Why are containers used?

Hobbyists and enterprises alike are adopting containers more every day. But how did we get here, and why are containers used now?

Developing and hosting applications began on bare metal hardware. If you wanted to host an application, you would have to go out and buy the physical machine and manually install the operating system, the application, and all of its dependencies. Over time, you would update the machine alongside the application until the day the machine would die. Then you would have to go out and purchase a new machine.

From there we moved to virtual machines, stripping away all of the actual hardware that had to be constantly maintained. These virtual machines would take only a portion of the resources computers had to offer and could be wiped away with ease. Uncoupling the hardware from the operating systems and processes running on it, virtual machines provided a level of abstraction that enhanced application development. This is because virtual machines allowed for easy maintenance and simple application provisioning by allowing teams to focus on specific areas of the application—hardware vs. software. The abstraction didn’t stop there. Following a logical progression, the sharing of physical hardware was extended to include sharing of the operating system kernel, making the application in its container the unit of work that mattered. With the ability to isolate dependencies and thus separate maintenance cycles, containers allow the developer to focus on the application and the ops team to focus on the operating system.

[Graphic: the evolution of infrastructure, from bare metal to virtual machines to containers]

Each step in the journey, illustrated in the graphic above, represents a different pain point that’s been solved by evolving infrastructure. Using a full PC, or bare-metal hardware, maintainability became cumbersome and scalability was near impossible. With virtual machines, applications were limited to the environment they were built on. Containers solve all of these issues. Containers are updated when an application is updated. They are small and scalable. Lastly, they are not dependent on any specific environment.

Containers are integral to applications you use every day. For example, many of the most popular search engines were developed using containers and handle billions of searches per day. Carrying out these searches takes an enormous amount of computing power, and providing it would require hundreds if not thousands of machines. If a vulnerability were discovered in the application, an engineer would have to visit each and every machine to ensure it was patched. The same would have to happen for every update to the application. The maintenance overhead alone would hinder any progress they could make as a technology company.

To read more about containers vs. VMs, and how they each came to be, read this post by Liam Randall, VP, Tech Commercialization: The Evolution of Infrastructure: How We Got to Containers.

Containers vs. Virtual Machines (VMs)

Earlier in this article we talked about the evolution of computing infrastructure and how we got to containers. You might remember that before containers, people primarily used virtual machines. Virtual machines are still used, but they’re often confused with containers. When considering containers vs. VMs, the main difference is that a virtual machine includes its own operating system. Both containers and VMs package the application and any libraries needed to run it, but the addition of a full operating system makes the virtual machine much more heavyweight and harder to maintain. With each step in our container evolution, the deployment artifact for an application has become shorter-lived and more easily replaced. The ability to replace a virtual machine or container with ease is a great benefit, as it allows us to be agile and adapt to an application’s needs at a moment's notice. If there is a spike in user activity, spinning up several more containers to handle the increased demand is easy; buying more hardware to accommodate these changes would be difficult, time-consuming, and expensive.

It is worth noting that containers and VMs are considered to be complementary technologies. In fact, containers are deployed on virtual machines the majority of the time. The two technologies solve different but related problems when it comes to application development and deployment, so the question is not really containers or VMs. Instead, it’s a matter of containers and VMs, or just VMs.

Read Containers vs. VMs: What’s the Difference and When to Use Them for a full comparison of these two technologies.

Types of containers

So far we’ve answered what a container is, discussed the benefits of containers, and covered the most common use cases. What we haven’t talked about are the different types of containers and how they are implemented. Since the invention of containers, several variations have emerged to suit developers' needs.

Containers have been implemented in several different ways over the years. The earliest step toward containerization was a system call introduced in 1979 by the name of “chroot,” which simply isolated the filesystem view of a running process. Over the next couple of decades, several new contributors arrived on the scene. Virtuozzo developed the first commercial container solution in 2000. Shortly after, FreeBSD, Solaris, and the Linux community all had their own solutions for implementing containers. Most of these solutions were built around the Linux kernel, but a need for Microsoft Windows containers was recognized as well.

Docker is one of the most popular container implementations (also known as a container engine) to date and a great place to start when trying containers. Docker is a modern solution that incorporates a lot of the great features developed over time by several of the container implementers discussed above. Docker can also build container images for many different CPU architectures with the help of Buildx.

Another tool you might have heard about is Kubernetes. The big problem that Kubernetes solves is the orchestration of containers. Container management is not difficult for a few containers, but maintaining hundreds of containers starts to create operational issues for developers. Kubernetes allows you to deploy and maintain multiple containers easily. For example, it lets containers communicate with each other and facilitates automatic scaling to support as many users as needed. However, Kubernetes can be difficult to use at enterprise scale.
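As a sketch of what orchestration looks like in practice, a minimal (hypothetical) Kubernetes Deployment manifest tells the cluster how many copies of a containerized application to keep running. The names and image below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # Kubernetes keeps three copies of the container running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/myapp:1.0   # illustrative image
          ports:
            - containerPort: 8080
```

Handling a spike in traffic then becomes a one-line change to `replicas` (or a single `kubectl scale` command) rather than provisioning new machines.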

At Capital One, we’ve actually built our own solution to manage Kubernetes effectively at such a large scale. Critical Stack is a simple, secure container orchestration platform built to balance what developers want with the needs of our organization. By combining improved governance and application security with easier orchestration and an intuitive UI, we’ve been able to transition to containers quickly, safely, and effectively.

To read more about why Kubernetes won’t solve all enterprise container needs, read this post by Liam Randall, VP, Tech Commercialization: Kubernetes at Enterprise Scale: What You Need to Know.

Looking forward

Containers are a powerful technology that empowers developers to create their best work. Separating the development process, enabling portability, and creating more reliable applications are all possible with container implementation. Container adoption will continue to grow for years to come as more and more individuals recognize the benefits containers provide to modern software development.


Daniel Levine, Solutions Architect, Critical Stack Team

Daniel Levine is a Solutions Architect on the Critical Stack team at Capital One. Daniel graduated from Penn State with a degree in Computer Science and has been working at Capital One ever since. He has worked on several projects, starting with public key infrastructure and now helping to deliver Kubernetes software solutions on an enterprise scale. Daniel loves talking with other technologists and learning about upcoming trends in the computing world. You can connect with Daniel on LinkedIn (www.linkedin.com/in/levine-daniel).

Related Content