Containerization vs. Virtualization – Start Time and Setup Orchestration

    


Our 9-part blog smackdown between containerization and virtualization continues today with a look at two topics: start time and setup orchestration.

For new readers, we are deep into our “wrestling match” – a friendly contest between Taras Shapovalov (team container) and Piotr Wachowicz (team VM). The battle royal includes:

  1. Image size
  2. Overhead
  3. Start time and Setup Orchestration
  4. Image formats
  5. Application sharing
  6. Image version control 
  7. Package distribution
  8. Device access

Let’s go to Round 3 – Start Time & Setup Orchestration

Taras – Let’s begin with a discussion of start time. Because a container image does not include an operating system, no operating system is booted inside a container. In fact, that would be impossible: all the containers share the same Linux kernel, so they can also share system libraries and utilities. Container start time is just about equivalent to process start time, so from the user’s perspective, the container starts immediately, without any noticeable delay.
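
To make that concrete, here is a minimal sketch that times how long it takes to run and tear down a throwaway container. It assumes Docker is installed and the docker-py Python SDK is available; the alpine image is just an example:

```python
# Minimal sketch, assuming Docker is running and docker-py is installed
# (pip install docker). The "alpine" image is only an example.
import time
import docker

client = docker.from_env()

start = time.perf_counter()
# Run a throwaway container that exits immediately. No guest OS is booted;
# the container is essentially a namespaced process on the host kernel.
client.containers.run("alpine", "true", remove=True)
elapsed = time.perf_counter() - start

print(f"container ran and exited in {elapsed:.2f}s")
```

On the very first run the image pull dominates the timing; once the image is cached locally, the container itself starts in roughly the time it takes to start a process.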

But when you start a new virtual machine (VM), the entire operating system boots from scratch, and services are started along with their entire dependency hierarchy. The whole thing takes a noticeable amount of time.
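
For contrast, here is a sketch of powering on an existing VM through libvirt’s Python bindings. It assumes libvirt-python is installed and that a domain has already been defined on the host; the name demo-vm is hypothetical. Note that the call returns as soon as the VM is powered on, well before the guest OS has finished booting:

```python
# Minimal sketch, assuming libvirt-python and a pre-defined domain
# named "demo-vm" (hypothetical) on the local hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("demo-vm")

# create() only powers the VM on; the guest still has to run its full
# boot sequence (kernel, init, services) before it is actually usable.
dom.create()
print("VM powered on; guest OS boot is still in progress")
conn.close()
```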

Piotr – Yeah, that’s true, but frankly, how many of us really care? How many applications absolutely require a 1-3 second spin-up time (container) compared to 30 seconds (VM)? Let’s agree on one thing: CPU time is cheap, and developer time is expensive. I see containers as great tools for prototyping a solution in a development environment, because they save developers tons of time in the code-test-fix-test cycle.

But once you are ready to ship your software, it typically doesn’t matter whether the solution runs in a bare-metal container, in a VM, or in a container-in-a-VM. The startup process for all of these alternatives will be automated in production. This means that in most cases, those extra seconds will not cost your developers any time.

Taras – But even if start time won’t matter that much in production, why not simply go with containers across the board? Also, why would people want to run containers within a VM?

Piotr – As we all know, containers rely on sharing the same kernel with other containers running on the same host. Container advocates might see this as a good thing. I agree that this has many benefits, but it can also have multiple drawbacks. Overall, it boils down to security: if containers share the same kernel, it can be relatively easy for a process to break out of a container – or at least disrupt other containers on the host – with a series of malicious kernel calls. Running a container in a lightweight VM solves that problem.
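
The shared kernel is easy to demonstrate. Here is a sketch (again assuming Docker and docker-py) that compares the kernel release reported inside a container with the host’s own; the two match because there is no separate guest kernel:

```python
# Minimal sketch, assuming Docker and docker-py. The kernel release reported
# inside the container matches the host's, because containers share the host kernel.
import platform
import docker

client = docker.from_env()
in_container = client.containers.run("alpine", "uname -r", remove=True).decode().strip()

print("host kernel:     ", platform.release())
print("container kernel:", in_container)  # same value: no separate guest kernel
```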

Also, running containers in VMs makes it easier to conduct prototyping on the container during the development cycle, while still not sacrificing security in a multi-tenant production deployment. (This is because of the isolation provided by a VM.) Furthermore, if you use the memory page de-duplication we discussed in our last blog, you won’t have to sacrifice high density, which is often associated with containers. What you gain in return is the ability to run the custom kernel of your choice, as well as full isolation from other tenants.

I have talked to several people from the Network Function Virtualization (NFV) space, and they consider the container-in-VM approach to be a perfect match for virtualizing some of the networking functions in large telecommunications datacenters – precisely because of their requirements for specific kernel features. I’m sure they are not the only ones who really want to exert control over that.

Taras – Fair enough. In fact, after listening to your argument, I’ve thought of one more scenario that requires containers inside VMs – older hardware virtualization environments that need to keep separate processes running on the same VM as isolated as possible. That type of requirement usually comes from organizations that need to run old software that is not worth rewriting from scratch to run on modern hardware. But that’s probably quite a rare case.

So now let’s move on to Setup Orchestration. Which one is easier?

Taras – Containers can be used without much preparation or setup on modern operating systems. For example, the Docker daemon is included by default in many operating system distributions. With just one command, you can create a new image using Docker, or other tools, and run it on the host. When you need to manage containers across a cluster, administrators can set up orchestration software such as Kubernetes. While it’s true that this setup isn’t trivial, it’s definitely doable by the average system administrator in a reasonable amount of time.
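
And once an orchestrator such as Kubernetes is in place, driving it is scriptable. Here is a minimal sketch using the official Python client, assuming a working kubeconfig on the machine running it; it simply lists the pods in the default namespace:

```python
# Minimal sketch, assuming a reachable cluster and the official
# 'kubernetes' Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config set up by the cluster admin
v1 = client.CoreV1Api()

# List the pods the orchestrator is currently managing in the default namespace.
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)
```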

By contrast, it is not as easy to manage VMs across your cluster. You need to set up a system like OpenStack – and OpenStack deployment and management is well known for being extremely complex. It takes a lot of time for an administrator to develop the proper OpenStack setup, and upgrading it can be an even more complex task. Of course, you can always outsource the OpenStack deployment, but this costs money for the initial setup as well as subsequent upgrades. And the specialists you hire have to have access to your cluster for each upgrade, which may be a concern.
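
Even after OpenStack itself is deployed, booting a single VM still involves several moving parts. Here is a sketch using openstacksdk, assuming a cloud already configured in clouds.yaml; the image, flavor, and network names below are hypothetical placeholders:

```python
# Minimal sketch, assuming an already-deployed OpenStack cloud configured in
# clouds.yaml and the openstacksdk package (pip install openstacksdk).
# "mycloud", "cirros", "m1.small", "private", and "demo-vm" are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("cirros")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Even one VM needs an image, a flavor, and a network to already exist.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```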

Piotr – I have to agree that OpenStack is notorious for being difficult to deploy and configure. I like the saying I once heard, “OpenStack has the mother of all learning curves.” But keep in mind that solutions such as our Bright OpenStack make it easy to deploy OpenStack on “Day 1”, and easily manage it on “Day 2+.” In fact, you can deploy Bright OpenStack on top of Ceph with just a few mouse clicks -- literally.  This means the availability of in-house OpenStack deployment expertise is no longer a deciding factor in choosing to use OpenStack.

Taras – Good point!

Join us again soon for Round 4 – Image formats