Containerization vs. Virtualization – Device Access


We have come to the final round of our blog smackdown between containerization and virtualization. Read on to see if anyone inflicts a beat-down. Our final post looks at device access and concludes with a few words summing up the contest.

For new readers, we are at the end of our “wrestling match” – a friendly contest between Taras Shapovalov (team container) and Piotr Wachowicz (team VM). The battle royal includes:

  1. Image size
  2. Overhead
  3. Start time and setup orchestration
  4. Image formats
  5. Application sharing
  6. Image version control
  7. Package distribution
  8. Device access


    Let’s go to Round 8, the Final Round – Device Access

    Taras – Access to any particular device can be restricted within a container when required. In general, however, there is no problem gaining access from within a container to any device attached to the host.
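    To make that concrete, here is a minimal sketch of exposing a single host device node to a Docker container with the --device flag. The device path and image name are illustrative, not prescriptive; any device node attached to the host could be used.

```python
import subprocess

def run_with_device(device: str, image: str = "ubuntu:22.04") -> None:
    """Run a container that can see exactly one host device node."""
    subprocess.run(
        [
            "docker", "run", "--rm",
            # Expose a single device node; everything else in /dev stays
            # hidden, which is how access gets restricted when required.
            "--device", device,
            image,
            "ls", "-l", device,  # confirm the node is visible inside
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical example: an InfiniBand verbs device on the host.
    run_with_device("/dev/infiniband/uverbs0")
```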

    By contrast, virtualization presents obstacles to accessing devices, because the hypervisor abstracts the physical hardware when running a VM. Examples include GPUs, coprocessors, and InfiniBand (IB) host channel adapters (HCAs). Several projects are in the works that promise to give access to a GPU from within a VM, but they do not work perfectly on all hypervisors – and there is no unified way to pass an arbitrary device to a VM.

    Piotr – For this one, I want to start at the end. There is actually a unified way to give VMs passthrough access to Peripheral Component Interconnect (PCI) devices. Intel implements support for it with what it calls VT-d; AMD’s equivalent is its IOMMU (AMD-Vi). The only drawback is that, for the passthrough to work, the device must support certain PCI standards. This means the passthrough can’t be done with every PCI card, but it can be done with those that implement the standards.
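    A quick way to see whether a host is ready for this is to inspect the IOMMU groups the kernel builds when VT-d or AMD’s IOMMU is active. Here is a minimal Python sketch of that check; it assumes a Linux host with the standard sysfs layout and the IOMMU enabled on the kernel command line (intel_iommu=on or amd_iommu=on).

```python
from pathlib import Path

def list_iommu_groups() -> None:
    """Print each IOMMU group and the PCI devices it contains."""
    groups = Path("/sys/kernel/iommu_groups")
    if not groups.is_dir():
        print("No IOMMU groups found - is VT-d / AMD IOMMU enabled?")
        return
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        # A device is a clean passthrough candidate only if its group
        # contains nothing you still need on the host.
        print(f"group {group.name}: {', '.join(devices)}")

if __name__ == "__main__":
    list_iommu_groups()
```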

    You mentioned InfiniBand earlier. In fact, exposing a single IB HCA to multiple VMs via PCI passthrough and SR-IOV is reasonably easy with the Kernel-based Virtual Machine (KVM) and OpenStack. We actually did it some time ago. Exposing a GPU to VMs is a bit trickier. As you point out, there are many ways to approach this. You could go with GPU “API remoting,” like rCUDA, but that might be slow for some applications. Or, you could go with a generic PCI passthrough, or its special case, a GPU passthrough.
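    For the curious, the host-side half of that SR-IOV setup boils down to asking the kernel to create virtual functions (VFs) on the HCA’s PCI device, each of which can then be handed to a different VM. A minimal sketch, assuming a Linux host, root privileges, and SR-IOV enabled in the HCA firmware (the PCI address is hypothetical):

```python
from pathlib import Path

def enable_sriov(pci_addr: str, num_vfs: int) -> None:
    """Request num_vfs virtual functions on a PCI device (needs root)."""
    numvfs = Path("/sys/bus/pci/devices") / pci_addr / "sriov_numvfs"
    # The kernel refuses to change a nonzero VF count directly,
    # so reset to zero first.
    numvfs.write_text("0")
    numvfs.write_text(str(num_vfs))
    print(f"{pci_addr}: requested {num_vfs} VFs")

if __name__ == "__main__":
    # Hypothetical PCI address of an IB HCA; find yours with lspci.
    enable_sriov("0000:03:00.0", 4)
```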

    Both passthrough-based solutions work, but I have to agree that they require quite a bit of prior setup and fiddling. Another drawback is that they cannot share a single GPU among multiple VMs. If that is not a requirement, PCI passthrough gets you performance numbers only a few percent shy of bare metal. So, in my view, this solution is comparable to containers.
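    As an illustration of that “fiddling,” part of the setup for a KVM passthrough is detaching the device from its host driver and binding it to vfio-pci. A minimal sketch, assuming a Linux host with the vfio-pci module loaded and root privileges (the PCI address is hypothetical):

```python
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def bind_to_vfio(pci_addr: str) -> None:
    """Detach a PCI device from its host driver and bind it to vfio-pci."""
    dev = PCI_DEVICES / pci_addr
    driver = dev / "driver"
    if driver.exists():
        # Unbind from the current host driver (e.g. nouveau or nvidia).
        (driver / "unbind").write_text(pci_addr)
    # Prefer vfio-pci for this device, then ask the kernel to reprobe it.
    (dev / "driver_override").write_text("vfio-pci")
    Path("/sys/bus/pci/drivers_probe").write_text(pci_addr)
    print(f"{pci_addr} is now bound to vfio-pci")

if __name__ == "__main__":
    # Hypothetical PCI address of a GPU; find yours with lspci.
    bind_to_vfio("0000:01:00.0")
```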

    To go one step further and share a single GPU among multiple users in KVM, you would need either GPU API remoting or a GPU card that supports the SR-IOV standard. Incidentally, AMD did just that last year, unveiling its Multiuser GPU technology, which is based on the SR-IOV standard. However, I haven’t yet seen it used with KVM. Then again, I wasn’t exactly looking.

    As for Xeon Phi in VMs – I know some people claim they got it to work using PCI passthrough on some hypervisors. I’m not sure whether KVM was one of them. From what I gather, Intel’s next-generation Xeon Phi processor, Knights Landing, makes things easier. What do you think about that?

    Taras – Yes, I agree that the Knights Landing processor solves this, because the chip behaves like a regular CPU (it will simply replace the host CPU), so passthrough technologies will no longer be needed for it. On the other hand, Intel is going to continue to produce PCIe versions of Xeon Phi, so passthrough technology will still be needed for those.

    Piotr – To sum up the discussion (aka wrestling match) on my end, I do not see VMs and containers as competitors. They simply have different use cases. However, I believe strongly that OpenStack can provide a perfect ecosystem in which containers can thrive, and I hope that the container and OpenStack communities will work even more closely together in the future.

    Taras – I agree. Let’s hope for that.