By Piotr Wachowicz | July 13, 2016 |
Our blog smackdown between containerization and virtualization is rolling along. Today’s post looks at application sharing.
For new readers, we are in the middle of a “wrestling match” – a friendly contest between Taras Shapovalov (team container) and Piotr Wachowicz (team VM). The battle royal includes:
Taras – Containers are a good way to share applications among users. Because the images are usually small, it is easy and fast to download them from the Internet. For example, Docker supports Docker Hub, a publicly available registry of different container images. Using this tool, anyone can easily search and download images shared by other users. Thus, if one user creates a container image, he or she can easily upload it to Docker Hub and send a link (or just the image name) to another user, who will be able to start the container from that shared image with just one command. Pretty cool!
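The workflow Taras describes can be sketched in a couple of commands; the repository name `alice/myapp` below is hypothetical, standing in for any image a user has pushed to Docker Hub:

```shell
# Search Docker Hub for publicly shared images
docker search nginx

# Pull a shared image ("alice/myapp" is a hypothetical user repository)
# and start a container from it with a single command
docker pull alice/myapp:latest
docker run -d alice/myapp:latest
```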
Piotr – Yes, the public Docker Hub is a great source of container images, but only if you do not care about security. As you just said, anyone can upload container images there. That includes potential attackers uploading container images with embedded exploits. Your end users can easily download them and deploy them within your infrastructure – then watch out!
You could, of course, set up your own internal Docker image repository that contains only verified and trusted images. But think about it – this would mean either that all container images submitted to that internal repository would have to be scanned/tested for exploits (slowing down the add-image process), or that only admins could add new images (which would be very inflexible for users). And we all know that if users have to jump through too many hoops to get something done, they will find a way around the established security policies – bringing you right back to square one.
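An internal repository of this kind can be run with Docker's official `registry` image; a minimal sketch (image and host names are illustrative):

```shell
# Run a private registry on an internal host
docker run -d -p 5000:5000 --name registry registry:2

# Re-tag a verified image so it points at the internal registry, then push it
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0

# Users then pull only from the trusted internal registry
docker pull localhost:5000/myapp:1.0
```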
Taras – Indeed, untrusted images can appear on Docker Hub or even in a locally installed image registry. Although Docker applies some Linux capability restrictions and SELinux rules that restrict and isolate the application running inside the container, processes inside Docker containers are still executed as the root user. This allows processes in the container, say, to mount directories over NFS, then read and write restricted files that should be outside the user's reach. Thus if a user runs a container, then from a security perspective she must be confident about the image and the applications she runs. But if you (as cluster administrator) consider any user who is able to communicate with Docker to be fully privileged both inside and outside the container, then the issue reduces to the same issues that appear when an administrator runs any code on the cluster as root. So, don't run unknown processes in Docker containers using unknown Docker images – think of Docker access as password-less sudo. If you still need to let regular users get the benefits of containerization on your cluster, you should consider other containerization solutions, like Singularity.
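The key difference with Singularity is that container processes run as the invoking user rather than as root, while it can still consume Docker-format images; a minimal sketch:

```shell
# Singularity runs the container process as the calling user, not root,
# and can pull images straight from Docker Hub via the docker:// URI
singularity exec docker://ubuntu:16.04 whoami
```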
In favor of Docker I can add that Docker now allows mapping users inside a container to users on the host (via the daemon's --userns-remap option). This should, for example, prevent the scenario where a user mounts /home over NFS and gains access to all of its subdirectories. Although this currently requires a specific kernel patch, hopefully all the official kernel builds from Red Hat or SUSE will soon support this Docker feature.
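On a host whose kernel supports it, enabling the remapping is a daemon-level setting; a sketch of one way to configure it (paths and the `default` remap user follow Docker's documented convention):

```shell
# Turn on user-namespace remapping in the daemon config, then restart Docker
cat <<'EOF' > /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
systemctl restart docker

# A process that sees itself as root (uid 0) inside the container now runs
# on the host as an unprivileged subordinate UID of the "dockremap" user
docker run --rm alpine id -u
```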
I should also mention here that you can find two types of image repositories on Docker Hub: official and non-official. The official repositories should be more trustworthy than those created by unknown users. This does not mean that we should trust official images 100 percent of the time, but using official images should reduce the risk of exploits or vulnerabilities.
I also want to mention that you can use image-scanning tools, like FlawCheck or Clair, to verify images before starting containers. Such tools can detect the top cybersecurity threats affecting container environments. If you use a locally installed image registry, you can run these scans as a hook on certain user actions, ensuring that every image gets checked.
Another way to increase your trust in the images is to use a hardware signing feature, like YubiKey (a USB device). Docker supports YubiKey, and image developers can sign their images to ensure they have not been tampered with by the time they reach the end user. The hardware device is not a requirement; the developer can also sign the image with a digital signature. Docker developers call the feature Docker Content Trust. From this point of view, containers have gone further than VMs in image security.
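In practice, Docker Content Trust is switched on with an environment variable; a sketch of the signed push/pull flow (the repository name `alice/myapp` is hypothetical):

```shell
# With content trust enabled, pushes sign the tag and pulls verify signatures
export DOCKER_CONTENT_TRUST=1

docker push alice/myapp:1.0   # the tag is signed on push
docker pull alice/myapp:1.0   # refused unless the tag carries a valid signature
```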
It is also important to point out that Docker has recently added image scanning to the Docker Hub. This means that source code taken from publicly available container repositories by users assembling container workloads will be checked for the correct release number and any vulnerabilities. If the code is a release with known vulnerabilities, the downloader and the supplier are notified, with the latter expected to fix it. So it’s clear that Docker developers are showing they understand how to improve protection of container images, which in turn shows that containerization is going in the right direction.
I also want to remind you that VMs are not a 100 percent secure solution either. For example, researchers found a bug in the microcode of AMD processors that allows users to control the host operating system from a virtual machine. And there are no guarantees that other hardware bugs related to virtualization will not crop up in the future.
There are several ongoing projects trying to mix containers and VMs, taking the best properties from both worlds. They are looking to combine the enhanced security available from virtualization with containerization's small memory footprint. Examples include unikernels and Hyper.
Also, keep in mind that sharing VM images is difficult, because they are usually heavy (in terms of image size) and there is no infrastructure like Docker Hub to let you easily upload and share them. Of course VM images can be uploaded to a web site, FTP server, or similar storage that others have access to, but this process is not standardized – and it takes time. As mentioned earlier, it is inconvenient for users, a problem in and of itself.
Piotr – I agree that it can be difficult, but it does not have to be. In fact, OpenStack provides more than one way to do it.
The first and most obvious approach is to use the concept of an OpenStack project, also known as a tenant. A project is an entity inside OpenStack that aggregates and owns a pool of resources, which are then shared among the users and/or user groups that belong to that project. In other words, VM images, volumes, volume snapshots, and VMs are shared by default and accessible to all the users within the project. You need no additional configuration to share VM images among the users of a given project.
There are two different approaches you can use if you want to share images among different projects. One way is to create what is known as a VM snapshot, basically a VM image that gets uploaded to OpenStack’s VM image repository, called Glance. If that image is subsequently made public, it can be used by other projects within that OpenStack cloud.
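With the unified OpenStack client, the snapshot route can be sketched as follows; the server and image names are hypothetical, and making an image public typically requires sufficient privileges:

```shell
# Snapshot a running VM into a Glance image
openstack server image create --name web01-snapshot web01

# Make the image visible to every project in the cloud
openstack image set --public web01-snapshot

# Users in other projects can now boot from it
openstack server create --image web01-snapshot --flavor m1.small web02
```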
Another approach is to use OpenStack's Cinder, a block-storage-as-a-service component that allows you to easily create volume snapshots. You can later turn those snapshots into proper volumes, reuse them within a given project, and even transfer the volumes to other projects (share them). I have to be honest, though. There's a drawback to this approach. Currently the volumes have to be explicitly transferred to other projects/tenants, and explicitly accepted by them. This makes the process a bit awkward if you want to share a volume with a large number of other projects. But the OpenStack community is already working on making volumes publicly visible and accessible, similar to the way Glance images are handled.
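The snapshot-and-transfer flow can be sketched like this; the volume names are hypothetical, and the accepting side runs the final command from the receiving project using the ID and auth key returned by the transfer request:

```shell
# Snapshot a volume, then turn the snapshot back into a new volume
openstack volume snapshot create --volume data-vol data-snap
openstack volume create --snapshot data-snap data-vol-copy

# The owner creates a transfer request for the new volume...
openstack volume transfer request create data-vol-copy

# ...and the recipient project explicitly accepts it
openstack volume transfer request accept --auth-key <key> <transfer-id>
```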
Taras – Great point!
Join us again soon for Round 6, the last in our series – Image Version Control and Package Distribution.