New Containerization Technologies Let You Isolate Applications and Scale Up or Down the Services You Are Running

Interest in containerization technologies continues to grow, and people want to know how to get the most out of them. We decided to take a look at some of the use cases that are most relevant to Bright customers: process isolation, images, and long-running applications and services.

Process isolation – One of the most interesting aspects of containers is that processes running in different containers are isolated from each other: one container’s process cannot access resources used by another container. While this has always been possible (if the processes were executed by different users), containers provide a deeper separation and resource restriction. In the past, different users’ processes could still share file systems, process IDs, network interfaces, and other operating system resources. Processes running in containers can each have their own copy of these resources, which is simply a more secure way to run applications.
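
To see this isolation in practice, here is a minimal sketch using the Docker SDK for Python (the docker package); it assumes Docker is installed and the daemon is running. Listing the processes inside a fresh container shows only that container’s own processes, because the container gets its own PID namespace:

    # Requires: pip install docker, plus a running Docker daemon.
    import docker

    client = docker.from_env()

    # Run "ps aux" inside a throwaway Alpine container. The output lists
    # only the container's own processes; it cannot see host processes
    # or processes in other containers, because each container has its
    # own PID namespace.
    output = client.containers.run("alpine", "ps aux", remove=True)
    print(output.decode())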

Images – Containers can be used to ensure an application has everything it needs to run properly, no matter where it runs. In other words, if an application requires a particular set of libraries, a directory structure, additional third-party software, or configuration files, these can all be placed in the image file together with the application. The file can then be distributed to the many different nodes on which the application will be started. The image can also be shared with other users or copied between on-premises and cloud clusters. It all adds up to a better way to distribute software.
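
As a sketch of that distribution workflow, the same Python SDK can export an image to a single tar file and load it again on another node. The image name "myapp:1.0" and the file name are hypothetical; the named argument to save() (which keeps the repository tag in the archive) is available in recent versions of the SDK:

    # Requires: pip install docker. The image name "myapp:1.0" is
    # hypothetical; substitute an image that exists locally.
    import docker

    client = docker.from_env()

    # Export the image (layers, tags, and metadata) to a single tar file
    # that can be copied to other nodes or clusters.
    image = client.images.get("myapp:1.0")
    with open("myapp.tar", "wb") as f:
        for chunk in image.save(named=True):  # named=True keeps the tag
            f.write(chunk)

    # On the receiving node, load the tar file back into the local image
    # store; the application then runs with the exact same libraries,
    # files, and configuration it was packaged with.
    with open("myapp.tar", "rb") as f:
        client.images.load(f.read())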

Long-running applications/services – When the Kubernetes container orchestration software is used, each container is treated as an “atomic” object. Containers are grouped into pods, which share resources and can be scheduled across the cluster. Kubernetes provides a variety of capabilities for managing pods: it helps manage networking, user permissions, and resource restrictions, and it makes it easy to scale the number of pods of a given type running on the cluster up or down. Each pod usually has a unique network address, and Kubernetes lets you create a shared address (called a service address) that forwards connections to a healthy pod when an old pod (or node) stops working properly. Such service addresses make it easy to implement load balancing for user applications. This approach shows great promise as a better way of managing long-running applications and services on a cluster.
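
As a concrete sketch, the official Kubernetes Python client can scale the pods behind a deployment and expose them through a single service address. The deployment name "web", the namespace "default", and the ports below are hypothetical, and the sketch assumes a reachable cluster with a valid kubeconfig:

    # Requires: pip install kubernetes, plus access to a cluster via a
    # kubeconfig file. Names and ports below are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    # Scale the (hypothetical) "web" deployment to 5 replicas; Kubernetes
    # creates or removes pods to match the requested count.
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name="web", namespace="default",
        body={"spec": {"replicas": 5}})

    # Create a service address in front of the pods. Connections to the
    # service are forwarded only to healthy pods, which gives simple load
    # balancing and failover when a pod or node stops responding.
    core = client.CoreV1Api()
    core.create_namespaced_service(
        namespace="default",
        body=client.V1Service(
            metadata=client.V1ObjectMeta(name="web"),
            spec=client.V1ServiceSpec(
                selector={"app": "web"},
                ports=[client.V1ServicePort(port=80, target_port=8080)],
            ),
        ),
    )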

Examples where I think this will be important include sites such as Twitter and LinkedIn, and other large websites that provide many different applications on their back end. These sites must be able to scale up and scale down all the time – and it is far more convenient to do so using containers.

With the new Version 7.2 of Bright Cluster Manager, administrators can now easily set up a web workload orchestration system (using Kubernetes and Docker as a backend), configure it, add new users with specific permissions, monitor it, and then easily update it from the Bright repository. With Version 7.2, users can quickly configure Kubernetes components and run them on the right hosts. This lets admins configure what should run where from within Bright’s management infrastructure, while allowing the different machines to communicate securely.

The future of computation in general is in containers – running containers allows you to isolate services from each other and to scale the number of services you are running up or down. Version 7.2 users who opt for containers now have what they need to manage containers efficiently.
