The adoption of OpenStack continues to grow unabated. According to Ben Rossi, writing in an article (dated 13 April 2016) titled 65% of OpenStack deployments now in production, the number of OpenStack deployments has grown by one-third over the past year. The reasons are not hard to guess; foremost among them are the avoidance of vendor lock-in and the inherent flexibility and adaptability of OpenStack. As the same article puts it, "Being a flexible framework to build on is the most important aspect of the OpenStack platform," and, "OpenStack has experienced accelerated adoption in the past year with more diverse and larger deployments, particularly as organizations have recognized the flexibility and agility that OpenStack offers."
The adaptability of OpenStack shows in the many and varied hybrid (private plus public cloud) environments it now supports. From the same Ben Rossi article: "OpenStack’s platform has helped drive innovation in a range of industries, enabling users to operate both legacy systems and cloud-native apps through a single framework...OpenStack is unique in its ability to support organizations managing legacy IT workloads while also adopting agile IT systems to drive competitive advantage through rapid iteration of software development."
Of late, an exciting new development has been taking place around OpenStack: the emergence of Network Functions Virtualization (NFV) and its ability to bypass expensive proprietary hardware. It turns out that OpenStack is well suited to deploying NFV. From Ben Rossi, in an article titled How organizations can get started in OpenStack: "There are a growing number of use cases in OpenStack. Telcos especially are getting very excited about OpenStack as NFV (network functions virtualization) gives the ability to rapidly deploy virtualized appliances like routers and firewalls over software-defined networks, reducing the need for expensive proprietary hardware."
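To make that NFV pattern concrete, here is a minimal Heat Orchestration Template sketch of the kind of thing the quote describes: a tenant network with a virtual router standing in for a hardware appliance. All names, the CIDR, and the external network name ("public") are illustrative assumptions, not details from the article:

```yaml
# Illustrative HOT sketch: a software-defined network plus a
# virtual router replacing a proprietary hardware router.
# Resource names, CIDR, and the external network name are assumed.
heat_template_version: 2016-04-08

description: Virtual router over a software-defined network (sketch)

resources:
  app_net:
    type: OS::Neutron::Net
    properties:
      name: app-net

  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: app_net }
      cidr: 10.0.10.0/24

  edge_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: public   # assumed name of the provider/external network

  router_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: edge_router }
      subnet: { get_resource: app_subnet }
```

Launched with something like `openstack stack create -t router.yaml tenant-router`, each tenant gets its own virtual router in seconds rather than waiting on a hardware install, which is the rapid-deployment advantage the telcos are excited about.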
Finally, for the data center running everything from private and public clouds to Hadoop clusters, supercomputers, and whatever other HPC and big data technologies exist today, it has become obvious that OpenStack can handle any or all of it. As Matthijs van Leeuwen, chief strategy officer of Bright Computing, put it in an article by Rich Brueckner titled HPC Finally Climbing to the Cloud, the very best OpenStack implementations, such as Bright OpenStack, have the ability to "build a data center infrastructure where they (IT and/or developers) can offer their own internal users a choice: today, do they want to run an HPC application, a big data application, or other workloads? They can also offer a choice of infrastructure: does this user want to run bare metal; does the other user want to run a virtual machine; and yet another user in a container?"
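The bare-metal-versus-VM choice van Leeuwen describes can be sketched in the same template language: the two requests below differ only in flavor and image, with the bare-metal flavor assumed to be mapped to Ironic-managed nodes. The flavor, image, and network names are illustrative assumptions, not Bright-specific details:

```yaml
# Illustrative HOT fragment: the same cloud serves a VM and a
# bare-metal node side by side; only flavor and image differ.
# Flavor, image, and network names are assumed.
heat_template_version: 2016-04-08

description: VM and bare-metal servers from one cloud (sketch)

resources:
  vm_server:
    type: OS::Nova::Server
    properties:
      image: centos-7           # assumed virtualized guest image
      flavor: m1.medium         # ordinary VM flavor
      networks:
        - network: app-net      # assumed pre-existing tenant network

  baremetal_server:
    type: OS::Nova::Server
    properties:
      image: centos-7-wholedisk # assumed whole-disk image for bare metal
      flavor: bm.hpc            # assumed flavor mapped to Ironic nodes
      networks:
        - network: app-net
```

The point of the sketch is that the user-facing request is identical in shape; the scheduler routes one to a hypervisor and the other to physical hardware.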
For the modern data center, OpenStack is hard to beat, given its myriad configuration possibilities across modern cloud, HPC, and big data technologies.
Have you successfully integrated OpenStack alongside HPC and big data? Tell us about it. What worked? What didn’t? How would you advise others to tackle the issue in their own data centers?