By Lionel Gibbons | October 17, 2017 | Cloud
According to IDC FutureScape: Worldwide Cloud 2016 Predictions, the cloud will be the preferred delivery mechanism for analytics by 2018. For enterprises across sectors, turning that prediction into reality will depend on high-performance computing (HPC) used to run parallel applications.
To take advantage of this in a way that optimizes both cost and effectiveness, many enterprises are looking to deploy a mix of public and private clouds in a hybrid model. Yet while cloud bursting is a fundamental step in realizing the full potential of a hybrid cloud, too few organizations embarking on the HPC-and-analytics path appreciate its importance.
Cloud bursting is the dynamic deployment of applications that normally run on a private cloud into a public cloud, to handle peak demands when private cloud resources are insufficient. It can make private clouds more cost-efficient by eliminating the need to overbuild physical infrastructure just to cover fluctuating peaks in demand. Private clouds can instead be rightsized in compute and storage for the ongoing baseline load, because the peaks are absorbed by a public cloud on a pay-per-use model.
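The overflow idea above can be sketched in a few lines. This is a hypothetical illustration, not any particular product's scheduler: jobs are packed into the private cloud's free cores first, and whatever does not fit is marked for bursting to pay-per-use public capacity. The `Job` type, `plan_burst` function, and all values are made up for the example.

```python
# Hypothetical sketch of a cloud-bursting decision: fill private-cloud
# capacity first, overflow the rest to a public cloud. Illustrative only.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int  # cores the job requires

def plan_burst(pending: list[Job], free_local_cores: int) -> tuple[list[Job], list[Job]]:
    """Assign jobs to local capacity first; anything that doesn't fit bursts."""
    local, cloud = [], []
    # Place the largest jobs first so small ones can backfill leftover cores.
    for job in sorted(pending, key=lambda j: j.cores, reverse=True):
        if job.cores <= free_local_cores:
            free_local_cores -= job.cores
            local.append(job)
        else:
            cloud.append(job)  # pay-per-use overflow capacity
    return local, cloud

local, cloud = plan_burst(
    [Job("sequencing", 64), Job("risk-model", 256), Job("qa-suite", 16)],
    free_local_cores=128,
)
print([j.name for j in local])  # → ['sequencing', 'qa-suite']
print([j.name for j in cloud])  # → ['risk-model']
```

A real cluster manager weighs far more than core counts (data locality, licensing, transfer costs), but the shape of the decision is the same: rightsize the private side and treat the public cloud as elastic overflow.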
There are many scenarios where businesses can benefit from cloud bursting. Many sectors, for example, deal with seasonal spikes that put an extra burden on private clouds. Enterprise data centers may also face geographic imbalances, where one location experiences heavy loads yet must still meet application-specific performance requirements.
Software development projects and analytics are two of the fastest-growing drivers of demand for cloud bursting. DevOps teams spin up numerous virtual machines for testing purposes that are needed only for a short time. Organizations in the health and financial sectors, meanwhile, are heavy users of HPC and often have fluctuating analytics needs that call for high-core-count applications.
Overall, the availability of a public cloud offers the chance to reduce the capital cost of owning and managing excess compute capacity and storage for all kinds of workloads. Combined with on-premises resources and managed through cloud bursting, the public cloud serves as on-demand overflow capacity and eliminates the need for costly overprovisioning to meet temporary spikes in demand.
Hybrid clouds are particularly useful in certain sectors. Life sciences workflows, for example, generate a great deal of data on premises for tasks like genomic sequencing but rely on the cloud to analyze and process that data, a prime example of a temporary-use scenario. Cloud bursting is also integral to the financial sector, which must develop predictive models based on stock market data for market risk analysis.
The ability to effectively and seamlessly manage demand and potentially bring thousands of additional cores to bear is the only way that many sectors can make this type of data analysis effective in terms of both cost and time. When demand spikes, these companies can do the cost/schedule math to figure out how much additional processing power is needed and just rent it from Amazon Web Services or Microsoft Azure.
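The cost/schedule math mentioned above can be made concrete with a back-of-the-envelope calculation. This is an illustrative sketch under simplifying assumptions (the workload parallelizes perfectly, and the per-core-hour price is a made-up figure, not an actual AWS or Azure rate):

```python
# Back-of-the-envelope sizing for a burst: how many extra cores must be
# rented to finish a job by a deadline, and roughly what that costs.
# Assumes ideal parallel scaling; prices are illustrative placeholders.
import math

def extra_cores_needed(core_hours: float, deadline_hours: float, local_cores: int) -> int:
    """Cores beyond local capacity required to meet the deadline."""
    required = math.ceil(core_hours / deadline_hours)
    return max(0, required - local_cores)

def rental_cost(extra_cores: int, deadline_hours: float, price_per_core_hour: float) -> float:
    """Pay-per-use cost of the rented cores for the duration of the run."""
    return extra_cores * deadline_hours * price_per_core_hour

# Example: 50,000 core-hours of risk analysis due in 24 hours,
# with 1,000 cores available in the private cloud.
extra = extra_cores_needed(core_hours=50_000, deadline_hours=24, local_cores=1_000)
print(extra)                        # → 1084 additional cloud cores
print(rental_cost(extra, 24, 0.05)) # → 1300.8 (at a placeholder $0.05/core-hour)
```

Real estimates would factor in imperfect scaling, data-transfer time, and instance granularity, but even this crude arithmetic shows why renting thousands of cores for a day can beat owning them year-round.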
Seamlessly managing and monitoring cloud bursting of HPC workloads in a hybrid cloud requires a sophisticated cluster management solution, one that integrates with a wide variety of workload managers and flattens the steep learning curve across cloud platforms. By significantly reducing the complexity inherent in cloud bursting, such automation software delivers the agility, responsiveness, and simplicity that save money and time while opening up new cloud compute vistas for enterprises across every sector.