High-performance computing (HPC) systems that once hosted a consistent set of modeling and simulation applications are now being pressed to take on a new breed of applications in machine learning (ML) and data analytics. Meeting this demand requires organizations to expand the capacity and capabilities of their HPC systems.
When embarking on a hybrid cloud journey, experience tells us we should strive for several things:
- The hybrid system should present itself as a single (seamless) environment and interface to users and administrators, rather than two disparate systems.
- The system should be able to detect when public cloud resources are no longer needed and automatically turn them off to avoid ongoing usage fees.
- The system should be able to automatically stage data on cloud servers where computation will be performed, and intelligently manage data traffic to reduce networking and cloud storage costs.
- The system should be able to automatically scale out an on-premises system with additional public cloud servers, based on demand and the policies administrators have set.
- The tools chosen for the hybrid system should match the staff’s existing skills and knowledge, while also accounting for the learning curve new hires will face as staff turns over.
- The system should be able to use multiple public cloud environments and servers from different hardware vendors simultaneously, avoiding vendor lock-in and preserving forward-looking flexibility.
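The scale-out and idle-release goals above can be sketched as a single policy function that a scheduler loop might evaluate periodically. This is a minimal illustration under assumed conventions: the `ScalingPolicy` fields, thresholds, and the pending-jobs queue model are all hypothetical, not drawn from any specific product.

```python
from dataclasses import dataclass


@dataclass
class ScalingPolicy:
    """Hypothetical burst-to-cloud policy; all thresholds are illustrative."""
    max_cloud_nodes: int = 16              # cap on burst capacity (cost control)
    jobs_per_node: int = 4                 # pending jobs that justify one node
    idle_minutes_before_release: float = 10.0  # grace period before shutdown


def desired_cloud_nodes(pending_jobs: int, idle_minutes: float,
                        current_cloud_nodes: int,
                        policy: ScalingPolicy) -> int:
    """Return the cloud-node count the system should converge toward."""
    # Scale out: enough nodes to drain the backlog, capped by policy.
    needed = -(-pending_jobs // policy.jobs_per_node)  # ceiling division
    target = min(needed, policy.max_cloud_nodes)
    # Scale in conservatively: hold capacity through a short idle grace
    # period to avoid thrashing, then release it to stop usage fees.
    if (target < current_cloud_nodes
            and idle_minutes < policy.idle_minutes_before_release):
        target = current_cloud_nodes
    return target
```

For example, with the defaults, a backlog of 10 jobs on 2 cloud nodes yields a target of 3 nodes, while a system idle past the grace period releases all burst capacity. A real implementation would also cover the other goals in the list, such as staging data before nodes come online.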