In recent years, the focus of big data analytics has shifted from simple statistical inference to sophisticated Machine Learning algorithms. Machine Learning (ML) can be understood as a set of analytical tools that collectively derive a model from a set of observations. Simple data modeling is now deemed insufficient: it captures broad trends in the data but often misses subtle features, and can cause data analysts to overlook the “big picture”.
For example, a human can easily read road signs under varying weather conditions, but that is very hard for the computer at the “heart” of a self-driving vehicle. What is amazing is that current state-of-the-art ML algorithms, and especially Deep Learning algorithms, can achieve higher accuracy than the average human. This is because they are able to “learn” non-linear features without relying on explicit a priori models.
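As a minimal sketch of what “learning non-linear features” means (an illustrative example, not any production system): the XOR function cannot be represented by a linear model, yet a tiny neural network with one non-linear hidden layer picks it up from the four training examples alone, with no hand-crafted model of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic non-linearly-separable toy problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with a non-linear activation (tanh) is what lets the
# model capture non-linear structure; a purely linear model cannot fit XOR.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared error, by the chain rule
    grad_p = (p - y) * p * (1 - p)
    grad_h = grad_p @ W2.T * (1 - h ** 2)
    W2 -= lr * (h.T @ grad_p)
    b2 -= lr * grad_p.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

# Predictions should now approach the XOR truth table [0, 1, 1, 0].
print(np.round(p.ravel(), 2))
```

The same training loop, scaled up to millions of parameters and run on GPUs, is essentially what the Deep Learning libraries discussed below provide.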
In many fields, data is available in vast quantities and is so multi-dimensional in nature that it becomes virtually impossible for humans to tackle analytics without the assistance of a computer. With ML, the onus of making a prediction or a decision is shared with -- or even handed off to -- a computer. As this occurs, the software “learns” from changes to its environment and can even adapt to its human users’ preferences.
Machine Learning algorithms can be adapted to solve slightly different problems with minimal tweaking: e.g., an algorithm for face recognition would be very similar to one that identifies the type of an aircraft in Synthetic Aperture Radar images. One of the big selling points of machine learning is that it is a general technology that does not require extensive domain knowledge to create effective solutions.
Big companies like Facebook, Google and Tesla Motors use ML as a connective tissue to support what their users find interesting; i.e., to create models that deliver features users specifically want, while at the same time being able to adapt to new situations. ML is transforming traditional enterprise processes and industry verticals in ways that would have been unthinkable a few years ago.
ML is certainly no newcomer; it has been with us since the 1980s. What has allowed it to ripen and demonstrate its full potential are factors such as the explosion in the volume of available data, affordable parallel compute power -- notably GPUs -- and steady advances in the algorithms themselves.
ML will have a significant impact on the economy in coming years, as its usage migrates from academia to big companies. It will absorb certain tasks traditionally performed by humans, while allowing organizations to redefine and redistribute human value to what actually matters.
Building and deploying ML tools and all their library dependencies is time consuming. For this reason, Bright Computing is focused on making access to ML tools easier and smoother for our current and future customers. In our industry-leading Bright Cluster Manager platform, we are building the capability to include GPU-accelerated versions of common ML libraries, allowing end users to focus on studying the derived models.
At the same time, Bright Cluster Manager handles software deployment, configures the GPUs for optimal performance, and monitors both jobs and the overall health of the cluster by providing detailed job-based and system-related metrics.
Bright can simultaneously configure and manage other components, such as Hadoop, Spark, Docker, OpenStack and cloud-bursting to the Amazon AWS EC2 cloud, thereby opening several alternative or unique paths to combining seemingly different technologies. All of this can be deployed, managed and monitored from Bright’s feature-rich, “single pane of glass” unified interface.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 711315, Bright Beyond HPC.