In recent years, machine learning (ML), especially deep learning (DL), has been applied to various domains—for example, computer vision, speech recognition, and video analytics. Emerging intelligent applications (IAs), such as image classification based on deep convolutional neural networks (CNNs) [21], traffic-flow prediction based on deep recurrent neural networks (RNNs) [42], and game development based on deep generative adversarial networks (GANs) [20], are demonstrating superior performance in terms of accuracy and latency. Such performance, however, requires tremendous computation and network resources to deal with the increasing size of ML/DL models and the proliferation of vast amounts of training data [27].
Cloud computing is indisputably attractive to IA developers as the predominant high-performance computing (HPC) paradigm [5]. Cloud providers typically offer services such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) to facilitate application implementation—resources such as high-performance computation, massive elastic storage, and reliable network services are allocated according to user requirements. Intuitively, mainstream IAs are deployed on the cloud to leverage centralized resources for computationally intensive artificial intelligence (AI) tasks, such as data processing, ML/DL model training, and inference. For instance, the distributed training of AlphaGo [37] is a typical representative of cloud intelligence (CI).