Building AI Apps for the Cloud ComputeMatrix
When one gets through the process of learning what Artificial Intelligence (AI) and Machine Learning (ML) are, the next step is to find out what AI DevOps tools and libraries are needed to create your AI application. That, however, must be done in the context of the target client or cloud compute matrix the app will run on. In today's cloud world, DevOps developers don't need to worry about having AI compute power, but they do need to know how to configure cloud compute matrices to minimize AI compute time. Although AI promises a lot, it is considered one of the most costly, data- and compute-intensive classes of applications.
Beyond that underlying compute matrix concern, it is necessary to find out what AI tools are available (the AI SDK, debug, tracing, and system-level optimization tools) and what AI frameworks, that is, AI API libraries, are out there. From that you must then determine which AI tools and framework AI modules your AI application will need.
There are two sets of AI library and tool categories that AI DevOps needs to look at. One set is a group of open source AI application frameworks. The other is a group of hardware vendor AI tools and libraries that allow developers to optimize their application for the vendor's chips, IP core processors, or processor boards, whether they be AI processors, machine learning (ML) processors, analog or digital neural processors, classical RISC, CISC, or hybrid microprocessors, network processors (NPUs), or graphics processors (GPUs). Besides the hardware vendors' AI app development kits, it will also benefit AI DevOps to examine the other independent AI support vendors that make up the $50 billion AI market. Many AI vendors provide very specialized libraries and tools for specific AI hardware targets.
When one searches for AI frameworks, open source or commercial, one will quickly run across the most popular DevOps AI frameworks (Table 1).
| AI Frameworks |
|---|
| TensorFlow |
| Microsoft CNTK (Microsoft Cognitive Toolkit) |
| Keras |
| Theano |
| scikit-learn |
| Caffe |
| Torch |
| MXNet |
| Caffe2 |
| Chainer |
| MATLAB |
| Wolfram Language |
Table 1: Popular AI Frameworks for AI Apps Development
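To give a sense of how compact these frameworks make model definition, here is a minimal sketch using Keras (via TensorFlow). The layer sizes and the MNIST dataset are illustrative assumptions, not prescriptions from any vendor:

```python
# Minimal Keras sketch: a small classifier for 28x28 grayscale images.
# The layer sizes and the MNIST dataset are illustrative choices only.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
```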
Many, but not all, of these AI libraries are built with Python. Some of these libraries perform best on graphics processors with single instruction, multiple data (SIMD) architectures or variations thereof. This means you will have to configure or select your cloud target hardware judiciously; specifically, consider Python-optimized processors and graphics processor core switch matrices.
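As a quick sanity check before committing to a cloud instance type, both TensorFlow and PyTorch can report which accelerators they actually see. A brief sketch, assuming both libraries are installed on the target:

```python
# Query which accelerators the major Python frameworks can see.
import tensorflow as tf
import torch

print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))
```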
There are many developer types out there, but regardless, in the end, most developers must develop an AI app to run on a specific target hardware architecture or a set of target architectures. These targets could be a standalone system, such as a Windows or Mac PC or a specific server, or any of the many different FPGA-configurable or container architectures on the cloud. Servers, PCs, and cell phones are all built with different types of processors: RISC, CISC, hybrids thereof, and the emerging non-von Neumann AI weighted-inference processor architectures (compute-in-memory and others).
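When an app may land on more than one of these targets, it helps to detect the host at run time and branch accordingly. A minimal sketch using Python's standard library; the backend-selection logic is an illustrative assumption:

```python
# Detect the host OS and CPU architecture so the app can pick an
# appropriate backend; the mapping below is an illustrative assumption.
import platform

os_name = platform.system()    # e.g. "Windows", "Darwin", "Linux"
cpu_arch = platform.machine()  # e.g. "x86_64", "arm64", "aarch64"

if cpu_arch in ("arm64", "aarch64"):
    backend = "arm-optimized build"
else:
    backend = "x86 build"
print(f"Host: {os_name}/{cpu_arch} -> selecting {backend}")
```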
Because AI DevOps, in order to optimize an AI app, must consider the end target architecture, or hardware, the hardware processor IP core, chip, and board vendors offer AI development kits and extensions for all the popular frameworks (listed in Table 1 above).
There are scores, if not hundreds, of companies that offer AI development kits, framework extensions, and tools for developing AI apps for their own or others' hardware devices or systems. Looking at the leading processor vendors' AI development kits will give you an idea of what you can expect from most other hardware device manufacturers. For example, Intel Corp., considered number one in the processor chip market, offers several extensions for the mainstay AI frameworks. These include the Intel Extension for PyTorch, which is available as a Python module or linked as a C++ library. Intel also offers optimizations for Apache MXNet and PaddlePaddle, the Intel Extension for Scikit-learn, Intel-optimized XGBoost, and Intel TensorFlow optimizations.
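As a rough illustration of how such a vendor extension hooks into an existing framework, here is a sketch of the Intel Extension for PyTorch's Python-side optimize call; the toy model and optimizer settings are placeholder assumptions:

```python
# Sketch: applying the Intel Extension for PyTorch (ipex) to a model.
# The toy linear model and SGD settings are illustrative assumptions.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(64, 10)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# ipex.optimize applies operator fusion and layout/dtype optimizations
# tuned for Intel hardware; with an optimizer it prepares for training.
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer)
```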
NVIDIA, one of the leaders in the SIMD graphics processor market, besides supporting all the mainstream AI frameworks, also offers its Data Loading Library (DALI), the CUDA Deep Neural Network library (cuDNN), the NVIDIA Collective Communications Library (NCCL), NVIDIA Neural Modules (NeMo), its TAO Toolkit, the NVIDIA Deep Learning GPU Training System (DIGITS), and the AI-Assisted Annotation Toolkit. NVIDIA also offers several AI SDKs. These include TensorRT, DeepStream, the NVIDIA Triton Inference Server (which serves DL models to maximize GPU utilization), and NVIDIA Riva. In support of AI development, NVIDIA also offers several generic DevOps tools to optimize an AI application's performance.
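Most developers touch libraries such as cuDNN indirectly through a framework. For example, PyTorch exposes a cuDNN autotuning switch; a brief sketch, assuming a CUDA-capable GPU is present (it falls back to CPU otherwise):

```python
# Sketch: letting cuDNN benchmark convolution algorithms through PyTorch.
import torch

torch.backends.cudnn.benchmark = True  # autotune conv kernels per input shape
device = "cuda" if torch.cuda.is_available() else "cpu"

conv = torch.nn.Conv2d(3, 16, kernel_size=3).to(device)
x = torch.randn(8, 3, 224, 224, device=device)  # illustrative batch
y = conv(x)  # first call benchmarks algorithms; later calls reuse the winner
print(y.shape)
```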
Figure 1: There are many hardware targets an AI app can be designed for. At the root of this hardware are IP cores, that is, neural network IP cores. Chip designers use these blocks to design their own hardware targets that meet the specific needs of their AI applications. The IPCoreMatrix: Neural Networks is available as a poster at the StatisticsMatrix Redbubble store.