Edge Computing and the Evolution of Edge AI and AIoT for Automation

Quite quickly and not so quietly, AI has become a standard part of everyday life. Cell phones respond seamlessly to voice commands, enhance captured images, and automatically gather and curate useful information such as favorite calls and texting patterns. Some household appliances can evaluate ongoing usage and modify their function to provide a better user experience. An outdoor security camera likely can differentiate a human intruder from a large dog. Home computers complete sentences automatically and read emails out loud.

The common link in these AI/machine learning examples is that execution often is performed at the “edge”; stated more generally, the AI model runs on the local device itself without using a cloud connection.

AI, ML, and DL Introduction

AI, or artificial intelligence, a term coined by John McCarthy in 1956, describes the broad science of developing intelligent machines and computer programs. AI can refer to a wide range of computing technologies and techniques that in some ways emulate human intelligence and decision making.

All AI in use today is classified as weak or narrow intelligence, or ANI (artificial narrow intelligence), because it performs only specific, single tasks with a high level of reliability. Familiar ANI systems include face recognition, virtual assistants, predictive maintenance, autonomous vehicles, and recommendation engines.

Strong AI, or AGI (artificial general intelligence), describes systems that would have comprehensive general knowledge, thought capabilities, and even consciousness. AGI does not currently exist and, according to some experts, may not even be possible to create.

Machine learning (ML) is a subset (or toolset) of AI that uses learning algorithms and programs to create models that solve problems through data-driven predictions and analytics.

A subset of machine learning, deep learning (DL) uses deep neural networks (DNNs) to learn patterns from input data (including images) and then deliver predictions (inferences) about new data based on what was learned. Various DL neural network architectures exist, each with different targeted functionality.
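To make inference concrete, below is a minimal sketch in Python of a forward pass through a tiny two-layer network; the random weights stand in for values a real DNN would learn during training.

import numpy as np

# Illustrative weights; a real network learns these from training data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)     # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)     # hidden layer -> 3 output classes

def infer(x):
    h = np.maximum(x @ W1 + b1, 0.0)              # hidden activations (ReLU)
    logits = h @ W2 + b2
    return np.exp(logits) / np.exp(logits).sum()  # softmax class probabilities

x = rng.normal(size=4)                            # one new input sample
print("predicted class:", infer(x).argmax())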

Traditional Cloud Connection for AI

AI development and productization — including the use of machine learning, data analytics, and deep learning (generically MLOps, or machine learning operations) — traditionally have been implemented in the cloud. These implementations cover a wide range of AI-driven applications including computer vision, facial recognition, language generation, chatbots, virtual assistants, predictive analysis, and much more. Extensive third-party and self-hosted cloud computing platforms provide the hardware components and software tools necessary to develop and execute AI on a very large scale.

Storing huge amounts of data and then training deep and/or convolutional neural networks (DNNs and CNNs) for AI models require significant computing resources, which are readily available in cloud computing services. Generally, commercial cloud computing is delivered and used as an on-demand service (SaaS, or software as a service), implemented remotely, and includes the servers, CPUs, GPUs, databases, software, tools, and algorithms needed to operate complex applications that build and use models.

This readily available and continually expanding environment means that users can spend more time and resources on the important task of application development while reducing the burden of managing hardware and AI infrastructure. The MLOps functions of data ingest, data preparation, model training, model tuning, deployment, and monitoring benefit significantly from the collaborative and scalable cloud environment. When the AI application is deployed in the cloud, only a browser or other dedicated, lightweight client is needed to connect to the cloud, communicate data, execute the AI model inference, and receive the results.
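As a sketch of that lightweight client pattern, a deployed application might post data to a hosted model over HTTP and receive the inference result; the endpoint URL and JSON schema here are hypothetical.

import requests

# Hypothetical cloud inference endpoint; real services define their own URL and schema.
ENDPOINT = "https://example.com/v1/models/defect-detector:predict"

payload = {"instances": [[0.12, 0.48, 0.33, 0.91]]}   # one feature vector
response = requests.post(ENDPOINT, json=payload, timeout=5.0)
response.raise_for_status()
print("cloud inference result:", response.json())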

The cloud environment also provides convenient centralized data storage and access with the ability to handle and process huge quantities of information, often a critical part of MLOps for AI. Using cloud processing for inferencing and analysis, however, has significant disadvantages for critical edge applications. Cloud inferencing has high latency (the time required to receive results), has high costs related to communications and networking, and raises data privacy concerns. As a result, many companies have turned to edge computing solutions for applications involving AI.

Edge Computing

In contrast to cloud computing, edge computing refers to running programs and applications on a stand-alone device, such as an industrial computer, without requiring a connection to or communication with another computer, a system, or the cloud. Such devices exist near or “at the edge” of the point of application execution instead of remotely accessing programs or applications in the cloud. This nearness means data is collected closer to its source; hence, the term “edge computing.”

In automation, the general concept of edge computing is not new. Edge computing is widely used for applications in automation over a broad range of markets including automobile manufacturing and autonomy, precision agriculture, aerospace, and many more. Automated processes have long leveraged computers that exist locally, including PLCs (programmable logic controllers), machine vision/computer vision inspection systems and smart cameras, robot controllers, and many others. Edge computing also plays a significant role in the implementation of IoT (Internet of Things) and IIoT (Industrial Internet of Things).

Edge computing devices include a broad range of local processors, platforms, and servers in almost any form factor, from cell phones to industrial computers connected to sensors, cameras, or other automation system components. These edge computers implement a variety of central processing units (CPUs) and graphics processing units (GPUs), memory, and other controllers and interfaces. Embedded single-board edge processors and systems on a chip (SoCs) execute programs within the product (for example, a smart thermostat or a security system camera) and deliver data either on demand or automatically for archiving or analysis.

Ruggedized and powerful general-purpose edge computers have revolutionized industrial automation applications by providing a configurable and programmable platform that interfaces with external input from sensors, automation components, machining centers, robots, and cameras to execute and monitor processes. Low-power versions of edge computers are used regularly onboard vehicles in mobile computing applications, and these devices feature interfacing capabilities unique to those use cases, ranging from automobiles to autonomous mobile robots.

Edge AI: The Evolution of Artificial Intelligence for Automation

Edge AI is the confluence of edge computing and artificial intelligence (AI). The term refers to a computing platform at the edge that runs AI functions such as data analytics and model inferencing, collecting and processing data using a pre-trained AI, machine learning, or deep learning model to make predictions and decisions. In some cases, edge AI might also use real-time data for ongoing performance improvement of existing trained models, all without dedicated, continuous reliance on cloud computing and storage.

High-powered edge AI computing platforms can in some cases even replace cloud processing for data ingest, preparation, and model development and training. A common architecture, though, is to use external cloud computing platforms for model development and then automatically or manually deploy the model to the edge AI computer for inference, decision making, and results processing.

This type of hybrid implementation of edge and cloud may provide a more seamless architecture for model maintenance, management, and continuous learning. With edge AI, the amount of data that must be sent to the cloud can be significantly reduced, and results delivery can achieve very low latency due to the local edge computing environment.
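As a minimal sketch of this hybrid pattern, assuming a model trained in the cloud with PyTorch is exported to ONNX and then executed on the edge device with onnxruntime (the model architecture, file name, and input shape here are illustrative):

# Cloud side: export a trained PyTorch model to a portable ONNX file.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in for a trained model
model.eval()
torch.onnx.export(model, torch.randn(1, 16), "edge_model.onnx",
                  input_names=["input"], output_names=["output"])

# Edge side: load the deployed file and run inference locally, no cloud connection needed.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("edge_model.onnx", providers=["CPUExecutionProvider"])
sample = np.random.randn(1, 16).astype(np.float32)
print("local inference result:", session.run(None, {"input": sample})[0])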

IoT, IIoT, and AIoT

The value of edge AI is readily apparent in the expansion of devices and components that are part of IoT and IIoT. IoT refers to the billions of “smart” devices, or “things,” worldwide, such as digital assistants, cell phones, and even refrigerators and coffee makers, that are connected to the internet to collect and process data.

IIoT is similar but focuses on industrial devices in an automation environment. These are also smart devices, but in a broader sense: IIoT devices often perform critical tasks and handle large amounts of data used to make real-time decisions in demanding, high-security environments that require reliability. IIoT provides the backbone for smart manufacturing and Industry 4.0, making machines and processes better through connectivity and shared data analysis. It enables functions such as predictive maintenance, production tracking, and even energy optimization that reduce costs and increase productivity.

Enter AIoT, the Artificial Intelligence of Things. By using AI in a variety of forms, the IIoT environment can be trained based on the massive amounts of available data in the industrial environment and can improve the functions served by IIoT. While it is still evolving, many experts see AIoT and edge AI as the future of industrial automation.

Key Edge AI Benefits

Cloud computing and cloud SaaS platforms excel at the rigorous tasks of data preparation, model training, and model deployment. For ML and DL applications executed at the edge, however, edge AI provides significant value. Following are some key capabilities where edge implementation outperforms the cloud:

Very low-latency and deterministic results: Edge AI runs inferencing and results processing locally on the compute device, eliminating delays that can occur in data transfer, provisioning, and executing on cloud computing resources. Depending on the edge computing platform, latency (delay) in delivering inference results can be measured in milliseconds instead of seconds. Edge processors also have much more deterministic timing; that is, the variation in latency from one inference to the next is minimal given equivalent data, whereas the time required for cloud processing may vary significantly for each inference (see the timing sketch after this list).

Reduction in data transfer and lower bandwidth/traffic: The time required for data transfer figures significantly into the overall process time and into the requirements on network connections due to high bandwidth usage and traffic. Cloud computing requires that all new data (or images, in the case of vision) be collected and uploaded to the cloud for processing and analysis. Across a large-scale and fast-moving process, this could equate to large demands on networks and infrastructure. With edge AI, each inference is performed locally with little or no data transmission (see the store-and-forward sketch after this list). In some architectures, local cloud platforms, sometimes called “cloudlets” or “fog” computing, can be used to collect and archive data with minimal network demand and even transfer data to the cloud platform on demand or at times of low network traffic.

Lower cost and better scalability: Edge AI and edge computing provide a lower-cost solution based on several criteria. The reduction or elimination of high-bandwidth network communication provides significant potential cost savings in data services. The fundamental costs of large-scale data storage also factor into cloud computing costs; with edge AI, not all images need to be sent to the cloud, nor even archived in many cases. While not necessarily intuitive, edge AI is more scalable than the cloud in terms of local deployment. Expanding dedicated cloud data services is time consuming, costly, and often requires MLOps and DevOps support for implementation, while edge AI computing devices and platforms are relatively inexpensive and easy to replicate across many installations.

Better data and system security: An unambiguous benefit of edge AI is the native data and system security that comes from performing inference on a local device, which can be easily firewalled from external connections or even taken off the grid completely. The entire system is easily protected as an on-premises installation. Privacy concerns also are minimized or eliminated, as reducing or removing data transfer and archiving helps to protect IP.

Higher system reliability and redundancy: By moving processes from the cloud to the edge, temporary disruptions in network service need not disrupt critical processes. Even if the edge AI device is connected, loss of network will not halt processing (the store-and-forward sketch below illustrates one way to ride out an outage).
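To illustrate the latency and determinism points, the following Python sketch times repeated local inferences and reports both the mean and the spread; infer_local is a hypothetical stand-in for any on-device model call.

import statistics
import time

def infer_local(sample):
    # Hypothetical stand-in for an on-device model call (e.g., an onnxruntime session.run).
    return sum(sample) > 0

latencies_ms = []
for _ in range(100):
    start = time.perf_counter()
    infer_local([0.2, -0.1, 0.4])
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

print(f"mean latency: {statistics.mean(latencies_ms):.3f} ms")
print(f"jitter (stdev): {statistics.stdev(latencies_ms):.3f} ms")  # low jitter = deterministic timing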
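And here is a simple store-and-forward sketch combining the data-reduction and reliability points: infer locally, upload only flagged results, and queue uploads whenever the network is down. The threshold and the upload function are illustrative assumptions.

from collections import deque

pending = deque()  # local buffer for results awaiting upload

def upload(record):
    # Hypothetical network call; assume it raises ConnectionError when the link is down.
    raise ConnectionError("network unavailable")

def process(sample, score):
    if score < 0.9:           # normal result: stays local, nothing is sent to the cloud
        return
    pending.append({"sample": sample, "score": score})
    while pending:            # flush the queue whenever connectivity allows
        try:
            upload(pending[0])
            pending.popleft()
        except ConnectionError:
            break             # network down: keep buffering; local processing continues

process([0.2, 0.7], score=0.95)
print(f"{len(pending)} record(s) buffered for later upload")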

Computing Technology Advances Drive Edge AI

Computers that perform AI at the edge must be able to execute the complex processing for inference of DNN- and CNN-based models and the execution of other AI analytical algorithms. These tasks are compute intensive and require powerful CPU/GPU/VPU processing or dedicated devices designed specifically for model inference. In addition, edge devices for IIoT and edge AI need to provide form factors suitable for the application and in some cases must be suitably robust to handle harsh environments that are common to industrial and other automation installations.

Edge AI computing platforms must further meet the diverse needs of interfacing and interconnectivity in an automation environment. External sensing devices including 2D and 3D cameras, digital and analog I/O interfaces, network communications, and many others must connect seamlessly and reliably to the computer and in some application cases, the whole platform must be capable of operating at low voltage and with low power consumption.

In today’s AI marketplace, edge computing takes on many forms. Popular dedicated AI devices such as the NVIDIA Jetson line of computers are single-board devices with custom processors specifically designed for inference with deep-layered AI models. While these single-board computers can be used as embedded devices, they also are found in more advanced edge AI computing platforms configured for use in automation scenarios, with features like advanced stereo camera support, 4G/5G and Wi-Fi connectivity, and isolated digital or analog I/O.

Other computing platforms using GPUs and VPUs are specifically designed and built for compute-intensive applications such as deep learning and AI, machine vision, robotics, and video analytics. Often these feature flexible designs that can be scaled to match the processing needs of the application.

The very popular and widely deployed industrial PC, or IPC, remains a good choice in many AI situations. Some AI can be deployed directly to CPU architectures, but many IPCs also can be configured with powerful GPU add-on cards to enhance processing capability in model inference.
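As a sketch of that CPU/GPU flexibility, onnxruntime lets an IPC prefer a GPU (CUDA) execution provider and fall back to the CPU when no add-on card is present; the model file name is illustrative.

import onnxruntime as ort

available = ort.get_available_providers()
# Prefer the GPU provider when an add-on card and its runtime are installed; otherwise use the CPU.
preferred = [p for p in ("CUDAExecutionProvider", "CPUExecutionProvider") if p in available]
session = ort.InferenceSession("inspection_model.onnx", providers=preferred)
print("running on:", session.get_providers()[0])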

The evolution of AI has clearly taken the technology from the cloud to the edge for the most important functions impacting our daily lives, including IoT, IIoT, and AIoT. This migration will continue to be key to the future of AI.

CoastIPC offers a wide range of industrial computers from Advantech and Neousys Technology that will help take your AI applications to the edge. Contact our knowledgeable staff at [email protected] or call us at 866-412-6278 to discuss our wide variety of industrial computing platforms that will meet and exceed the unique requirements of your applications.