What is edge AI?
Recently, there has been tremendous buzz around edge AI, which is widely described as the future of artificial intelligence. That is why we decided to explain what it is and describe the latest related technologies available on the market.
From this article you will learn:
- What is edge AI?
- What are its applications?
- What are the newest devices in this field (including Google Coral and Nvidia Jetson)?
What is edge AI?
Edge AI is a solution in which artificial intelligence algorithms run directly on the device collecting the data, such as drones, cameras, or augmented reality glasses. Unlike conventional cloud-based solutions, edge AI does away with the need to transfer data to a server or the cloud.
Applications of edge AI
Moving data processing to the end device proves particularly important when:
- The speed of data processing is crucial and we cannot afford even the milliseconds of delay added by data transmission.
- Stable network access is a problem. Such a situation may occur, for example, in sparsely populated areas or in rooms that deliberately block signals.
- We work with sensitive data that should not leave the device.
- Low power consumption matters. With edge AI, we don't have to send data to the cloud regularly, which automatically reduces battery drain.
There is a vast number of scenarios for edge AI applications.
These include autonomous vehicles and drones. The common theme is the need to make decisions in real time. Therefore, there is no time to send data to the cloud for further processing.
Edge AI is also used in wearables. Smartwatches and wristbands collect increasingly accurate measurements of vital signs. This information, combined with artificial intelligence algorithms, makes it possible to plan training, improve health, and take care of the elderly.
Edge AI also helps to fulfill the goals of Industry 4.0, enabling, among other things, quick prediction of future machine failures based on data from sensors that monitor, e.g., vibration, temperature, and ultrasound. It also supports the increasing automation of production processes and quality control. For example, image processing algorithms can be used to recognize, classify, and track objects close to the equipment.
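To make the predictive-maintenance idea concrete, here is a minimal, purely illustrative sketch (not from any specific product): a detector that flags vibration readings deviating strongly from a rolling baseline, the kind of lightweight check that can run directly on an edge device.

```python
# Illustrative sketch: a minimal anomaly detector for a vibration sensor
# stream, as used in predictive maintenance. It flags readings that lie
# far outside a rolling window of recent values.

from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` values."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Normal vibration around 1.0 with a sudden spike at index 15.
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
        1.02, 0.98, 1.0, 1.01, 0.99, 5.0, 1.0, 1.0]
print(detect_anomalies(data))  # [15]
```

In practice, production systems use far more sophisticated models, but the pattern is the same: decide locally, in real time, without a round trip to the cloud.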
The possibilities are endless: any place or situation where the reasons mentioned above apply. Just think about smart homes, smartphones, and augmented reality glasses… And this is just the beginning, because these technologies are entering our lives and businesses more and more.
Technologies supporting edge AI
The edge AI market is developing very rapidly. More and more devices have recently appeared that can run deep neural networks directly on board.
Many of them take the form of AI accelerators connected to a device via USB, or of complete boards to which cameras and additional sensors can be connected.
In this article, however, we want to emphasize the possibilities offered by two particularly interesting solutions:
- Google Coral Beta (just released),
- the Nvidia Jetson series, with a particular focus on the Jetson Nano, announced at the Nvidia GTC 2019 conference.
Edge AI in practice - Google Coral
Recently we had the opportunity to test Google’s latest solution in the area of edge AI – Google Coral.
It is the first device featuring the Edge TPU (Tensor Processing Unit), a dedicated integrated circuit developed by Google to accelerate machine learning algorithms. For our tests, we used the USB AI accelerator variant, which we connected to a Raspberry Pi 3 board.
Coral Beta (this version was just released) focuses on image processing scenarios. It allows building solutions for detecting and classifying objects in photos and camera streams. The dedicated example models can, for instance, recognize faces, bird species, or thousands of everyday objects.
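A classification model like these outputs one score per class, and a typical post-processing step keeps only the top-scoring labels. A minimal, library-free sketch follows; the labels and scores are made up for illustration and do not come from an actual Coral model.

```python
# Illustrative post-processing for a classification model's output:
# pick the k highest-scoring (label, score) pairs.
# Labels and scores below are invented examples.

def top_k(scores, labels, k=3):
    """Return the k (label, score) pairs with the highest scores."""
    ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
    return ranked[:k]

labels = ["robin", "sparrow", "parrot", "owl"]
scores = [0.10, 0.72, 0.05, 0.13]
print(top_k(scores, labels, k=2))  # [('sparrow', 0.72), ('owl', 0.13)]
```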
Google Coral works with TensorFlow Lite models. However, a model must be compiled appropriately so that it can use the Edge TPU. Currently, the compiler supports four types of neural network architectures:
- Inception V3/V4: 299×299 fixed input size,
- Inception V1/V2: 224×224 fixed input size,
- MobileNet SSD V1/V2: 320×320 max input size; 1.0 max depth multiplier,
- MobileNet V1/V2: 224×224 max input size; 1.0 max depth multiplier.
Additionally, the .tflite models passed to the compiler should not exceed 100 MB.
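Purely as an illustration, the constraints listed above can be captured in a small pre-flight check before handing a model to the compiler. The constants below simply mirror the article's list; they are not an official Google API.

```python
# Illustrative pre-flight check mirroring the Edge TPU compiler
# constraints listed above. Constants reflect the article's list,
# not an official API.

MAX_MODEL_BYTES = 100 * 1024 * 1024  # .tflite models should stay under 100 MB

# Architecture -> (input size in pixels, whether that size is fixed or a maximum)
SUPPORTED_ARCHS = {
    "inception_v1": (224, "fixed"),
    "inception_v2": (224, "fixed"),
    "inception_v3": (299, "fixed"),
    "inception_v4": (299, "fixed"),
    "mobilenet_ssd_v1": (320, "max"),
    "mobilenet_ssd_v2": (320, "max"),
    "mobilenet_v1": (224, "max"),
    "mobilenet_v2": (224, "max"),
}

def check_model(arch, input_size, model_bytes):
    """Return (ok, reason) for a candidate .tflite model."""
    if arch not in SUPPORTED_ARCHS:
        return False, f"unsupported architecture: {arch}"
    limit, kind = SUPPORTED_ARCHS[arch]
    if kind == "fixed" and input_size != limit:
        return False, f"{arch} requires a fixed {limit}x{limit} input"
    if kind == "max" and input_size > limit:
        return False, f"{arch} input must not exceed {limit}x{limit}"
    if model_bytes > MAX_MODEL_BYTES:
        return False, "model exceeds the 100 MB limit"
    return True, "ok"

print(check_model("mobilenet_v2", 224, 5 * 1024 * 1024))  # (True, 'ok')
print(check_model("inception_v3", 224, 5 * 1024 * 1024))  # fixed-size mismatch
```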
A significant advantage of the Python Edge TPU API is its support for transfer learning, i.e., taking an already trained model and retraining it on your own data. This can take place directly on the device.
Because we were working with fairly large data (images) and processing times were long, in our case it was better to train the model on a computer and then move the trained model back to the device.
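The transfer-learning idea can be shown in miniature. The conceptual sketch below uses plain NumPy, not the Edge TPU API: a "pretrained" feature extractor is frozen, and only a small classifier head is trained on new data, mirroring the train-off-device, deploy-back workflow described above.

```python
# Conceptual sketch of transfer learning (pure NumPy, NOT the Edge TPU
# API): freeze a "pretrained" feature extractor and train only a small
# classifier head on new data.

import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor: a fixed random projection.
W_frozen = rng.normal(size=(4, 8))

def extract_features(x):
    return np.tanh(x @ W_frozen)  # frozen: never updated during training

# Toy dataset: two classes separable in the input space.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only the new classifier head (logistic regression).
feats = extract_features(X)
w = np.zeros(8)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad = p - y                                # gradient of log loss
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((feats @ w + b > 0) == (y == 1)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Training just the head is cheap enough for an edge device; it is the full feature extractor that benefits from being trained on a more powerful computer.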
Edge AI in practice - Nvidia Jetson
Graphics cards are commonly used to train deep learning models. That is why Nvidia decided to take advantage of its many years of leadership in this field and enter the artificial intelligence market.
The company plays a significant role in the edge AI market thanks to its Jetson product line, which is often used in robotics and drones. The series, with models such as the TX2 or the latest AGX Xavier, provides excellent technical parameters that allow even very compute-intensive algorithms to run directly on the device.
At last week's GTC conference, Nvidia also introduced the Jetson Nano, a new member of the Jetson family. It is much smaller and cheaper than the other products in the series while still providing quite impressive technical parameters. It is also supposed to support a wide range of neural network architectures and well-known machine learning frameworks, such as TensorFlow, PyTorch, Keras, and Caffe.
An interesting addition will be the ability to use the CUDA-X set of libraries dedicated to the Jetson series, which helps exploit the full potential of the GPU.
We are looking forward to testing Jetson Nano to see if it could handle more advanced applications such as image segmentation and pose estimation.
Nvidia Jetson Nano versus Google Coral - comparison of capabilities
Nvidia published an interesting comparison of the Jetson Nano's video-processing performance against its competitors: Google's Coral and Intel's Neural Compute Stick.
As can be seen from the graph below, Coral achieves better results for the MobileNet architecture, but Jetson Nano offers a much broader range of available network architectures and frameworks.
You can find a more detailed comparison here.