Counting visitors locally - a test run built in a few days!
Artificial intelligence technology has been growing very fast lately. Many well-known and experienced companies work on Natural Language Processing, Image Processing or Machine Learning. One of them is Nvidia, which has recently released the Jetson Nano. We decided to test this edge AI device by implementing and testing an algorithm that detects and tracks objects. In this article, I want to share our experience with you.
Our idea was to analyze a video stream in real time and count customers who meet certain conditions. We built such a customer counter within a few days using AI algorithms. In this case, we counted both passing visitors and those who stopped. Below, I explain how we did it and what it is useful for.
Why do companies want to count customers? What are the use cases?
There are many use cases in which such an approach can be applied. Have you ever wondered how many people are actually interested in your trade fair stand or shop exhibition, and how many are just passing by? Do you want to find out what interests the passers-by? It is possible by detecting, tracking and counting, in real time, the people who stayed in front of the camera for at least a given period of time. Based on such analysis, you can determine which exhibition is the most interesting for a specific group of people, which stand/booth is more attractive, or which content holds people's attention best. This information will allow you to improve the reception of the stand/exhibition and to adjust the offer to potential customers. The possibilities are endless.
Watch a short video of our test run:
How to track objects using Artificial Intelligence?
Nvidia Jetson Nano - what is it?
The Nvidia Jetson Nano Developer Kit is a powerful edge AI computing device for embedded applications. It includes an integrated 128-core Maxwell GPU, quad-core ARM A57 64-bit CPU, 4GB LPDDR4 memory, along with support for MIPI CSI-2 and PCIe Gen2 high-speed I/O.
This pocket-sized computer is incredibly power-efficient (5-10 W of power consumption), yet it delivers enough compute to run multiple modern AI models in parallel at high performance. It is capable of 472 GFLOPS, which is plenty for many on-device workloads. The newest member of the Jetson family was designed as a platform for ‘AI on the edge’ – a solution in which artificial intelligence algorithms are processed locally on the device.
To start your experience with the Nvidia Jetson Nano Developer Kit, you need several things:
- microSD memory card – card on which the operating system (JetPack) will be installed;
- power supply;
- HDMI cable – to connect the device with the display;
- Mouse and keyboard – to control and enter data;
- Internet connection – for downloading software.
Another thing worth mentioning is that the Jetson Nano is supported by the newly released JetPack 4.2 SDK, which includes:
- complete desktop Linux environment (Ubuntu 18.04) with Nvidia drivers;
- libraries and APIs such as:
- CUDA Toolkit;
- cuDNN – CUDA Deep Neural Network library;
- TensorRT – deep learning inference runtime for image classification, segmentation and object detection neural networks;
- VisionWorks – software development package for Computer Vision and image processing;
- Multimedia API;
- developer tools – Nsight Eclipse Edition, debugging and profiling tools;
- documentation and sample code.
And what about other popular machine learning libraries and frameworks such as PyTorch, TensorFlow, Keras, Caffe or OpenCV? They are not provided by default, but they can be easily installed – and they are fully compatible with the board.
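As a quick sanity check after installation, a short Python snippet can report which of these frameworks are importable on the board. This is our own sketch (the package list is illustrative; `cv2` is OpenCV's Python module):

```python
# Check which popular ML frameworks are importable on the Jetson Nano
# (or any machine). None of these ship with JetPack by default.
import importlib.util

FRAMEWORKS = ["torch", "tensorflow", "keras", "caffe", "cv2", "numpy"]

def available_frameworks(names):
    """Return the subset of package names that can be imported."""
    return [n for n in names if importlib.util.find_spec(n) is not None]

if __name__ == "__main__":
    for name in FRAMEWORKS:
        status = "found" if importlib.util.find_spec(name) else "not installed"
        print(f"{name:12s} {status}")
```

Running this after setting up the board tells you immediately which installs succeeded.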
Jetson Nano is ideally suited as an edge AI device that lets you perform machine learning / deep learning on the edge. Solid computing performance relative to its small size, its memory capacity, and its flexibility make Nvidia's developer kit a device with endless possibilities.
Nvidia DIGITS - what do we get?
Nvidia shares the experience it has gathered over the years by providing a set of deep learning tools. One of them is DIGITS (the Deep Learning GPU Training System™), which supports multiple tasks:
- managing datasets;
- designing and training highly accurate deep neural networks for image classification, segmentation, object detection tasks, and more;
- monitoring performance of the model;
- validating and visualizing results;
- choosing the best model for deployment.
The Deep Learning GPU Training System is completely interactive – everything is reachable through an intuitive browser-based interface. The combination of Nvidia's DIGITS and Jetson forms an effective pipeline for developing and deploying advanced neural networks for any application.
What kind of algorithms can we run on Jetson Nano?
Jetson Nano, in comparison to the Google Coral Dev Board, supports a wide range of popular machine learning / deep learning libraries and frameworks, such as Keras, TensorFlow, Caffe and Torch/PyTorch. Support for so many libraries makes this device a powerful one – it can be used to design, implement and run solutions in areas such as computer vision, natural language processing or tabular data processing. In detail, Nvidia's Jetson Nano can run algorithms for:
- object detection;
- object tracking;
- pose and motion estimation;
- feature tracking;
- video enhancement (video stabilization).
Many of these algorithms can run in real time, which means they can analyze high-resolution video streams, e.g. from cameras on a production line or hall, or from web and security cams. You can even analyze sounds or data from IoT devices in real time using Nvidia's Jetson Nano.
These capabilities of the developer kit, together or separately, can be used to build complex artificial intelligence pipelines and systems for many business and industrial applications.
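When we say "real time", the practical test is whether the pipeline keeps up with the camera's frame rate. A minimal, framework-independent rolling FPS counter (our own sketch, not Jetson-specific) looks like this:

```python
# Rolling frames-per-second counter over a sliding window of timestamps.
import time
from collections import deque

class FPSCounter:
    def __init__(self, window=30):
        # Keep only the most recent `window` timestamps.
        self.timestamps = deque(maxlen=window)

    def tick(self, now=None):
        """Record one processed frame (call once per frame)."""
        self.timestamps.append(time.perf_counter() if now is None else now)

    def fps(self):
        """Average FPS over the current window (0.0 until two ticks seen)."""
        if len(self.timestamps) < 2:
            return 0.0
        span = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / span if span > 0 else 0.0
```

Calling `tick()` once per processed frame and overlaying `fps()` on the output window is a simple way to verify that detection plus tracking keeps pace with the stream.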
Tracking people's interest in real time during events
Jetson Nano was released not long ago, so it is a relatively new device, and we ran into several problems along the way. However, the Nano's software is similar to that of other members of the Jetson family – and, not to brag, we know the TX2!
We decided to build a small proof of concept (PoC) to test and demonstrate the Jetson Nano's capabilities. Our idea was to perform real-time video stream analysis to count people during fairs and conferences, or in front of a shop exposition in the mall. As described above, such counts let you evaluate which stand, booth or exhibition attracts the most interest from a specific group of people – an approach with many use cases.
How we did it
Our object detection and tracking algorithm was implemented in Python. We used a Logitech C920 HD Pro Webcam, which allows high-resolution video recording and streaming. For computer vision, faster computation and deep network inference, we used several well-known libraries: CUDA, OpenCV, dlib and numpy. Here are the steps of our algorithm:
- Load libraries and initialize variables;
- Load the object detection model. We used a MobileNet model because of its performance;
- Start the video stream;
- For every frame perform these actions:
- Every 10 frames, construct a blob from the current frame, pass it through the network to obtain predictions, and initialize the list of bounding boxes. Then loop over the detections and, if a detected object is not currently tracked, initialize a tracker for it. OpenCV provides several built-in trackers;
- Update trackers;
- Loop over the tracked objects and draw a rectangle around each;
- Count people;
- Display additional information;
- Stop the video stream.
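The core of the steps above can be sketched as a detection-interval loop. In this simplified, dependency-free sketch, `detect` and `update_trackers` are hypothetical stand-ins for the MobileNet forward pass and the dlib/OpenCV tracker updates:

```python
DETECT_INTERVAL = 10  # run the expensive detector only every 10th frame

def process_stream(frames, detect, update_trackers):
    """Yield (frame_index, boxes): boxes come from the detector on every
    DETECT_INTERVAL-th frame and from cheap tracker updates in between."""
    boxes = []
    for i, frame in enumerate(frames):
        if i % DETECT_INTERVAL == 0:
            boxes = list(detect(frame))            # (re)initialize trackers
        else:
            boxes = update_trackers(boxes, frame)  # follow existing objects
        yield i, list(boxes)
```

Running the detector only every tenth frame is what makes the pipeline real-time on the Nano: tracker updates are far cheaper than a full network forward pass.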
We considered two approaches to counting people. The first counts people who appear in the recording for more than a certain number of frames. The second checks the area of the object's bounding box: if the area increases over time, the object is getting closer to our stand or exhibition. We implemented the first approach; the second will follow soon.
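Both counting rules can be expressed in a few lines. This is a sketch under our own assumptions: bounding boxes are `(x1, y1, x2, y2)` tuples, each tracked object is the list of its boxes over time, and the 25-frame threshold is illustrative:

```python
MIN_FRAMES = 25  # e.g. ~1 second at 25 fps; the threshold is an assumption

def area(box):
    """Area of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def count_stopped(tracks, min_frames=MIN_FRAMES):
    """Approach 1: count objects visible for at least `min_frames` frames."""
    return sum(1 for boxes in tracks if len(boxes) >= min_frames)

def count_approaching(tracks):
    """Approach 2: count objects whose bounding-box area grows over time,
    i.e. objects moving toward the camera (the stand/exhibition)."""
    return sum(1 for boxes in tracks
               if len(boxes) >= 2 and area(boxes[-1]) > area(boxes[0]))
```

The two rules answer slightly different questions: the first separates people who stopped from those who merely passed, while the second flags people who actively approached.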
Jetson Nano vs Jetson TX2
As mentioned, we have some experience with the Jetson TX2, so we decided to compare the computing capabilities of these two edge AI platforms. The Jetson TX2 is capable of 1.5 TFLOPS, almost three times more than the Jetson Nano. On the other hand, it is bigger and less portable than the smaller, newer member of the Jetson family. We ran the same tests on both devices using the object detection and tracking algorithm described above, expecting roughly three times faster computation on the TX2. Surprisingly, the Jetson Nano's results were not far behind the TX2's, which may indicate that the algorithm is well optimized.
The Jetson Nano is smaller and has less computing performance than the bigger Jetson TX2. Does that make it worse? Not at all. For most applications its performance is sufficient. In the end, which device is the more sensible choice depends on the application and the algorithms.
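The arithmetic behind "almost three times" is straightforward, using the vendors' peak-compute figures:

```python
# Peak compute ratio between the two boards (figures from the spec sheets).
TX2_GFLOPS = 1500   # Jetson TX2: 1.5 TFLOPS
NANO_GFLOPS = 472   # Jetson Nano: 472 GFLOPS

ratio = TX2_GFLOPS / NANO_GFLOPS
print(f"TX2 / Nano peak compute: {ratio:.2f}x")
```

Of course, peak FLOPS is only an upper bound; as our tests showed, real workloads do not always scale with it.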
Summary and conclusions
Nvidia's Jetson Nano offers a great way to start experiencing the power of artificial intelligence. It supports a wide range of well-known machine learning / deep learning libraries and frameworks, offers decent computing performance, and comes with a collection of dedicated software. An additional advantage of this edge AI device is its small size, which allows you to carry out computations anywhere.
One Jetson application is real-time video analysis. Detecting, tracking and counting people or other objects is useful in many business and industrial fields – it allows you, for example, to evaluate people's interest in a given fair stand or shop exhibition.
Jetson Nano is a device with many possibilities that will let you implement and realize many artificial intelligence approaches.