By Uri Guterman, Head of Product & Marketing, Hanwha Techwin Europe
We’ve become used to seeing artificial intelligence (AI) in almost every aspect of our lives. It used to be that putting AI to work involved huge server rooms and required vast amounts of computing power and, inevitably, a significant investment in energy and IT resources. Now, more tasks are being done by devices placed across our physical world, ‘at the edge.’ By not needing to stream raw data back to a server for analysis, AI at the edge, or ‘edge AI’, is set to make AI even more ubiquitous in our world. It also holds huge benefits for the video surveillance industry.
AI at the edge has several benefits compared to server-based AI. Firstly, there are reduced bandwidth needs and costs as less data is transmitted back to a server. Cost of ownership decreases, and there can also be important sustainability gains as a large server room no longer has to be maintained. Energy savings in the device itself can also be realised, as it can require significantly less energy to carry out AI tasks locally instead of sending data back to the server.
With edge AI devices, unlike a cloud-based computing model, there isn’t usually a recurring subscription fee, avoiding the price increases that subscriptions can bring. Focusing on edge devices also enables end-users to invest in their own infrastructure.
Cameras using edge AI can make a video installation more flexible and scalable, which is particularly helpful for organisations that wish to deploy a project in stages. More AI cameras and devices can be added to the system as and when needs evolve, without the end-user having to commit to large servers with expensive GPUs and significant bandwidth from the start.
Improved operational performance and security
Because video analytics occurs at the edge (on the device), only the metadata needs to be sent across the network. This also improves cybersecurity: with processing done locally, no raw video streams or other sensitive data are in transit for hackers to intercept.
As analysis is done locally on the device, edge AI eliminates delays in communicating with the cloud or a server. Responses are sped up, which means tasks like automatically focusing cameras on an event, granting access, or triggering an intruder alert, can happen in near real-time.
Additionally, running AI on a device can improve the accuracy of triggers and reduce false alarms. People counting, occupancy measurement, queue management, and more, can all be carried out with a high degree of accuracy thanks to edge AI using deep learning. This can improve the efficiency of operator responses and reduce frustration as they don’t have to respond to false alarms. AI cameras can also run multiple video analytics in the same device – another efficiency improvement that means operators can easily deploy AI to alert for potential emergencies or intrusions, detect safety incidents, or track down suspects, for example.
Video quality improvements
What’s more, using AI at the edge improves the quality of the video captured. Noise reduction can be carried out locally on the device and, using AI, can specifically reduce noise around objects of interest, such as a person moving in a detected area. Features such as BestShot ensure operators don’t have to sift through lots of footage to find the best angle of a suspect. Instead, AI delivers the best shot immediately, helping to reduce reaction times and speed up post-event investigations. This has the added benefit of saving storage and bandwidth, as only the best shot images are streamed and stored.
AI-based compression technology also works to apply a low compression rate to objects and people which are detected and tracked by AI, whilst applying a high compression rate to the remaining field of view – this minimises network bandwidth and data storage requirements.
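To make the idea concrete, here is a minimal sketch of how detection-driven compression could work in principle. This is illustrative only, not Hanwha’s actual implementation: the tile size, quality values, and function names are all assumptions. Tiles overlapping an AI-detected object receive a low compression rate (high quality), while the rest of the frame receives a high compression rate (low quality).

```python
# Illustrative sketch of ROI-based compression: build a per-tile quality
# map from AI detections. All names and values here are hypothetical.

TILE = 16  # assumed tile (macroblock) size in pixels

def quality_map(frame_w, frame_h, detections, hi_q=90, lo_q=30):
    """Return a 2D grid of quality values, one per tile.

    detections: list of (x, y, w, h) bounding boxes from the on-camera AI.
    Tiles touched by a detection keep detail (hi_q); the rest of the
    field of view is compressed harder (lo_q).
    """
    cols = (frame_w + TILE - 1) // TILE
    rows = (frame_h + TILE - 1) // TILE
    grid = [[lo_q] * cols for _ in range(rows)]  # default: high compression
    for (x, y, w, h) in detections:
        for r in range(y // TILE, min(rows, (y + h) // TILE + 1)):
            for c in range(x // TILE, min(cols, (x + w) // TILE + 1)):
                grid[r][c] = hi_q  # preserve detail around detected objects
    return grid

# Example: one person detected in a 1920x1080 frame
grid = quality_map(1920, 1080, [(600, 300, 120, 260)])
```

Because most of a typical scene is static background, the bulk of each frame compresses heavily, which is where the bandwidth and storage savings come from.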
Using the metadata
Edge AI cameras can provide metadata to third party software through an API (application programming interface). This means that system integrators and technology partners can use it as a first means of AI classification, then apply additional processing on the classified objects with their own software – adding another layer of analytics on top of it.
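A rough sketch of that two-stage pattern is below. The event schema and field names are hypothetical, not a real Wisenet metadata format: the camera supplies classified objects with confidence scores, and partner software filters them before running its own, heavier analytics on the survivors.

```python
import json

# Hypothetical metadata event as a camera might push it through an API;
# the schema and field names are illustrative only.
event_json = '''{
  "timestamp": "2023-05-04T10:21:07Z",
  "objects": [
    {"class": "person",  "confidence": 0.94, "bbox": [600, 300, 120, 260]},
    {"class": "vehicle", "confidence": 0.55, "bbox": [100, 700, 400, 220]}
  ]
}'''

def high_confidence_objects(raw, min_conf=0.8):
    """First-stage filter: keep camera-classified objects above a
    confidence threshold, ready for second-stage partner analytics."""
    event = json.loads(raw)
    return [o for o in event["objects"] if o["confidence"] >= min_conf]

candidates = high_confidence_objects(event_json)
```

The camera has already done the expensive classification work, so the third-party layer only processes the handful of objects that pass the filter rather than the full video stream.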
There is no single point of failure when using AI at the edge. The AI can continue to operate even if a network or cloud service fails. Triggers can still be actioned locally, or sent to another device, with recordings and events sent to the back end when connections are restored.
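The store-and-forward behaviour described above can be sketched as a small local buffer; the class and method names are assumptions for illustration. The local trigger fires immediately whether or not the network is up, and upstream notifications queue until the connection returns.

```python
from collections import deque

# Minimal sketch of edge-side store-and-forward. Events are actioned
# locally regardless of connectivity; delivery to the back end is
# deferred while the link is down and flushed once it is restored.

class EdgeEventBuffer:
    def __init__(self):
        self.pending = deque()   # events awaiting delivery
        self.delivered = []      # stand-in for the back-end recorder
        self.online = False

    def record(self, event):
        # The local response (alarm, relay, on-device recording) would
        # happen here immediately; only the upstream send is deferred.
        self.pending.append(event)
        if self.online:
            self.flush()

    def flush(self):
        # Called when the network connection is (re)established.
        while self.pending:
            self.delivered.append(self.pending.popleft())

buf = EdgeEventBuffer()
buf.record({"type": "intrusion", "zone": 2})  # network down: queued locally
buf.online = True
buf.flush()                                   # connection restored
```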
AI is processed in near real-time on edge devices instead of being streamed back to a server or a remote cloud service. This prevents potentially unstable network connections from delaying analytics.
Benefits for installers
For installers specifically, including edge AI in your installations helps you stand out in the market by providing solutions for many different use cases. Out-of-the-box solutions are extremely attractive to end-users who don’t have the time or resources to set up video analytics manually.
AI cameras like the ones in the Wisenet X Series and P Series work straight from the box, so there’s no need for video analytics experts to fine-tune the analytics. Installers don’t have to spend valuable time configuring complex server-side software. Of course, this also has a knock-on positive impact on training time and costs.
The future of AI at the edge looks bright too, with more manufacturers looking at ways to broaden the classification carried out by AI cameras and even moving towards using AI cameras as a platform, allowing system integrators and software companies to create their own AI applications that run on cameras.
Right now, it’s an area definitely worth exploring for both end-users and installers because of the huge efficiency, accuracy, and sustainability gains that edge AI promises.