
Wearing an adversarial patch can fool automated security cameras [Top 100 journal articles of 2019]

This article is part 11 (and the final part) of a series reviewing selected papers from Altmetric’s list of the top 100 most-discussed scholarly works of 2019.

Deep neural networks (DNNs) are a key pattern recognition technology used in artificial intelligence (AI). The applications of DNNs include:

  • facial recognition and person detection (for example, in security and surveillance systems)
  • speech recognition (for example, to identify different speakers).

A DNN learns the mathematical transformation, whether linear or non-linear, that turns its inputs into the correct outputs1. For example, in the context of facial recognition, a DNN learns to map the range of different facial inputs to the correct corresponding outputs.
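To make this concrete, the sketch below (in Python with PyTorch, an illustrative choice rather than anything used in the cited work) shows the kind of layered structure a DNN has: each layer applies a learned transformation, and the non-linearities between layers are what let the network capture non-linear relationships. The layer sizes and the ten-identity output are hypothetical.

```python
# A minimal sketch of a DNN as a learned mapping from inputs to outputs.
# Sizes are illustrative placeholders, not taken from any real system.
import torch.nn as nn

face_classifier = nn.Sequential(
    nn.Flatten(),              # image pixels -> flat input vector
    nn.Linear(64 * 64, 256),   # learned linear transformation
    nn.ReLU(),                 # non-linearity: lets the network model non-linear relationships
    nn.Linear(256, 10),        # scores for 10 hypothetical known faces
)
```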

However, research shows that DNNs can be easily fooled2. For example, in a 2017 study3 the researchers found that simply applying black and white stickers to a stop sign caused misclassification in 100% of the images obtained in lab settings, and in 84.8% of the video frames captured from a moving vehicle. This has significant implications for the reliability and safety of autonomous vehicles and the other applications that use DNNs.
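The stop-sign study used carefully placed physical stickers, but the underlying idea can be illustrated more simply with a digital perturbation. The sketch below shows a fast gradient sign method (FGSM) style attack, a simpler technique than the one used in that study: each pixel is nudged slightly in the direction that most increases the classifier's error. The `model`, `image`, and `true_label` are placeholders.

```python
# A minimal FGSM-style sketch of a digital adversarial perturbation.
# This illustrates the general idea, not the physical sticker attack.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```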

But the image of a stop sign is very basic. What about when the objects are people? The #97 most discussed article4 in Altmetric’s top 100 list for 2019 looks at the potential for convolutional neural networks (CNNs) in automated surveillance camera systems to be similarly fooled. CNNs are a class of DNN5.

As shown in the abstract below and in the following image, the researchers found that they were able to generate hand-held patches that significantly lowered the accuracy of a person detector.

Adversarial patch
Left: The person without a patch is successfully detected. Right: The person holding the 40cm square patch is ignored. (Source: Thys et al. 2019).
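The paper's core idea is an optimisation loop: start from a random patch, digitally paste it onto images of people, and repeatedly adjust the patch so that the detector's confidence that a person is present goes down. The sketch below captures that idea in simplified form; `detector`, `apply_patch`, and `person_images` are hypothetical placeholders, and the paper's full objective also includes terms that keep the patch printable and smooth, which are omitted here.

```python
# A simplified sketch of the patch-optimisation idea, assuming a detector
# that exposes a per-image "person present" confidence score.
import torch

patch = torch.rand(3, 300, 300, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([patch], lr=0.03)

for images in person_images:                 # batches of photos containing people (placeholder)
    patched = apply_patch(images, patch)     # paste the patch onto each person (placeholder)
    objectness = detector(patched)           # detector confidence that a person is present (placeholder)
    loss = objectness.mean()                 # pushing this down makes the person "disappear"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)                  # keep the patch a valid image
```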

In conclusion, the researchers state that:

We believe that, if we combine this technique with a sophisticated clothing simulation, we can design a T-shirt print that can make a person virtually invisible for automatic surveillance cameras.

What does this mean for knowledge management?

Artificial intelligence (AI) is considered to have great potential to assist knowledge management (KM), and KM is already seen as having influenced AI in the form of cognitive computing.

However, the fact that AI systems can be so easily fooled highlights that knowledge managers need to consider not just the benefits of AI, but also to thoroughly examine and address the potential risks and disbenefits.

The last thing that a knowledge manager would want to do is establish an AI-powered KM system that could be fooled into allowing unauthorised access, potentially exposing their organisation to an increased risk of corporate espionage.

Author abstract

Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn “patches” that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real-world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it.

In this paper, we present an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able to successfully hide a person from a person detector. An attack like this could, for instance, be used maliciously to circumvent surveillance systems: intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera.

From our results we can see that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.

Header image source: Photo Mix on Pixabay, Public Domain.

References:

  1. Wikipedia, CC BY-SA 3.0.
  2. Heaven, D. (2019). Why deep-learning AIs are so easy to fool. Nature, 574(7777), 163.
  3. Evtimov, I., Eykholt, K., Fernandes, E., Kohno, T., Li, B., Prakash, A., … & Song, D. (2017). Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945.
  4. Thys, S., Van Ranst, W., & Goedemé, T. (2019). Fooling automated surveillance cameras: adversarial patches to attack person detection. arXiv preprint arXiv:1904.08653.
  5. Wikipedia, CC BY-SA 3.0.


Bruce Boyes

Bruce Boyes (www.bruceboyes.info) is a knowledge management (KM), environmental management, and education professional with over 30 years of experience in Australia and China. His work has received high-level acclaim and been recognised through a number of significant awards. He is currently a PhD candidate in the Knowledge, Technology and Innovation Group at Wageningen University and Research, and holds a Master of Environmental Management with Distinction. He is also the editor, lead writer, and a director of the award-winning RealKM Magazine (www.realkm.com), and teaches in the Beijing Foreign Studies University (BFSU) Certified High-school Program (CHP).
