The spread of artificial intelligence into surveillance technology has given every CCTV camera the potential to turn into a spy for the state. And on the internet, images scraped from social media sites and videos can be used to build massive face-search databases like the one assembled by Clearview AI.
A hoodie might change that.
Researchers from Facebook and the University of Maryland have made a series of sweatshirts and T-shirts that trick surveillance algorithms into not detecting the wearer. They’ve dubbed them “invisibility cloaks” for A.I.
The shirts exploit a quirk found in computer vision algorithms nearly five years ago. These algorithms use a simple, even naive approach to identifying objects: They look for patterns of pixels in a new image that resemble patterns they’ve seen before. Humans can draw on contextual cues or real-world knowledge when they’re looking at something new, but algorithms just use pattern matching.
That means if you know the pattern the algorithm is looking for, you can hide it. To create the algorithm-fooling shirts, the Facebook and Maryland team ran 10,000 images of people through a detection algorithm. Each time a person was detected, a candidate pattern was placed over them and subjected to randomized changes in perspective, brightness, and contrast. A second algorithm then searched for the pattern that was most effective at fooling the detector.
When those randomized patterns were printed on physical objects, like posters, paper dolls, and finally clothing, the detection algorithms were still tricked.
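The process described above can be loosely illustrated in code. The sketch below is a toy random-search version in NumPy, with a hypothetical `detector_confidence` function standing in for a real person detector; the actual research uses a neural detector and gradient-based optimization, not this simple search, but the core idea of scoring a candidate pattern under randomized transformations so it survives real-world printing is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_confidence(image):
    # Stand-in scorer for illustration only: a real attack would query
    # a person detector (e.g. a neural network) for its confidence.
    # Here we just use mean pixel intensity as a fake "person" score.
    return float(image.mean())

def random_transform(patch, rng):
    # Randomize brightness and contrast, mirroring the paper's use of
    # randomized perspective/brightness/contrast changes in training.
    brightness = rng.uniform(-0.1, 0.1)
    contrast = rng.uniform(0.8, 1.2)
    return np.clip(patch * contrast + brightness, 0.0, 1.0)

def train_patch(steps=200, trials=8, rng=rng):
    # Start from a random 32x32 RGB pattern.
    patch = rng.uniform(0.0, 1.0, size=(32, 32, 3))
    best_score = detector_confidence(patch)
    for _ in range(steps):
        # Propose a small random edit to the pattern.
        candidate = np.clip(patch + rng.normal(0, 0.05, patch.shape), 0, 1)
        # Average the detector's score over several random transformations,
        # so the pattern keeps working under varied real-world viewing.
        score = np.mean([detector_confidence(random_transform(candidate, rng))
                         for _ in range(trials)])
        if score < best_score:  # lower detector confidence = better cloak
            patch, best_score = candidate, score
    return patch, best_score

patch, score = train_patch()
```

Averaging the score over random transformations is the key design choice: a pattern tuned against a single fixed view tends to stop working the moment lighting or camera angle changes.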
The researchers noted, however, that the accuracy of these real-world tests was lower than in the purely digital tests. When a person is wearing the sweatshirt, the detector’s ability to recognize them drops from nearly 100% to about 50%, no better than a coin toss.
The work could benefit Facebook, too. The attack works because image recognition algorithms lack any context or understanding of the images they analyze. Figuring out how they fail is a first step toward algorithms that don’t fall for these kinds of tricks. It’s the beginning of a research process that could not only make algorithms more resistant to attack but, in theory, greatly boost their accuracy and flexibility by giving them a less simplistic view of the images they process. In other words, the research could be a way of bolstering image-detection algorithms rather than breaking them.
You can actually buy a T-shirt or sweatshirt printed with the algorithm-fooling design, but right now, it likely wouldn’t protect your identity from surveillance technology. The researchers tested the designs on popular open-source algorithms, not the proprietary ones built by surveillance firms like NEC.
The design is also meant to evade person detection, not facial recognition, which specifically targets features of a person’s face rather than their entire body. Person detection is used in public spaces for tasks like counting crowds or noticing when someone approaches a smart doorbell and, in some cases, to augment facial recognition. But the research, and its turn into reproducible fashion, represents a shifting landscape in surveillance technology: people can now subvert a state-of-the-art algorithm with a simple piece of clothing, then manufacture the design for anyone who wants it.
And even if it doesn’t work, a sweatshirt plastered with an A.I.-generated surveillance spoofer is a great conversation piece.
This research continues a line of work by researchers in the University of Maryland’s computer science department, some of whom joined Facebook in 2018 and 2019. Previously, the lab studied how these same A.I.-tricking principles could fool copyright detection algorithms, like the ones YouTube uses to prevent unauthorized use of copyrighted music, in order to call attention to how easy they were to evade.