
Blocking Facial Recognition

Concealing one's face to keep from being added to facial recognition databases.

Facial recognition technology is in trouble. Following the killing of George Floyd in Minneapolis police custody, and the global Black Lives Matter protests against police brutality that it sparked, tech industry giants IBM, Microsoft, and Amazon have all shelved sales of the technology. They want Congress to legislate on how face recognition can be used ethically and equitably, to ensure it is not used to misidentify people of color, enable racial profiling, or fuel mass surveillance.

On top of those surprises from Big Tech, facial recognition is also facing pushback at the grassroots level, from technology researchers angered by the way companies have been scraping face shots from social media profiles, without authorization, to train massive facial recognition databases. This has provoked two teams of computer scientists to fight back with some unusual countermeasures: they have developed image-perturbing camera apps that destroy the ability of deep learning-based classifiers to identify people from the photos the apps capture.

The researchers' aim is to give people back a degree of privacy at a time when their faces increasingly are being captured without their consent to train facial recognition models — whether it is via police closed-circuit television (CCTV) cameras in the street, as is done across China, and more recently in London trials by the Metropolitan Police, or through the mass harvesting of selfies from social media platforms by surveillance tech companies.

The issue was brought into sharp relief in January, when it emerged that New York City-based surveillance firm Clearview AI had quietly scraped an astonishing 3 billion photos from a swathe of social media platforms, including Facebook, YouTube, and Twitter. Clearview AI then trained a deep learning network to recognize everyone in those pictures, giving its law enforcement customers (thought to include U.S. police departments, plus the Federal Bureau of Investigation and the U.S. Department of Homeland Security) a tool that can recognize people in mere moments.

The news of such a system, which no one had opted into, brought swift condemnation from politicians and social media platforms alike. "Clearview's product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans' expectation that they can move, assemble, or simply appear in public without being identified," said Senator Ed Markey of Massachusetts in a letter demanding that Clearview AI explain the breadth of its data scraping operations, what it is used for, and who has access to its data.

Meanwhile, social media platform Twitter demanded that Clearview AI delete all data taken from its users, and "cease and desist" from any future scraping of it. At the end of May, the American Civil Liberties Union (ACLU) filed a lawsuit against Clearview, alleging it implements "privacy destroying surveillance."

Despite this furor, and ongoing protest campaigns from the Electronic Privacy Information Center in the U.S. and Big Brother Watch in the U.K. (which calculated the technology had a 93% misidentification rate in London's Metropolitan Police trials), facial recognition technology continues to proliferate apace, though it remains to be seen to what extent the moratoriums from IBM, Microsoft, and Amazon might put the brakes on it.

Two unrelated artificial intelligence (AI) software development teams, one led by machine learning specialist Kieran Browne at the Australian National University (ANU) in Canberra and the other by neural network vulnerability researcher Shawn Shan at the University of Chicago, decided it was time to put some privacy-protecting imaging tools in people's hands. Quite independently of each other, these groups have developed two fascinating camera applications designed to thwart the recognition of scraped images.

ANU's system is called Camera Adversaria, and the University of Chicago's system is dubbed Fawkes.

"The goal of Camera Adversaria is to make a user's photographs harder to automatically surveil with today's machine vision algorithms," says Browne. Fawkes, says Shan's team in a research paper posted on the Arxiv.org preprint server, provides a way for people to "inoculate" themselves against "inclusion in unauthorized facial recognition models trained to recognize them without consent."

The approach of both systems is to exploit, in different ways, deep learning's Achilles heel: its brittleness.

Although the deep, multi-layered neural networks that make up a deep learning (DL) model are great at recognizing patterns, in applications ranging from image and speech recognition to beating human Go champions, it does not take much to make them produce perverse results. This is why a few stickers on a roadside speed limit sign can make a car's onboard DL system read a 30mph limit as a 70mph one, or why markings on someone's face can fool a facial recognition algorithm.

"Even though we describe machine vision algorithms as 'seeing', what they are really doing is recognizing statistical distributions of pixels. If you can add particular kinds of noise to the pixels, you can disrupt this process without significantly changing the image for a human viewer," says Browne.

It is these perverse results that Camera Adversaria and Fawkes exploit, each in its own way.

Camera Adversaria looks and feels like a regular Android camera app. Indeed, it's already available on the Google Play Store, and an iOS version may follow if there's enough interest, says Browne.

Users take pictures as they normally would with Camera Adversaria, but the app's perturbation code then adds a layer of imperceptible noise to the image file, crippling the ability of a deep learning network to classify the picture if it is later scraped by a firm enrolling the image in a DL training regime.

To manipulate the images, Camera Adversaria performs a unique perturbation operation on each one; a fixed perturbation applied identically to every image could otherwise be detected and easily removed by the image scrapers. ANU's perturbation adds a specially calculated pattern of Perlin noise to each image. This type of noise was developed for computer-generated imagery (CGI), where its smooth gradients are used to produce natural-looking textures in movie visual effects. The result? The perturbation appears natural, while wrecking the image's ability to be classified.
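To give a flavor of the approach (a sketch of the general idea, not Camera Adversaria's source code), the snippet below generates a Perlin noise field with a per-image seed and adds it, lightly scaled, to a photo's pixels; the third-party "noise" and Pillow packages, and the strength, scale, and seed parameters, are assumptions made for illustration.

    import numpy as np
    from noise import pnoise2          # classic Perlin gradient noise
    from PIL import Image

    def perlin_perturb(path, strength=12.0, scale=64.0, seed=0):
        """Add a lightly scaled, per-image Perlin noise field to a photo."""
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
        h, w, _ = img.shape
        # A per-image seed ('base') makes every perturbation unique, so scrapers
        # cannot simply subtract one fixed pattern from all of a user's photos.
        field = np.array([[pnoise2(x / scale, y / scale, octaves=4, base=seed)
                           for x in range(w)] for y in range(h)], dtype=np.float32)
        out = img + strength * field[..., None]    # apply the same field to R, G, and B
        return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

Because Perlin noise varies smoothly, the added pattern reads to the eye as natural texture or film grain, even as it shifts the pixel statistics a classifier depends on.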

"Camera Adversaria allows the user to adjust the amount of noise applied to each image, and I am fairly confident that large perturbations would fool every DL network currently in use," says Browne. "This does not mean that any perturbation will work across any [deep learning] network. You can expect those used in large tech companies to be more resistant, but not immune, to adversarial perturbations. It also does not guarantee the perturbations will fool the next generation of machine vision algorithms. Unfortunately, this is something of an arms race."

Where ANU focused on developing an app, and the human-computer interaction (HCI) side of its use, rather than on perfecting a very strong perturbation algorithm, the Fawkes team worked the other way around. Their approach is currently software-centered and is designed to add imperceptible pixel-level changes, which they call "cloaks," to photos before people publish them online.

"When collected by a third-party tracker and used to train facial recognition models, these cloaked images produce functional [DL] models that consistently misidentify the user," the Chicago team write in their paper. In tests, they have found that Fawkes is between 95% and 100% successful at protecting images from DL classification and matching.

Shown each other's research, each team was impressed with the other's surveillance-resistance technology. Says Shan, "Camera Adversaria is interesting as it uses a similar technique to protect user face images. I agree with them that a mobile app is a great way to deploy such a system in the wild, and it will have a huge impact if more and more people start to use the app."

Says ANU's Browne, "Fawkes is a fascinating approach and we commend the authors of that paper." However, he adds, measures such as these tend to generate an arms race in technology, with the face recognizers continually attempting to get ahead of the perturbers. Yet it is an arms race the surveillance industry has brought upon itself, by using images without consent and opting them into massive facial recognition systems.

"This highlights a number of issues with today's approaches," says Peter Bentley, a computer scientist specializing in AI and evolutionary systems at University College London, whose research has involved deep learning on novel systems, such as AI-based airliner autopilots. "Data gathering by scraping websites is simply a bad idea. It's taking data without permission, and it will likely result in all kinds of biases, and it can be manipulated very easily by inserting confounding images.

"It's an arms race that should never have happened. We should only train our machine learning algorithms on legitimate, carefully prepared data. The old computer industry saying, 'garbage in, garbage out' is still true today."

Paul Marks is a technology journalist, writer, and editor based in London, U.K.
