On December 18, 2013, a company called FacialNetwork.com drew outcries from privacy advocates by announcing the release of the first real-time facial recognition app for Google Glass, a wearable computer being developed by Google. Called “Nametag,” the app would use Google Glass’s camera to spot a face in the crowd and then identify it within seconds, displaying the person’s name, additional photos, and social media profiles.
Facial recognition technology is already used in a variety of applications, such as preventing passport fraud or unlocking a smartphone simply by looking at it. Nametag, however, opens up a new and potentially paradigm-changing prospect: the idea of being able to immediately identify any stranger walking down the street, without his knowledge or consent.
Nametag’s announcement notwithstanding, this prospect is not necessarily a reality yet. Nametag is only in beta release, and Google said last June it would not approve any facial recognition apps for Google Glass “at this time.” Nor is it yet clear whether this particular app’s algorithms are accurate enough, or its databases massive enough, to allow for peer-to-peer facial recognition on a truly global scale.
If the technological pieces have not yet fallen into place, they soon will, facial recognition experts predict. The huge databases of names and faces that would enable facial recognition algorithms to find a match for a stranger are already out there on social media sites such as Facebook and LinkedIn, though with limited third-party access. Facebook users, for example, have uploaded more than 250 billion photos to the site, many of which they have obligingly tagged with their own and friends’ names. “A biometric database such as Facebook’s has never existed before in the history of humankind,” says Alessandro Acquisti of Carnegie Mellon University (CMU) in Pittsburgh.
And the performance of facial recognition algorithms has improved by orders of magnitude over the past two decades. “There’s no reason to think that will stop,” says CMU’s Ralph Gross. “We are close to a point where the scenario of identifying strangers on a street is very realistic—probably within the next five years.”
It would be hard to overestimate the effect real-time identification of strangers would have on social mores, says Woodrow Hartzog, of Samford University in Birmingham, AL. “If ubiquitous, it would represent the throttling of firmly established public norms about the way we live our lives.”
A Steady March Forward
When it comes to mug shots (frontal photos in controlled lighting), facial recognition algorithms already perform better than humans at the “facial verification” problem: determining whether two photos represent the same person, says Jonathon Phillips of the National Institute of Standards and Technology (NIST) in Gaithersburg, MD. Between 1993 and 2010 (the last year for which data is available), the error rate for such comparisons dropped by half every two years, Phillips says.
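To put that rate of improvement in perspective, halving the error every two years over the 17 years from 1993 to 2010 compounds to roughly a 360-fold reduction. The short calculation below is just that back-of-the-envelope arithmetic, not a figure reported by NIST.

```python
# Back-of-the-envelope arithmetic for the trend Phillips describes:
# a verification error rate that halves every two years from 1993 to 2010.
years = 2010 - 1993        # 17 years of measured progress
halvings = years / 2       # 8.5 halvings over that span
reduction = 2 ** halvings  # roughly a 362x lower error rate overall
print(f"{halvings} halvings -> error rate roughly 1/{reduction:.0f} of its 1993 level")
```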
Facial recognition algorithms are less accurate when it comes to matching faces in a variety of poses, illuminations, and expressions. But here too, there has been rapid progress over the past few years. In March, for example, Facebook unveiled a new facial verification algorithm that achieves near-human performance on a database of publicly available celebrity photos with varying poses and lighting, correctly determining that two faces are the same 97.25% of the time.
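Facial verification systems typically reduce each photo to a fixed-length “embedding” vector and accept a match when two embeddings are similar enough. The sketch below illustrates that 1:1 comparison in generic terms; it is not Facebook’s algorithm, and it assumes the embeddings come from some face-embedding model that is not shown here.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.7) -> bool:
    """1:1 verification: decide whether two embeddings show the same person.
    The threshold here is illustrative; real systems tune it on labeled pairs
    of photos to trade false accepts against false rejects."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```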
The facial identification problem—finding the right match for a stranger’s face from within a large gallery of faces and names—is much harder than the facial verification problem, since it involves comparing a face not to a single other face, but to potentially millions or billions of faces. There are indications, however, that this harder problem is also giving way. In 2010, NIST found that given a mug shot and a gallery of 1.6 million mug shots to compare it to, the best available commercial facial recognition algorithm found the correct match for the given face 93% of the time, a figure that is likely to have improved over the past four years, Phillips says.
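Identification is then a 1:N search rather than a single comparison: the probe face is scored against every face in the gallery, and a name is returned only if the best score clears a threshold. A minimal sketch along the same lines, again assuming precomputed embeddings and making no claim about how the commercial systems NIST tested work internally:

```python
import numpy as np

def identify(probe: np.ndarray, gallery: np.ndarray, names: list[str],
             threshold: float = 0.7) -> str | None:
    """1:N identification: compare one probe embedding against a gallery of
    N embeddings (one row per enrolled face) and return the best-scoring
    name, or None if no score clears the threshold."""
    # Normalize rows so that dot products are cosine similarities.
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    scores = g @ p                     # one similarity score per gallery face
    best = int(np.argmax(scores))
    return names[best] if scores[best] >= threshold else None
```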
In 2010 and 2011, Gross, Acquisti, and Fred Stutzman of Eighty Percent Solutions in Chapel Hill, NC, conducted a series of experiments showing that serious privacy intrusions are already possible using commercially available facial recognition technology and public databases of faces and names. In one experiment, they uncovered the identities of 10% of the anonymous users of a popular dating website in a North American city by comparing their profile photos to about 280,000 primary profile photos for Facebook members from the same city, using facial recognition software from a company called PittPatt, now owned by Google. Although Facebook users have the option of hiding most of their photos from the public, their primary profile photos cannot be hidden, Acquisti notes, and most members show their own faces in these photos.
In a second experiment, the researchers snapped webcam photos of 93 students at a North American college, and then were able to identify nearly one-third of the students by comparing their photos to the college’s Facebook network, which had about 25,000 members. Next, the team combined these identifications with earlier work by Acquisti and Gross to successfully predict the first five digits of the Social Security numbers of about 16% of the identified students.
“Your face can be a conduit between different databases and sources of information,” Gross says.
Lastly, the researchers created an iPhone app that could replicate their experiments in real time, uploading a cellphone photo of a person to the cloud and returning a name and Social Security number prediction in less than three seconds.
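The researchers have not published the app’s internals, but the round trip they describe (upload a photo, run the identification in the cloud, return a prediction) can be sketched in outline. The endpoint below is purely hypothetical and stands in for their private service.

```python
import requests

def lookup(photo_path: str) -> dict:
    """Sketch of the round trip described above: send a photo to a cloud
    service, which runs facial identification and returns its prediction."""
    with open(photo_path, "rb") as f:
        # "https://example.org/identify" is a placeholder, not a real service.
        resp = requests.post("https://example.org/identify",
                             files={"photo": f}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. {"name": "..."} if a confident match was found
```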
The work is just a proof of concept, the researchers caution. Their predictions involved hundreds of thousands of images, but a facial identification system on the scale of the entire U.S. population might easily have to work with billions of images, making the process slower and increasing the chance of false positives. However, false positives are likely to decrease as facial recognition algorithms become more accurate, Acquisti predicts. And the facial identification problem is highly scalable: as cloud computing becomes more powerful and less expensive, so will facial identification apps.
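The scaling point follows from simple arithmetic: for a fixed false match rate per comparison, the expected number of false matches per probe grows roughly linearly with the number of faces searched. The rate of one in a million used below is an assumed, illustrative figure, not one measured in the studies described here.

```python
# Illustrative arithmetic only: expected false matches per probe scale
# roughly linearly with gallery size for a fixed per-comparison rate.
false_match_rate = 1e-6  # assumed rate for a single 1:1 comparison
for gallery_size in (25_000, 280_000, 300_000_000):
    expected = gallery_size * false_match_rate
    print(f"gallery of {gallery_size:>11,} faces -> "
          f"~{expected:,.2f} expected false matches per probe")
```

The gallery sizes correspond roughly to the college Facebook network, the dating-site experiment, and a U.S.-population scale, respectively.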
Protecting Privacy
Real-time identification of strangers on the street would be a clear danger to people who are the victims of domestic abuse or stalking, for example. There’s also the potential for a more subtle and widespread kind of harm, one that Hartzog describes as “death by a thousand cuts.”
“We’re living in a world in which our data trails are increasingly being specified by what other people have disclosed about us,” observes Evan Selinger, of the Rochester Institute of Technology in New York. “Innocuous things can aggregate and become part of our larger portrait, to create incredibly revealing maps of who we are.”
People take for granted the ability to walk unrecognized in public spaces, Hartzog observes, and it gives us “a healthy amount of control over our identity.” Eliminating this control “would impact society as a whole and our notions of autonomy, our ability to negotiate social spaces with protection,” he says.
Real-time facial recognition “would eviscerate what I take to be an essential part of our public obscurity,” Selinger agrees. “I think we radically underestimate its potential effect on social life.”
Given the rapid pace at which facial recognition algorithms are almost sure to improve, the most effective route to preventing privacy abuses may be to restrict access to databases of faces and names, Hartzog and Selinger propose. Only a few truly massive such databases currently exist, and for the most part, their owners—Google, Twitter, and Facebook among them—have pledged to work with the U.S. Federal Trade Commission before making any significant retroactive changes to their privacy policies. The fact that these databases are currently locked away means “there is time to talk about these things before the technology becomes adopted and entrenched, at which point it becomes much more difficult to do anything about it,” Hartzog says.
The U.S. government is starting to explore a potential regulatory role in preventing privacy abuses by facial recognition apps. On Feb. 6, the National Telecommunications and Information Administration (NTIA) convened the first of a series of meetings on the topic, which included industry experts and civil liberties advocates. The preceding day, Senator Al Franken (D-MN) had written to Nametag’s creator, Kevin Alan Tussy, asking him to postpone the full launch of Nametag until the NTIA has completed its study and established best practices for the use of facial recognition technology—a request Tussy said his company would “seriously discuss.”
It is unlikely, Hartzog says, that the owners of massive biometric databases will make them readily available to third-party facial recognition apps. Facebook’s collection of tagged photos “is one of the most valuable databases in the world,” he observes. “They would probably be extremely reluctant to part with it.”
If the owners of massive photo repositories such as Facebook were all to prohibit their use by facial recognition apps, that would force such apps underground, significantly curtailing their availability, Hartzog says. “It is hard to get venture funding when your whole premise is based on breaching major companies’ terms of use,” he says.
Nevertheless, the very existence of these enormous databases is the “elephant in the room,” Acquisti says. “The real story here is that we have a database of biometrics we never had before, and there is an entity with access to it.”
Social media companies, Acquisti says, tend to do a “two steps forward, one step back” kind of dance about the use of private data. For example, in late 2010, Facebook started using facial recognition technology to suggest tags when users uploaded photos of people, but the company suspended use of this tool in 2012 to “make some technical improvements,” and later promised European regulators it would reinstate the feature in Europe only with their approval. Later, however, the company resumed making tag suggestions for users in the U.S.
Steps like these are likely intended to gradually habituate users to more and more intrusive services, Acquisti says. “These entities may push the envelope a bit, then when they meet resistance they stop and recede, and then maybe a year later they push again,” he says. “To me, the trend is clear—towards more and more usage of biometrics.”
Further Reading
Grother, P., Quinn, G., Phillips, J.
Report on the Evaluation of 2D Still-Image Face Recognition Algorithms. NIST Interagency Report 7709, August 24, 2011. http://www.nist.gov/customcf/get_pdf.cfm?pub_id=905968
Taigman, Y., Yang, M., Ranzato, M., Wolf, L.
DeepFace: Closing the Gap to Human-Level Performance in Face Verification. https://www.facebook.com/publications/546316888800776/
What Facial Recognition Technology Means for Privacy and Civil Liberties. Hearing before the Subcommittee on Privacy Technology and the Law of the Committee on the Judiciary, United States Senate. July 18, 2012. http://www.gpo.gov/fdsys/pkg/CHRG-112shrg86599/pdf/CHRG-112shrg86599.pdf