Law enforcement agencies continually seek out ways to gain an advantage in their war against crime. From fingerprinting and forensic crime labs to portable radios, computers, and predictive analytics, technology has driven remarkable advances in police science.
Yet in the age of digital technology, there's a new sheriff in town: facial recognition. Around the globe, law enforcement agencies are tapping this technology—in some cases, in conjunction with augmented reality (AR) glasses—to identify people of concern and snag fugitives at sports events, concerts, festivals, and public demonstrations.
Although facial recognition could improve public safety, it has a significant potential downside. "These technologies raise all sorts of issues related to the balance between safety and freedom. The question is: what are the situations in which the technology is reasonable?" asks David Weisburd, distinguished professor in the Department of Criminology, Law, and Society at George Mason University and executive director of the university's Center for Evidence-Based Crime Policy.
A New Image
In recent years, police have adopted a slew of digital technologies: body-worn cameras, smartphones, drones, robotic devices, biometrics, virtual reality-based training, and predictive analytics. "Digital technologies are clearly on law enforcement's radar screen," explains Jim Bueermann, president of the National Police Foundation, an independent non-profit organization that aims to advance policing through innovation and science.
Among the most powerful and controversial of these tools is facial recognition. Thanks to artificial intelligence (AI), mobile connectivity, and vast cloud-connected databases, the ability to spot people in a crowd is advancing by leaps and bounds. With AR glasses, officers can simply walk into a public place and instantly identify "persons of interest."
In China, police now use augmented reality glasses manufactured by Beijing-based LLVision Technology Co. to spot individuals using fake travel documents, as well as those on watch or wanted lists. The system uses a built-in camera to capture facial images and transmits them to a database via a smartphone and cellular connection; the entire process takes place within 100 milliseconds. If someone matches established criteria, the officer is notified.
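At its core, such a system follows a capture-transmit-match-notify loop. The sketch below illustrates that flow under stated assumptions; the endpoint, threshold, and function names are hypothetical and do not describe LLVision's actual implementation.

```python
# Hypothetical sketch of the capture-match-notify loop described above.
# The REST endpoint, threshold, and helper names are illustrative assumptions.
import time
import requests  # used here for the HTTP round trip over the phone's cellular link

MATCH_ENDPOINT = "https://example.org/api/face-match"  # placeholder URL
MATCH_THRESHOLD = 0.90                                 # assumed confidence cutoff

def identify(frame_bytes):
    """Send a captured face to a back-end watch-list database and
    return a matched person ID (or None) plus the round-trip time."""
    start = time.monotonic()
    response = requests.post(MATCH_ENDPOINT, data=frame_bytes, timeout=0.1)
    elapsed_ms = (time.monotonic() - start) * 1000  # article cites ~100-ms round trips
    result = response.json()
    if result.get("confidence", 0.0) >= MATCH_THRESHOLD:
        return result.get("person_id"), elapsed_ms   # trigger an officer notification
    return None, elapsed_ms
```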
Meanwhile, in the U.K., police now rely on facial recognition technology to spot people on watch lists at concerts and sporting events. In the U.S., several police departments use an Amazon tool called Rekognition to check photographs of unidentified suspects in real time against a database of mug shots from the county jail. Two other companies, MorphoTrust USA and DataWorks Plus, incorporate third-party data sources in their facial recognition solutions for law enforcement.
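As a rough illustration of the Rekognition workflow, the sketch below queries a pre-indexed face collection with boto3's search_faces_by_image call. The collection name, similarity threshold, and image path are assumptions; only the API call itself is part of Amazon's actual service.

```python
# Minimal sketch: searching a pre-indexed mug-shot collection with Amazon Rekognition.
# Collection ID, threshold, and file path are assumed values for illustration.
import boto3

rekognition = boto3.client("rekognition")

def search_mugshots(image_path, collection_id="county-jail-mugshots"):
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = rekognition.search_faces_by_image(
        CollectionId=collection_id,     # collection previously indexed from mug shots
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=80,          # assumed similarity cutoff (percent)
        MaxFaces=5,
    )
    # Each match carries a similarity score and the ID assigned when the face was indexed.
    return [(m["Face"].get("ExternalImageId"), m["Similarity"])
            for m in response["FaceMatches"]]
```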
All of this has prompted civil liberties and privacy groups, including the American Civil Liberties Union (ACLU) and Electronic Frontier Foundation (EFF), to openly question such tactics. EFF argues that a lack of regulation for biometrics and facial recognition opens the door to abuse and privacy violations. In May 2018, the ACLU asked Amazon to stop marketing its system, which it said could be used to "easily build a system to automate the identification and tracking of anyone."
To be sure, privacy advocates are deeply concerned. "It's part of a longstanding battle over whether people have any degree of privacy in public places," says Michael Zimmer, an associate professor in the School of Information Studies at the University of Wisconsin-Milwaukee and director of the Center for Information Policy Research. Zimmer is concerned the technology will be used to thwart political opposition and usher in human rights abuses, saying, "The direction we are headed is extremely concerning."
Picture Imperfect
A big question, Weisburd says, is whether these technologies are truly making police more efficient and improving public safety. "We just don't know in most cases whether these 'tools' have reduced crime or improved police relationships with the public, or whether they are cost-effective," he says.
Zimmer also takes a wait-and-see attitude, even though he has "questions about the accuracy and effectiveness of the technology," as many others do.
For instance, the U.S. Federal Bureau of Investigation (FBI) utilizes what it calls the Next Generation Identification (NGI) system, which uses facial recognition to compare an unidentified image with a mugshot database, and then generates a gallery of two to 50 potentially matching "candidate" photos. A 2017 report said the NGI system returned a correct match among the "candidate" photos 85% of the time.
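A candidate-gallery search of this kind can be thought of as ranking a mug-shot database by similarity to a probe image and returning the top results. The sketch below is a generic illustration of that idea, assuming face embeddings are already available; it is not the FBI's implementation.

```python
# Illustrative candidate-gallery search: rank mug shots by similarity to a probe
# image and return up to 50 candidates, as the NGI system is described as doing.
# The embedding representation and data layout are assumptions.
import numpy as np

def candidate_gallery(probe_embedding, gallery_embeddings, gallery_ids, max_candidates=50):
    """Return the most similar gallery entries as (id, score) pairs."""
    # Cosine similarity between the probe and every mug shot in the gallery.
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    gallery = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)
    scores = gallery @ probe
    k = min(max_candidates, len(gallery_ids))
    order = np.argsort(scores)[::-1][:k]   # highest-scoring candidates first
    return [(gallery_ids[i], float(scores[i])) for i in order]
```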
On the other hand, a U.K. organization, Big Brother Watch, claims Leicestershire Police, Metropolitan Police, and South Wales Police used the technology "to identify and monitor petty criminals," as well as "individuals with mental health issues, and peaceful protesters." The non-profit said 91% of automated facial matches generated by the technology for the South Wales Police wrongly identified innocent people, as did 98% of facial recognition matches generated for the Metropolitan Police.
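The arithmetic behind such figures is straightforward: the share of flagged matches that turn out to be wrong. The numbers in the snippet below are invented purely to illustrate the calculation and are not Big Brother Watch's underlying data.

```python
# How a figure like "91% of matches wrongly identified innocent people" is derived:
# false matches divided by total matches the system flagged. Counts are illustrative.
def false_match_rate(false_matches, total_matches):
    return false_matches / total_matches

print(f"{false_match_rate(910, 1000):.0%}")  # -> 91%
```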
Nevertheless, facial recognition continues to march forward, and more police agencies are testing and adopting these systems, along with related technologies; image recognition, for example, is being used to identify license plates on vehicles.
Says Bueermann, "There are both positives and negatives associated with facial recognition and the use of augmented reality glasses. There are also unintended consequences. The bottom line is that the technology simply enables capabilities that can be used in both good and bad ways. It's all about finding the balance point between public safety and privacy."
Samuel Greengard is an author and journalist based in West Linn, OR, USA.