We know how artificial intelligence works in our lives: it helps us pick movies, choose dates, and correct misspellings. But what does it mean in policing? Is AI replacing traditional police tasks? Does police use of AI present novel challenges? Should increasing police reliance on AI concern us? The answer to all of these questions is yes. Over the past decade, the growing reliance by police on artificial intelligence tools has raised questions about how to strike the right balance between public safety and civil liberties.
Think of policing and you are likely to imagine a uniformed patrol officer scanning the environment for suspicious activity. The most powerful tools an officer once possessed were a gun, experience, and training. But new technologies are changing the way the police approach the streets. Automated license plate readers that identify hundreds of plates a minute are commonplace. The Chicago Police Department uses an algorithm that identifies which city residents may be at especially high risk as perpetrators or victims of gun violence.1 The police in Fresno, CA, piloted an alert system that tells an officer whether the driver just pulled over poses a threat.4 To this list we can add facial recognition, suspect profiling, and financial anomaly detection.
These technologies are transforming the police. Consider the sheer amount of data now potentially available to them: all our online activity, digitized analog records, and our movements through space and time. Artificial intelligence transforms this data into actionable predictions and identifications. Policing has always relied upon large amounts of information, but the scale and speed of its processing today are different, and therefore warrant new scrutiny.
We might say policing is becoming increasingly automated.2 The identification of suspicious activity—a skill we typically associate with police officers—can increasingly be handed off to artificial intelligence. Just as companies use AI to identify bad credit risks and good employment prospects, police departments are using these tools to decide which people and places deserve scrutiny. Dozens of police departments, for instance, use PredPol, which uses a machine-learning algorithm to predict the 500-by-500-foot sections of a city where crime is most likely to happen.a We live immersed in a world of scores and predictions—we should not be surprised the police do, too.
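PredPol's actual model is proprietary (public descriptions point to self-exciting point-process models borrowed from seismology), so the sketch below is only a loose illustration of the general idea: score fixed grid cells by recent incident history and direct patrols to the highest-scoring cells. The function names, decay constant, and toy data are all invented for illustration, not drawn from PredPol.

```python
from collections import defaultdict
from math import exp

CELL_FT = 500  # the 500-by-500-foot grid cells mentioned above

def cell_of(x_ft, y_ft):
    """Map a coordinate (in feet) to its grid-cell index."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def hotspot_scores(incidents, now, decay_days=14.0):
    """Score each grid cell by exponentially decayed incident counts.

    `incidents` is an iterable of (x_ft, y_ft, t_days) tuples; recent
    incidents in a cell raise its score more than older ones do.
    """
    scores = defaultdict(float)
    for x, y, t in incidents:
        scores[cell_of(x, y)] += exp(-(now - t) / decay_days)
    return scores

# Hypothetical incident log; rank the cells a patrol might be sent to.
incidents = [(120, 480, 98.0), (180, 410, 99.5), (2600, 900, 97.0)]
top = sorted(hotspot_scores(incidents, now=100.0).items(),
             key=lambda kv: kv[1], reverse=True)
print(top[:2])  # highest-scoring 500x500-ft cells first
```

Even this toy version makes the policy stakes visible: wherever the historical incident data is skewed, the "predicted" cells will be skewed the same way.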
But when the police turn to artificial intelligence, the concerns are far different. After all, the police can stop and question even the unwilling, and perform searches and seizures that can begin the criminal process. And in a democratic society, we expect accountability and oversight of government actors who hold so much power over our lives. In the 20th century, that oversight could be as simple as a bystander reporting potentially abusive behavior. Even the resource limitations of the police themselves once served as a potent check: it is impossible for most police departments to conduct around-the-clock surveillance of the population.
Artificial intelligence removes these checks. Technological tools powerful enough to gather every bit of available data about us, and to draw inferences from it, do what no human police department could ever do. Every purchase, trip, online post, and more can be endlessly identified, sorted, and combined cheaply. In this sense, artificial intelligence vastly expands the potential pool of people and activities the police can watch.3
AI also allows policing to be less visible. The unrelenting collection of information is made possible both by the digital trails we leave online and by the sensors that capture our movements through the physical world. Neither requires the presence of a police officer. This poses unique challenges for how we regulate policing.
In criminal investigations, police must abide by the Fourth Amendment’s prohibition on unreasonable searches and seizures. The Supreme Court has adapted its interpretation of the Fourth Amendment as the world has changed, but two core concepts have proved particularly ill-equipped to address these transformations in policing.
First, since the 1960s, the Supreme Court has reiterated that what we knowingly expose to the public is not protected by the Fourth Amendment. For instance, we have no Fourth Amendment protection over our physical characteristics. We know our hair color, eye color, and other features are there for the world to see, so we can hardly expect special protection for this information. The Supreme Court has said the same of our movements on the public roads.
In today’s world that legal idea is more complicated. Sure, a police officer’s quick glance at your face may not raise concerns, but what about a hundred, or a thousand, officers doing the same? What if those thousand officers were replaced by cameras equipped with facial recognition? Then your knowingly exposed self can be mapped in space and time: a map that would provide the government with all kinds of sensitive information, such as your religious affiliation or your political leanings. Yet the conventional view is that whether the government has taken one snapshot of your face or a thousand, you have given up your privacy rights.
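To make that worry concrete, here is a minimal, hypothetical sketch of how timestamped face matches could be turned into exactly such a map of sensitive affiliations. Every name, location, and data structure below is invented for illustration; no real system's API is being described.

```python
from datetime import datetime

# Hypothetical camera sightings: (person_id, camera_location, timestamp).
sightings = [
    ("p1", "First Street Mosque", datetime(2023, 6, 2, 13, 0)),
    ("p1", "First Street Mosque", datetime(2023, 6, 9, 13, 5)),
    ("p1", "Union Hall",          datetime(2023, 6, 11, 18, 30)),
]

# Places whose visit patterns reveal religion, politics, health, etc.
SENSITIVE = {"First Street Mosque", "Union Hall", "Oncology Clinic"}

def sensitive_profile(sightings):
    """Count repeat visits to sensitive places, per person."""
    profile = {}
    for person, place, _ in sightings:
        if place in SENSITIVE:
            key = (person, place)
            profile[key] = profile.get(key, 0) + 1
    return profile

# Two Friday-afternoon sightings at the same mosque already suggest
# a religious affiliation no single glance would have revealed.
print(sensitive_profile(sightings))
```

Nothing in this sketch is sophisticated; the privacy harm comes from aggregation, not from any individual observation.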
Closely related to the idea of voluntarily exposed information is what is known as the third-party doctrine. The Supreme Court has long recognized that you lose Fourth Amendment protection in information you provide to third parties, such as banks and phone companies. In a 1979 decision, the Court held that the Fourth Amendment did not apply when the government intercepted the numbers a suspect dialed by installing a pen register at the phone company. Once handed over to a third party, that information lost the protection that would normally have required the police to obtain a warrant beforehand.
The phone numbers dialed from landlines in the 1970s are a far cry, though, from our relationship to data today. It is impossible to live a normal life now without providing all kinds of information to third parties. Indeed, our heavy reliance on both the Internet and the Internet of Things means we are constantly streaming information to third parties.
Despite the dramatic changes in the technologies used by everyone, including the police, these legal concepts about knowing exposure and third parties remain robust parts of Fourth Amendment law. What, then, can we expect from the courts as the police rely even more on artificial intelligence?
It turns out the U.S. Supreme Court has already hinted at how it might one day approach the question. These hints come from an unlikely source: the Court’s 2018 decision in Carpenter v. United States.b Carpenter is not a case about artificial intelligence. It involved the investigation of a string of cellphone store robberies in the Midwest. Looking for evidence connecting Timothy Carpenter to the crimes, FBI agents asked his wireless service providers for his phone’s cell-site location information during the times of the robberies. The FBI eventually received more than 12,000 location points that showed Carpenter’s phone—and by implication Carpenter himself—near the robberies during the times they occurred.
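The inferential step here is computationally trivial, which is part of the point. A minimal sketch, with invented coordinates, radius, and time window (real cell-site data is far coarser than GPS, so actual matching would be approximate):

```python
from math import hypot

# Hypothetical data: phone pings and robberies as (x_mi, y_mi, t_hours).
pings = [(1.0, 2.2, 14.1), (1.1, 2.0, 14.6), (9.0, 9.0, 40.0)]
robberies = [(1.2, 2.1, 14.5)]

def near_in_space_and_time(ping, crime, radius_mi=2.0, window_hr=1.0):
    """True if a location point falls near a crime in both space and time."""
    px, py, pt = ping
    cx, cy, ct = crime
    return hypot(px - cx, py - cy) <= radius_mi and abs(pt - ct) <= window_hr

hits = [p for p in pings for c in robberies if near_in_space_and_time(p, c)]
print(len(hits), "of", len(pings), "points place the phone near a robbery")
```

Run against 12,000 points instead of three, the same loop produces the kind of map the government showed the jury.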
The Supreme Court ultimately decided in Carpenter’s favor, ruling that the collection of this location information amounted to a "search" under the Fourth Amendment. That conclusion meant the government should have obtained a warrant before collecting the data. What was remarkable about the decision was the Court’s expansion of Fourth Amendment protection to information held by Carpenter’s wireless carrier, not Carpenter himself. Rather than apply the third-party doctrine, the Court focused on the nature of the information sought: "the qualitatively different category of cell-site records."c
This much has been widely commented upon. But the Court said more in Carpenter that has implications for artificial intelligence: it focused on three distinctive characteristics of the policing involved.d First, what concerned the Carpenter majority was a policing technology that was both superhuman and cheap. Unlike the "nosy neighbor who keeps an eye on comings and goings," the technology used by the police was "ever alert, and [its] memory is nearly infallible."e Few practical limitations exist when police can rely on "tireless and absolute surveillance methods."f
Second, the Court noted that with the vast amount of data collected all of the time, "police need not even know in advance whether they want to follow a particular individual or when."g In Carpenter’s investigation, the government was able to "access each carrier’s deep repository of historical location information" "[w]ith just the click of a button."h This passive form of surveillance vastly expands the power of the police.
Finally, the way the government collected information in Carpenter represented a decreasing reliance on human skill in favor of automation. As the Supreme Court itself observed, no one in Carpenter’s position—anyone with a cellphone—could escape the "inescapable and automatic nature of its collection."i This represents a change in the scale and scope of policing tasks, one that augments, not just supplements, what the police do.
The government’s case against Carpenter involved plotting points on a literal map for a jury, but the reasoning of the Court’s opinion points to concerns that could easily be applied to the police use of artificial intelligence. The Court was concerned about tools that had extended beyond "augmenting the sensory faculties bestowed upon [the police] at birth."j Tools that are "remarkably easy, cheap, and efficient compared to traditional investigative methods"k merited new ways of applying the Fourth Amendment’s protections.
To be clear: Carpenter is not a case explicitly about artificial intelligence. And the Supreme Court was adamant about trying to limit its decision to the collection of cell-site location information. But that self-conscious restraint is not likely to last. In describing the changes in policing that called for a new Fourth Amendment approach, the Court happened to describe some of the key features of the artificial intelligence tools being adopted by police departments: they are cheap, powerful, ubiquitous, automated, and invasive of privacy in ways that are novel and alarming. Whether the courts will embrace this potential approach to artificial intelligence tools in policing remains to be seen.