Race, Technology, and the Future of Policing
How harnessing technologies—and redistributing privacy—can help make policing more effective and egalitarian.
By Professor Bennett Capers
I am a black man. I say this up front because, to borrow from Patricia Williams, “the subject position is everything in my analysis of the law.” I say this up front, too, because when it comes to policing, my blackness means that I am also a disturbing statistic. After all, according to the Bureau of Justice Statistics, one in three black men can now expect to go to prison during his lifetime. A prosecutor is more likely to seek higher charges against me, and a jury is more likely to convict me than a white defendant based on similar evidence. And according to the United States Sentencing Commission, if I am sent to prison, I will likely receive a sentence 20 percent longer than a white offender for the same crime. In a world in which, as Elizabeth Gaynes, an advocate for those affected by incarceration, writes, “young plus black plus male” too often equals “probable cause,” and in which there is a “racial tax,” I carry myself knowing that, because I am a black man, I will be watched by the police, scrutinized by the police, and at any point I can be stopped by the police.
As much as I might hope that my status as an academic might insulate me from racialized policing, my own experience and the experiences of numerous other black professors suggest otherwise. The police do not see an academic. They see only what they want to see; as in Ralph Ellison’s Invisible Man, they see “only my surroundings, themselves, or figments of their imagination—indeed, everything and anything except me.” I am reducible to this: a black man.
So I am a black male. But not tragically so. After all, in a sense I am a black male because this is how I have been socially constructed. Change the construction, and liberation should be possible. There is a final reason to foreground my blackness: I want to make an argument that may seem counterintuitive, that may rile libertarians and progressives, and may even give pause to a few black folk. What I want to argue is that if we truly care about making policing egalitarian and fair to everyone, then that may mean more policing, not less. More to the point, it will mean redistributing privacy.
The policing problems that minority communities frequently face—police violence, under-enforcement, and racial profiling—are not unsolvable. However, the solution I am proposing has little to do with seeking recourse to courts. Instead, it has everything to do with technology, specifically with harnessing technology in ways that can de-racialize policing. I begin below with technology that can combat racial profiling. To be sure, such technology means that there will be more policing, not less. The cost, too, is that many Americans will have to surrender some of the privacy they now enjoy. But in the end, a utilitarian argument can be made that the benefits outweigh the disadvantages.
Consider that the police in New York City recorded 4.4 million forcible stops between 2004 and 2012, and that more than 83 percent of those stopped were either black or brown, a number far greater than their representation in the population. In fact, these numbers tell only part of the story. As statistician and criminologist Jeffrey Fagan has noted, the percentage of black and brown people stopped is disproportionately high even after adjusting for higher crime rates in some minority communities. Other numbers speak to what I have termed “Terry innocence.” For every 20 individuals stopped, a full 19 were found not to be engaged in activity warranting an arrest. In other words, the error rate was around 95 percent. Even this high percentage understates the true error rate, since studies have shown that nearly half of all arrests resulting from these stop-and-frisk encounters are eventually dismissed. The error rate rises even more when one considers the oft-stated objective of aggressive stop-and-frisk practices: to get illegal firearms out of the hands of criminals. According to the NYPD’s own data, between 2004 and 2012 they found approximately 1 firearm for every 1,000 stops, which translates into an error rate of over 99.9 percent. To put this in perspective, this is on par with the success rate when officers engage in purely random searches. Moreover, evidence suggests that racialized policing, rather than contributing to accuracy, adds to error. In New York, for example, stopped blacks were actually less likely to have a weapon than stopped whites. The same is true in other jurisdictions. For example, in New Jersey, troopers found evidence of criminal activity in 13 percent of their searches of black motorists, compared with 25 percent of their searches of white motorists.
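The arithmetic behind these error rates can be made explicit in a few lines. This is an illustrative sketch only: the inputs (4.4 million stops, an 83 percent black-and-brown share, 19 innocent stops per 20, roughly 1 firearm per 1,000 stops) are the figures cited in the text, not an independent dataset.

```python
# Illustrative arithmetic for the NYPD stop-and-frisk figures (2004-2012)
# cited above. The inputs come from the text; this only makes the rates explicit.

total_stops = 4_400_000        # forcible stops recorded, 2004-2012
minority_share = 0.83          # share of those stopped who were black or brown

innocent_per_20 = 19           # stops not warranting arrest, per 20 stops
terry_error_rate = innocent_per_20 / 20      # "Terry innocence" rate: 0.95

firearms_per_1000 = 1          # firearms recovered per 1,000 stops
firearm_hit_rate = firearms_per_1000 / 1000  # 0.001
firearm_error_rate = 1 - firearm_hit_rate    # 0.999

print(f"Stops of black or brown New Yorkers: ~{total_stops * minority_share:,.0f}")
print(f"Terry-innocence error rate: {terry_error_rate:.0%}")
print(f"Firearm-search error rate: {firearm_error_rate:.1%}")
```

The last figure is the one that matters most to the argument: judged against the stated goal of recovering firearms, more than 999 of every 1,000 stops were errors.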
The foregoing suggests more than merely a racialized policing problem in which “[s]kin color becomes evidence,” as David A. Harris writes in his research on racial profiling. The statistics point to more than what social scientists have long confirmed: that we all suffer from biases, and many of those biases are about race and criminality. It suggests more than simply the fact that stop-and-frisk practices have a breadth that disproportionately affects those who are Terry innocent. It suggests a fundamental flaw with the way we police, a flaw that undermines, with every racially inflected look, encounter, stop, or frisk, our protestations that we are all equal before the law. Instead of a color-blind government, it suggests a color-dependent government. It is certainly at odds with Chief Justice John Roberts’s insistence that “[t]he way to stop discrimination on the basis of race is to stop discriminating on the basis of race.”
Now consider the role a combination of existing and burgeoning technologies can play in de-racializing and de-biasing policing. The anchor of these combined technologies will be public surveillance cameras, which are already integral to law enforcement. For example, New York City currently aggregates and analyzes data from approximately 3,000 surveillance cameras around the city, and allows the police to scan license plates, cross-check criminal databases, measure radiation levels, and more. Washington, D.C., is in the process of consolidating more than 5,000 cameras into one network called the Video Interoperability for Public Safety Program. Chicago, with at least 2,250 surveillance cameras, has Operation Virtual Shield, which includes biometric technology. Baltimore has CitiWatch, which includes more than 400 cameras equipped with low light, pan, tilt, and zoom capabilities. Even small towns have turned to surveillance cameras. A survey from almost a decade ago listed more than 200 towns in 37 states that were either using or planning to use public surveillance cameras. But for the most part, these cameras tend to be clustered in communities that are poor, black, and brown, or in areas deemed potential terrorist targets such as the New York Stock Exchange, the Chicago Board of Trade, Times Square, and the National Mall in Washington. A strong equalizing argument can be made that cameras should be extended to virtually all public spaces.
The second part of this cluster of technologies would be facial recognition technology, already in use by more than 50 police departments. The technology should not be limited to comparing faces with available arrest photos; it should extend to driver’s license photos and photos on social media sites like Facebook and Instagram. The third part would be access to Big Data. Already, the breadth and depth of information available (from credit card transactions to credit history, from Facebook likes to Twitter feeds, from favorite bands to favored political candidates) is vast. Consider this finding from the report “Big Data: A Revolution That Will Transform How We Live, Work, and Think”:
In 2013 the amount of stored information in the world is estimated to be about 1,200 exabytes, of which less than 2 percent is non-digital. There is no good way to think about what this size of data means. If it were all printed in books they would cover the entire surface of the United States some 52 layers thick. If it were placed on CD-ROMs and stacked up, they would stretch to the moon in five separate piles.
Quite simply, whether it involves tracking location history by remotely accessing and analyzing metadata on our phones, or accessing surveillance camera data (both public and private), or turning to commercial data aggregators, we should give the police technological tools so that, with a click of a button, “unknown suspects can be known.”
Finally, this cluster of technologies would include terahertz scanners. Recall that one goal of stop-and-frisk practices is to get firearms out of the hands of criminals, which, because of implicit biases about race and criminality, contributes to the targeting of racial minorities. In fact, terahertz scanners, which measure terahertz radiation, can scan for concealed weapons without the need for a stop or frisk. The device is small enough to be placed in a police vehicle, or even mounted as a surveillance camera. As Police Commissioner Ray Kelly put it during his State of the NYPD address in 2013:
The device reads a specific form of natural energy emitted by people and objects known as terahertz. If something is obstructing the flow of that radiation, for example a weapon, the device will highlight that object. Over the past 12 months, we’ve been working with the vendor and the London Metropolitan Police to develop a tool that meets our requirements. We took delivery of it last week. One of our requirements was that the technology must be portable... we’re able to mount it in a truck.
To be clear, all of this may sound precariously close to George Orwell’s “Big Brother.” But such technology can also de-racialize policing. Cameras and terahertz scanners do not have implicit biases. Nor do they suffer from unconscious racism. Rather, technology can move us closer to real reasonable suspicion. Technology can improve policing so that looks, encounters, stops, and frisks turn on actual criminality, rather than the proxy of race. Put differently, having access to at-a-distance weapons scanners, facial recognition software, and Big Data can mean the difference between race-blind policing and “young plus black equals probable cause.” It would certainly mean a drastic reduction in the number of stopped minorities, indeed a reduction in the number of all stops.
Terahertz scanners would tell the police that the bulge in a black teenager’s jacket is nothing more than a bulky cellphone, but that the white tourist who looks like he’s from Texas really does have a gun. Facial recognition technology combined with Big Data would tell the police that the Hispanic driver repeatedly circling the block in fact works in the neighborhood and is probably looking for a parking space; that the clean-cut white male reading a paper on a park bench is in fact a sex offender who, just by being near a playground, is violating the terms of his sex offender registration.
This technology would tell the police that the black youth running down the street is simply that—a youth running down the street. It would tell them, in a way that is not intrusive or embarrassing, whether someone is a troublemaker casing a neighborhood, or a student returning home with a bag of Skittles and a Snapple iced tea; a loiterer up to no good, or a father waiting to pick up his children from school; a burglar about to commit a home invasion, or a Harvard professor entering his own home; a mugger looking for his next victim, or the future U.S. Attorney General. And that the white kid from New Jersey driving into Harlem isn’t there to score drugs, but to see his black girlfriend.
Deploying technology to aid in policing—alarming as it may seem at first—can also play a role in tackling some of the other problems we associate with racialized policing. Consider police violence. Scanners, for example, would immediately tell officers that a suspect is unarmed, often enough to obviate the need for deadly force. Big Data could also tell officers whether a suspect has a history of violence or resisting arrest. Beyond this, public surveillance cameras can capture and make visible police use of excessive force. Indeed, they may even have advantages over recordings from body-worn cameras or police vehicle dashboard cameras. Body-worn cameras and dashboard cameras show police-citizen interactions from the police officer’s perspective. While this perspective is important, especially in cases where officers claim they acted with an honest and reasonable belief, it is not the only perspective, let alone the most objective one. In addition, there is legitimate concern that the police, ex post, have the ability to control and edit the resulting film. Indeed, there is evidence that the officer who fatally shot Laquan McDonald in Chicago in 2014 tampered with his dashboard camera. More recently, one of the officers involved in the death of Keith Scott in Charlotte apparently failed to activate his body-worn camera until after the shooting, a violation of department policy, thus contributing to the inadequate footage of that shooting. All of this may undermine the goal of objectivity or capturing the full picture. Public surveillance cameras, if used properly with public input and control, bypass these problems.
The role technology can play in addressing under-enforcement—the fact that police are less likely to vigorously investigate crimes committed against minority victims—is less direct, but important too. To the extent technology can increase accuracy and efficiency in policing, it can free officers to actually engage in the work that those of us who are black and brown and white want them to do: actual policing. Consider one statistic: police fail to make an arrest in about a third of all murders in the U.S. That means a full third of all murders go unsolved. Now imagine if officers, instead of focusing their resources on black and brown people who are Terry innocent, redirected their resources to solve real crimes.
There is much more to be explored about using technology to rethink policing. For one, technology, in the form of public surveillance cameras, may very well deter officers from committing Fourth Amendment violations, much in the way they deter other law-breaking. Equally important, because public surveillance cameras are specular, they have the potential to educate judges about how the Fourth Amendment is really being applied, and thus counter myopic perspectives that already tip the scales in favor of the police. This last point cannot be overstated, since such an education has the potential to “help change constitutional meaning,” as my Brooklyn Law School colleague Professor Jocelyn Simonson argues. Technology, as Harvard law professors Lani Guinier and Gerald Torres point out, can serve “demosprudence”—that is, action, instigated by ordinary people, to change the people who make the law and the landscape in which that law is made.
With this use of technology, none of us would need to be singled out because of race. Or more accurately, everyone would be subjected to the same soft surveillance. The Asian woman with the briefcase. The white businessman trying to hail a cab. The messenger on his bike. The elderly woman walking her poodle. Everyone. Certainly, this gets us closer to equality before the law.
Again, what I am proposing is more policing, not less. In exchange for de-racialized policing, there will have to be more policing of everyone, albeit in the form of soft surveillance. I am essentially proposing that some people cede some of the privacy that they currently enjoy for the greater good of everyone. While this may rankle some—especially civil libertarians—the simple truth is that privacy always has been unequal, with those who are privileged by race and class enjoying a surfeit. If we care about equalizing policing, then one trade-off is the redistribution of privacy in a way that is more egalitarian and consistent with our democratic ideals.
The techno-policing I am advocating may not be a complete cure-all in terms of leveling privacy imbalances and making policing more fair, especially given how interconnected, how networked, every aspect of our criminal justice system is. But it is a significant step in the right direction.
I am a black man. For me, the personal is the political. It is inseparable from how I think about the Fourth Amendment, how I think about policing, and how I think about the way we live now. That is why I argue for more technology in policing, even if it means, or perhaps I should say especially if it means, the redistribution of privacy. The costs, especially to those who already enjoy an abundance of privacy, may seem great. But even greater should be the possibility of what we can become: A fairer society. A more just society. A society where, just possibly, all of us—including those of us who are black and brown—can be equal before the law. In short, a society that gets us closer to the dream the founders could not have imagined, but was there all along, in the text, waiting to be born. Or to be truly read.
But already, I am getting ahead of myself. So for now, in this liminal moment, allow me to return to policing. Quite simply, if the goal is equality in policing, if the goal is efficiency and transparency and crime reduction, this essay maps a route there.
Professor Bennett Capers is the Stanley A. August Professor of Law at Brooklyn Law School, where he teaches evidence, criminal procedure, and criminal law. His academic interests include the relationship between race, gender, and criminal justice, and he is a prolific writer on these topics. His articles and essays have been published or are forthcoming in many of the top law reviews. He is co-editing the forthcoming book Critical Race Judgments: Rewritten U.S. Court Opinions on Race and Law (Cambridge University Press) (with Devon Carbado, Robin Lenhardt, and Angela Onwuachi-Willig). His commentary and op-eds have appeared in the New York Times and other publications. This fall he is a visiting professor at University of Texas Law School.
Before entering academia, Capers spent nearly 10 years as an Assistant U.S. Attorney in the Southern District of New York. His work trying several federal racketeering cases earned him a nomination for the Department of Justice’s Director’s Award in 2004. He also practiced with the firms of Cleary, Gottlieb, Steen & Hamilton and Willkie Farr & Gallagher. He received his undergraduate degree from Princeton University and his J.D. from Columbia Law School.
In 2013, Judge Shira Scheindlin appointed him to chair the Academic Advisory Council to assist in implementing the remedial order in the stop-and-frisk class action Floyd v. City of New York. He has also served as a mayoral appointee to the NYC Civilian Complaint Review Board.
This article was adapted from “Race, Policing, and Technology,” forthcoming in the North Carolina Law Review.