
Support request:

I’m disturbed by the fact that law enforcement agencies are increasingly using robots to neutralize threats, conduct surveillance, and intervene in hostage situations. Maybe I’ve just seen RoboCop too many times, but I’m wary of machines making crucial, life-or-death decisions—especially given how often actual human officers abuse their authority. Do I have any kind of moral obligation to obey a police robot?

—SUSPECT

Dear Suspect—

Hollywood has not been particularly optimistic about robots in positions of authority. RoboCop is just one example of the broader sci-fi canon that has burned into our minds the tragic consequences of relinquishing critical tasks to inflexible machines—robots whose prime directives are honored with a literalism that can turn lethal, who can blast a person to death but are confounded by a set of stairs. The message of these films is clear: Rigid automatons are incapable of the improvised solutions and moral nuance that are so often required in moments of crisis.

It may have been this stereotype that led Boston Dynamics, some of whose robots are being incorporated into police departments, to release a video last December of its models dancing to the 1962 Contours hit “Do You Love Me.” Maybe you saw it? The robots included Atlas, an android that resembles a deconstructed stormtrooper, and Spot, which served as inspiration for the killer dogbots in the “Metalhead” episode of Black Mirror. Neither machine seems to have been designed to quell fears about a robot takeover, so what better way to endear them to the public than to showcase their agility? And what better test of said agility than a skill considered so uniquely human that we invented a move designed to mock an automaton’s inability to do it (the Robot)? Watching the machines shuffle, shimmy, and twirl, it’s difficult to avoid seeing them as vibrant, embodied creatures, capable of the same flexibility and sensitivity as we are.

Never mind that Spot’s joints can slice off your finger or that police robots have already been used to deliver deadly force. One way to answer your question, Suspect, without any appeals to moral philosophy, might be in terms of pragmatic consequences. If you have plans, as most of us do, to remain alive and well, then yes, you should absolutely obey a police robot.

But I sense that your question is not merely practical. And I agree that it’s important to consider the trade-offs involved in handing policing duties over to machines. The Boston Dynamics video, incidentally, was posted at the tail end of 2020 as a way “to celebrate the start of what we hope will be a happier year.” One week later, insurgents stormed the Capitol, and images proliferated of police officers showing little resistance to the mob—photos that were strikingly juxtaposed, on social media, against the more severe responses to the Black Lives Matter protests last summer.

At a moment when many police departments are facing a crisis of authority due to racial violence, the most compelling argument for robotic policing is that machines have no intrinsic capacity for prejudice. To a robot, a person is a person, regardless of skin color, gender, or cause. As the White House noted in a 2016 report on algorithms and civil rights, new technologies have the potential to “help law enforcement make decisions based on factors and variables that empirically correlate with risk, rather than on flawed human instincts and prejudices.”

Of course, if current policing technology is any evidence, things are not that simple. Predictive policing algorithms, which are used to identify high-risk persons and neighborhoods, are very much prone to bias, which the roboticist Ayanna Howard has called the “original sin of AI.” Because these systems rely on historical data (past court cases, previous arrests), they end up singling out the same communities that have been unfairly targeted in the first place and reinforcing structural racism. Automated predictions can become self-fulfilling, locking certain quadrants into a pattern of overpolicing. (Officers who arrive at a location that has been flagged as ripe for crime are primed to discover one.) These tools, in other words, do not so much neutralize prejudice as formalize it, baking existing social inequities into systems that unconsciously and mechanically perpetuate them. As professor of digital ethics Kevin Macnish notes, the values of the algorithm’s makers “are frozen into the code, effectively institutionalizing those values.”
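If it helps to see the mechanism laid bare, here is a minimal sketch of that feedback loop in Python. Everything in it is hypothetical: the districts, the numbers, and the simplifying assumption that recorded crime scales with patrol presence. It illustrates the dynamic, not any vendor’s actual model.

```python
# A toy predictive-policing feedback loop. All figures are invented.
# Two districts have identical underlying crime, but district 0 starts
# with more recorded arrests (say, a history of heavier policing).

TRUE_CRIME_RATE = [0.05, 0.05]  # identical real crime in both districts
arrests = [30.0, 10.0]          # historical arrest records skew toward district 0

for year in range(1, 6):
    # "Predict" risk from past arrests and allocate 100 patrols proportionally.
    total = sum(arrests)
    patrols = [100 * a / total for a in arrests]
    # Assumed: recorded crime scales with patrol presence, since officers
    # sent to a flagged district are positioned to observe more offenses there.
    new_arrests = [p * r for p, r in zip(patrols, TRUE_CRIME_RATE)]
    arrests = [a + n for a, n in zip(arrests, new_arrests)]
    print(f"Year {year}: district 0 gets {patrols[0]:.0f}% of patrols, "
          f"district 1 gets {patrols[1]:.0f}%")
```

Run it and the split stays at 75 percent to 25 percent every single year: although the two districts have identical underlying crime, the historical disparity in the arrest records is reproduced indefinitely. The algorithm hasn’t removed the bias from the data; it has given it a permanent mathematical home.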