Stone the Algorithms

By MATT SHAPIRO

In 2010, convicted murderer Ronnie Lee Gardner was shot to death.

We’ll never know who killed Gardner; the identities of the gunmen are protected by the state of Utah, because that is how firing squads work. But even the men who pulled the triggers don’t know whether they killed Gardner, because one of them fired a harmless wax bullet.

This diffusion of responsibility is common in organized executions. When responsibility for a human death is given to the state, we have patterns and systems through which the weight of that violence does not land on the shoulders of any single person but is shared across the breadth of the institution. In a way, this makes everyone responsible for the execution. Lawyers, judges, juries, the governor, the person who loaded the guns, the people who fired them: all are participants in the violence. But none of them did it alone. When the system kills, it is everyone’s responsibility and no one’s.

Is AI Culpable?

I thought about this shared, structured responsibility when I saw the report on the accident in which Uber’s self-driving car hit and killed a pedestrian. If the report is accurate, the driving software did “see” the pedestrian but classified her as a “false positive.” What this means in practice is that the computer vision algorithm may register a pattern of pixels that looks like a human form, but 999,999 times out of a million, it’s nothing.

This time it was not nothing.
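To make that concrete, here is a minimal sketch of how a perception pipeline might discard low-confidence detections. It is written in Python with invented names, values, and thresholds; nothing about it reflects Uber’s actual software.

```python
# Hypothetical sketch of confidence-threshold filtering in a perception
# pipeline. Detection, the confidence values, and the threshold are all
# invented for illustration; they do not describe any real system.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the classifier thinks it saw
    confidence: float  # classifier's confidence, 0.0 to 1.0

# Below this confidence, a detection is treated as a false positive
# and ignored; the planner never reacts to it.
CONFIDENCE_THRESHOLD = 0.80

def filter_detections(detections):
    """Keep only detections the system is confident enough to act on."""
    return [d for d in detections if d.confidence >= CONFIDENCE_THRESHOLD]

# A pattern of pixels that looks vaguely human scores just under the
# threshold, so it is silently dropped -- almost always correctly.
frame = [Detection("pedestrian", 0.74), Detection("vehicle", 0.97)]
print(filter_detections(frame))  # only the vehicle survives
```

The dropping is by design: at any sane threshold, the overwhelming majority of discarded detections really are noise.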

Technologists and futurists are quick to point out that driverless cars already have a good safety record compared to human drivers as a whole. The optimists among them expect driverless cars to be an inevitability, with a record that eventually contrasts so starkly with ours that human driving will be made illegal.

But there is an ethical blind spot that no one seems to be talking about. We all agree, I hope, that a reduction in vehicle deaths is good. So let’s assume, for the sake of argument, that Google corners the market on autonomous driving and that a complete switch to autonomous vehicles reduces vehicular deaths by 70 percent. With roughly 35,000 people dying on American roads each year, that’s great news in aggregate: Google has saved some 25,000 lives with their technology!

But that also means that Google is now responsible for 10,000 deaths per year.
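For anyone checking the arithmetic, here is the back-of-the-envelope version. The 35,000 baseline is my rounding of the annual U.S. road-death toll, not a figure from any report, and the article’s numbers above are rounded from it.

```python
# Back-of-the-envelope math for the 70 percent scenario.
# baseline_deaths is an assumed round number, not an official statistic.
baseline_deaths = 35_000                        # approx. annual U.S. road deaths
reduction = 0.70                                # assumed improvement from autonomy
lives_saved = int(baseline_deaths * reduction)  # 24,500 -- "25,000 lives saved"
remaining = baseline_deaths - lives_saved       # 10,500 -- "10,000 deaths per year"
print(lives_saved, remaining)
```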

That is some 10,000 families who will wonder who is responsible for the death of their father or mother or daughter or son. Would their loved one be alive if a human had been driving? The answer is, most likely, yes, because the errors an autonomous driver makes fall into a different category than the errors humans make. While the autonomous driver will never drive drunk, it may well be fooled by a reflection or a peculiar bounce of light that would never have bothered an attentive human driver.

The Ethics of Botland

We currently have a justice system well suited to assigning responsibility to human drivers. If someone is hurt or killed, we can investigate, identify who is at fault, and deliver punishment. The responsibility in this system is acute, not diffuse. It’s not the fault of a nameless, faceless conglomerate filtered through a dozen Terms of Service signatures and reinforced by an enormously powerful legal team. It’s that guy in the handcuffs who had one too many, should have called a cab, and instead hit the family van that night.

As I think about this, I wonder if there is something in that process of discovery and punishment that helps us make sense of the violence. When the responsible parties have faces and names, we can assign blame. We know the source of our pain and can point our anger or grief in a concrete direction. This process gives us the chance to work through what has happened, to make sense of it, maybe to learn from mistakes. It gives us an opportunity to come to grips with a freak accident, a slippery patch on the road, or a reckless, inexperienced teenager.

This system gives us an opportunity to discover what happened in legal and experiential terms and ask why it happened. It gives us a chance to forgive.

We can’t forgive an algorithm. And unless there are extraordinary circumstances, we can’t forgive the programmer whose bug caused the accident, because there is no single programmer to forgive. Does responsibility lie with the author of a line of code? Or with the senior engineer who missed something in the code review? With the quality assurance team who, though they tested this thing to death, didn’t test that scenario enough times? With the product owner who could have saved lives with a slightly higher standard? Or with the regulators who didn’t set the legal bar high enough?

It’s a system filled with wax bullets and there is no way to know which gun was loaded. We can’t know who killed our friend.

In a world of automated drivers, vehicle deaths will almost certainly be fewer. But those deaths will also require us to place matters of life and death entirely in the unaccountable hands of corporations and engineers.

Unless we establish a new system that can provide facts and psychological closure for victims and their families, we’re rushing into a world that does away with a justice system that can at least attempt to meet our very human need to discover responsibility and assign blame. Its replacement is a system in which every death is met with the functional equivalent of a shrug and an excuse that amounts to, “Well, that kind of thing just happens sometimes.”

1 Response

  1. Johnny says:

    Agree, and I’ve been saying this forever. BUT, for this reason I also do not think self-driving cars will be legalized in the foreseeable future (our lifetimes?). You precisely identify the “blame problem”; now extend that to lawyers and damages. They will exploit the human feelings you describe and start putting these corporations in the red. No insurance company will take the risk of “guaranteed blame” with potentially infinite damages/liability (was it even an “accident”?). I feel certain this is why Uber settled that accident with the family within 2-3 days of it happening, for an undisclosed but presumably huge amount (trying to avoid the can of worms). It’s not that I don’t want self-driving cars (I really do), but the legal/emotional system we have set up will never allow it. And to be honest, I don’t actually think self-driving cars will even be reliable until they have “general AI,” and if that problem is solved then the whole world changes so much overnight that self-driving cars will be the least of our concerns.
