Should You Get Into a Self-Driving Car?

About six weeks ago, a car in Arizona killed a woman.

She appeared suddenly in the road, wheeling a bike, and the human driver of the self-driving Uber vehicle wasn’t paying close enough attention to regain control in time. (It’s debatable whether he could have avoided the accident even if he had.)

Immediately, Uber suspended its testing in Arizona, Pittsburgh, and Toronto, and said it wouldn’t attempt to renew testing permits in California.

But that doesn’t mean America is shelving self-driving cars. Google’s Waymo is logging more than 25,000 miles each week. (Its cars drive an average of 5,600 miles before needing a human driver to intervene. Uber was having trouble going 13 miles.) Apple, Tesla, Lyft, General Motors, and Toyota are also spending millions in the driverless car race.

“Driverless cars are inevitable,” said TGC Council member and bivocational pastor Darryl Williamson. “It’s a huge shift.”

It’s important for the church to ask—of every technological advance—“Should we provide this advanced service or product? Is this a good thing?” he said.

“We should ask if a product or service promotes or hinders human dignity and flourishing, fans our carnal natures or enhances our virtues, protects life or risks it,” he said.

But it’s not straightforward. For example, social media can promote a sense of expanded community and help people stay connected, yet they can also push us away from authentic community and isolate us on our phones.

Weighing advances is “valid conversation for Christians to have,” Williamson said. “Automation in general and artificial intelligence [AI] in particular warrants that kind of talk from the church.”

Williamson, who has spent nearly 30 years in the technology industry, explained how Christians can best think about this development.

The Arizona accident points to a moral dilemma that human drivers sometimes confront: When facing an inevitable accident, how do we choose what to do? And if those are moral decisions—say, between the woman in the road and the child on the sidewalk—how does a driverless car weigh them?

We often evaluate moral implications for the exceptional case, such as the who-should-the-car-hit scenario. But proponents of AI would argue that we should be driven by the morality of the common case—that is, the ethical advantage of reducing fatalities through automation, since driverless cars are not affected by inattentive or impaired drivers. The aggregate benefit in those common scenarios is far greater.

(Indeed, some studies predict that automated cars could save 300,000 American lives per decade.)

Presumably, some of the ethical choices are programmed in. How can a programmer make those decisions? What would a Christian programmer do?

It’s not necessarily the case that a programmer would decide, but rather that the analysis of thousands of similar scenarios over time will “train” the systems on the best option to take. Because so much of the decision-making is based on analysis of large sets of data (outcomes of previous decisions, feedback from riders, and so on), a Christian programmer or analyst can be confident the system is trained to make good ethical decisions, albeit not perfectly.

One example: Since driverless cars are continually “aware” of the location of the vehicle, they can slow down a block or two before a school zone or in a neighborhood where children may be found, or respond quickly to someone suddenly emerging in traffic. It’s hard to ignore benefits like that.

But it’s not all machine-dependent. You need analysts to look for patterns in the data, to ask questions, to find various trends. They tell the system what they think it should be looking for.

Using machine-learning-based analytics, the data will tell you things you wouldn’t know otherwise. For example, a few years ago General Electric (GE) noticed that its jet engines needed more and more maintenance work. After running through enormous amounts of data and analytics, analysts narrowed the trouble down to the Middle East and China. They found engines were getting clogged in the “hot and harsh” environments there. GE began hosing down the engines more often, and estimates that the more efficient engines will save one customer $7 million a year in jet fuel.

That’s where data help us to make those kinds of decisions—both predictive (“This is where you’re heading.”) and prescriptive (“You know, I’ve seen cases like this before. When we’ve seen folks do these things, it’s made it better.”).

Most Americans are afraid of getting in a self-driving car (63 percent, though that’s down from 78 percent last year). Why?

I suppose it’s because we’re so used to driving them ourselves. It’s intuitive to have our hands on the steering wheel.

It’s funny, because we’re not afraid of planes, which can take off, fly, adjust to weather conditions, and land themselves. Driverless cars have the potential to be so much safer—a computer isn’t going to be checking text messages or calling its mom or getting tired. A device on the outside of the car could be looking panoramically, unaffected by the dark or the weather.

But we don’t like to give up control.

What are things about advancing technology—like self-driving cars—that we can celebrate? What should we be wary about?

We can celebrate the improved efficiencies and security these technologies provide. If we only consider the large-scale reduction of automotive fatalities, automation gives us much to celebrate. We can also be excited about the freedom they provide, such as for senior citizens who cannot or no longer wish to drive safely. Driverless cars will provide them a level of independence they can only wish for today. These benefits are far-reaching. Indeed, it will eventually be deemed irresponsible not to use self-driving cars. We can celebrate that.

But we should also be cautious, because most technology companies are not primarily weighing the broader implications of their work. Almost every consideration is first, “Is this good for our company?” or “Is this profitable?” Few in the industry are asking broader social, ethical, or economic questions. That’s a place for the church to step in.

How can the church step in?

Evangelical seminaries can really help by broadening the ethical discussion beyond individual outcomes to the social and economic implications from a biblical perspective. For example, the framework the Old Testament law gives us around the function of the Sabbath (day and year) and the Jubilee helps us to know that productivity must be balanced against the social well-being of people and the sustainability of the environment.

Questions about the function of wealth in relation to community membership and responsibility are at the heart of biblical ethics, but these concerns have been largely ignored by evangelical ethicists. We need Reformed institutions, resting heavily on the biblical witness, to boldly lead in this area.

Some questions we must answer: Is it right to replace the unique contribution of human beings to the economy—advanced reasoning power—with perhaps even superior machine reasoning? If the human presence in the advanced segment of the economy is minimized to create more wealth for some and considerably less for many, is that good for human flourishing?

These are important questions, because they begin to encroach on sacred notions of liberty or the pursuit of happiness in the minds of some. Can I maximize my productivity even if it diminishes the ability of others to participate in the economy meaningfully?

The question about driverless cars is just the beginning. The broader question of AI and its effect on work/life in our culture is a vital concern that warrants our conversation, prayer, and study.