In April, Google made news yet again with the controversy surrounding the formation of an ethics board focused on artificial intelligence (AI). The board, tasked with the “responsible development of AI,” was to have eight members and meet four times over the course of 2019 to evaluate the ethical implications of AI development and to make recommendations to executives.

But a week after the board was formed, it was officially cancelled. The Advanced Technology External Advisory Council (ATEAC), as it was called, ran into considerable controversy over the inclusion of Kay Cole James, the African American female president of the conservative think tank The Heritage Foundation, as well as the inclusion of drone company CEO Dyan Gibbens. The inclusion of James was protested by employees because of her views on sexuality and climate change. The inclusion of Gibbens brought up an older controversy Google faced: the outcry from its employees last year over an AI contract with the U.S. Department of Defense. Project Maven was designed to strengthen drone targeting systems by identifying objects in video data, but thousands of Google employees protested the company’s involvement, saying: “Google should not be in the business of war.”

The race to develop ethical AI is in vogue, with companies like Google and German-based SAP—as well as government organizations like the European Union—drafting forms of ethical guidelines for AI. These ethical statements are often developed in response to the growing concern among ordinary people about the way AI is reshaping society—from how we deal with bias in AI to the future of work in an AI-driven economy. The giants of Silicon Valley are sensitive to growing criticism.

These corporate and government principles can ring hollow, however, since they’re often based on the prevailing moral preferences of the day, which shift depending on which tribe or interest is at the table. Google states that AI development should be socially beneficial and not cause harm, yet rules out any military applications that might actually save lives through more precise weapon targeting. Often these statements are based more on popular opinion and what may increase profits than on any transcendent principles of justice and human dignity. Absent a shared moral consensus, it will be hard for tech companies and civic authorities to create principles that are universally embraced.

Need for Christian Wisdom

This is why Christians should do the hard work of thinking well about new technologies like AI. We must not look to corporations or governments to do the hard-but-crucial work of ethics and morality. Our source of truth comes from a power who is wiser than we, or any interest group, could ever hope to be. That’s why our presence in the field of AI—as developers, coders, business leaders, and end users—is vital.
One foundational moral concept that Christians should bring to the AI conversation is the notion of universal human dignity. We believe all humans are created in God’s image and by nature have innate dignity and worth. In fact, each human is so valuable that God himself became one in order to save us.

As opposed to some popular views of the nature of humanity, we are not machines, nor are we simply the products of evolution over time. Regardless of what technologists like Ray Kurzweil and Elon Musk may believe, humans are created uniquely by a loving God who desires us to be redeemed and restored. Every human being, regardless of perceived worth, is knit together by their Maker in their mother’s womb. We were intentionally formed, even before we took our first breath.

Without the foundational moral truth of the imago Dei, humans will naturally treat other humans in ways that reduce their value to either their utility or to their economic contribution. But a Christian witness insists that all human life is valuable and must be treated with respect and dignity—regardless of perceived value, economic utility, or political worth. The Christian witness reminds us that no matter how advanced artificial intelligence might become, it will never replace humanity as the crown jewel of creation.
AI can already outperform humans in narrow tasks such as games, data analysis, and decision-making. But AI will never replace human beings in terms of ultimate worth. Why? Because even the most advanced AI is not a living being. It is a created tool given to us by a loving God, to honor him and to uphold the dignity of our neighbors.

Statement of Principles

Because of the need for Christian principles to be applied to discussions surrounding AI, evangelical Christians from across denominations and vocations have drafted and signed a new document called “Artificial Intelligence: An Evangelical Statement of Principles,” in hopes of grounding our understanding of this radical, life-altering technology in the Christian gospel. We hope this document transcends society’s shifting morality and offers more durable foundations for discourse about the ethical implications of AI—including the implications of AI on the nature of work, privacy, and even medicine.

Christians must not sit on the sidelines and let corporations or governments tell us what is ethical. We must proactively engage these pressing issues with biblical wisdom and moral insight, rather than responding reactively after their impact has already been widely felt. This new statement is hopefully a first step in that direction.