The FAQs: What Christians Should Know About Artificial Intelligence


What just happened?

Last week more than 60 evangelical leaders released a statement addressing artificial intelligence. The Southern Baptist Convention’s Ethics & Religious Liberty Commission (ERLC) spent nine months working on “Artificial Intelligence: An Evangelical Statement of Principles,” a document designed to equip the church with an ethical framework for thinking about this emergent technology.

“There are many heated debates in Washington, many of them important,” said ERLC president and TGC Council member Russell Moore. “But no issues keep me awake at night like those surrounding technology and artificial intelligence. The implications artificial intelligence will have for our future are vast.”

Moore added, "It is critical that the church be proactive in understanding AI. It's also critical that the church insist AI be used in ways consistent with the truth that all people possess dignity and worth, created as they are in the image of God."

What is artificial intelligence?

The term artificial intelligence (AI) was coined in 1956 by the American computer scientist John McCarthy, who defined it as "getting a computer to do things which, when done by people, are said to involve intelligence." There is no standard definition of what constitutes AI, though, because there is a lack of agreement on what constitutes intelligence and how it relates to machines.

According to McCarthy, “Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.” Human intelligence includes such capabilities as logic, reasoning, conceptualization, self-awareness, learning, emotional knowledge, planning, creativity, abstract thinking, and problem solving. A machine is generally considered to use AI if it is able to perform in a way that matches these abilities.

What are the types of AI?

The two general categories of AI are general and narrow. General AI (or "strong AI") is the capability of a machine to perform many or all of the intellectual tasks a human can do, including the ability to understand context and make judgments based on it. This type of AI currently does not exist outside the realm of science fiction, though it is the ultimate goal of many AI researchers. Whether it is even possible to achieve general AI is currently unknown. But even if it is achieved, such machines would likely not possess sentience (i.e., the ability to perceive one's environment and experience sensations such as pain and suffering, or pleasure and comfort).

Narrow AI (or "weak AI") is the capability of a machine to perform a more limited number and range of the intellectual tasks a human can do. Narrow AI can be programmed to "learn" in a limited sense but lacks the ability to understand context. While different forms of AI functions can be strung together to perform a range of varied and complex tasks, such machines remain in the category of narrow AI.

How do computers “learn”?

To be considered AI, a machine needs the ability to “learn.” One of the most common types of AI involves “machine learning,” the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions. (While all machine learning is AI, not all AI involves machine learning.)

Machine learning usually involves the processes of training and inference. In the training phase, machines are first fed data and information in the form of observations and real-world interactions. The machine looks at the data and makes generalizations from the examples provided. The machine then uses algorithms (sets of guidelines that tell a computer how to perform a task) to make inferences (i.e., conclusions reached on the basis of evidence and reasoning).

A prime example of machine learning is teaching computers to identify images, such as recognizing human faces. During the training phase, programmers have the computer process a large dataset containing thousands or millions of images of human faces. The machines are then taught to expect certain properties of faces, such as the average distance between the nose and eyes, or between the ears. The computer may then break the images down into small sections and look for patterns based on color, shading, and so on. Through this process of training and inference, an AI program can become better at learning which attributes are most relevant to recognizing faces.
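The training-then-inference cycle described above can be sketched in a few lines of code. The example below is a simplified illustration, not a real face-recognition system: it uses made-up numeric "measurements" (standing in for properties like the distance between facial features) and a basic nearest-average rule to label new examples.

```python
# Sketch of machine learning's two phases: training (generalize from
# labeled examples) and inference (apply those generalizations to new data).

def train(examples):
    """Training phase: compute the average feature values (a "centroid")
    for each label seen in the example data."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(features))
        sums[label] = [p + f for p, f in zip(prev, features)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def infer(model, features):
    """Inference phase: label a new example by finding which learned
    average it sits closest to (smallest squared distance)."""
    def dist(label):
        return sum((c - f) ** 2 for c, f in zip(model[label], features))
    return min(model, key=dist)

# Hypothetical measurements: [eye distance, nose-to-mouth distance].
training_data = [
    ([2.0, 1.0], "face"),
    ([2.2, 1.1], "face"),
    ([9.0, 7.0], "not_face"),
    ([8.5, 7.5], "not_face"),
]

model = train(training_data)
print(infer(model, [2.1, 1.05]))  # prints "face"
```

Real systems use vastly larger datasets and far more sophisticated algorithms (such as neural networks), but the basic division of labor (first generalize from examples, then draw conclusions about new cases) is the same.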

What are positive examples of the use of AI?

Many current uses of AI appear to be rather mundane, such as when you ask Apple's Siri or Amazon's Alexa to tell you the latest sports score. These machines use voice-recognition AI to translate your spoken words into a searchable format. For most people this will be nothing more than a time-saving novelty. But for those with disabilities, such AI-enhanced features could provide a greater degree of independence and autonomy.

In the near future AI may also transform such fields as health care. For instance, AI may soon allow for MRI scanning that is considerably faster and yet still provides an image with the required accuracy. As Rob Verger of Popular Science notes, patients would spend less time in machines and imaging centers, and hospitals could do more tests per day. By driving down the time and cost of MRIs, doctors could order one of those scans instead of a traditional X-ray or CT exam—and save the patient from further exposure to radiation.

What are negative examples of the use of AI?

As with every other technology, AI can be used in ways that are harmful or lead to unintended consequences.

In China, the government is using AI based tools to increase the power of the authoritarian state. “With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future,” writes Paul Mozur in The New York Times. “Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.”

In the United States, Facebook was recently sued by the Department of Housing and Urban Development for using an AI-enhanced system to allow advertisers to restrict who is able to see ads on the platform based on characteristics like race, religion, and national origin.

What are the moral concerns about AI?

When machines begin mimicking human intelligence they can potentially be engaging in moral behavior, making them artificial moral agents (AMAs). As philosopher James H. Moor explains, from a machine ethics perspective, you can look at machines as being:

• Ethical-impact agents — machine systems that have an ethical impact, whether intended or not, on humans, animals, or the environment.

• Implicit ethical agents — machines constrained to avoid unethical outcomes.

• Explicit ethical agents — machines that have algorithms to act ethically.

• Full ethical agents — machines that are ethical in the same way humans are (i.e., they have free will, consciousness, and intentionality).

Since they are likely to have an influence that is not ethically neutral, most AI machines will be some type of ethical-impact agent. Few machines, however, will ever reach the level—if it’s even possible—of full ethical agent.

The area of concern is whether they are implicit or explicit AMAs. Often it can be difficult to draw sharp lines of distinction. Consider, for instance, self-driving cars—a type of AMA—which need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. Should self-driving vehicles be programmed to always minimize the number of deaths? Should they be programmed to prioritize the lives of their passengers?

AI can also affect the moral behavior of humans. An example is how AI technology could be used in sex dolls or sex robots. Although sex dolls have been available in the United States since at least the late 1960s, advances in technology have led to the creation of sex robots that can move, express emotions, and even carry on simple conversations. The result is that such AI-enhanced sex dolls could reduce male empathy by teaching men to treat women (and sometimes children) as objects and blank canvases on which to enact their sexual fantasies. (See also: The FAQs: Christians and the Moral Threat of Sex Robots.)

How should Christians approach and think about AI?

Because AI will affect so many areas of life, Christians need to be prepared to maximize the benefits of such technology, take the lead on the question of machine morality, and help to limit and eliminate the possible dangers.

“As Christians, we need to be prepared with a framework to navigate the difficult ethical and moral issues surrounding AI use and development,” says Jason Thacker, who headed the AI Statement of Principles project for ERLC. “This framework doesn’t come from corporations or government, because they are not the ultimate authority on dignity issues, and the church doesn’t take its cues from culture. God has spoken to us in his Word, and as his followers, we are to seek to love him and our neighbors above all things (Matt. 22:37-39).”