Earlier this year, The New York Times broke the story of Clearview AI, a little-known facial recognition startup that sells its controversial technology to more than 600 law enforcement agencies and police units around the nation. CEO Hoan Ton-That says the technology's benefits include allowing police to identify criminals quickly and efficiently, often from a single photo uploaded to the service.
Facial recognition technology is already used in varying capacities throughout our nation. With an estimated 18,000 U.S. departments currently using the tool, many officers claim they have been able to break open cold cases, ranging from child sexual abuse to assault, as well as stop criminals from leaving the country illegally. There are countless beneficial uses of this technology, but also many unknowns and concerns over issues like privacy and bias. Used with wisdom and abundant transparency, this type of technology might indeed revolutionize safety in our communities. But adopted too quickly, or without adequate understanding of its various possibilities, it might also lead to dangerous or even deadly injustices.
When you hear “facial recognition technology,” perhaps you think of Face ID on the iPhone or the Clear kiosks used in major airports. Many people fear facial recognition because of the potential for privacy invasion and a growing Big Brother surveillance state: Who is watching us, what do they know, and is my data safe? But another major ethical issue with this technology is the potential for racial bias in the systems themselves, which can lead to racial profiling and harm to our brothers and sisters of color.
Facial recognition systems are built on large sets of image data fed into an artificial intelligence (AI) system, which can then surface potential leads and photo matches with varying levels of accuracy. But if there is any problem with the quality of the data, the number of inputs, or how the system is used, it may yield false positives with devastating consequences in high-stakes situations.
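The base-rate arithmetic behind that warning is worth making concrete. The sketch below uses purely hypothetical numbers (the population size and accuracy rates are illustrative assumptions, not figures from any vendor or study) to show why even a seemingly accurate system can flag far more innocent people than genuine suspects in a one-to-many search:

```python
# Illustrative base-rate arithmetic (hypothetical numbers): even a highly
# accurate face-matching system produces mostly false matches when it
# scans a large, overwhelmingly innocent population.

def expected_matches(population, true_suspects, true_positive_rate, false_positive_rate):
    """Return (expected true matches, expected false matches) for a one-to-many search."""
    innocents = population - true_suspects
    true_matches = true_suspects * true_positive_rate
    false_matches = innocents * false_positive_rate
    return true_matches, false_matches

# Suppose 10 genuine suspects in a gallery of 1,000,000 faces, with a
# 99% true-positive rate and a seemingly tiny 0.1% false-positive rate.
tp, fp = expected_matches(1_000_000, 10, 0.99, 0.001)
print(f"Expected true matches:  {tp:.1f}")   # ~10 genuine suspects found
print(f"Expected false matches: {fp:.1f}")   # ~1,000 innocent people flagged
```

Nothing in this sketch depends on the specific numbers chosen; the imbalance persists whenever the scanned population vastly outnumbers the genuine suspects, which is exactly the situation in a police gallery search.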
Given the possibility of racial bias and profiling if these tools are overly relied on or used indiscriminately by police or government officials, IBM and Amazon recently announced they were either stopping all work on facial recognition or putting a year-long moratorium on facial recognition tools for police. They join many states and municipalities that have banned, or heavily regulated, facial recognition use by police. Pressure is also ramping up at the federal level for a comprehensive bill regulating this technology in policing.
Given the breakdown of trust in our communities and the known issues of racial bias with facial recognition technologies, it is wise and loving for us to slow down the implementation of these powerful tools in our communities, taking time to debate the ethics surrounding them—as well as vetting them sufficiently and accounting for bias—before they’re deployed en masse.
In December 2019, the National Institute of Standards and Technology released a landmark report demonstrating the inherent bias these systems can inadvertently exhibit. The researchers evaluated 189 different algorithms from across the industry and found some disturbing trends. The team saw “higher rates of false positives for Asian and African American faces relative to images of Caucasians,” especially those of African American women. High false-positive rates were also found among Native Americans, Alaskan Indians, and Pacific Islanders.
Misidentifying a photo of a person on Facebook or on your phone’s photo app is a fairly innocuous facial recognition error. But misidentifying an individual in an often volatile policing situation can be deadly, especially for people of color. Every technology—especially those driven by advanced AI systems—will have flaws, bugs, and even biases because they are created by fallible human beings. But as AI ethics professor Mark Coeckelbergh writes, “While problems of bias and discrimination have always been present in society, the worry is that AI may perpetuate these problems and enlarge their impact.” Even if these biases aren’t intentional or malicious, they can be deadly in the wrong or untrained hands.
In high-stakes decision-making, we must be careful not to become overly reliant on fallible technologies like these. As Microsoft president Brad Smith writes in his book Tools and Weapons, even in a democratic and free society our “police might rely excessively on this tool to identify a suspect without appreciating that facial recognition, like every technology, doesn’t always work perfectly.”
Often in a rush to deploy technological innovations, we fail to slow down and consider how technologies might have unintended consequences. This tendency to rush headlong into innovation, without adequate ethical reflection, is thankfully being recognized by computer scientists, corporate CEOs, and religious leaders alike.
As we seek to reflect on these ethical contours, we should not gloss over the uniqueness and value of every human life in exchange for some utilitarian benefit in society. From a Christian perspective, this dignity is seen most clearly in the doctrine of the imago Dei (Gen. 1:26–28), which reminds us that no human life is more valuable than another; no human life is expendable. This dignity, bestowed by our Creator, must drive how we use technology in the age of AI. Real human lives are at stake: our brothers, sisters, mothers, fathers, children, and friends. A strong insistence on the imago Dei can guard us against the dehumanizing tendencies of a society that prioritizes utility and efficiency over human dignity.
Christians are also called to a simple, yet infinitely deep, ethical standard by Jesus himself: “Love your neighbor as yourself” (Mark 12:31). This framework leads me to sacrifice my privilege, convenience, comfort, and safety in order to love my neighbor in a way that defends their inherent dignity and protects them from injustices, marginalization, and abuse. We should thus cry foul on anything that harms or discriminates against a brother or sister of color, whether it happens intentionally by a human or unintentionally by AI.
Using facial recognition technology in policing—especially amid an environment of deep distrust—is unwise and dangerous at best, and deadly at worst. We need more public debate about both how we plan to use facial recognition and the ethical issues at stake. There will come a time when a beneficial, vetted use of this technology can truly make the world a safer and more just place. But that time is not yet here, so patience and caution—for the sake of the neighbors we’re called to love—are necessary.