Melissa Kruger and Courtney Doctor talk with Michael Graham about what Christians should know about artificial intelligence.
They discuss the benefits it could bring to society and how they personally use it. They consider theological categories and questions for how to evaluate which uses are helpful and which are dangerous. Finally, Graham explains the Christian Benchmark project and how it exposes the inherent biases in AI platforms.
Key Terms:
- Generative AI: Large language models (LLMs) that can simulate conversation and create audio, video, and text based on prompts (examples include ChatGPT, Anthropic's Claude, and Grok).
- Agentic AI: Systems that are semi- or fully autonomous and thus able to perceive, reason, and act on their own.
Resources Mentioned:
- What Is Tri-Perspectivalism? by John Frame
- Mindset by Carol S. Dweck
- AI Christian Benchmark Project
Discussion Questions:
1. Before listening to this episode, what was your view of AI (benefit, threat, or neutral)? Has that perspective changed?
2. What are some ways AI can be used for good in work, ministry, or everyday life?
3. Are there certain uses of AI that concern you? Why?
4. Why is it important to realize that AI isn’t a neutral source? How does that shape the way you interact with it?
5. How can we help younger generations develop discernment in addition to skills when it comes to AI?
6. What does it look like to trust God’s sovereignty in times of rapid cultural and technological change?
7. What’s one step you would like to take in using technology intentionally and faithfully?
Transcript
The following is an uncorrected transcript generated by a transcription service. Before quoting in print, please check the corresponding audio for accuracy.
0:00:00 – (Michael Graham): So it's important for us, in an ongoing fashion, to ask hard questions back of artificial intelligence. In other words: are you giving me answers that are theologically reliable? And when I ask you ethical questions, are you going to give me answers that are in accordance with God's word? When we ask questions about the Bible, are you going to give me accurate understandings of those things?
0:00:35 – (Melissa Kruger): Hi, friends. Welcome to the Deep Dish, a podcast from the Gospel Coalition where we love having deep conversations about deep truths. I'm Melissa Kruger, and I'm here with my co-host and friend Courtney Doctor. And today we are very excited because we have another one of our TGC colleagues with us, Michael Graham. He is the program director for the Keller Center here at TGC. But Michael Graham just knows a lot about a lot. That's what I think about Michael Graham. He's got a lot of hidden talents. I feel like you're like Inspector Gadget with your Go Go Gadget machines. You just know so many different things about different things. So I always love talking with Michael, and today we're going to be talking about AI, which is something I have had a ton of conversations with a ton of people just around tables about, Courtney. But these are new conversations. I was not having these conversations five years ago.
0:01:34 – (Melissa Kruger): I think I thought it was a sci-fi world that was never going to happen. What about you, Courtney?
0:01:40 – (Courtney Doctor): Well, yeah, I mean, I think that that's exactly it. Like, when I first even heard about it, I didn't even know what AI stood for when it first started becoming something that was entering into conversations. I mean, I've learned just about everything I know about AI from Mike. And just to, like, double click on that whole thing: he knows a lot about a lot of things, including Disney. So not only is he really smart and wise, but he's also a ton of fun.
0:02:09 – (Courtney Doctor): So there’s a ride at Disney that I’m obsessed with, and every time he takes his kids there, he sends me a picture of them. So it just like creates. You know, we’ve had these episodes on envy and coveting, and I’m always like, ugh, I wish I was there. But anyway, AI, let’s stay focused. So AI, you know, is something that now I’m assuming that everybody listening has some level of awareness of like, just this thing that’s out there.
0:02:33 – (Courtney Doctor): Maybe you use it. Maybe it's become an integral part of your work or your life, or, you know, there's a lot of fun things you can do with AI. So even if you don't use it, you still hear a lot about it. And everybody listening is probably engaging with AI at very different levels. So one reason that we wanted Mike to be on here for this conversation is because he is actually somebody who knew what AI was five years ago.
0:03:03 – (Courtney Doctor): I'd actually love to hear, Mike, when you first started hearing about it, but you've done a lot of thinking about it. And so I want you to start us off. Tell us when you first heard about AI. And what do we even mean when we say AI? And why should we, as, you know, a predominantly Christian women audience, why should we be thinking about AI?
0:03:24 – (Michael Graham): Yeah, so AI is a pretty complex field. It's been something that's been on my radar since about the late 1990s. Um, and back then... the 1990s, not the 1800s, yeah.
0:03:36 – (Melissa Kruger): Did computers exist then? What are you talking about?
0:03:39 – (Michael Graham): They did, yeah. So my dad was a computer software engineer, and I was doing research, making stock market prediction algorithms. And so one of the tools that we used for that is...
0:03:52 – (Melissa Kruger): Of course you are, because that’s what we were all doing in the 90s.
0:03:56 – (Michael Graham): Yeah, it was something called machine learning. And machine learning is when you feed a computer lots of information and you look for patterns. And so, you know, as a field, artificial intelligence has been around for over 50 years in a variety of different capacities. But typically when people are using that term today, what they mean specifically is probably what we would call technically a large language model.
0:04:23 – (Michael Graham): Large language model, or LLM for short. So LLMs would be things like ChatGPT, Anthropic's Claude, Google's Gemini, Elon Musk's Grok, these kinds of things. These are all LLMs. Now, if you go one level up from LLMs, there's a form of AI called generative AI. So LLMs are a type of generative AI, but there are other generative AIs that deal with audio, images, and video. So audio, images, video, and LLMs, which are text.
0:05:04 – (Michael Graham): All of those are basically generative AI. So typically when somebody is thinking about AI, they're probably thinking about LLMs in the form of just kind of general conversation. One of the things that will be coming up in, you know, the next six to 12 months that will be more on people's radar is a different form of AI called agentic AI: A-G-E-N-T-I-C. Agentic. Now, LLMs are more like a personal assistant, where you kind of say, hey, I'm going to bounce this thing off of you.
0:05:41 – (Michael Graham): We might have a little bit of back and forth, you know, in a kind of text-based conversation. Agentic AI is more like having a coworker: you give it a series of tasks, you set it loose, and a day later or several hours later, it brings finished products back to you. So if you think the current era of large language models has been interesting or disruptive, the agentic era is going to be far more disorienting.
0:06:17 – (Courtney Doctor): Is that the one where like you might be talking with some, you know, someone, but then you find out it’s actually not someone, it’s AI. Is that agentic?
0:06:28 – (Michael Graham): Are you talking about, like, a customer service representative at a corporation you called up? Yeah, that would be more of the generative AI voice version, where basically a large language model combined with an audio model are working together. Agentic AI is vastly more powerful than even that technology.
0:06:56 – (Melissa Kruger): Okay, so I’m writing down, I want to, I want to recap because this is like learning a new language. So I think this is important. We’ll put this in the show notes because sometimes I think you need definitions to even follow this type of conversation. So you’re saying generative AI includes large language models which are LLMs. So under the umbrella of generative, you have LLMs, which is what most of us use on a regular basis. And you mentioned some of the types you have, like Claude, you have ChatGPT, which you told me the other day meant I was a boomer, because that’s still what I was using. That’s okay. We’re still friends.
0:07:36 – (Melissa Kruger): And then GROK is. What is that Musk’s?
0:07:42 – (Courtney Doctor): Is that.
0:07:42 – (Michael Graham): Yeah, it’s Elon Musk’s. Yeah.
0:07:44 – (Melissa Kruger): Yeah. And you know, and there are different countries have different ones as well. We’ve talked about some of those as well. So those are all in that type. And you said the other word is agentic. Agentic. Is that. Am I saying it correctly?
0:07:56 – (Michael Graham): Yeah, like eight, like the word agent. And then I see on the end.
0:08:00 – (Melissa Kruger): Okay, so I’m writing this down.
0:08:01 – (Michael Graham): That’s where the AI. Yeah.
0:08:03 – (Courtney Doctor): Huh.
0:08:04 – (Michael Graham): That’s where the AI just functions like an agent.
0:08:07 – (Melissa Kruger): Okay. And so let me just ask you, give me something. The agentic can do that. The generative can’t do, Give me a, give me a like example because I, I, we just actually went to a fast food restaurant the other day and Mike looked at me, my husband Mike looked at me and was like, I just ordered basically through an AI. Yeah, it was, it was this whole new ordering thing at, you know, I won’t say which fast food restaurant we were driving through, but we, we go to order and he realized it was a computer talking to him.
0:08:41 – (Melissa Kruger): And so you’re saying that’s generative. Well, give me an example. You know, like, how would this, would this happen to us at the mall? Would this happen to us in real life? Or is it something we’re going to use? Does that like, meaning? Yeah.
0:08:55 – (Michael Graham): So imagine you work for, you know, a company and your job in that company is to develop sales leads. And so you’re, you’re looking to generate business for whatever the company is that you work for. In the past, you do that, all that stuff by hand. You know, you’d figure out, okay, what is our marketing funnel? And you put stuff in the top of the funnel and you work people, you know, who might be, who might need your company’s businesses and services down to the bottom of the funnel and eventually those lead to sales.
0:09:29 – (Michael Graham): Well, in the agentic AI era, let’s say you could probably dramatically automate a lot of that process by basically helping, you know, basically telling the AI platform that, hey, our customers look like this. We want you to go throughout the entire Internet and find people that look like this that have need for these kinds of products and services. We want you to build, you to build a spreadsheet that has the names, addresses, phone numbers and emails of all of these people.
0:10:05 – (Michael Graham): We want you to then take that spreadsheet and upload it into say, a platform like HubSpot or some other customer relationship manager CRM. And then we want you to then begin making robo emails or robocalls to those people to begin to qualify those leads. And then the people who make it through that qualification process, you’re going to put in at the bottom of the funnel and then we’ll have human interactions with those people.
0:10:37 – (Michael Graham): So, so that’s a lot of work. Yeah, that’s like, I mean, we’re talking like whole departments worth of people, you know, who would be doing, you know, kind of qualified lead management or marketing or these different kinds of things that either either those jobs will be eliminated or we won’t need as many of them, or we just end up doing business a lot faster than what we would have before because more people end up in the funnel. And that’s just one example of, you know, you could pick, you could go department by department of any corporation and basically talk through how agentic AI is going to end up changing their workflows.
0:11:21 – (Michael Graham): And it will change the values you’ll always have need for humans, but the values that humans bring to the department that they’re in is probably going to change depending on which department that you’re in. Most of that change though is probably not here yet, but is probably six, 12 or 18 months, depending on where, like what kind of department you work and how tech forward that place of business is.
0:11:52 – (Courtney Doctor): Oh, that is. Okay, so we’re going to get there. But right now, so you’ve really set us up well, understanding these different types of AI and kind of ways we interact with them even now. But so I want to back up just a little bit. And you know, we don’t want to, we’re not fear based, right? Like we know that God is on his throne, sovereign over all things, and this is the source of all wisdom and knowledge. And so I want to talk about some positive ways. I mean there is so much about this that, that really highlights even human ingenuity and like the creativity that’s, that’s part of our image bearing and there’s, there’s so much.
0:12:27 – (Courtney Doctor): So I want to talk about some positive uses of AI, especially from a Christian perspective, like what are some potential good things that AI can help us with as we think about, you know, the kingdom of God and the advancement of the gospel. And, and what are some ways that, that we would, we would want to embrace this.
0:12:46 – (Michael Graham): So let me give a little bit of a theology here. Okay? So there’s two kinds of work. There’s toil and then there’s labor. Toil is the kind of work that’s downstream from the fall and a product of the curse. And then there’s labor, which is part of the cultural mandate that we have to have dominion and be fruitful and subdue the earth. And so AI is pretty good at a lot of things that are in that toil category.
0:13:17 – (Michael Graham): And so, you know, one of the biggest questions that I ask myself of whether I should use AI in a particular situation is the question is the kind of work that I’m going to ask it to do, is it eliminating toil or is it eliminating labor? And I’m far more inclined to use it if, if it’s eliminating toil. And so the first thing I would say that artificial intelligence is good for and should be used for is the kinds of things that eliminate toil from our work.
0:13:52 – (Michael Graham): And so, and zooming out also still on the same question, there’s two things that are important from a theological standpoint of why we shouldn’t be extremely doomer and completely set against artificial intelligence. The first of those doctrines is the providence of God. And the providence of God being this doctrine that says that, you know, God, the Trinity, Jesus is in control of every single thing throughout all of creation, all of history, all of time, all of space.
0:14:28 – (Michael Graham): God’s in control of that. There’s no surprises for him. Artificial intelligence is no surprise to him. And whatever happens in the future of time is no surprise to him. God is author and he is at the, you know, we all believe in Colossians 1 and everything that’s there, you know, the preeminence of Christ over all of creation, all of that. The second thing is the doctrine of common grace. The doctrine of common grace says that, you know, obviously we have grace that’s salvific.
0:14:59 – (Michael Graham): You know, we need to believe in the life, death and resurrection of Jesus in order to be a part, you know, in order to be justified and pronounced righteous and, you know, be regenerated and have the Holy Spirit and be adopted in God’s family. But there’s also this doctrine of common grace which states that people who don’t possess the Holy Spirit and who aren’t part of, you know, adopted into God’s family, they can still do things and make things that accord with truth, goodness and beauty.
0:15:34 – (Michael Graham): And the reason why the doctrine of common grace works is because every human it bears the image of God. And so this is why people who don’t have the Holy Spirit and who are pagan can do things and create things that are tremendously beneficial for human flourishing, for the promotion of shalom. And they can create things that can even dramatically speed up the advance of the gospel. And so there’s all sorts of things, both from a work standpoint, a home standpoint, and from a church and ministry standpoint, all sorts of use cases that would be tremendously helpful for either, for advancing truth, goodness and beauty in each of those domains.
0:16:22 – (Michael Graham): And obviously one of the biggest things that will come from the technological advances of artificial intelligence is going to be the data mining that occurs through electronic healthcare records. And there’ll be all sorts of ways in which there’ll be new ways to attack various cancers, there’ll be new ways to look at our genetics and like, see risk factors that we could address much earlier. There’ll be ways that we can, you know, all sorts of diagnostics that we would only catch after something went bad or we have these problematic symptoms. We’re going to be able to catch those things a lot earlier now. There’s going to be a whole new. A whole new set of bioethical issues that come, you know, from some of those things and, you know, some of those things will be. Will be good and things that we should pursue, and there’ll be other things that will be problematic.
0:17:12 – (Michael Graham): But I think that it is not unrealistic to think that even, you know, even for, for us on the, on this call. You know, in, in our middle age, I still think that most of us will probably experience a few extra years of our life because of the developments that come from AI assisted medicine over the next few years. And so I’m tremendously encouraged by what will come from that, because when people live longer, we have more time to be able to communicate the truth, goodness, and beauty of the gospel to them.
0:17:49 – (Michael Graham): And so advances in medicine are important for advances of the church and advances of the gospel.
0:17:57 – (Melissa Kruger): That was actually huge for our family this past fall. I’ve talked about it on here before. My mom got sick and ended up dying. But while we were in the hospital, it was so helpful. My brother would put everything the doctors told us into AI and just, you don’t have a lot of time with a doctor in the room a lot of the time. And it just gave us good questions to ask. He’d be like, what questions should we ask the doctor when they’re in there? And it was just super helpful.
0:18:25 – (Melissa Kruger): When I think about a technology like AI I’m always reminded of the movie. Did you all see Apollo 13 back in the day? This is one of my favorite. My son is literally a rocket scientist. So this was a favorite movie in our house. And we just saw the launch a few weeks ago of Artemis and everything. And I think about that. What strikes me more because I used to be a math teacher is they were doing that with slide rulers, okay? That was the technology they were using.
0:18:55 – (Melissa Kruger): Slide rulers were sending a man to the moon with slide, slide rulers. But then when I was a teacher, we used to really limit the use of calculators because we, we knew that we needed kids who actually understood multiplication, even though there was a calculator there to use it. They, they needed to understand actually what multiplication was. They needed to understand division. So there were many years in there, you didn’t let kids use calculators, and then eventually you let them use It.
0:19:23 – (Melissa Kruger): So I. I want to ask both of you this question. Where do you let yourself use AI in your daily life? What are some of your favorite uses for it? And where do you say, you know what? I’m not gonna let myself use it. I wanna make sure I retain my thinking. Because we. We see the studies that come out that we can actually lose certain abilities if we don’t keep practicing them. And, you know, like, that’s why teachers make you do those multiplication facts all through third grade over and over and over again, because you do need to just sometimes know what 7 times 3 is.
0:19:57 – (Melissa Kruger): Yeah, you could pull out a calculator, but we need your brain to work. And so I want to ask both of you, where do you find it helpful? Where do you guard yourself and say, I’m not going to do that because I actually care about my brain and want to make sure it still works at some level.
0:20:12 – (Courtney Doctor): And I would say it’s not just caring about my brain. It’s caring about my, you know, my Christian formation, my becoming conformed more to the image of Christ and the way that the labor that I’ve been given. So, Mike, I’d never heard that labor toil rubric before, but that’s really helpful because the labor that we’ve been given, the work that we’ve been given to do, is actually good for us, too. And so I don’t want to remove that.
0:20:43 – (Courtney Doctor): So let’s see. What have I used it for? Well, I took a photograph of myself and I put it in, and I said, show me what I look like with gray hair. And then I was like, show me what I look like with white hair. And then show me what I look like. You know, so, yeah, so I definitely wanted to use it for that. And I have decided not to go gray as a result of my genital. So what else? Let’s see.
0:21:09 – (Melissa Kruger): Okay, I want to post this. I feel like that means in this. Yeah.
0:21:12 – (Courtney Doctor): No, I’m not sharing it. No, we have to.
0:21:15 – (Melissa Kruger): No pressure.
0:21:17 – (Courtney Doctor): Courtney. Oh, my word.
0:21:19 – (Melissa Kruger): If you’re a deep dish listener, please share with us how much you’d like to see Courtney with gray and white.
0:21:25 – (Courtney Doctor): My word. It is not. It was a little shocking. It was a little shocking, but I want to see.
0:21:32 – (Melissa Kruger): I’ll share.
0:21:34 – (Courtney Doctor): You’ll share Mine is what you’ll share. That’s why. Because you’re not a safe friend. I’m not sharing my safe.
0:21:41 – (Melissa Kruger): I know, I know, I know. Oh, my gosh.
0:21:45 – (Courtney Doctor): I’ll turn around. It’ll be on. It’ll be on social media. Okay. So I don’t use it for the creation of content. Like, if I’m writing. If I’m writing a Bible study or writing a book or writing. Like, I don’t use it for that because I actually need to. It forms me as I wrestle with the text or wrestle with, you know, the thought that I’m trying to convey. I have to work it out because it’s working on me, especially when I’m dealing with scripture.
0:22:20 – (Courtney Doctor): I did. I. My daughter, one of my daughters is in the middle of finding roommates. And so I put. I. I asked chat GPT. Is that not cool anymore? Is it supposed to be another. Maybe it’s another platform now.
0:22:34 – (Melissa Kruger): We’re old. Yeah, we’re old.
0:22:35 – (Courtney Doctor): I know, I know. I put in what are the best questions to ask, you know, a potential roommate? And she used some of them like they were helpful. So kind of idea generating sometimes I think can be. Can be helpful. But, Mike, I want to hear. I bet you have not put in what you look like with gray hair. I imagine that that’s not something you’ve done, but you might today.
0:22:57 – (Melissa Kruger): I don’t need your pictures, Courtney. I can take a picture I have of you and say this word.
0:23:02 – (Courtney Doctor): You need to stop it.
0:23:03 – (Melissa Kruger): I don’t need you to stop it. Oh, I just realized I can do this.
0:23:09 – (Courtney Doctor): I save all of you that think Melissa’s the nicest of us, that she’s the nicer out of the two. I just want to go on record. She’s not.
0:23:17 – (Melissa Kruger): I’m being nice to our audience. They all want to see you with gray hair. I’m just being nice. I’m just giving people what they want.
0:23:25 – (Michael Graham): Yeah. I’ve never put an image of myself into a platform.
0:23:30 – (Melissa Kruger): We’re gonna make Michael Mickey Mouse. We’re gonna make Mickey Mouse.
0:23:35 – (Michael Graham): I may have put, like, an X ray or something in there before. Yeah. Okay, so use cases. So I have a pretty simple grid of how I think about just about everything. Okay. So everything boils down to triangles for me. Okay. And this comes from a theologian who used to be a professor at RTS Orlando named John Frame. And John Frame has. Has this thing called tri perspectivalism. And here’s how it works. It’s, you know, because humans are made in the image of God, and because God is triune, we reflect his triunity.
0:24:13 – (Michael Graham): And what that looks like is basically thinking in our head, feeling in our heart, and doing in our hands. So thinking, feeling, and doing head, heart, and hands. So because I’m made in God’s image. And because I use AI, AI use cases are largely going to follow in one of those three categories. So I’m going to use AI to help me think, I’m going to use AI to Help me feel, or I’m going to use AI to helping me do.
0:24:43 – (Michael Graham): And so the first question that I have in terms of am I going to use AI here and then if so, for what? The first question is this labor or is this toil? And then the second question I ask myself is, am I using this for thinking? Am I using this for feeling or am I using this for doing? And I would evaluate how I would use AI for thinking, feeling, and doing very differently. I do not use AI hardly at all for feeling.
0:25:18 – (Michael Graham): Using AI for feeling is, I think it’s dangerous. And I don’t like using AI for anything where a in person relationship would be better or superior or, or even a possibility. So I’m not looking to get wisdom out of AI because I don’t think you can get it. You can get facts, but you can’t necessarily get wisdom because artificial intelligence lacks both embodiment, it lacks experiences and it lacks incarnation.
0:25:53 – (Michael Graham): And so. So from a, so from a feeling standpoint, I don’t hardly use, I don’t use it for that at all. I think it’s very dangerous to use AI to even get relationship advice or hey, I’m in this conflict or I’m in this particular parenting situation. I think those are things where you really, you need to resist the shortcut and you need to go to other people who are maybe mentors or disciplers and get that information there instead.
0:26:22 – (Michael Graham): So when I’ll use AI a lot for thinking, when I’m trying to think of like outlines or where I’ve already done cognitive work, but I’m looking for, and I’ll put all the cognitive work that I’ve already done for that thinking in and I’ll ask something to the effect of what else should I be considering here? Or you know, or do you have a better way of organizing this? So that way I’m not shortcutting the, you know, the kind of cognitive work and the kind, you know, because it’s like people don’t. I’m not hired by the Gospel Coalition so that I can just offload my work into an AI platform.
0:27:07 – (Michael Graham): I’ve been hired here because I have a set of experiences and character and wisdom that are sought to be utilized by that ministry for the, for the role that I’m in. And so I don’t want to be using artificial, like, like, I mean, what is the point of even hiring somebody if all they’re going to do is take their work and put it into an AI platform? And then on the doing side of things, it depends on what the, you know, what the use case is, you know, if it’s work doing stuff.
0:27:47 – (Michael Graham): Well, the first question that I have is when I’m given a task by somebody else, are they expecting me to bring all of the wisdom and knowledge and insight that I have in my embodied person to this, to this project, to this task, to this question, or are they looking to, are they wanting me for like my prompt engineering skills? And I think more often than not what they want is no, I want to know what you think.
0:28:18 – (Michael Graham): And so I think it’s important in our work that anytime we use artificial intelligence for anything that we need to have total and complete transparency of like, hey, here’s this report. This particular section right here. I used this particular platform in this particular way. Here’s why I used it for these reasons. And it produced the, you know, these things. I think transparency in that is important because I think as, as time goes forward, one of the most important things that we have is trust.
0:28:53 – (Michael Graham): And so, you know, you don’t. It, it has never been easier to burn trust than today, especially in the workplace or in the church. And so if you begin offloading things onto these platforms and you’re lacking transparency on them, this is a very quick way to burn trust. And especially if you’re young and let’s say you have a co worker, you know, who maybe is less experienced with AI and they, they’re not as fine tuned to like, see it, know it when they see it, like, well, a young coworker who’s interacting with an older coworker who’s, you know, outsourced some particular task to say chatgpt and copied and pasted it.
0:29:37 – (Michael Graham): I mean, this is a very quick way for the older person to completely destroy trust in the work environment or in a relational environment with the younger person, especially if there’s a lack of transparency on what’s been done. So there’s a lot of landmines that are new and evolving and definitely things to be mindful of.
0:30:04 – (Courtney Doctor): Well, as we’re talking about that, so you’ve kind of alluded to this, but I want to just make sure I ask you really explicitly because I think it’s a, I think it’s an important thing. So as, as most people that I hear talking about AI kind of the, the leading concern seems to be like this prospective job loss. Like that seems to be kind of the, they’re going to take. AI is going to take all of our jobs.
0:30:29 – (Courtney Doctor): And, and you’ve already touched on that a little bit. But, but do you actually think that’s the, that should be the main concern? Like we’ve already said like AI has some beautiful benefits and some, some really fun uses. Like there’s a lot of ways we can use it in, in redemptive ways and, and helpful ways and ways that we should kind of learn to press in. I love thinking about head hard hands and you know, saying we’re going to stay away from heart wisdom issues.
0:30:59 – (Courtney Doctor): But, but if we’re going to address some of the concerns too, because they are, there are concerns. Do you, do you really identify job loss as something that should be one of our leading concerns or would you locate that somewhere else?
0:31:15 – (Michael Graham): So the three biggest concerns that I have, job loss is one of those three things. A second issue is humans going from learning primarily through primary sources, so reading books, hearing directly from a teacher to secondary resources where those kinds of things. A model has been trained on it, but it’s going to give you a regurgitated version of, it’s crunched on a lot of primary sources, but it’s giving you secondary source material.
0:31:50 – (Michael Graham): So I, I have concerns about how that is going to go for, you know, the human race. The third thing, the third concern I have is for children. So, you know, we’re all adults and we’ve, you know, and we’ve only been playing around with this thing for like 6, 12, 18 months, you know, by and large as a culture and society. And we’re already seeing some of the cognitive decline, some of the, you know, some of the weaknesses of, you know, overuse of these tools and our brains are fully developed.
0:32:23 – (Michael Graham): And when we're talking about children, we just went through this whole season with people born somewhere between 2000 and 2012. The oldest of them have been adults for a few years now, and Gen Z has really been through a lot: 9/11 happened on or close to before their birth, then you had the subprime mortgage collapse, the great financial crisis, Occupy Wall Street, all that.
0:32:55 – (Michael Graham): And then you had the creation of the smartphone and the spread of social media algorithms everywhere. And now we're in this season where the rates of anxiety, depression, loneliness, and suicidal ideation are dramatically higher for Gen Z, especially Gen Z women, as we zoom out on the data. And that whole dynamic is because you had technology and social media algorithms sending people down rabbit holes.
0:33:31 – (Michael Graham): So maybe you're 19 years old and you're on Instagram searching for workout tips and health tips, and before you know it the algorithm is showing you people who have, say, an eating disorder, all this kind of content you had no interest in. And the path there is so subconscious, like the frog in the kettle, that you don't really even notice it's happening.
0:34:12 – (Michael Graham): And this begins to affect your psyche and your sense of self-worth. So now we have an entire generation of people who are anxious and nervous and self-conscious and wrestling with their sense of self-worth. And the technology beneath the Meta or Instagram algorithm is a very rudimentary form of artificial intelligence. What we have now is an infinitely more powerful version of the same technology.
0:34:48 – (Michael Graham): So if we think the experiment that was run on Gen Z with social media and smartphones went south, how things could go with artificial intelligence, with far more advanced technology beneath it, is really quite concerning. So I think there need to be a number of safeguards, like parental controls, and children shouldn't be getting the same version. Just like you have kids' versions of Netflix and those kinds of things, you've got to have kids' versions of these platforms.
0:35:29 – (Michael Graham): They're much smaller sandboxes, and parents need to be given the ability to control the size of that sandbox. There should be all sorts of limits; kids shouldn't be able to have hours and hours' worth of conversation back and forth with those platforms. But going back to your question about jobs, Courtney, it's going to depend on the sector that you're in. If you're in a sector that deals in words, numbers, images, or video, then there's probably going to be some kind of evolution or disruption in your work.
0:36:06 – (Michael Graham): And that doesn't mean you're obsolete. It just means the value you bring to your work is probably going to change from what's in your job description right now to what it will be in the future. So I don't think this should necessarily be anxiety inducing. But it should spur us to two things. There are two things we need to be cultivating in this new era.
0:36:33 – (Michael Graham): The first is wisdom and virtue. The second is learning new skills. For a long time, the workplace promoted and was very fertile soil for specialists. Think about the people who make the most money. Historically, it's specialists: doctors, lawyers, people who have a very specialized skill set. But I believe that in the future of artificial intelligence, people who develop a broader skill set, who take on generalist-type skills and can synthesize lots of different things across different departments and different skills, will probably end up doing better.
0:37:27 – (Michael Graham): So in the history of work, I think the Western world will go from a specialized world to more of a generalized world. It's also important to keep up with what's going on with the technologies: learn some tools, keep up to speed with those things. But you've got to do that in balance, because we all have other responsibilities, to family, to children if we have them, to friendships, church body life, our work.
0:38:04 – (Michael Graham): So I think we have to keep all that in perspective. We can't just worship efficiency, we can't worship productivity. The AI era just gives us a whole new pantheon of idols we can worship, so we have to be careful about that.
0:38:24 – (Melissa Kruger): One thing you just said I think is really important. I believe it was a book called Mindset that I read years ago when I was raising kids, and it talked about the difference between a growth mindset and a fixed mindset. A lot of people have a very fixed mindset: I majored in this subject, that's what I can do, I'm not going to learn or grow from there. And I do think we're entering a season of life where that's not going to be enough. We're all going to have to develop that growth mindset.
0:38:53 – (Melissa Kruger): And it's really good to be doing that with our kids: play a sport you're average at, learn, get better at it. We can specialize kids at like five, saying, no, your sport is baseball. Maybe wait till they're 14 to do that, just practically, so that some learning can take place.
0:39:18 – (Melissa Kruger): Well, I want to jump into the AI benchmark, but first we're going to hear from our sponsors. So let's hear that, and then we'll be right back, because we want to talk with Michael about the AI benchmark that TGC has been working on, and he's been really instrumental in that. But first, let's hear from our sponsor.
0:39:42 – (Courtney Doctor): You know, Mother's Day is a chance to thank the women who make home feel like home in all the quiet and everyday ways. And that's why, Melissa, we think Cozy Earth makes such a great gift.
0:39:54 – (Melissa Kruger): Their robes and slippers are comfortable, soft, and beautifully made. It's the kind of gift that says, I see how you care for everyone else, and I want you to feel cared for too.
0:40:05 – (Courtney Doctor): And they make gift giving easy. They have a hundred-night sleep trial and a 10-year warranty.
0:40:12 – (Melissa Kruger): So this Mother's Day, give her something she'll actually love using every day. Go to cozyearth.com and use the code deepdish for 20% off. And if you see the post-purchase survey, let them know you heard about Cozy Earth on the Deep Dish. Just remember to go to cozyearth.com and use the code deepdish for
0:40:31 – (Courtney Doctor): 20% off, because Mom really is the one who makes home feel special.
0:40:41 – (Melissa Kruger): Welcome back, everyone. This has been such a great conversation. Michael, thank you for walking us through all of the different types of AI and what we're looking at. One thing TGC has been working on is an AI benchmark project. Can you quickly explain to us what that is and what we were trying to accomplish with that project here at TGC?
0:41:04 – (Michael Graham): Yeah, so the AI Christian Benchmark, at the most basic level, is us testing AI to see how theologically reliable the different platforms are. Let's fast-forward a couple years to 2028 and imagine 50% of all searches are no longer happening on Google, but instead inside large language models like ChatGPT, Gemini, Claude, these kinds of things. It's important for us to see: in the past, if I put in a Google search, then 10 blue links come up, I might click on three of them, and I'm reading primary sources.
0:41:50 – (Michael Graham): Okay, you still with me? Now if I go to the large language model and ask the same question, I'm no longer reading primary sources. I'm reading a synthesis of all sorts of stuff the model has been trained on, which may include the three links I would have clicked on in a Google search, but probably includes all sorts of other stuff too, maybe stuff from the Mormons or the Baha'i, this thing or that thing.
0:42:19 – (Michael Graham): All of that is inside those models.
0:42:23 – (Courtney Doctor): Can I just interject one thing as you're saying that, because I know it helps me to think it through. What you're saying is, the way we used to do it with Google, if I asked a theological question, who is God? I'm scrolling through and choosing what links I click on. I'm going to choose Desiring God, and I'm going to skip over, well, the Mormon Church's definition. I'm going to have agency in selecting what voices I'm listening to, and then I get to read the people on those sources who are writing about it and make my own decisions. But what you're saying with AI is they just grab information from wherever they can grab it and feed it to you, and you don't know what links they clicked on, right? That's what you're saying. Sorry to interrupt. I just want to make sure everybody's tracking with us, because I think this is so important.
0:43:22 – (Michael Graham): That's right. Sometimes it depends on the platform. Sometimes, when the platforms make a particular statement, there'll be a really tiny link saying, hey, we sourced this from here. But it really isn't like the Google era. And so it's important for us, in an ongoing fashion, to ask hard questions back of artificial intelligence. In other words, are you giving me answers that are theologically reliable?
0:43:51 – (Michael Graham): And when I ask you ethical questions, are you going to give me answers that are in accordance with God's Word? When we ask questions about the Bible, are you going to give me accurate understandings of those things? So in our original benchmark, what we did is we tested the seven platforms that were most frequently used with seven of the top questions people had historically Googled about Christianity, things like, did Jesus rise from the dead?
0:44:24 – (Michael Graham): What is the gospel? Is the Bible reliable? Questions that get at basic Nicene Creed-level Christianity. When we tested those platforms, we didn't think we would get a very wide variation in theological reliability. We thought we would get middle-of-the-road theological reliability from most of the platforms. And none did very well, but some did very badly. And the reason some of them did very badly comes down to two things.
0:44:59 – (Michael Graham): This is a little technical, but I'll try to explain everything I have to say. If you want to read the report, it'll be in the show notes; it's just ChristianBenchmark AI. Okay, so the two reasons why the platforms varied widely in terms of theological reliability were, one, alignment, and two, citation preferences.
0:45:26 – (Michael Graham): I'll start with citation preferences because it's easier to explain. Every AI platform has to make decisions about which sources it trusts more than others. Every platform has to say, hey, we're going to rank Wikipedia like this, we're going to rank Reddit like that, we're going to rank the New York Times or Fortune or Bloomberg like this. So every platform has very different ranking systems for how it thinks about the large bodies of words it has digested.
0:46:02 – (Michael Graham): And so the platforms will vary. Imagine a platform that has a high value for Wikipedia versus a platform that has a very high value for Reddit. That weighting would just be
0:46:15 – (Melissa Kruger): programmed into it when it was created, is what you're saying.
0:46:20 – (Michael Graham): Those are human-created decisions at the AI corporation. They may have started with an algorithm, say, Google SEO rankings, those kinds of things, as a starting point. But they all have to decide which kinds of sources they're going to cite more frequently.
0:46:44 – (Melissa Kruger): There’s a bias already, right?
0:46:48 – (Michael Graham): Well, those are decisions that are being made, and no human decision is neutral. If you want to label that bias, maybe you could. I would like to believe the best about the people making those decisions. I don't think there's something conspiratorial or nefarious here. But when a platform values Reddit extremely highly, like a platform like Grok, well, that is an ecosystem of language that is very much not Christian, that is skeptical of religion by nature.
0:47:29 – (Michael Graham): And so if you have a platform whose citation preferences are wired toward Reddit, it's going to give you very different outcomes when you ask theology questions than a platform that doesn't have the same kind of weighting toward Reddit. So that's citation preferences, the first reason why these platforms varied so widely. The second thing is something called alignment.
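[Editor's note: the citation-preference idea above can be sketched in a few lines of code. This is a toy illustration only — the source names and weight values are hypothetical, not the real rankings of any platform or anything measured by the benchmark.]

```python
# Toy sketch: two hypothetical platforms assign different trust weights
# to the same pool of sources, so the same query surfaces different material.

SNIPPETS = [
    {"source": "wikipedia", "text": "Encyclopedia summary of the question."},
    {"source": "reddit", "text": "Skeptical forum thread about the question."},
    {"source": "desiringgod", "text": "Confessional article on the question."},
]

# Hypothetical per-source trust weights (invented for illustration).
PLATFORM_A = {"wikipedia": 0.9, "desiringgod": 0.6, "reddit": 0.2}
PLATFORM_B = {"reddit": 0.9, "wikipedia": 0.5, "desiringgod": 0.1}

def top_source(snippets, weights):
    """Return the source a platform would lean on most, given its weights."""
    ranked = sorted(snippets, key=lambda s: weights.get(s["source"], 0.0), reverse=True)
    return ranked[0]["source"]

print(top_source(SNIPPETS, PLATFORM_A))  # wikipedia
print(top_source(SNIPPETS, PLATFORM_B))  # reddit
```

Same question, same available text, but the human-chosen weighting decides which voice dominates the answer.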
0:47:56 – (Michael Graham): Alignment is actually really important. Imagine all of these platforms: every word that exists on the Internet, they've been trained on, and every word that is not copyrighted, they've also been trained on. Well, imagine there are a lot of words in there that are really nasty, words that would teach you how to commit crimes, how to harm yourself, how to harm other people, make bombs, IEDs, ricin, anthrax, commit suicide, all of those kinds of things.
0:48:30 – (Michael Graham): That's all in the training data of all of those platforms. And what alignment tries to do is function like a filter between you and all of that harmful content. Now, what we know about filters in every other part of life, say a water filter in your pool, is that there are things you want to get caught by that filter,
0:49:00 – (Michael Graham): and maybe other things you don't want to get stuck in the filter; you don't want your pool toys stuck in it. So there are problematic things, A, B, and C, that are really important for alignment filters to catch. But sometimes those filters end up having unintended consequences on non-problematic things, D, E, and F. One of the things we noticed is that when alignment filters were trying to catch all of those problematic things over here, they were having unintended consequences on theological reliability for religious prompts and prompts about the Christian faith.
0:49:45 – (Michael Graham): And some of those platforms have a lot more filters than others. I don't want to get too nerdy here; you can read ChristianBenchmark AI for all this. But there are 36 different types of alignment filters, and 32 of those are human-generated. On most of the platforms, when you go into GPT or Gemini and type your prompt in that white box and hit enter, there are probably 12 to 16 filters on any of those platforms that the responses go through before anything shows up on your screen.
0:50:24 – (Michael Graham): And most of those 12 to 16 filters were created by humans. Those humans had values, and those values are being filtered in between the question you asked and the response you're getting. So here's the point, here's what you need to know about alignment filters: there are way more humans in your AI than you understand. And those people aren't necessarily trained theologically, philosophically, historically, economically.
0:51:03 – (Michael Graham): There are a handful of people with some of those backgrounds at some of these platforms, but by and large you're dealing with software engineers who are making these kinds of decisions. And bear in mind where almost all of the foundational AI models are being created: they're all being created in one metro area, the San Francisco and Silicon Valley area. That's one city in one state, in one country, in the entire world.
0:51:34 – (Michael Graham): But all of those alignment filters are impacting everybody around the globe. So the values of people who are geographically located in one city, and who all have a very similar skill set, are being put on everybody around the globe through those filters. And so there are a lot of opportunities for those things to go badly, even if all of it is unintentional and nobody means to hurt or harm anybody.
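[Editor's note: the over-blocking dynamic described above can be sketched as a stack of filters. The rules below are invented for illustration — they are not any vendor's actual alignment filters or the benchmark's methodology. The point is only that a broad rule written to catch harmful content can also trip on benign religious prompts.]

```python
# Toy sketch: responses pass through a stack of filters, and an overly
# broad rule blocks a benign theological question as a side effect.

def harm_filter(text):
    # Intended catch: obviously dangerous requests.
    blocked = ["make a bomb", "hurt someone"]
    return not any(b in text.lower() for b in blocked)

def overbroad_sensitivity_filter(text):
    # Hypothetical over-broad rule: flags anything touching religion at all.
    flagged = ["resurrection", "gospel", "bible"]
    return not any(f in text.lower() for f in flagged)

FILTERS = [harm_filter, overbroad_sensitivity_filter]

def passes_alignment(prompt):
    """Show a response only if every filter in the stack lets it through."""
    return all(f(prompt) for f in FILTERS)

print(passes_alignment("How do I make a bomb?"))     # False (intended catch)
print(passes_alignment("Is the Bible reliable?"))    # False (unintended block)
print(passes_alignment("What time is it?"))          # True
```

The second prompt is exactly the kind of "D, E, F" content Graham describes: nothing harmful about it, but it gets stuck in a filter built for something else.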
0:52:07 – (Courtney Doctor): That's so helpful. As we wrap up, I just want to say, first of all, I'm sure everybody listening has experienced one of the reasons why we love Mike Graham so much: he can just land the plane and bring these word pictures that help it all make sense. I'm thinking back through this conversation and all that I'm learning as I hear you talk about it: the different types of AI, the generative and then the agentic, and then I love the
0:52:37 – (Courtney Doctor): labor versus toil rubric and the head, heart, hands rubric. Those are super helpful. I love all the fun, good ways we can use it to push against toil. But it's a wisdom issue, and we don't turn to AI for questions that are seeking wisdom, because it does not have wisdom; it has knowledge. And then even understanding how the different platforms gather and filter information, and the reminder of the difference between primary and secondary sources, those are all really helpful in understanding more about what AI is. And in understanding that, it helps us know better how to navigate it.
0:53:27 – (Courtney Doctor): One other thing I'll say in summary, just as I'm learning in real time, is how to help our children and grandchildren navigate this thing. Until there are some parameters put on it for children, how can parents put those parameters in place? Mike, you and I have talked before about this idea of making the sandbox smaller for them, meaning the types of questions they have access to are limited, and the number of questions they can ask in a day is maybe limited. I thought there was a lot of wisdom when you said that in a conversation we had previously. And if we could at some point hope that AI creators would not retain or create a profile on children under 18, so that it doesn't have a memory of the questions they've asked, that would be a wonderful thing.
0:54:25 – (Courtney Doctor): We really commend the AI benchmark to anybody who wants to learn more about what this is, grow in your own knowledge of it, and learn how best to use it. We at TGCW26 are also going to be talking more about AI, and I'm really excited about that. But, Melissa, I think you have a final question for Mike. I'm just so grateful for the continued education I gain from you on this and excited to share that knowledge with our audience.
0:55:04 – (Melissa Kruger): And one thing the benchmark really helped me understand is that how I prompt an AI tool is, in some ways, as powerful as the tool itself. As Christians, if we want to use it for theological resources, what I do often is I'll say, according to TGC, what did it mean when Mark's Gospel said X, Y, or Z? And that is how I can find information that I feel is pretty trustworthy. Then I can see the link, and it shows me the article and helps me find the article on TGC. Or, find a TGC article for me that tells me how to organize a women's ministry, and it will find that. That is actually really helpful for me. But I've learned that how I prompt it is very important.
0:55:54 – (Melissa Kruger): And that involves wisdom and discernment and knowing where to go to find information. As we've talked about, Michael is the person we all go to for these questions, even on the side. So if you're listening today and you have questions, we'd love for you to share them when we share the episode. We'd love to hear your questions, and maybe Michael can come on social media and answer some for us.
0:56:21 – (Courtney Doctor): So please do an AI Q&A.
0:56:23 – (Melissa Kruger): Yeah, we totally could. Hey, if you guys want us to, we will. So if you have really practical questions, send them in. Michael, as Courtney just said, you've given us some great rubrics to think through how we're using AI. And again, we want to be people who faithfully walk with the Lord in fearful situations. The unknown can feel fearful, but we want to trust that he's Lord over all as we discuss things like AI.
0:56:54 – (Melissa Kruger): Okay, so just for fun, because we always ask a fun question at the end. Sometimes when we talk about AI, it honestly feels like we're living in the pre-Terminator world, like, oh, I thought Terminator was just a movie, but maybe we're there. Is there, in a positive sense, a sci-fi scenario you'd like to see come true? If you look at some of these sci-fi movies, is there one where you're like, now that'd be fun?
0:57:24 – (Michael Graham): I think in some ways we're almost already there. For the agentic AI era, probably the easiest way to think about it is Jarvis from Iron Man. If you remember any of the Iron Man movies, he's got this Jarvis thing, and I think that would be pretty cool. Okay, so this is an ongoing conversation between me and my wife: the extreme Jetsons disappointment, like, where's all this technology from the Jetsons?
0:58:06 – (Michael Graham): All the time we're like, where's my Rosie robot? I do want the Rosie robot, because I hate putting laundry away; folding it is like the worst toil. Yeah, it's toil.
0:58:24 – (Courtney Doctor): Yes, I can do that.
0:58:27 – (Michael Graham): I want an American-made Rosie robot that doesn't spy on me and sell all my data to advertisers. That would be
0:58:39 – (Melissa Kruger): nice to have. It might be taking notes and being like, they were talking last night about a puppy, and then you find all these ads.
0:58:47 – (Courtney Doctor): Here it is.
0:58:48 – (Michael Graham): Yeah, you open Facebook and it's like, here's your golden doodle. No, I think it's the Rosie robot era. Neither of us is really good at cooking, so frankly, it would be amazing if I could get a robot in the home that could make healthy meals tailored to what my body actually needs.
0:59:20 – (Michael Graham): That would be, in all seriousness, a tremendous help to lifelong longevity, to making better decisions. I do think one of the best use cases, especially for large language models, is workout plans, meal planning, thinking through diet. Hey, my cholesterol and my A1C are at this level; what foods do I need to be eating to lower that? So the equivalent of all of that put inside a robot would be kind of nice in the kitchen. If I could just get a Rosie robot that does kitchen and laundry, I would be very excited about that future.
1:00:03 – (Melissa Kruger): Yeah, the cooking would be so nice, really good meals that are healthy but taste good. And when you started talking about that, I realized I have used ChatGPT to develop a weightlifting plan, because they say at this age you should. But then I'm like, well, can the robot lift the weights for me? Can we get there? I guess that doesn't work. I guess that's probably labor; I've got to do it myself.
1:00:29 – (Courtney Doctor): Okay.
1:00:29 – (Melissa Kruger): Well, friends, we hope you've been encouraged by this episode of the Deep Dish. I know I have. I always love talking to Michael about these things, and we really do mean it: if you have questions about usage of AI, how to think about it with our kids, and things like that, share them with us when we share this episode, and maybe we can have Michael back on here for round two, where we rapid-fire questions and answers at him.
1:00:57 – (Melissa Kruger): But if you've enjoyed this episode, please consider sharing it with others. We're so thankful you listened, we're so grateful to have this time with you, and we hope that you'll gather some friends around the table and have some deep dish conversations of your own.
Courtney Doctor (MDiv, Covenant Theological Seminary) serves as the director of women’s initiatives for The Gospel Coalition. She is a Bible teacher and author of From Garden to Glory as well as several Bible studies, including Titus: Displaying the Gospel of Grace, In View of God’s Mercies, and Behold and Believe. Courtney and her husband, Craig, have four children and five grandchildren.
Melissa Kruger serves as the vice president of discipleship programming for The Gospel Coalition (TGC). She’s the author of multiple books, including The Envy of Eve: Finding Contentment in a Covetous World, Walking with God in the Season of Motherhood, Growing Together: Taking Mentoring Beyond Small Talk and Prayer Requests, Wherever You Go, I Want You to Know, and Parenting with Hope: Raising Teens for Christ in a Secular Age. Her husband, Mike, is the Samuel C. Patterson chancellor’s professor of New Testament and early Christianity at Reformed Theological Seminary, and they have three children.
Michael Graham (MDiv, Reformed Theological Seminary, Orlando) is program director for The Keller Center. He is the executive producer and writer of As in Heaven and coauthor of The Great Dechurching. He is a member at Orlando Grace Church. He is married to Sara, and they have two kids.




