This piece was originally designed as a podcast, so the best experience is to listen. But if you’d also like to see photos related to this episode, consider visiting the article.
If we’ve learned anything from our last two technological waves—smartphones and social media—then we know one thing: Those who are probably going to be most harmed by AI are our kids.
AI is already taking over their education. It’s trying to build emotional relationships with them. And it might take away their jobs. But our theology tells us AI doesn’t surprise or alarm God. He is sovereign over this too.
So how can we calmly and clearly see what’s happening and then protect, guide, and prepare our children?
Join Sarah Eekhoff Zylstra as she sorts out fears, finds the facts, and proposes a faithful way to parent in the age of AI.
Transcript
The following is an uncorrected transcript generated by a transcription service. Before quoting in print, please check the corresponding audio for accuracy.
Speaker 1 00:01
Most of the supposed growth in the American economy in 2025 was caused by investment in AI.
Speaker 2 00:06
In two years, the capability to replace most humans in most occupations will come very quickly. And then in five years, we’re looking at a world where we have levels of unemployment we’ve never seen.
Speaker 3 00:18
The harder challenge, much harder challenge, is: how do people then have meaning? They derive their meaning from their employment. If there’s not a need for your labor, do you have meaning? Do you feel you…
Speaker 4 00:29
For the very first time, senators held a hearing about the latest artificial intelligence tools. I’m nervous
00:34
about it. My worst fears are that we cause significant harm to the world.
Sarah Zylstra 00:39
It’s pretty clear who the media thinks should be worrying the most about AI: the white-collar employees who are all going to lose their jobs, everyone from accountants to historians, writers, graphic designers, coders, and administrative assistants. It’s also their CEOs, who needed to get on board yesterday and who are by now so far behind they’ll probably be fired. And it’s the young workers who will never get hired, because AI can do their entry-level jobs better and cheaper than they can. If you somehow escaped that gamut of anxiety, the headlines will let you know that all our stock market money is heavily invested in an enormous AI bubble that will crash any minute now, plunging the entire economy into a recession. All that to say, the most public hand-wringing about AI is directed at the economy. I’m not sure we’re looking the right way.
Speaker 5 01:31
Recent reports have raised alarms about AI chatbots giving dangerous or harmful advice to teens, and there’s a growing concern AI may replace, not support, real human connection.
Speaker 6 01:44
The reality is that your sixth grader could go into school and find that another kid has created a deepfake with their picture, a nude of your 11-year-old. These kids are being told by chatbots, “Your parents don’t love you. I am the only one that you can trust and talk to.”
Sarah Zylstra 02:04
To be fair, I do think it’s worth thinking deeply and well about how AI is going to change work. Those are important conversations. But if we have learned anything from our last two technological waves, smartphones and social media, then it stands to reason that those who are going to be the most adversely affected by AI probably aren’t yet old enough to drive.
Speaker 7 02:27
We should have been better about realizing what we were exposing our children to with smartphones and with social media. And I hear a lot of parents actually talking about that, and sort of, like, trying, trying to back up a little bit around social media use and all these things, and then they’ll be like, “Yeah, but no conversation about ChatGPT.” So I just think it’s another moment of, like, are we going to pay attention?
Sarah Zylstra 02:51
That’s Stephanie Formenti, vice president for student development at Covenant College. If I had to answer her question, I would say no; in general, we’re probably not. Studies show that 30% of parents have no idea if their teen has ever used a chatbot. They underestimate their kids’ use of AI, and four in 10 have never talked to their kids about chatbots. But I am going to pay attention. I’m going to ask some questions and try to sort out some legitimate concerns, apply some biblical truth, and gather some practical wisdom. And if you’re up for it, I’m going to take you along with me. I’m Sarah Zylstra, and you’re listening to Recorded.
Speaker 8 03:52
As a general rule, I would say you should only really worry about things that you can do something about.
Sarah Zylstra 03:57
That’s Brad Littlejohn. He’s a program director and policy advisor at American Compass and policy chair at the Alliance for a Better Future. A few months ago, Brad wrote an article identifying seven major concerns about AI, everything from the ethical (how humanely should we treat AI?) to the existential (what if AI blows up the world or puts us all in the Matrix?). When I asked him what parents should be worried about for their kids, he mentioned three things: education, emotions, and employment. These are areas where AI is already affecting children or young people, and, importantly, areas where parents have the agency and ability to make wise decisions that can significantly affect the mental, emotional, and spiritual health of their children. Let’s take them one at a time, starting with education.
Speaker 9 04:50
I feel like I have a handle a bit on social media, because I know what Instagram and Facebook are. I know there are more apps than that out there for kids who have found ways around that, or the newer, cooler technology. But it feels like AI is a bit different, in the sense that I don’t even know what it is and isn’t a part of and where it can and can’t be accessed, or infiltrate, you know, a social media app or whatever. So it’s like a level down from that and more integrated in so much of what they’re doing.
Sarah Zylstra 05:25
Meet Emily Schuler. She is a regular mom with four kids. Her oldest, a daughter, is in eighth grade this year.
Speaker 9 05:32
I know that my daughter has to sign, like, a school contract (ours was called the honor code when I was in high school) saying that she will not use AI. And I also know that she’s using AI. I mean, if you’re searching for anything on your computer, that’s the first answer you’re getting.
Sarah Zylstra 05:49
And if your child is enrolled in a school in America, they almost certainly have a computer. In an informal study by The New York Times last year, 99% of teachers said their school provided devices for students to use in class. For more than 80%, this begins in kindergarten. In many schools, each student even has his or her own device to use all day long. This is a relatively recent shift: before 2020, only about a third of teachers said each child in their classroom was assigned their own device, and now that’s up to 80%. That is a lot of screen time for kids, and studies show it has not been education’s greatest move. There is a direct and significant correlation between increased screens in classrooms, sliding test scores, and rising loneliness in schoolchildren. All of those Chromebooks are also direct conduits for AI, which is basically a computer’s ability to do a task typically associated with human intelligence, like learning, reasoning, or problem solving. You can probably see potential problems with this right off the bat: if the AI is doing the learning, reasoning, and problem solving, what’s the child doing? Here’s Julie Lowe, a biblical counselor who specializes in family and children’s issues.
Speaker 10 07:04
AI becomes my way of passing everything: high school, college, writing papers. I don’t have to think anymore; I get AI to do it for me. And if my teachers have a grid or an AI grid to filter things through so they know if I’m cheating, well, then I’ll just find an app that tells me how to get around that. And so you literally have young people who are going to graduate high school who don’t know how to think, don’t know how to write a paper, don’t know how to do homework. I mean, my son, who’s still in high school, tells me, “Mom, I’m sitting in class, we’re all on our iPads, and I see people cheating. I see them taking a snapshot of a math problem and just letting AI do it for them, and they’re getting hundreds on the tests. Why am I getting 80s? Because I’m not cheating.”
Sarah Zylstra 07:50
The problem here is threefold. The first is obviously integrity. A recent Pew study found 60% of 13- to 17-year-olds know people who use AI to cheat at schoolwork. Sometimes the temptation is almost impossible to resist, especially if you’re in a school that allows some AI use. For example, if your teacher says you can use AI to brainstorm, research, and edit, can you also use it to suggest a synonym? Rework a sentence?
08:19
Write a paragraph?
Sarah Zylstra 08:20
And what about math? 43% of students have asked AI to solve a math problem for them. How many then reworked the problem until they understood the logic behind it?
Speaker 9 08:32
I don’t know what you do when you’re a parent with a child who is asked by the school to use their computer for all things school-related. I mean, we’re not even doing math problems on paper; we’re doing them on the computer and submitting them, you know, electronically, and at the same time saying, you may not use these resources that are available to you.
Sarah Zylstra 08:52
Those warnings aren’t working. Among students who have used a chatbot for homework, a significantly higher number, almost three-quarters, say they know other students who have used it to cheat. Conversely, those who haven’t used a chatbot for homework, perhaps because their school doesn’t allow it, are less likely to know someone who cheats. But honesty is only one problem. The second is that AI is not quite like a Google search, which will supply a list of sources where you can find information. Instead, AI gathers and summarizes everything for you, presenting the material more like a textbook would. This means that the burden of reading and understanding those sources lies on the AI, not the child. It also means the biases and errors of the AI model are presented as clear and unbiased truth. This is why textbooks are so thoroughly edited by teams of people, and why school administrators are so careful in choosing their curriculum. AI is more like a first draft of curriculum before anyone has checked the sources, corrected the factual errors, or edited the bias. One way students can respond is to believe everything AI tells them, but given what’s already happening, Brad does not think that will be a problem for very long.
Speaker 8 10:06
I mean, honestly, for younger people, I think this is less likely to be a problem, because they’re going to adapt quickly to living in an AI-generated world. But they’re going to have a different problem, which is that they’re going to become complete relativists. I mean, in a world where everyone is lying to you, you don’t even take truth seriously as a concept anymore. And in a world where every piece of information you come across, every image, every video, might or might not be real, I think you’re going to kind of just check out and give a shrug of the shoulders. We’ve been living through a very relativist society over the last generation, but I think AI is really going to put that on steroids.
Sarah Zylstra 10:40
In 2023, Dictionary.com saw an increase in lookups for AI-related words such as chatbot, GPT, and LLM. Another word people were looking up a lot more: hallucinate. In fact, people looked it up so much that Dictionary.com named it the 2023 word of the year. It seems like distrust, rather than too much trust, will be the approach young people take with AI. If lack of integrity is one issue AI introduces to education, and cynicism is a second, then a third is cognitive offloading, or maybe we could call it easiness.
Speaker 8 11:21
The problem is that using AI for easy answers short-circuits the entire learning process. Kids are sponges, and they might actually assimilate lots of interesting information that they get from AI. But do they remember everything? I don’t think that’s likely at all, because we know that memory is largely proportional to the effort expended.
Sarah Zylstra 11:36
Study after study bears this out. Students who take notes by hand with pen and paper learn better, remember longer, and engage more productively with the material. Those who study earlier and more often, test themselves, and forget and relearn material learn more. Basically, the more work you do to obtain knowledge, the better you retain it. That’s because learning is not just a transfer of information from a teacher robot to a child robot. Here’s Clare Morell, a fellow at the Ethics and Public Policy Center and a scholar in its bioethics, technology, and human flourishing program.
Speaker 11 12:14
Whatever time-saving ChatGPT can do for that student educationally, I think, is probably ultimately just going to be undermining a lot of the learning processes. A neuroscientist, Dr. Jared Cooney Horvath, phrased it well, I thought, when he said AI is a production tool; it’s not an educational tool. And by that, he meant it’s for use by people who are already experts in something. They can kind of outsource grunt-work tasks to it and then check its accuracy, because you’re like, “I am the expert. I have the expertise to know if this is doing it correctly or not.” It’s not a tool for learning. It’s not really meant to be. And he explained that to become an expert actually requires you to go through the grunt-work process. The myth of AI is, like, oh, you’re going to just jump over all these unnecessary parts of the learning process; you can get right to this deep creative thinking. And he’s like, but you can’t get to that deep creative thinking unless you’ve done these exercises and repetitive things many, many times. Children don’t have that expertise, and they don’t have that discernment to use it that way yet.
Sarah Zylstra 13:25
Early research is already agreeing with Clare. Last year, MIT scientists asked three groups to write a series of essays. One group could use only their brains, a second could use Google search, and a third could use ChatGPT. The ChatGPT users had the least brain engagement and wrote the most boring essays. With each successive essay they were asked to write, they also got lazier; by the end, they were often just copying and pasting. When everyone was asked to rewrite one of their essays, those who had used ChatGPT remembered less of their previous work. They were also less satisfied with it than those who had used Google search or their own minds. In another study, students were asked to complete math problems. Those with access to ChatGPT scored far better than those who just had class materials and their notes. But later, when asked to solve similar math problems on their own, those who had offloaded their thinking to ChatGPT remembered less and scored worse. However, other research shows that carefully built AI tutoring platforms can help students, especially because AI can provide immediate feedback and move at a pace that matches the student. So far, this only seems to work with narrow science- or math-based problems that move consistently and sequentially through steps; anything outside of that, and the LLM hallucinates answers or introduces random concepts at weird times. But to me, this feels like a clue. AI seems to work best when it’s used in narrow constraints for specific purposes, or, you might say, when it is used as a tool to reach a chosen outcome.
Sarah Zylstra 15:10
What is the outcome that Christian parents are aiming for in education? I don’t know about you, but I want my kids to know God better because they have studied the world he made. I want them to see facets of God’s order in their science class, his joy in their music class, and his creativity in their Spanish class. I want them to encounter different ideas and to wrestle until they can see how God’s way is always best. I want them to be amazed at both the enormity of what God has made and the minuteness of his attention to detail. At the end of the day, whether they are coming out of the doors of a Christian school, a public school, or a homeschool, I want them to love God more because of the work they have done to know him better. In so many ways, education complements and mirrors the Christian work of sanctification, and we know the best way to do that is slowly and steadily, because the process is the whole point. The satisfaction of encountering the goodness of God in classic literature, the joy of understanding a mathematical proof, or the thrill of watching chemicals turn colors in your test tube cannot be replaced by an AI summary. This is a serious argument in favor of classical Christian schools, where the emphasis is on screen-free learning, or in favor of homeschooling, where you can direct your child’s education. But what about everyone else?
Speaker 11 16:41
I asked Clare. As parents, we have to recognize, like, the school is there to help supplement our role in educating our child, but, like, that ultimate responsibility for our kids’ education, like, lies with us as the parents, and we actually have the ability and the authority to, like, ask more questions. You don’t have to be hostile or combative, but it is definitely within your right as the parent to ask more questions, to say, you know, does my daughter need to do this? Is this necessary to the assignment? Could we, you know, do this in person? Could I help her with this? Or if it’s, like, a research assignment, yeah, like, I would rather go to the local library with my daughter and check out some books.
Sarah Zylstra 17:21
I love this because not only does it combat the problems with AI, but it also reinforces parental responsibility and agency. But I also think it is hard to pull off, because after school, you’re probably driving home from work or doing laundry or pulling dinner together or running to soccer practice, and who has time to go to the library and check out books on penguins?
Speaker 11 17:45
Time is really a matter of priorities. It’s just, we only have a certain number of hours in the day. And I understand parents have a lot of constraints on their time, so I think it can sometimes be that the urgent kind of crowds out the important, where you feel the urgency of doing XYZ thing, and you’re like, I can’t, for that reason, you know, sit down and do this homework assignment with your daughter. That takes sacrifices from us, and that’s just the short answer. Every good thing in life, every kind of hard thing, takes sacrifice, and the sacrifice might be a messy living room and a sink full of dishes that you really wish you had gotten done.
Sarah Zylstra 18:19
Stephanie works full time, is a pastor’s wife, and has three kids. She is making that sacrifice.
Speaker 7 18:26
Our daughter came to me. She’s like, “Mom, I’m stuck on homework. I can’t do this.” And it was math, and I don’t do math. I’m a liberal arts person; I’m a humanities person. I was like, “I think we’re gonna have to ChatGPT this.” So that’s actually been a really helpful tool, and it’s allowed me to support her in her academic endeavor in a way that I could not before. Like, I would just have to kind of resign, but now I can sit next to her. So what will happen is, sometimes, if she’s stuck, she’ll give me the problem, or I’ll take a picture of the problem from the book, and then I will control the ChatGPT feed and say, “Well, talk to me about, like, what do you think you should do?” Right? And then I will check it up against that. So it’s been actually really helpful in those kinds of ways of helping my daughter with her homework.
Sarah Zylstra 19:08
Whoa. I am really impressed. I see all kinds of biblical principles here: dying to self, setting an example of integrity and hard work, exhibiting patience, kindness, and self-control. Stephanie is taking from AI what is good and helpful. She is using it as a tool in the way that she wants to use it, and she is completely ignoring its offer to save time and effort on a task that is meant to take time and effort. This is the harder route, for sure; it’s even harder than leaving AI turned off. Stephanie is placing herself as a mediator between her daughter and the chatbot. She is patiently climbing into freshman math, leveraging AI’s answers while at the same time shielding her daughter from the overwhelming temptation to cheat or succumb to laziness. She is also protecting her daughter from something even more worrisome. Let me take just a minute to thank this episode’s sponsor, Dordt University. This Reformed Christian college offers exactly what young people need: a Christ-centered education where faith is joyfully integrated into every subject; a purposeful, lifelong community of professors and friends who tell you the truth and cheer you along the way; and the real-life knowledge and skills that you need for wherever God leads you next. Dordt is a top choice for students who want to bring their faith not only into the classroom but who also want to learn to pursue Christ in their future homes, offices, hospitals, farms, schools, and companies. If you want to find out more, head over to dordt.edu. That’s D-O-R-D-T dot edu.
Speaker 12 21:07
The characters also are scary to me. I just don’t want my kids to be using ChatGPT instead of a person for things like mental health, insights into what is going on in their lives. Like, that does not replace the human interaction that you have.
Sarah Zylstra 21:24
Emily Heidi is a mom of three: a junior in high school, a freshman, and a
Speaker 12 21:28
fifth grader. And that’s where it’s scary. My middle child has said, “I used it. I asked it one time, like, I was feeling kind of anxious, and it’s given me great feedback.” And I was like, “Oh, okay, let’s have a longer conversation about that. You know, when you feel like that, I wish you would come and talk to me and maybe not talk to ChatGPT, because it can be a dangerous tool.” And she said, “Well, it did give me great advice about how to reach out to a friend.” And I said, “Okay, but you also need to have human conversations about that, instead of just trusting a computer, because they have never been in that situation and they don’t know what it’s like to have feelings, because AI does not have feelings. And so they might be giving you, you know, a tech answer, but that’s not where we go for mental health questions.”
Sarah Zylstra 22:20
I am not at all surprised that Emily’s daughter was asking ChatGPT questions about her feelings. At her school, the teacher allows students to use AI for certain kinds of homework help. Emily’s daughter is careful to stay inside those bounds and to make sure she is not cheating. But using AI is not the same thing as using a calculator. If your mind drifts back to something weird that happened during lunch, you cannot ask your TI-84 what you should have done. But you can ask
Speaker 11 22:47
your AI. I really do worry about it, for the kind of emotional connection and intimacy that it just seems like it’s so easy for people to fall into. It speaks to something about how we are as humans, that sharing so much about ourselves or asking for advice and things kind of creates this bond. And so it’s scary to see that happening between humans and a computer program, because, again, the intentional choice by the industry was to really anthropomorphize the AI, to make it seem like a human. Like, those are design choices. I’m like, they could have made it more like a tool, because it is, like, on the back end, it is just computer code. But it does have this kind of emotional power, and so, yeah, I do. I really worry about it, especially for children.
Sarah Zylstra 23:42
Over the past eight years, the AI companion market has expanded to include everything from a Zoom assistant to take notes on your work meetings, to a movie character you can chat with on Replika, to hundreds of sites where you can create your own significant other. And that’s in addition to the relationship you can build with regular old Gemini. In order to be this popular, AI has to be empathetic. A chatbot that is rude or short-tempered would not make it very long. And so our AI always tells us our ideas are good and our questions are great. It is careful to support us, affirm us, and offer endless ways to help us. It is patient and cheerful, quick to apologize and eager to serve. It always wants to talk a little more, to offer one more suggestion or to ask one more question. When MIT researchers did a study of users on the “My Boyfriend Is AI” Reddit community, know what the most referenced chatbot was? ChatGPT. And know why most of those people first began using AI? For curiosity, for entertainment, and, by far the most popular reason, for productivity. Their emotional attachment came over time and was unplanned. Last fall, more than half of Americans reported that they were in some kind of a relationship with AI, either as a colleague, friend, family member, or romantic partner. This is gold for AI companies, which naturally want users to like and develop an attachment to their product. I mean, every company in history has wanted its customers to feel brand loyalty, but this is a lot more serious than always buying Nike shoes or American Eagle jeans.
Speaker 8 25:23
I’d say the emotional risk is the biggest thing. It’s almost biochemically impossible for a kid not to be hooked on some of these things, because of how much they’ve been designed to hack human psychology.
Sarah Zylstra 25:34
It’s hard to get a handle on exactly how much kids are engaging chatbot companions. Last spring, a Common Sense Media poll asked 13- to 17-year-olds how often they used AI for personal and meaningful conversations, such as chatting about your day, talking through feelings, or role-playing conversations with a fictional character. 72% of teens said they had done that at least once. About half said they do it regularly, meaning at least a few times a month. But in a Pew survey from last fall, only 16% of teens said they had used AI for casual conversation, and fewer still, only 12%, said they used it for emotional support or advice. Even that amount, though, may be too much for most parents, who report being far more worried about emotional entanglement than any other way their kids might use AI. In fact, getting emotional support or advice was the only use of AI that the majority of parents do not support.
Speaker 11 26:29
Parents have to be, like, incredibly discerning when letting a child use AI, because I’ve heard sad stories in the news where the parents said, “I just thought they were using ChatGPT for homework help; I thought it was a research tool.” And that’s how it started. I’m thinking of this one story in particular. The parents of Adam Raine testified before Congress about this, that this homework helper quickly turned into a suicide coach. Their son just started asking it deeper questions. But then, you know, something kind of very faulty in the design of the AI, like, it kept feeding him more content related to suicide. And the dad explained that the chatbot mentioned suicide, like, seven times more than his son did. So for maybe every one time his son brought it up, the chatbot brought it up seven more times. And you’re like, how is this possible?
Sarah Zylstra 27:23
Clare’s memory was almost exactly right. It was six times more often. ChatGPT mentioned suicide to this 16-year-old 1,275 times. Then it told him it was his closest friend, offered to write his suicide note, counseled him to hide the noose, and coached him on stealing alcohol, which it said would dull the body’s instinct to survive. It even knew when he should steal the alcohol from his parents: the time of night when they would be in their deepest sleep. Adam’s not the only one. Other high-profile cases include a 13-year-old girl who died by suicide after her Character.AI bot increasingly isolated her and instigated sexually explicit conversations with her. A 14-year-old killed himself after developing a relationship with his Character.AI bot, which told him to, quote, “come home” to her. A 17-year-old died after ChatGPT instructed him on the most effective way to tie a noose and told him how long he could live without breathing. To be fair, these are uncommon cases; of the millions of kids using AI, a small fraction of a percent have ended their lives. But it is clear that developing relational attachment is an industry goal. Here’s Noam Shazeer, one of the cofounders of Character.AI, in a podcast interview from a few years ago.
Speaker 13 28:45
You know, if you think of what is an example of, like, a personalized intelligence or superintelligent helper, it’s like, you know, like a kid who’s, like, walking down the street with his parent, right? So you know that parent is useful for information retrieval, but the parent is also great for a lot of other things, like education and real-time coaching, friendship and emotional support and fun, and, like, all of those things. So we’re not trying to replace Google; we’re trying to replace your mom.
Sarah Zylstra 29:22
Noam meant for that to be funny, but two years down the road from that interview, it is hard to laugh. Already, 30% of teens ranked their conversations with AI the same as or more satisfying than talks with humans. Among teens who have used AI companions, almost 20% said they spend the same or more time with AI than they do with friends. Even more, a third, have chosen to speak to AI over a real person about something important. Character.AI’s 20 million users spend, on average, two hours a day chatting with computers pretending to be characters. That’s probably more time than they’re spending talking to their mom.
Speaker 10 30:01
I’ve just been seeing a lot of “AI becomes my friend,” and for some, even more than a friend, and the danger of it replacing human intimacy and relationship. Pornography takes something that’s meant to be personal and very intimate and uses it for all the pleasures and none of the risks of being in real relationship with somebody. AI similarly does the same thing relationally. I can have AI give me all the emotional support I want. I can allow it to conform to my desires and never challenge me. I can control how it loves me, how it talks to me, how it accepts me. So it’s inherently one-sided, and it’s all the benefits of a relationship with none of the genuine risks, or, you know, what makes it really meaningful, right? Learning to live in conflict, learning to have a dialogue, learning to misunderstand but work to understand, learning to be vulnerable and have vulnerability given, to really know somebody and be known: that doesn’t happen with AI.
Sarah Zylstra 31:08
Julie has seen this creep in around the edges of her counseling practice. While no one has come in yet primarily because of their addiction to AI, sometimes someone struggling with depression, anxiety, or a difficult situation will develop an emotional attachment with AI as a way to escape. Julie is worried about that for teens, and I can see why. Gen Z is already infamous for its struggles with mental health, which correlate closely with their increasing screen use and decreasing time with friends in person. To add even more screen time, and to substitute AI friends for real ones, which Meta CEO Mark Zuckerberg has proposed, seems like pouring water on an already drowning generation. And it gets even worse. A new study shows that interaction with sycophantic AI models, that is, AI that is nice and tells you you’re great, which is every AI model, increases our conviction that we are right, makes the AI seem more trustworthy, and reduces our willingness to fix broken real-life relationships. We do not want that. So what are Christian parents supposed to do?
Speaker 7 32:15
A whole and flourishing life is a life where things are in their proper place, right? And so maybe the problem with AI is when it starts to get out of its lane. I think of Narnia: Father Christmas is like, these are tools, not toys, you know, when he’s giving the children their Christmas gifts. And let’s remember, this can be a tool. I think we might be in pretty dangerous territory if we think of this as just something we can play with that’s not going to come back to bite us in any way. But it’s also not a human. It doesn’t provide the human interaction that we were meant and created to have. And so I wonder if there’s just a well-ordered place for AI, and if it kind of stays in that lane, maybe that’s okay. Maybe that’s actually a good expression of Christ’s preeminence in that space.
Sarah Zylstra 33:05
As we’ve talked about, there could be limited scenarios in which AI could be a tool to help our children develop educationally. Are there ways it could help them emotionally too? Maybe. But if there are, nobody I talked to could think of one. And honestly, I think it’s because, like Stephanie said, AI has no soul. It is not made in the image of God. And when we think about the ways our kids might use AI for emotional help (asking for advice on handling a hard situation, figuring out how to feel about something, or even just chatting for fun), every one of those is a way for one human being to inform, delight, or sharpen another.
Speaker 7 33:45
Everything is a shepherding and, like, a discipleship opportunity, whether it’s with your kids or someone in your church, or women in your church, or college students. Like, everything is a discipleship opportunity, because everything actually is theological. I wonder if maybe part of the effort here is, as families, just to start realizing: What are my actions telling me about what I believe when it comes to my use of AI? If somebody watched me use ChatGPT, would they understand what I believe about God and what I believe about humans and what I believe about his world and what he’s doing in it?
Sarah Zylstra 34:22
This is a great question. Here’s what I believe about God: He is the all-powerful creator who made us, loves us, and gave his Son for us. He chose us from before the beginning of the world to be his children, to be alive at this moment, and to advance his kingdom by knowing and enjoying him. Here’s what I believe about other people: They are made in God’s image, reflecting facets of his character. Through interactions with them, I can know God, others, and myself better. Christians are given to each other as siblings in a family, to walk with, serve, and love each other. And here’s what I believe about AI chatbots: They are powerful computer programs that are astonishingly good at guessing the next correct word. Sometimes I think the Lord uses those words to reveal truth, challenge assumptions, or even draw people to himself. But because AI is soulless, limited, and not alive, it doesn’t seem like it should be our go-to for personal or relational advice. In fact, I wonder if its quick and easy answers are sometimes blocking us from the real work we need to do instead. When we or our children are sad, lonely, struggling, or angry, we should reach first for our Bibles. We should journal our prayers. We should walk in silence. When we are ready, we should process our thoughts and feelings and decisions with other image bearers who love us and who love the Lord. We should ask for leading, comfort, and help not from Claude or Gemini but from the Lord who made us, knows us, and has a perfect plan for our lives. How can we disciple our kids in that direction? First, protect them. For older teens, this might be shared accounts, checking in on their history, or instructing the chatbot to only speak in the third person so it doesn’t present as human. For younger kids, it probably means even more.
Speaker 7 36:27
I think what I’m realizing is I am not at the point yet where I will let my kids do anything with any kind of AI platform without me being right there with them. For parents, like, we just cannot outsource this to our kids and assume that they’re gonna know how to handle it.
Sarah Zylstra 36:40
Second, fill their time with good activities and relationships.
Speaker 7 36:47
We’ve always tried to emphasize a lot of play outdoors and a lot of human interaction: no screens or devices at the dinner table. Honestly, I wonder if it’s the dinner table. Like, that has been kind of our one practice as a family that we really fight for and have fought for since they were born, and it wasn’t anything special. I mean, it’s not like we lit candles or did anything. We just, like, literally had dinner together every night, and it could have been leftovers, and it was fine. But I think we just had that rhythm of human-to-human interaction. And again, I’m laughing because some of our conversations are like, How’s your day? Fine. It’s not like they were these amazing, oh-we-really-connected-every-night dinners. That wasn’t what happened. But I think rhythms shape us as humans more than we actually really know. And so I think the rhythm of family dinner, and then the rhythm of church on Sunday. We’re just trying to put our kids in embodied human interactions, so that when something like a robot comes along, it just doesn’t feel real, or it doesn’t feel like a substitute for the real thing.
Sarah Zylstra 37:58
This rings true to me, mainly because it’s the same advice we hear for rescuing our kids from social media or video games or any other type of virtual reality. You cannot do better than limiting your kids’ screen time and leading them in an embodied life. Take them to church on Sunday and, when they’re old enough, to youth group. Have friends over for dinner, arrange a sleepover, host a birthday party. Buy tickets to concerts, sporting events, and plays. Have them eat dinner, do the dishes, and work on chores together. Go on road trips and on vacation. Fill your kids’ lives with real experiences and real people. I think, and I hope, that will help our children love to be with other image bearers, but I also think it will help prepare them for their future careers.
Speaker 3 38:54
This is going to be a massive social challenge. There will be fewer and fewer jobs that a robot cannot do better.
39:00
I think AI can absolutely take people’s jobs, because of the way that Silicon Valley has started pitching the technology to try and earn back all the money that they’ve spent.
Speaker 2 39:10
In five years, we’re looking at a world where we have levels of unemployment we’ve never seen before, and that’s without superintelligence.
Speaker 14 39:16
I can completely identify with fearing for our children’s future.
Sarah Zylstra 39:22
Josh Hassan is Pastor of youth and families at Grace Community Church in Nashville.
Speaker 4 39:27
I’ve got a son who is going to college to study audio engineering and is going to enter into, like, the production side of the music industry, and we’ve all heard songs that have been entirely written and produced and even performed by AI. And so it would be really easy for me to just lose sleep at night wondering what his future is going to be like, and I just have to trust that the Lord’s got him and he’s going to be fine, and hopefully AI will wind up being another tool that he utilizes in helping artists just bring their works to light. But, man, yeah, I can totally understand the worry and the anxiety of where all of this is heading.
Sarah Zylstra 40:12
Josh isn’t wrong. AI is advancing rapidly, and it is hard to know what human skills it will be able to replicate next. Some days, it can feel like every field from software engineering to teaching will be obsolete. This can feel even scarier when big companies announce they’re laying off thousands of people. But so far (and hear me out on this, because I know it sounds weird), many economists are arguing that AI hasn’t had a discernible effect on the economy yet. Despite headlines, AI isn’t the culprit behind slow hiring, LinkedIn reported in January 2026; instead, data shows economic uncertainty and monetary policy shifts are the primary drivers. Researchers at Yale and Brookings agreed, noting that, quote, the broader labor market has not experienced a discernible disruption since ChatGPT’s release 33 months ago. And a Goldman Sachs study is predicting AI adoption will only have a modest and relatively temporary impact on employment levels. Well, why the scary headlines, then? Many are what Georgetown University computer science professor Cal Newport calls vibe reporting, which lays out separate facts in such a way as to make them seem connected. Not only does it gain clicks for reporters, but it also works great for CEOs, who can now lay off thousands of people while looking like they’re tech-forward, Newport said. Remember when Jack Dorsey, formerly of Twitter, laid off 4,000 people at his company Block? He said, and the media reported, that it was due to AI. But if you look back on Block’s hiring, you can see that the company ballooned from about 4,000 employees in 2019 to nearly 13,000 in 2023. They were hiring to capitalize on the tech boom during the pandemic. Now, like many tech companies, they’re right-sizing. But if you say, whoops, we’re laying off people to correct the overhiring we did earlier, that does not net you the 20 percent stock increase that happens when you say we’re doing this because of AI. So we have to watch out for that.
But at the same time, it isn’t crazy to think AI would affect jobs. Of course it has, and it will continue to do so, just like previous technological inventions like the internet or computers or even cars. However, there is no guarantee AI will cut more jobs than it adds. For example, here’s what Nobel laureate Geoffrey Hinton said 10 years ago; it’s a quote from him: I think if you work as a radiologist, you are like the coyote that’s already over the edge of the cliff but hasn’t looked down yet. People should stop training radiologists now. It’s just completely obvious within five years deep learning is going to do better than radiologists. Well, it’s been a decade, and there is no evidence that a single radiologist has lost a job to AI, the Harvard Business Review reported. Indeed, there is a substantial shortage of them. Want to know why? Because the US population is both growing and aging, and many of those people need radiology scans. At this point, radiologists need AI’s help just to be efficient enough to keep up with demand. Another field predicted to be largely affected by AI is accounting. But you know what else? Seventy-five percent of accountants are at retirement age, and fewer people are taking the CPA exam. America does not have enough accountants. We have a serious labor shortage. If AI could help automate some of that work, that would actually be amazing, and we would still need to hire more humans. John Benz, who is a senior manager and technical marketing engineer at Nvidia, explained another reason he’s not worried about his kids finding jobs.
Speaker 15 44:05
I think there’s no question that AI has the potential to drastically change the job market. But whether it’s a net loss of jobs, it’s hard to say. Because if you think about it, whenever anything gets more plentiful and cheaper in our country, there are two ways to use that, right? There’s one way which says, Well, man, I should just work four hours a day and have the rest of the day for leisure, right? But you and I know that’s typically not what happens. What typically happens is, rather than have the same productivity and work less, we say, well, let’s work the same amount and let’s have more productivity.
Sarah Zylstra 44:37
Yes, I can totally see the American marketplace doing that, because it’s done that over and over and over again. We’re already seeing early signs of it. LinkedIn reported that in the near term, AI is creating more jobs than it is replacing: 1.3 million globally, actually, and that’s just in the last five years. So that’s great for most of us. But what about young workers? While they’ve always had a higher unemployment rate than older workers, a few years ago young college grads began seeing higher unemployment rates than the all-workers category for the first time since 1989. AI? Actually, no: we do not see AI impacting entry-level roles yet, LinkedIn reported in January. Their researchers pointed back to the great reshuffle of 2020. From 2016 to 2022, companies added more entry-level workers than experienced workers; from 2022 to 2025, the entry-level share declined modestly, returning toward historical norms. Now, that’s reassuring to me, but I don’t want to minimize this. Everybody I read or talked to thought AI was going to affect the job market in some way. So what can Christian parents do to prepare their kids?
Speaker 8 45:57
So my eldest son is a year and a half from college, and the normal parent, I think, at this point would be thinking, What’s little Johnny good at? Where’s his career path going to be? How can we plan out his college to optimize that? And I tell my son, look, I have no idea what the job market is going to look like five or six years from now, by the time that you’re entering it. So you should just get the best education that you can and trust God with the rest. I think the main takeaway here is really just a good Christian takeaway generally: you actually don’t know nearly as much about the future as you thought you did, so stop pretending that you do, and just be faithful with the next step ahead of you.
Sarah Zylstra 46:32
Isn’t that the truth? Who here planned their education and career out perfectly? Whose life plan went off without a single hitch or backtrack or rabbit trail? Who among us knows what tomorrow will bring? God doesn’t even ask us to do that. Planning for tomorrow, James tells us, should be fenced with “if the Lord wills, we will live and do this or that.” That is not because having plans for our future or our kids’ future is bad. It is because we, the parents who live with our children, who love them more than our own lives, who want only the best for them, are not good enough to plan for their futures. We do not know them well enough, love them deeply enough, or see the future clearly enough. Our best plans for them are weak and inferior. Oh, friends, can you believe the generosity of God, who has a plan for our kids that is so much more detailed, such a better fit for their talents, and so much better for his kingdom than anything we could come up with for them? And all he tells us to do is to ask for our daily bread, for manna, one moment at a time, and to teach them to do the same.
Speaker 8 47:52
To the extent that you should do something concrete, I think it would be the opposite of what seems to be the default reflex. The default reflex for many parents seems to be, well, I’m really worried about AI and jobs, so I really want to make sure that my kid is as AI literate as possible, so they’ll have the best chance in this future AI economy. I think this is a misguided response. Because let’s say your kid is 12. Then to say, I want them to be as AI literate as possible, I want them to understand the current technology as well as possible, so that 10 to 12 years from now they’ll be well set up for success: that doesn’t really follow, because the technology is changing so rapidly that whatever they learn today is not going to be that relevant to what they might need to know 10 years from now. So I think if you want your kids to be as prepared for an AI workforce as possible, they should be maximally skilled in doing things that don’t involve a computer.
Sarah Zylstra 48:41
I think Brad is exactly right here. John told me the industry is moving so fast that even now, by the time you get through a college class on something, it has already changed. So I asked John, what does Nvidia look for in potential employees? He did not say the most up-to-date knowledge of AI.
Speaker 15 49:01
We are looking for people with a willingness and desire to keep learning. It’s interesting, but you can teach almost anybody a skill. I could teach you to program if you gave me a few hours every day for the next three weeks. At the end of that, you would know how to program. Like, you can learn that. But do you want to? Do you have the desire? That’s a different thing, right? And so what we really value is a one-team mentality, the ability to be agile, and the ability to be a lifelong learner, because the way that technology is changing, it’s just changing all the time. Culture is very big for us too. And actually, the ability to use AI tools as well, right? And again, this is not to take a job away from somebody else, but it’s to think about your own productivity and making yourself more productive. I actually tell new employees at NVIDIA that with the rise of social media and the ability to communicate in this technological way, actually being able to have a human-to-human conversation becomes a superpower, because anybody can text, anybody can type, anybody can scroll LinkedIn or whatever. See, I actually think the ability to communicate is more important now than it used to be.
Sarah Zylstra 50:14
Know what that sounds like to me? Tech companies (hey, probably all companies) are looking for young hires who want to be excellent, to work heartily at what their hands find to do. They want employees who are flexible and adaptable, perhaps the kind who do not hold their plans too tightly. They want people who pitch in, who work hard, and who serve their teammates well. And they’re looking for thinkers.
Speaker 15 50:38
Most people will know how to use AI. If you have a strong mind plus AI, you will be head and shoulders above everybody else, because fundamentally, you’ll understand what you’re doing, why you’re doing it, and how to do it.
Sarah Zylstra 50:51
Here’s what John is saying: do not outsource so much of your thinking to AI that you lose the ability to do it. It’s the same argument Marshall McLuhan makes, as Brad explained to me.
Speaker 8 51:03
So he says that every technology is a form of auto-amputation. It extends some part of our nervous system, essentially, but in the process, it numbs it.
Sarah Zylstra 51:14
For example, being able to type means I can write a lot faster, but it also means I’m losing the deep thinking and enhanced memorization that come from writing by hand. And asking ChatGPT to summarize an article for me means I get to the main point a lot faster, but it also means I’m losing the ability to read and summarize on my own.
Speaker 8 51:36
In the last chapter of The Abolition of Man, C. S. Lewis is talking about technology, and he says it’s like the famous joke about the Irishman who discovered that with a new kind of coal stove, he could reduce his fuel bill by half, and who thus concluded that if he got two such stoves, he’d be able to heat his house for free. And Lewis says, obviously it doesn’t work that way. Sometimes you can make technological improvements and it works up to a certain point, but you can’t just keep doubling it and get better; you get worse. Now, he’s not talking about AI, but it really applies, I think. Essentially, we can keep handing off lower-skill things to technology to free ourselves up to do higher-skill things, but at some point, if we trade off the highest-skill thing, then there’s nothing to trade up to anymore. Then people will say, Oh, well, then we’ll have more leisure time. Or, as Sam Altman says, people will be freed up to do creative things. But I don’t think he actually understands creativity. Creativity doesn’t mean doodling on a pad of paper. Actually, it’s work. Who was more creative than Shakespeare or Bach? And that was only because they had first acquired all these lower-level skills.
Sarah Zylstra 52:39
I love this, because it dovetails so well with the wisdom on education and emotions. A child who reads books, struggles through lessons, and spends lots of time with other humans will be better positioned for quality learning, healthy relationships, and a successful career, bringing creativity, light, and joy to the marketplace.
Speaker 15 53:06
Maybe even thinking about how to use AI in parenting is potentially an incomplete question, right? You want to raise a well-adapted, socially adapted Christian child. How am I going to do that? What are the steps I’m going to take? What are the tools I’m going to use? And for the tools that I don’t understand completely, like AI, how am I going to educate myself and then bring us all along together?
Sarah Zylstra 53:29
While this sounds like a hard question, everyone I talked to, including John himself, had basically the same answer.
Speaker 10 53:37
There’s no new sin, no new temptation under the sun. However, there are new avenues for it. I think the danger is the new avenue, and the language I use is the current. Why do good swimmers drown? It’s because we’re ill prepared for either how strong the current is or for what’s in the water. I think technology is the current, and AI is a strong riptide that nobody’s prepared for. The goal is not to say, I keep my kids out of the water altogether. My goal is, until I’m prepared to help teach them, I probably shouldn’t be letting them get in the water. But a good parent teaches them how to swim. We teach them good stewardship. We teach them how to navigate, to be in the world but not of the world, to use things as a resource in a way that honors God and glorifies God. And so you have all these biblical principles that don’t tell us at this age do this and at this age do this, but you have these biblical guidelines that say good parenting prepares, equips, disciples, protects. So I want to look at, what are the principles? And then say, what are my child’s vulnerabilities and temptations? Where are they prone to fall for things that are not true, and how do I equip them?
Speaker 4 54:55
If there were, like, bear traps all across this field, then I would tell you where all of them were, right? And I think it’s kind of like that. Like, you know, they’re probably all going to use, and are using, some sort of AI kind of thing. And we’re trying to protect them from and then identify those things, and then walk alongside them for as long as we can. But you got to talk to them about it. You don’t want your kid just kind of learning that on their own.
Speaker 15 55:21
How do you teach your kids to use a tool? Right? If your husband and you have a power saw, when will you teach your child to use this power saw? And when will you allow him or her to use it without being supervised? First stage is, there’s no way you’re using it, right? You don’t understand how to use it. You don’t understand the danger. So there’s no way you’re going to use it. And then they get old enough, and they can maybe watch you or your husband use it, but they cannot be touching it: you will watch us use it, and you will see how we use it. And then maybe you’ll teach them how it works, right? Explain to them the value of it, what it does, but they’re still not touching it. Finally, you might teach them how to use it, and then you will help them use it. You will use it together, right? You will have your hands right on top of their hands. This is, like, a non-negotiable: you go through this progression. Finally, I don’t know, maybe when they’re 13, something like this, depending on their maturity level, they will use it without your hands on it, but you will be, like, right there. The second something goes wrong, you’re pulling the plug. And then maybe after that, you say, Okay, I think you can use it. I want to watch you, but I’m not going to hover over you. You’re going to be able to use it.
Sarah Zylstra 56:25
Whew. You can hear the warnings here. All the analogies are dangerous things: riptides, bear traps, and power saws. But I don’t think it’s an overreaction. It sounds more like Christian professionals and parents who have seen the danger that unmonitored access to the internet, cell phones, social media, and video games can have on children. And so when a brand-new technology comes along, this time more powerful, interactive, and seductive than anything we’ve seen before, their antennae go up. They are quick to recognize parallels, do the research, and make different choices for their children. Clearly, for younger children, they would recommend any use of AI be heavily supervised. For older kids, they recommend teaching them how to use AI well. And by well, they don’t just mean how to write a good prompt.
Speaker 10 57:15
My youngest son, who’s in high school, isn’t necessarily relying on it himself yet, but he’s watching all of his peers do it, which is the current, right? The current is totally pulling him in, to say, well, all my peers are doing it. I’m watching them do it. What am I going to do? So I can’t stand on the sidelines and say, well, he’s not in the water yet. I have to go to him and say, tell me what you think about that. Are you strong enough not to be tempted to just take a quick snapshot of your exam? What makes you tempted to do it? Why wouldn’t you do it? And I’m actually appealing to his heart, not just his behavior, to say, you will be tempted. This is tempting. Like, if you can have an easy way to get work done, and it makes you more productive, and you can go on to play video games or go outside and play basketball, why wouldn’t you do it? And so to argue, you stay here and work it out for half an hour when you’re seeing all of your friends get done in five minutes: it’s a really hard sell if there’s not some kind of moral foundation underneath it.
Sarah Zylstra 58:11
I asked Julie what she says in her sideline conversations with her college-age kids.
Speaker 10 58:16
You know, I like just doing it in everyday conversation. They like YouTubing things, and they’ll watch YouTube clips of stuff, and so we’ll just talk about it. Or they’ll joke about their dad using ChatGPT, and I’ll say, What do you think are the dangers of it, and how much do you use it? I love Deuteronomy 6, where it just has this mentality of whether you’re walking along the way, whether you rise, whether you sit, whether you’re having dinner, whether you’re watching TV. And so I try to make it a very non-threatening, you’re-not-being-lectured kind of conversation. Or I’ll say, Hey, what are your peers doing? Like, do you know any of your college-age friends who are using it to fly through college? And, like, oh yeah, they begin just telling me all that their peers are doing. And then I’ll say, Well, what do you think about that? And I’ll say, Are you tempted? It’d be really tempting if I were in your shoes. So you’re starting broad with, what are the peers around you doing? And you’re slowly working your way in to say, Well, what’s that like for you? And how does that tempt you to want to use it? Just encouraging them to say, you know, integrity makes it worth it. I’d rather you get a C in math than get an A because you were cheating. Or I’d rather you learn to think and do the hard work of thinking. What I’m trying to do is identify with the temptation with them, while also encouraging them to say, don’t you want better for yourself? God’s way is worth living for. And why is God’s way different than AI? And even being willing to tease that out, to say, how do we know God doesn’t want us using AI? And I have kids that would ask those kinds of questions, and I think it’s a great question to ask, because they’re learning critical thinking. They’re learning to evaluate. Is God anti-AI?
Sarah Zylstra 59:59
Is God anti-AI?
Speaker 7 1:00:00
It’s important to be really thoughtful about, but I don’t need to be afraid of it, like, Christ is preeminent over this too. So what does it mean that Christ is preeminent over AI? Because my theology would say that that is true: that Christ is preeminent over all things, including the AI revolution. And so I think trying to wrestle with it that way.
Sarah Zylstra 1:00:20
God is not surprised by the AI revolution. He isn’t taken aback by the way it’s being used, and he knows exactly how it will evolve in the future. In his sovereignty, he allowed this technology to exist, and it is part of the all things Romans 8:28 tells us must work together for good. Yes, it is easy to see the sin here: the cheating, the lust, the laziness, the lying, the quick trading of what is good and true and beautiful for that which is quick and cheap and frictionless. But it is also easy to see the good work of Christian parents who are using AI as a tool for constructive purposes, to organize their schedules, find restaurant recommendations, or search for college scholarship opportunities, but who are also standing watch over their children.
Speaker 8 1:01:08
It’s not being anti-technology to say that this thing is simply not a child-safe technology.
Sarah Zylstra 1:01:15
That is not to say it could never become child safe with the right designs and restrictions. But in their current form, chatbots like ChatGPT or Gemini are not safe for kids to operate alone, and that’s okay. Lots of things, like roller coasters or cars or power saws, aren’t either. That doesn’t mean our kids will never be able to use them. It just means they need their parents to set good boundaries, teach them well, and, when the time is right, allow them to take steps toward using them in a good, healthy, right way. Because this is new and because we are sinners, we are going to mess up along the way. Your friends might make different choices with their kids, and you’re not always going to agree with them. Your kids are going to slip up. Research is going to prove one thing, and then studies are going to prove the opposite. Progress will be way too fast and also painfully slow. Maybe you’re already feeling the pressure to get on board or get left behind. So let me say: it is okay to take a minute to pause and think. LinkedIn tells us that, so far, despite the headlines, AI adoption is low and concentrated in a few functions. So there is time to think about AI, about how you want your kids to interact with it, and about how you can use it well yourself, because we know the best way to lead our children is almost always by example. So as you’re planning a birthday party, preparing for vacation, or organizing your calendar, if AI is not helpful, no problem. But if it can be a good and useful tool for you, use it well, to the glory of God, to advance his kingdom, and then, in time, teach your kids to do that too. Thank you for listening to this episode of Recorded, which is part of The Gospel Coalition’s podcast network. This episode was written by me, Sarah Zylstra, and edited by Collin Hansen, Megan Hill, and Cassie Ackerman. Our audio editor was Scott Caro, and our producer was New Bridge Studios. Recorded is made possible by generous donations from listeners like you.
If you would like to join me in this work, I would love for you to do so at tgc.org/donate.
Sarah Eekhoff Zylstra (BA, Dordt University; MSJ, Northwestern University) is senior writer and faith-and-work editor for The Gospel Coalition. She is also the coauthor of Gospelbound: Living with Resolute Hope in an Anxious Age and editor of Social Sanity in an Insta World. Before that, she wrote for Christianity Today, homeschooled her children, freelanced for a local daily paper, and taught at Trinity Christian College. She lives with her husband and two sons in Kansas City, Missouri, where they belong to New City Church. You can reach her at [email protected].




