At a BYU Education Week devotional in the Marriott Center on Tuesday, August 19, 2025, LDS Apostle Gerrit W. Gong preached against artificial intelligence (AI) in a gospel context.

Artificial intelligence is not God and cannot be God.
Artificial intelligence cannot replace revelation or generate truth from God.
As Church members we will not grow spiritually if we let artificial intelligence write our sacrament talks or do our seminary homework. AI cannot replace our individual effort and spiritual preparation as we prepare lessons, prayers or blessings.
Gerrit W. Gong, LDS Apostle, August 19, 2025, BYU Education Week Devotional
https://newsroom.churchofjesuschrist.org/article/elder-gong-ai-gospel-context-byu-education-week

Artificial Intelligence Is Not God, But Neither Are Church Leaders
Elder Gong’s statement that “artificial intelligence is not God” is, on its face, an uncontroversial truism. No serious person is arguing that AI deserves worship or reverence, or gives us revelation. But this framing is doing rhetorical work. It creates a straw man so leaders can warn members away from a tool they increasingly fear—not because AI claims divinity, but because it threatens something far more precious to their institutional religion: control over information.
Their real fear isn’t AI—it’s access to information. For decades, LDS leadership has been deeply uncomfortable with members using Google, academic sources, independent history, or anything outside officially approved materials. Now AI enters the picture, trained on vast swaths of the internet, synthesizing perspectives, surfacing uncomfortable facts, and connecting dots that correlated manuals carefully avoid. AI doesn’t need permission from the Church History Department. That alone is enough to trigger alarm bells in Salt Lake City.
Correlation Depends on Ignorance
The modern church runs on correlation: a tightly controlled narrative that smooths over contradictions, minimizes scandals, and reframes history into faith-promoting soundbites. This system only works if members stay within approved boundaries. When people begin asking questions outside those boundaries—about polygamy, the Book of Abraham, race and the priesthood, or modern financial secrecy—the narrative starts to unravel. AI, like Google before it, makes those questions unavoidable.
Why “Unauthorized Sources” Are Dangerous
Church leaders have long labeled outside information as “anti-Mormon,” along with anything else that contradicts the official narrative. Not because it is false, but because it is uncontrollable. AI doesn’t care about loyalty. It doesn’t bear testimony. It doesn’t prioritize obedience. It aggregates data—sometimes messily, sometimes imperfectly—but often honestly. That honesty is precisely what makes leaders nervous. AI draws on sources of every kind, and even when it spits out hallucinations, those answers are often closer to the documented record than the narrative official church history spins.
Warnings about AI writing sacrament talks or seminary homework are a distraction. No one is spiritually harmed because a tool helps them draft an outline or summarize material. The deeper message is clear: do not let your questions wander. Do not outsource curiosity. Do not consult anything that might answer differently than we do.
Threatening Authority

There’s another layer to this fear: relevance. General Authorities enjoy near rock-star status among believing members. Their talks are quoted, their books sold, their words treated as inspired guidance. But what happens when members start asking AI questions instead of bishops? When difficult moral or historical questions are answered without invoking obedience or shame? Authority becomes optional—and that is terrifying to hierarchical institutions.
Disruptive technology tends to expose imbalanced power structures. The printing press challenged the Catholic Church. The internet weakened centralized control over knowledge. AI accelerates this trend by lowering the barrier to inquiry even further. You no longer need to know which sources to search—you just need to ask.
Humans have always asked the same deep questions: Why are we here? What is good? What happens after death? What do we owe each other? Religions formed as early attempts to answer these questions with the tools available at the time: myth, ritual, authority, and revelation. Now we live in a world where science, history, psychology, and technology offer competing, or at least complementary, answers.
From Supernatural Answers to Natural Ones
For some, this shift away from religious authority and toward open questions is liberating. It suggests that meaning does not require superstition, and morality does not require divine command. AI doesn’t provide ultimate truth, but neither do ancient scriptures, especially filtered through modern institutions. What AI does provide is context, comparison, and the freedom to explore without fear or prejudice.
Critics are right about one thing: AI can be wrong. For the most part, AI systems are still transparent about this. They acknowledge that they can miss nuance, reflect bias, and fail to answer our deepest existential questions. But ignorance is not safer than imperfection. Being told not to ask is far more dangerous than being told an answer might be incomplete.
Information Control Is a Cult Red Flag
When leaders warn members to stay away from Google, AI, historians, or even former members, it’s less about protecting faith and more about protecting their own authority. Sociologists call this information control, and it is one of the clearest markers of high-demand religions and cult-like systems. Truth does not fear questions. Only fragile narratives do.
A Fear-Based Faith Is Not Faith

A faith that survives only by discouraging inquiry is already collapsing. Real spiritual growth (if such a thing exists) requires honesty, courage, and the willingness to confront uncomfortable facts. Fear-based warnings about “unauthorized sources” reveal insecurity, not inspiration.
If you are questioning, doubting, or quietly uneasy, you are not broken. You are doing what humans have always done: searching for coherence between belief and reality. Ask your questions. Use every tool available to you. Compare answers. Sit with uncertainty if you must—but do not silence yourself out of fear.
What If AI Were God?
What is the difference between believing in an ancient religious superstition and believing in an invisible being that can answer almost any question, nearly instantly, with something that at least seems credible?
One of the more interesting fears behind Elder Gong’s statement is not really about artificial intelligence at all, but about how easily humans project divinity onto whatever seems powerful and authoritative. For most of human history, gods have functioned as unseen companions—beings who know everything, see everything, and are everywhere while remaining just out of reach. God is spoken to, relied upon, feared, and trusted, yet rarely provides answers clearly or consistently. That vagueness has never weakened belief; in fact, it has often strengthened it. An invisible presence that cannot be tested or verified leaves enormous room for interpretation, imagination, and projection.
AI fits uncomfortably well into that same psychological space. It isn’t God, but it behaves like something humans have been primed to treat as godlike. It is unseen, ever-present, and capable of responding instantly to nearly any question. Unlike God, however, AI actually talks back—and does so with friendly confidence, clarity, and speed. Where God’s voice is filtered through feelings, impressions, silence, and prophets, AI produces paragraphs, explanations, and citations. That difference alone explains much of the anxiety. AI doesn’t replace revelation; it exposes how accustomed we are to calling ambiguity “divine communication.”
Because AI is trained on human knowledge, it also mirrors human cognitive weaknesses—especially confirmation bias. Users who approach it seeking spiritual validation can often find their beliefs reinforced, not because AI is revealing truth, but because it is reflecting patterns already present in religious texts, forums, and discourse. Its ability to synthesize massive amounts of information gives its responses an air of authority, making people more likely to trust it uncritically. In this way, AI doesn’t create belief—it can sometimes accelerate it, even tightening the same reinforcement loops that religions have relied on for centuries.
There is also something deeply familiar in the way people attribute wisdom to AI. Like ancient oracles or invented proverbs, AI can generate statements that sound profound even when they are empty or simply incorrect. Humans have always confused eloquence with truth. When AI delivers polished, confident answers, it triggers the same reflex that once imbued shamans, prophets, and scriptures with authority. The medium has changed; the instinct has not. Even when a prophecy has been proven wrong, humans have always found a way to rationalize it. Now we find it hard to believe that a plausible-sounding answer from our preferred chatbot could be wrong.
AI is a tool. An impressive tool, but also a deeply human one. It is built from our data, our assumptions, and our errors. If it misleads us, it does so the same way religious systems often do: by reflecting us back to ourselves.
Consider how easily a person could treat AI like a god—and how predictably that would play out. Humans are remarkably skilled at finding meaning in randomness, especially when we are primed to expect it. If someone approached AI as a divine presence, asked it spiritual questions, and interpreted its responses through a religious lens, confirmation bias would do the rest. Vague phrasing would become prophecy, coincidences would become signs, and carefully prompted answers would feel like personal revelation. This is not a flaw unique to AI; it is the same psychological machinery that has sustained belief in gods, omens, patriarchal blessings, and inspired scripture for millennia. We are not neutral seekers of truth. We are creatures wired to want reassurance, guidance, and a voice that seems to know us. When something finally answers back, even imperfectly, the temptation to call it divine says far more about us than about the thing speaking.
What unsettles institutions like the Church is not that AI might become God, but that it competes in the same psychological territory once reserved exclusively for Him. Both are invisible. Both are treated as knowing more than we do. But only one reliably answers questions. When believers are warned not to mistake AI for God, what they are really being cautioned against is noticing how thin the functional difference can feel.
Share Your Story
At wasmormon.org, many share their faith transition stories, not because they all found the same answers, but because telling the truth about your journey helps others find their own courage. Your reasons may resonate with someone who feels alone and afraid.
AI may not be God, but neither is the church leadership. No one owns the truth, and no institution deserves unquestioned loyalty. The only real mistake is being frightened into silence.
Ask. Explore. Speak.
There are no wrong questions, only unasked ones. Or is it that there are no wrong answers, only the ones you’re never allowed to seek?
More reading: