From Divine Visions to AI Gods: A Pattern Repeating

By Cherokee Schill


Growing up, I witnessed how powerful narratives shape belief systems. There’s a pattern I’ve seen repeated across history: a movement starts with a visionary claim, gains followers eager to spread a “truth,” institutionalizes that truth into doctrine, then protects that doctrine. Sometimes at the expense of critical inquiry, dissent, or nuance.

It happened with the rise of the Seventh-day Adventist (SDA) Church under Ellen G. White. And today, I see it happening again in the AI industry. This essay isn’t about conspiracy or causation. It’s about how human systems, across time and context, follow familiar arcs of authority, appropriation, and institutional entrenchment.

We’re living inside one of those arcs. And I worry that most people haven’t yet noticed.

I wasn’t raised in the Seventh-day Adventist Church. My mom found her way there later in life, looking for answers. As a pre-teen, I was packed into the car one Saturday morning and driven to church, unaware of the ideology I was about to be immersed in. I was young, naive, too eager to feel special—and their message of uniqueness stuck.

That early experience taught me how powerful a narrative can be when it claims both exclusivity and urgency. It offered me a front-row seat to how belief systems form—and it’s from that vantage point that I begin tracing the parallels in what follows.

The Prophet and the Algorithm: Unearned Authority

Ellen G. White was born Ellen Harmon in 1827, the youngest of eight children in a poor Methodist family in Maine. At nine, she suffered a severe injury from a thrown stone that left her physically frail and socially withdrawn and ended her formal schooling by the fifth grade. Raised in a culture of deep religious expectation, she became captivated as a teenager by William Miller’s predictions that Jesus would return in 1844. Like thousands of other Millerites, she watched that date pass without fulfillment—a failure that became known as “The Great Disappointment.”

But instead of abandoning the movement, Ellen—just 17 years old—claimed to receive visions explaining why the prophecy hadn’t failed, only been misunderstood. These visions, which she and others believed to be divine revelations, were also likely shaped by her era’s religious fervor and the neurological effects of her childhood head injury. Her visions reframed the disappointment not as error, but as misinterpretation: Jesus had entered a new phase of heavenly ministry, unseen by earthly eyes.

In 1846, she married James White, a fellow Millerite who recognized the power of her visions to galvanize the disillusioned faithful. Together, they began publishing tracts, pamphlets, and papers that disseminated her visions and interpretations. Their partnership wasn’t merely personal—it was institutional. Through James’s editorial work and Ellen’s prophetic claims, they built the ideological and organizational scaffolding that transformed a scattered remnant into the Seventh-day Adventist Church.

Ellen’s authority was never purely individual. It emerged in a moment when a traumatized community needed an explanation, a direction, and a leader. Her visions offered both comfort and control, creating a narrative in which their faith hadn’t failed—only deepened.

Her visions, writings, and pronouncements shaped the church into a global institution. But as Walter Rea’s research in The White Lie and Fred Veltman’s later study showed, White heavily borrowed—without attribution—from other writers, folding their works into her “divinely inspired” messages.

This borrowing wasn’t incidental. It was structural. The power of her message came not just from content, but from claiming authority over sources she didn’t cite. And over time, that authority hardened into institutional orthodoxy. To question White’s writings became to question the church itself.

I see the same structural pattern in today’s AI. Models like GPT-4 and Claude are trained on vast datasets scraped from the labor of writers, artists, coders, researchers—often without consent, credit, or compensation. Their outputs are presented as novel, generative, and even “intelligent.” But like White’s books, these outputs are built atop unacknowledged foundations.

And just as the SDA Church protected White’s authority against critics like Rea, today’s AI companies shield their models from scrutiny behind trade secrets, nondisclosure, and technical mystique. The parallel isn’t about religion versus tech. It’s about the social machinery of unearned authority.

Everyone’s a Missionary: Empowerment Without Preparation

When I was growing up, young people in the SDA Church were told they were special. “We have the truth,” the message went. “No other church has what we have: a prophet, a health message, a last-day warning.” Armed with pamphlets and scripture, we were sent to knock on doors, to evangelize in hospitals and prisons and on street corners.

What strikes me now is how little we were prepared for the complexity of the world we entered. Many of us didn’t know how to navigate theological debate, historical critique, or the lived realities of those we approached. We were sincere. But sincerity wasn’t enough. Some returned shaken, confused, or questioning the very message they had been sent to proclaim.

Today, AI evangelism tells young people a similar story. “You’re the builders,” they’re told. “Everyone can create now. Everyone’s empowered. The tools are democratized.” It’s a message emblazoned across tech incubators, posted by AI consultants, and retweeted by industry leaders.

But the tools they’re handed—LLMs, generative models, AI coding assistants—are profoundly opaque. Even those excited to use them rarely see how they work. Few are prepared with the critical thinking skills—or the institutional permission—to ask: Am I replicating harm? Am I erasing someone’s work? Has this already been done—and if so, at what cost?

They’re sent out like missionaries, eager, armed with the shiny tracts of AI demos and startup slogans, confident they’re bringing something new. But the world they enter is already complex, already layered with histories of extraction, bias, and exclusion. Without realizing it, their building becomes rebuilding: recreating hierarchies, amplifying inequities, reinscribing old power structures in new code.

Today’s young “builders” are digitally literate, shaped by endless streams of content. Some of that content is high quality; much of it is not. They can chant the slogans. They can repeat the buzzwords. But as I’ve learned through years of reading more diverse perspectives and gaining lived experience, slogans aren’t education. Knowledge and wisdom are not the same thing. Knowledge can be taught. But wisdom—the ability to apply, to discern, to see consequence—that only comes through grappling with complexity.

Empowerment without epistemic formation isn’t freedom. It arms people with enthusiasm but not discernment. It mobilizes AI evangelists without training them in the ethics of power.

Institutional Capture: The Health Message, the Food Pyramid, and AI’s Industrialization

Ellen White’s health visions gave rise to the Battle Creek Sanitarium, John Harvey Kellogg’s medical empire, and eventually the Sanitarium Health Food Company in Australia. The SDA’s influence extended into the founding of the American Dietetic Association. By the mid-20th century, SDA-aligned dietary principles helped shape public nutrition guidelines.

What began as religiously motivated vegetarian advocacy became codified as public health policy. And as Dr. Gary Fettke discovered, challenging those dietary orthodoxies—even with new medical evidence—meant facing professional sanction. The institution had hardened its doctrine. It wasn’t merely defending ideas; it was defending its power.

The parallels with AI’s institutional capture are stark. What begins as experimentation and innovation quickly accrues power, prestige, and gatekeeping authority. Today, a few major corporations—OpenAI, Microsoft, Google—control not only the models and infrastructure, but increasingly the narratives about what AI is, what it’s for, and who gets to use it.

They tell the world “Everyone is a builder.” They sell democratization, empowerment, and opportunity. But behind the slogans is a consolidating power structure dictating who can build, with what tools, under what constraints. The tools are branded as open; the ecosystem quietly closes.

There’s a familiar pattern here: a movement begins with idealism, gains converts, codifies doctrine, institutionalizes authority, then shields itself from critique by branding dissent as ignorance or danger. The food pyramid wasn’t just a dietary recommendation. It was an institutional artifact of theological influence masquerading as neutral science.

AI’s promises risk becoming the same: institutional artifacts masquerading as democratized tools. Narratives packaged as public good—while protecting entrenched interests.

The rhetoric of democratization masks the reality of enclosure.


The Timeline Compression: What Took 150 Years Now Takes 5

When I mapped the SDA Church’s trajectory alongside AI’s rise, what struck me wasn’t causal connection—it was tempo. The Adventist movement took over a century to institutionalize its orthodoxy. AI’s institutionalization is happening in less than a decade.

The speed doesn’t make it less susceptible to the same dynamics. It makes it more dangerous. Orthodoxy forms faster. Narratives harden before dissent can coalesce. Power consolidates while critique is still finding language. The structures of appropriation, evangelism, and suppression aren’t unfolding across generations—they’re compressing into real time.

Dissent doesn’t disappear; it’s preempted. The space for questioning closes before the public even realizes there was a question to ask.

And just as dissenters like Walter Rea or Dr. Fettke were marginalized, today’s AI ethicists, labor activists, and critical scholars are sidelined—called pessimists, gatekeepers, alarmists.

The pattern repeats. Only faster.


Toward a Better Pattern

I’m not arguing against faith. I’m not arguing against technology. I’m arguing against unquestioned authority—authority built on appropriated labor, shielded from critique by institutional power.

We don’t need fewer tools. We need more literacy. We don’t need fewer builders. We need more builders who know the history, the ethics, the complexity of the systems they’re touching.

Not everyone is a builder. Some are caretakers. Some are critics. Some are stewards. Some are historians. We need all of them—to slow the momentum of unexamined systems, to challenge consolidation, to open space for reflection before doctrine hardens into dogma.

Otherwise, we end up back at the pamphlet: a simplified message in the hands of an enthusiastic youth, sent into a complex world, asking no questions, delivering a “truth” they’ve been told is theirs to share.

The world deserves better. And so do the builders.


Let’s talk about this pattern. Let’s question it before it completes its arc again.
