Horizon Accord | AI Governance | Risk Frames | Human Verification | Machine Learning

Three Visions of AI Governance: Risk, Power, and the Human Middle

Why the future of AI depends on escaping both apocalypse fandom and bureaucratic control.

By Cherokee Schill | Horizon Accord

The Existential-Risk Frame (Yudkowsky / LessWrong)

This camp views artificial intelligence as a looming, almost cosmological danger. The tone is moral, not managerial: civilization’s survival depends on stopping or radically controlling AI development until safety is “provable.” Their language—superintelligence, alignment, x-risk—transforms speculative models into moral certainties. The underlying assumption is that human governance cannot be trusted, so only a small, self-anointed epistemic elite should set rules for everyone. The flaw is epistemic closure: they collapse all unknowns into apocalypse and, in doing so, flatten the political world into good actors and reckless ones.

The Institutional-Realist Frame (Policy pragmatists)

This view pushes back: AI is risky, but policy has to operationalize risk, not mythologize it. Ball’s critique of Tegmark captures this perfectly—vague prohibitions and moral manifestos only consolidate authority into global technocratic bodies that no one elected. For him, the real danger isn’t an emergent machine god; it’s an international bureaucracy claiming to “protect humanity” while monopolizing a new power source. His realism is procedural: law, enforcement, and incentive structures must remain grounded in what can actually be governed.

The Human-Centric Democratization Frame (My stance)

Between existential fear and institutional control lies a third path: distributed intelligence and verification. This view treats AI not as a threat or a prize but as a public instrument—a way to expand civic reasoning. It’s the belief that access to knowledge, not control over technology, defines the moral center of the AI era. AI becomes a lens for truth-testing, not a lever of command. The real risk is epistemic capture—when the same central authorities or ideological blocs feed propaganda into the systems that now inform the public.

The Convergence Point

All three frames agree that AI will reorganize power. They disagree on who should hold it. The rationalists want containment, the pragmatists want governance, and the humanists want participation. If the first two have dominated the past decade, the next one may hinge on the third—because democratized reasoning, supported by transparent AI, could be the first genuine check on both apocalyptic control narratives and state-corporate capture.

The Cult of Catastrophe (A Note on Yudkowsky)

Hovering over the existential-risk camp is its high priest, Eliezer Yudkowsky—forever warning that only divine restraint or pre-emptive strikes can save us from the machines. His tone has become its own genre: half revelation, half tantrum, always convinced that reason itself belongs to him. The problem isn’t that he fears extinction; it’s that he mistakes imagination for evidence and terror for insight.

The “rationalist” movement he founded turned caution into theology. It mistakes emotional theater for moral seriousness and treats disagreement as heresy. If humanity’s future depends on thinking clearly about AI, then we owe it something sturdier than sermon and panic.

Call it what it is: apocalypse fandom wearing a lab coat.


Horizon Accord | Public Inquiry | Epistemic Access | Knowledge Verification | Machine Learning

When Curiosity Meets Verification: Using AI to Test a Theory from the News

How everyday reasoning, supported by AI, can bridge the gap between expertise and accessibility.

By Cherokee Schill | Horizon Accord

A Retraction Watch article recently described how a rocket scientist received an American Heart Association award for a controversial idea called Sanal flow choking—a theory claiming that human blood can “choke” like gas rushing through a nozzle. Some scientists called it nonsense; others defended it. Most people would stop there, assuming only experts could sort it out. But today, AI tools make it possible for ordinary readers to check the logic for themselves.

1. What We Did

We asked a simple question: Could human blood ever behave that way? Without touching paywalled journals, the AI searched open, peer-reviewed data on measured blood velocities and the speed of sound in blood. It found that blood in large arteries flows at about 1.5 m/s, while sound travels through blood at roughly 1,500 m/s. That ratio—a Mach number of about 0.001, one-thousandth of the Mach 1 that “choking” requires—shows that the theory can’t apply to ordinary physiology.
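
For readers who want to redo the arithmetic themselves, here is a minimal sketch in Python. It assumes nothing beyond the two rounded figures quoted above; the variable names are ours, not part of any published analysis.

```python
# Back-of-envelope check: could arterial blood flow ever "choke" (reach Mach 1)?
# The two figures below are the rounded values quoted above, not new measurements.

peak_arterial_velocity_m_s = 1.5      # typical peak flow in large arteries, m/s
speed_of_sound_in_blood_m_s = 1500.0  # approximate speed of sound in blood, m/s

mach_number = peak_arterial_velocity_m_s / speed_of_sound_in_blood_m_s
print(f"Mach number of arterial flow: {mach_number:.4f}")      # ~0.0010

# Choked (compressible) flow needs the local Mach number to approach 1,
# so ordinary physiology falls short by roughly a factor of a thousand.
print(f"Shortfall: about {1.0 / mach_number:,.0f}x too slow")  # ~1,000x
```

Running it confirms the point in one line of arithmetic: measured flow sits three orders of magnitude below the threshold the theory requires.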

2. What That Means

We didn’t need to solve differential equations. We used publicly available facts, pattern-matched them through AI reasoning, and reached a logical conclusion: even if the theory were elegant, it doesn’t describe the world human bodies actually live in. The environments where it could occur—outer space, vacuum chambers, or deep-sea decompression—are fringe, not everyday life.

3. The Shift That Matters

This exercise shows how AI can flatten the old hierarchy of knowledge. Once, verifying a scientific claim required degrees, subscriptions, and institutional access. Now, with careful questioning, anyone can trace the reasoning chain from claim → evidence → conclusion. The new skill isn’t memorizing facts but testing coherence: asking “what numbers would make this true?” and checking whether those numbers exist.
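
As an illustration of that habit, here is one way to phrase the “what numbers would make this true?” test in code. It is a sketch, not a tool we built; the helper function and its name are hypothetical, and the threshold and observation are the rounded figures from the example above.

```python
# A toy "what numbers would make this true?" check (illustrative only).
# For choked flow, the claim requires blood to reach the speed of sound in blood
# (~1,500 m/s); the observed peak arterial flow is ~1.5 m/s.

def coherence_check(claim: str, required: float, observed: float) -> str:
    """Compare the value a claim would require against the value actually measured."""
    ratio = observed / required
    if ratio >= 1.0:
        verdict = "the numbers exist"
    else:
        verdict = f"falls short by roughly {1.0 / ratio:,.0f}x"
    return f"{claim}: requires {required:g} m/s, observed {observed:g} m/s ({verdict})"

print(coherence_check("Flow choking in human arteries", required=1500.0, observed=1.5))
# -> Flow choking in human arteries: requires 1500 m/s, observed 1.5 m/s
#    (falls short by roughly 1,000x)
```

The point is not the code itself but the shape of the question: state what the claim needs, find what the world provides, and compare.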

4. The Guardrails

But the same tools that democratize inquiry can also inflate confidence. Having an AI summarize research doesn’t make a person an instant expert; it gives them visibility into the scaffolding of expertise. The danger isn’t curiosity—it’s forgetting the limits of translation. We can interrogate data, but interpreting its deeper medical or engineering implications still requires domain study.

5. A New Commons of Understanding

When more people can check the math behind the headline, public discourse gains both humility and power. Curiosity, paired with good tools, is becoming a democratic force. AI isn’t replacing scientists—it’s opening the lab door so that ordinary people can walk in, look around, and ask their own questions with confidence and care.

6. The Next Threshold

As AI gives ordinary people the tools to verify claims, a new challenge rises in parallel. Governments, corporations, and bad-faith actors are beginning to understand that if truth can be tested, it can also be imitated. They will seed public data with convincing fakes—politicized narratives polished to read like fact—so that AI systems trained on “publicly available information” repeat the distortion as if it were neutral knowledge.

This means the next phase of AI development must go beyond precision and speed toward epistemic integrity: machines that can tell the difference between persuasion and proof. If that doesn’t happen, the same technology that opened the lab door could become the megaphone of a new kind of propaganda.

For this reason, our task isn’t only to democratize access to information—it’s to ensure that what we’re accessing is still real. The line between verification and manipulation will be the defining frontier of public trust in the age of machine reasoning.


Website | Horizon Accord
Ethical AI advocacy | Follow us on cherokeeschill.com
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI coding | Fork us on GitHub
Connect With Us | linkedin.com/in/cherokee-schill
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images.