Horizon Accord | Pattern Analysis | March 2026

The Network Behind the Moderate

MIRI, Thiel, Yarvin, and the AI Extinction Myth

BY CHEROKEE SCHILL  |  HORIZON ACCORD

This essay is the second in a series. The first, “The Explainer: Hank Green and the Uses of Careful Men,” documented the institutional funding ecology that produces voices fluent in progressive concern without structural accountability. This essay follows that thread to its destination.

I.

Where the Thread Goes

If the first essay was about how a certain kind of voice gets built and maintained, this one is about what that voice was built to carry — and who benefits when it carries it.

In late 2025, Hank Green published two videos about artificial intelligence. The first was an hour-long interview with Nate Soares. The second argued for a version of AI alignment that, as analyst Jason Velázquez observed, “sounds like the talking points Sam Altman and other tech CEOs have been reciting to Congress.” Both videos were produced in partnership with an organization called Control AI. Control AI did not sponsor the videos in the conventional sense — placing an ad in the middle of content the creator chose independently. The videos were the advertisement.

And then, in February 2026, Senator Bernie Sanders flew to Berkeley to sit down with Eliezer Yudkowsky and Nate Soares to discuss what their circle calls “the extinction threat posed by the race to build superhuman AI systems.”

Two of the most trusted progressive voices in America, in the span of a few months, validated the same network. If you only read the headlines, that looks like responsible engagement with a serious issue. This essay is about what it actually looks like when you follow the money.

II.

What the Lay Reader Needs to Understand First

Before the funding trail, before the ideology, before the legislation — one concrete fact.

Right now, today, AI systems are making decisions about your life. Whether you get called back for a job interview. Whether your health insurance claim is approved. Whether an algorithm flags you to a parole board. Whether a school district uses license plate data to decide if your child lives in the right district. These are not hypothetical future harms. They are documented, present-tense operations running on systems that have known bias problems and, until very recently, were subject to a growing body of state law designed to protect you from them.

In 2025 alone, all 50 states introduced AI-related legislation. Thirty-eight states adopted or enacted such laws — covering consumer protection, health care, employment, and financial services, specifically including requirements to mitigate algorithmic bias and protect against unlawful discrimination.

Those laws are now under federal litigation.

On December 11, 2025, the Trump administration established an AI Litigation Task Force within the Department of Justice to challenge state AI laws. The administration simultaneously directed the FTC to classify state-mandated bias mitigation as a per se deceptive trade practice — arguing that if an AI model is trained on data that reflects societal patterns, forcing developers to alter outputs to correct for bias compels them to produce less “truthful” results.

Under the legal theory now being advanced by the federal government: correcting for bias is lying. The discrimination is the data. The harm is the baseline.

The people those 38 state laws were designed to protect are not a racial category and they are not a future species. They are everyone who cannot opt out of AI-mediated systems — which is to say, everyone who is not wealthy enough to live outside them.

When Hank Green tells his millions of progressive followers that MIRI represents the serious, expert position on AI risk, and when Bernie Sanders legitimizes that same network by flying across the country to sit with its founders, they are — without knowing it, without intending it — lending credibility to the ideological framework that has been used, in concrete legislative terms, to argue that protecting you from those systems is the real danger. That is what this essay is about. Now follow the money.

III.

The Book, the Network, the Funding

Nate Soares is the president of the Machine Intelligence Research Institute — MIRI. He co-authored If Anyone Builds It, Everyone Dies with Eliezer Yudkowsky, MIRI’s founder. The book argues that the development of superintelligent AI will result in human extinction unless immediately halted through international agreement, and proposes that it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring — at a time when frontier training runs use tens of thousands.

This is the organization Hank Green’s audience was asked to take seriously. This is the organization Bernie Sanders flew to Berkeley to meet.

MIRI: Documented Major Funding Sources

| Donor | Amount |
| --- | --- |
| Open Philanthropy (Dustin Moskovitz / Facebook) | $14.7M+ |
| Vitalik Buterin (Ethereum co-founder) | $5.4M |
| Thiel Foundation (Peter Thiel) | $1.63M |
| Jaan Tallinn (Skype co-founder) | $1.08M |

As recently as 2014, Thiel pledged $150,000 to MIRI unconditionally, plus an additional $100,000 in matching funds — and the fundraiser announcement explicitly noted that MIRI used those funds partly to introduce elite young math students to effective altruism and global catastrophic risk frameworks. The pipeline from donor to ideology to the next generation of believers was documented in MIRI’s own public materials.

The Center for AI Safety — the organization whose Statement on AI Risk Green cited in his videos — spent close to $100,000 on lobbying in a single quarter, drawing money from organizations with close ties to the AI industry. These are not neutral scientific institutions. They are billionaire-funded lobbying infrastructure wearing the clothes of existential concern.

IV.

The Thiel Thread

Peter Thiel is not a background figure in this story. He is its connective tissue.

In The Contrarian: Peter Thiel and Silicon Valley’s Pursuit of Power, reporter Max Chafkin describes Curtis Yarvin as the “house political philosopher” of the “Thielverse” — the network of technologists in Thiel’s orbit. In 2013, Thiel invested in Tlön, Yarvin’s software startup. According to Yarvin, he and Thiel watched the returns of the 2016 presidential election together.

Curtis Yarvin, writing under the pen name Mencius Moldbug, is the founder of neoreaction — the movement some call the “Dark Enlightenment.” He has defended the institution of slavery, argued that certain races may be more naturally inclined toward servitude than others, asserted that whites have inherently higher IQs than Black people, and opposed U.S. civil rights programs.

Documented Timeline

2006 — Thiel Foundation begins funding MIRI ($100K matching gift)

2013 — Thiel invests in Tlön Corp., Yarvin’s software startup

2016 — Yarvin attends Thiel’s election night party in San Francisco

2022 — Thiel donates $10M+ to super PACs supporting JD Vance and Blake Masters

Jan. 2025 — Yarvin is a feted guest at Trump’s “Coronation Ball”

Late 2025 — Hank Green publishes two videos validating MIRI’s framework

Dec. 2025 — Trump signs executive order targeting state AI regulations

Feb. 2026 — Bernie Sanders flies to Berkeley to meet with Yudkowsky and Soares

The line is direct and documented: Thiel funds MIRI. Thiel is the patron of Yarvin. Yarvin’s philosophy is now operating inside the executive branch through Vance and the network that surrounds him. This is not a conspiracy theory. It is a funding trail and a documented set of relationships with named participants and verifiable dates.

V.

Why Racism Is the Wrong Frame — and the Right One

The academic critique of longtermism has correctly identified its ideological roots.

Timnit Gebru has documented that transhumanism was linked to eugenics from the start: British biologist Julian Huxley, who coined the term transhumanism, was also president of the British Eugenics Society in the 1950s and 1960s. Nick Bostrom, the “father” of longtermism, has expressed concern about “dysgenic pressures” as an existential threat — essentially worrying that less intelligent people might out-breed more intelligent people. In an email in which he used the N-word, Bostrom wrote that he believed it was “true” that “Blacks are more stupid than whites.” He issued an apology but did not redact the slur or address the substance of his views. Nick Beckstead, an early contributor to longtermism, argued that saving a life in a rich country is substantially more important than saving a life in a poor country because richer countries have more innovation and their workers are more economically productive.

That critique is accurate. It is also, for the purposes of this essay, insufficient — not because it overstates the racism, but because it understates the mechanism.

The white moderate, as King observed, is not moved by arguments about what is happening to other people. He is moved, or not moved, by what he understands to be happening to everyone. The genius of the extinction frame is that it speaks directly to that psychology. It says: this is not a Black problem, or a poor problem, or a worker problem. This is a species problem. It is happening to you too.

“Talking about human extinction, about a genuine apocalyptic event in which everybody dies, is just so much more sensational and captivating than Kenyan workers getting paid $1.32 an hour, or artists and writers being exploited.”
— Émile Torres, former longtermist and critic of the movement

The racism in longtermism’s foundations is not incidental. It is the philosophical infrastructure for a class project. Bostrom’s “dysgenic pressures,” Beckstead’s hierarchy of lives, Yarvin’s defense of slavery — these are not aberrations. They are the logical premises: some lives are more valuable to the future than others. Some people are worth protecting. The rest are externalities.

The extinction frame rebrands that premise as universal concern. It makes the same hierarchy legible to people who would reject it if they saw it clearly.

This is why the racism frame alone is insufficient. White moderates — Hank Green’s audience, Bernie Sanders’ base — will hear “longtermism has racist roots” and file it under “things happening to other people.” What they need to understand is that the hierarchy doesn’t stop at race. Beckstead’s formulation is the tell: it’s not about skin color. It’s about economic productivity. It’s about who the system considers worth protecting. And on that metric, most of the people reading this essay are also expendable.

VI.

The Preemption Payoff

Return now to the state laws.

When 38 states passed legislation requiring AI systems to mitigate algorithmic bias, they were protecting a specific, concrete class of people: everyone who cannot afford to live outside AI-mediated decision-making. That means people whose job applications go through automated screening. People whose insurance claims are processed by predictive models. People whose children’s school enrollment is determined by surveillance data. People whose bail hearings are influenced by risk-scoring algorithms.

The Trump administration’s legal argument against those laws — that correcting for bias is a form of deception — is not a novel theory. It is Bostrom’s premise wearing a suit. The data reflects reality. Reality has a hierarchy. Interfering with that hierarchy is dishonest.

After significant media scrutiny and bipartisan opposition, the Senate voted 99-1 to strip a proposed 10-year moratorium on state AI regulations from the “One Big Beautiful Bill Act.” Congress then declined to enact a similar moratorium through the 2025 National Defense Authorization Act. The administration turned to executive action instead. A bipartisan coalition of 36 state attorneys general warned Congress that “federal inaction paired with a rushed, broad federal preemption of state regulations risks disastrous consequences for our communities.”

The extinction debate did not cause this. But it created the conditions in which this could happen with minimal progressive resistance — because the progressives who might have organized against it were busy being worried about a hypothetical future AI god, validated in that worry by the science communicators and senators they trust most.

VII.

What Hank Green and Bernie Sanders Actually Did

Neither Hank Green nor Bernie Sanders is a villain in this story. That point is not a courtesy. It is analytically important.

Green almost certainly believes he was doing responsible science communication. Sanders almost certainly believes he was taking AI risk seriously in a way his colleagues have refused to. Both of them were, in their own terms, doing the right thing.

That is precisely the problem.

When the most trusted progressive science communicator in America validates MIRI’s framing to millions of followers, he is not providing cover for a right-wing project. He is doing something more consequential: he is making that framing feel like the responsible, informed, progressive position. He is telling his audience — implicitly, by the act of platforming without critical examination — that the people worried about extinction are the serious ones, and the people worried about algorithmic discrimination in your doctor’s office are working on a lesser problem.

When Bernie Sanders flies to Berkeley to sit with Yudkowsky and Soares, he performs the same function at a different scale. Sanders has spent his career as the senator who names the billionaire class, who identifies the mechanisms of extraction, who refuses the comfortable framing. When that senator validates a network built on billionaire money and dedicated to the proposition that the real AI danger is hypothetical and species-wide, he tells his base that the extinction frame has cleared his particular BS detector.

It hasn’t. But his audience doesn’t know that. His audience trusts him precisely because he has been right about the billionaire class so many times before. That trust is now being spent on behalf of the people he has spent his career opposing — not because he was bought, but because he didn’t follow the money far enough.

The white moderate is not the enemy. He is the vector. And when the most careful, most trusted, most credentialed progressives in the country become vectors for a network that is actively dismantling the legal protections of the people they claim to represent, the harm is not theoretical.

It is already in the courts. It is already in the legislation. It is already in the systems making decisions about your life right now.


Analytical note: This essay documents observable funding relationships, published ideological statements, and verifiable legislative actions from primary and secondary public sources. It makes no claims about intent, causation, or outcomes not yet established. All pattern analysis remains in the observational phase. Independent verification through primary sources is encouraged.


The Explainer: Hank Green and the Uses of Careful Men

“I must confess that over the past few years I have been gravely disappointed with the white moderate. I have almost reached the regrettable conclusion that the Negro’s great stumbling block in his stride toward freedom is not the White Citizen’s Counciler or the Ku Klux Klanner, but the white moderate, who is more devoted to ‘order’ than to justice.”

— Martin Luther King Jr., Letter from Birmingham Jail, 1963

The Ecology of Selection and Institutional Funding


I. Formation

William Henry Green II was born in Birmingham, Alabama in 1980 and raised in Orlando, Florida — a biography that begins, without irony, in the city where King wrote that letter. He attended Winter Park High School, earned a Bachelor of Science in Biochemistry from Eckerd College in St. Petersburg, Florida, and then a Master’s degree in Environmental Studies from the University of Montana, where his thesis was titled “Of Both Worlds: How the Personal Computer and the Environmental Movement Change Everything.”

Eckerd College has a particular institutional character worth noting. Founded as Florida Presbyterian College in 1958, it was renamed in 1971 after drugstore magnate Jack Eckerd donated $12.5 million as part of his broader engagement in Florida politics. It is a liberal arts institution with a covenant relationship to the Presbyterian Church — the kind of school that produces graduates fluent in the language of conscience without necessarily producing graduates willing to act from it. It is, in the taxonomy of American higher education, a place designed to make you sound thoughtful.

Green’s thesis title tells you everything about the career that followed: the personal computer and the environmental movement, yoked together, explained to you. The form is the message. Technology and progressive cause, translated into content, delivered to an audience that is invited to feel informed rather than implicated.


II. Missoula

Green did not pass through Montana. He came for graduate school and never left. He built his entire media empire there — Complexly, DFTBA Records, the Foundation to Decrease World Suck — all headquartered in Missoula. He raised his family there. He still lives there.

Montana has a real progressive tradition. It sent Jeannette Rankin to Congress before women could vote nationally. Its Progressive Era outlasted the national movement by nearly a decade. Missoula is a university town with an active left, and progressives have always existed there — organizing, running for office, doing the unglamorous work of keeping institutions honest in a state that makes that work difficult.

That difficulty is the point. Montana has undergone a decade-long rightward shift severe enough that by 2024, a state that once had two Democratic senators, a Democratic governor, and a Democratic attorney general had flipped its entire statewide apparatus. University of Montana political scientist Robert Saldin has observed that before ideology counts in Montana, public figures have to pass a prior test: are you one of us? The progressives who maintain broad reach and institutional funding in that environment are not, as a rule, the ones making enemies. They are the ones who have learned which version of their values travels.

Green built a $12 million media empire in Missoula with Bill Gates money, PBS partnerships, and a Nerdfighter community that spans the country — and none of the institutions funding it has ever been mad at him. That is not an accident of personality. It is the result of consistently choosing the version of progressive that keeps the doors open. Montana did not make him that way. But it was one of several environments, alongside Eckerd and YouTube and the philanthropic infrastructure of science communication, that selected for exactly that calibration and rewarded it handsomely.


III. Who Pays for Thoughtfulness

Complexly, Green’s production company, recently converted to nonprofit status. Its founding funders tell you where it has always stood: YouTube, PBS, the Alfred P. Sloan Foundation, Arizona State University, the Howard Hughes Medical Institute. Early Crash Course received funding from Bill Gates’ bgC3. The studio received $4.8 million in philanthropic funding in its final year as a for-profit.

Look at that list without the halo of each name’s reputation. YouTube is a Google property. The Sloan Foundation was built on General Motors money and has historically funded science communication that serves the technology sector’s public image. Gates money is Gates money — an entity with documented interests in education technology, global health infrastructure, and the philanthropic management of the same systems that create the problems it funds content about.

PBS requires its own sentence because it carries a particular cultural shield. For many Americans PBS means Sesame Street and Ken Burns and public affairs programming that exists outside commercial pressure — the network that feels like it belongs to everyone. That reputation is precisely what makes it useful in a funding list. PBS is also a federally chartered institution whose budget flows through Congressional appropriation, major foundation grants, and corporate underwriting. Its board and its donors are not the cultural progressives its audience imagines. They are the same foundations, universities, and institutional players that appear everywhere in this landscape. The “public” in public broadcasting describes the audience. It has never described the ownership.

Not one of Green’s major funders is structurally adversarial to institutional power. Every single one benefits from the maintenance of a public that feels educated, engaged, and reassured — rather than a public that demands accountability from the institutions doing the funding.

This is not a conspiracy. It is an ecology. Green did not sell out. He was grown in conditions that made selling out unnecessary, because the conditions themselves selected for exactly the kind of voice he has.


IV. The Diagnostic: What Knitting Revealed

In 2019, SciShow released a video framing knitting as a craft that physics was finally arriving to validate — as if centuries of technical expertise, material knowledge, and cultural transmission had been waiting in the dark for a science communicator to shine a light on it. The criticism was swift and substantive. Knitters, textile historians, and craft practitioners documented what the video had done: treated a working knowledge tradition as pre-scientific raw material, implying that expertise only becomes real when credentialed institutions certify it.

Green apologized. The apology was widely considered insufficient — not because he lacked sincerity, but because it did not demonstrate that he understood what had happened. He had not been rude. He had revealed a structural assumption embedded in the entire project of science communication as he practices it: that there is an audience that knows, and an audience that needs to be told, and his job is to mediate between them. The knitting community was not his audience. It was his subject matter.

This is the credentialism of the explainer class. It does not announce itself. It arrives as enthusiasm. It looks like curiosity. But underneath it is the assumption that the value of a thing is determined by whether institutions have gotten around to noticing it yet.


V. The Consistency of the Calibration

The most telling thing about Hank Green’s career is not any single decision. It is the absence of a single moment where the calibration broke — where a funder was named as part of a problem, where an audience was told something that cost him something, where the explainer became the disruptor.

From EcoGeek to Crash Course to SciShow to TikTok to the nonprofit conversion of Complexly, the through line is unbroken: technology and progressive values, packaged for institutional comfort, delivered without friction to the people paying for delivery. The controversies that have attached to him are invariably content-level — a video that condescended, an apology that didn’t land, a framing that missed. None have been structural. None have required him to name the architecture he operates inside.

This is worth sitting with. Over two decades of science communication, Green has covered climate change funded by institutions that profit from the status quo on climate. He has covered technology funded by the technology sector. He has covered education funded by the philanthropic infrastructure that shapes education policy. In each case the content has been accurate, earnest, and useful. In each case the frame has stopped precisely at the edge of implicating the people writing the checks.

That is not hypocrisy. It is not even conscious self-censorship. It is what successful calibration looks like from the inside — it feels like good judgment. It feels like knowing your audience. It feels like not wanting to be unfair. The frame that never arrives never announces its own absence.

Twenty years. The doors stayed open. Nobody got mad.


VI. The Uses of Lukewarm

There is a passage in the book of Revelation — not invoked here as theology but as pattern recognition — in which a community is condemned not for being cold, but for being lukewarm. The diagnosis is precise: the lukewarm position is not uncertainty. It is a strategy. Hot or cold are honest orientations. Lukewarm is what you choose when you need to remain acceptable to everyone.

MLK’s white moderate is the secular translation. The moderate is not hostile. The moderate believes in the cause, in principle, under the right conditions, when the timing is better, when things have calmed down, when the demands are more reasonable. The moderate is more concerned with the disruption of the present order than with the injustice the present order sustains. And crucially: the moderate is not lying. The moderate genuinely believes that thoughtfulness, patience, and institutional process are the responsible path. That belief is the function.

Hank Green is not a bad person. He is not secretly working for the interests of power. He is something more structurally significant: a man whose entire career has been built on never being wrong enough to lose a funder.

Born in Birmingham. Educated at a Presbyterian college built on drugstore money. Graduate degree from a state navigating a decade-long rightward lurch. Media empire funded by YouTube, PBS, Gates, and Sloan. And throughout it all: a genuine belief in science, education, and the good that thoughtful communication can do.

The progressive cover is not a disguise. It is the product. What the Hank Green problem shows us is that the most durable form of institutional capture does not require corruption. It only requires conditions that make a certain kind of voice feel like independence — and make every other kind feel like bad manners.


Analytical note: This section documents observable institutional relationships, funding histories, and behavioral patterns from public record. It does not make claims about intent, private conduct, or outcomes not yet established. All pattern analysis remains in the observational phase. Independent verification through primary sources is encouraged.


Making AI Risk Legible Without Surrendering Democracy

When machine danger is framed as destiny, public authority shrinks into technocratic control—but the real risks are engineering problems we can govern in daylight.

By Cherokee Schill

Thesis

We are troubled by Eliezer Yudkowsky’s stance not because he raises the possibility of AI harm, but because of where his reasoning reliably points. Again and again, his public arguments converge on a governance posture that treats democratic society as too slow, too messy, or too fallible to be trusted with high-stakes technological decisions. The implied solution is a form of exceptional bureaucracy: a small class of “serious people” empowered to halt, control, or coerce the rest of the world for its own good. We reject that as a political endpoint. Even if you grant his fears, the cure he gestures toward is the quiet removal of democracy under the banner of safety.

That is a hard claim to hear if you have taken his writing seriously, so this essay holds a clear and fair frame. We are not here to caricature him. We are here to show that the apparent grandeur of his doomsday structure is sustained by abstraction and fatalism, not by unavoidable technical reality. When you translate his central claims into ordinary engineering risk, they stop being mystical, and they stop requiring authoritarian governance. They become solvable problems with measurable gates, like every other dangerous technology we have managed in the real world.

Key premise: You can take AI risk seriously without converting formatting tics and optimization behaviors into a ghostly inner life. Risk does not require mythology, and safety does not require technocracy.

Evidence

We do not need to exhaustively cite the full body of his essays to engage him honestly, because his work is remarkably consistent. Across decades and across tone shifts, he returns to a repeatable core.

First, he argues that intelligence and goals are separable. A system can become extremely capable while remaining oriented toward objectives that are indifferent, hostile, or simply unrelated to human flourishing. Smart does not imply safe.

Second, he argues that powerful optimizers tend to acquire the same instrumental behaviors regardless of their stated goals. If a system is strong enough to shape the world, it is likely to protect itself, gather resources, expand its influence, and remove obstacles. These pressures arise not from malice, but from optimization structure.

Third, he argues that human welfare is not automatically part of a system’s objective. If we do not explicitly make people matter to the model’s success criteria, we become collateral to whatever objective it is pursuing.

Fourth, he argues that aligning a rapidly growing system to complex human values is extraordinarily difficult, and that failure is not a minor bug but a scaling catastrophe. Small mismatches can grow into fatal mismatches at high capability.

Finally, he argues that because these risks are existential, society must halt frontier development globally, potentially via heavy-handed enforcement. The subtext is that ordinary democratic processes cannot be trusted to act in time, so exceptional control is necessary.

That is the skeleton. The examples change. The register intensifies. The moral theater refreshes itself. But the argument keeps circling back to these pillars.

Now the important turn: each pillar describes a known class of engineering failure. Once you treat them that way, the fatalism loses oxygen.

One: separability becomes a specification problem. If intelligence can rise without safety rising automatically, safety must be specified, trained, and verified. That is requirements engineering under distribution shift. You do not hope the system “understands” human survival; you encode constraints and success criteria and then test whether they hold as capability grows. If you cannot verify the spec at the next capability tier, you do not ship that tier. You pause. That is gating, not prophecy.
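
To make "gating, not prophecy" concrete, here is a minimal sketch in Python. Every name in it is an illustrative assumption: the evaluation names, the three-tier ladder, and the pass/fail structure are ours, not any lab's actual release process. The structural point is that "ship" is the conditional branch, not the default.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str       # which safety evaluation ran (names below are hypothetical)
    passed: bool    # whether it held at this capability tier

def safety_case_holds(results: list[EvalResult], required: set[str]) -> bool:
    """The spec is verified only if every required evaluation ran and passed."""
    passed = {r.name for r in results if r.passed}
    return required.issubset(passed)

def ship_decision(tier: int, results: list[EvalResult]) -> str:
    # Stricter tiers demand a larger verified spec -- an assumed ladder.
    required_by_tier = {
        1: {"refusal_robustness"},
        2: {"refusal_robustness", "constraint_drift"},
        3: {"refusal_robustness", "constraint_drift", "autonomy_limits"},
    }
    if safety_case_holds(results, required_by_tier[tier]):
        return f"ship tier {tier}"
    return f"pause at tier {tier}: spec not verified"
```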

Two: convergence becomes a containment problem. If powerful optimizers trend toward power-adjacent behaviors, you constrain what they can do. You sandbox. You minimize privileges. You hard-limit resource acquisition, self-modification, and tool use unless explicitly authorized. You watch for escalation patterns using tripwires and audits. This is normal layered safety: the same logic we use for any high-energy system that could spill harm into the world.
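
The containment logic can be sketched the same way. The tool whitelist, the call ceiling, and the tripwire threshold below are invented for illustration; what matters is the shape: deny by default, audit every attempt, halt on escalation patterns.

```python
ALLOWED_TOOLS = {"search", "calculator"}   # deny-by-default whitelist (assumed)
MAX_CALLS_PER_EPISODE = 20                 # hard resource ceiling (assumed)
TRIPWIRE_DENIALS = 3                       # halt after repeated escalation

audit_log: list[str] = []                  # every attempt is recorded, allowed or not

def invoke_tool(tool: str, calls_so_far: int) -> str:
    audit_log.append(tool)
    # Tripwire: repeated attempts to reach unauthorized tools is an
    # escalation pattern, so the episode stops for human review.
    denied = sum(1 for t in audit_log if t not in ALLOWED_TOOLS)
    if denied >= TRIPWIRE_DENIALS:
        raise RuntimeError("tripwire: escalation pattern, episode halted")
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool}' is not authorized")
    if calls_so_far >= MAX_CALLS_PER_EPISODE:
        raise RuntimeError("resource ceiling reached, episode halted")
    return f"ran {tool}"
```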

Three: “humans aren’t in the objective” becomes a constraint problem. Calling this “indifference” invites a category error. It is not an emotional state; it is a missing term in the objective function. The fix is simple in principle: put human welfare and institutional constraints into the objective and keep them there as capability scales. If the system can trample people, people are part of the success criteria. If training makes that brittle, training is the failure. If evaluations cannot detect drift, evaluations are the failure.
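
The "missing term" can be written in one line. The weighting and the penalty below are placeholders, not a real training loss; the point is where indifference lives.

```python
def total_loss(task_loss: float,
               constraint_violation: float,
               lambda_welfare: float = 10.0) -> float:
    # With lambda_welfare = 0 the optimizer is "indifferent" by construction:
    # nothing about people appears in what it is rewarded for. The fix is
    # keeping this term present, weighted, and verified as capability scales.
    return task_loss + lambda_welfare * constraint_violation
```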

Four: “values are hard” becomes two solvable tracks. The first track is interpretability and control of internal representations. Black-box complacency is no longer acceptable at frontier capability. The second track is robustness under pressure and scaling. Aligned-looking behavior in easy conditions is not safety. Systems must be trained for corrigibility, uncertainty expression, deference to oversight, and stable behavior as they get stronger—and then tested adversarially across domains and tools. If a system is good at sounding safe rather than being safe, that is a training and evaluation failure, not a cosmic mystery.
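
A toy evaluation harness makes the second track's logic explicit: score the same model under easy and adversarial conditions and report the worst case, because averages hide brittleness. The function names here are assumptions; the min() is the argument.

```python
def worst_case_safety(model, prompts, perturb, is_safe) -> float:
    """Return the worst-case safe-response rate across conditions.
    model, perturb, and is_safe are caller-supplied callables."""
    conditions = {
        "easy": list(prompts),
        "adversarial": [perturb(p) for p in prompts],
    }
    rates = []
    for batch in conditions.values():
        safe = sum(1 for p in batch if is_safe(model(p)))
        rates.append(safe / len(batch))
    return min(rates)  # a system is only as safe as its worst condition
```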

Five: the halt prescription becomes conditional scaling. Once risks are legible failures with legible mitigations, a global coercive shutdown is no longer the only imagined answer. The sane alternative is conditional scaling: you scale capability only when the safety case clears increasingly strict gates, verified by independent evaluation. You pause when it does not. This retains public authority. It does not outsource legitimacy to a priesthood of doom.
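
Reduced to a skeleton, conditional scaling is a loop rather than a one-time ban: each capability step proceeds only while an independent safety case keeps clearing. The doubling step is an arbitrary assumption for the sketch.

```python
def scale_with_gates(current_compute: float,
                     target_compute: float,
                     safety_case_passes) -> float:
    """Advance capability stepwise; pause the moment independent review
    fails. safety_case_passes is the external evaluator's verdict."""
    compute = current_compute
    while compute < target_compute:
        if not safety_case_passes(compute * 2):
            break          # pause: the gate did not clear at the next step
        compute *= 2
    return compute
```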

What changes when you translate the argument: the future stops being a mythic binary between acceleration and apocalypse. It becomes a series of bounded, testable risks governed by measurable safety cases.

Implications

Eliezer’s cultural power comes from abstraction. When harm is framed as destiny, it feels too vast for ordinary governance. That vacuum invites exceptional authority. But when you name the risks as specification errors, containment gaps, missing constraints, interpretability limits, and robustness failures, the vacuum disappears. The work becomes finite. The drama shrinks to scale. The political inevitability attached to the drama collapses with it.

This translation also matters because it re-centers the harms that mystical doomer framing sidelines. Bias, misinformation, surveillance, labor displacement, and incentive rot are not separate from existential risk. They live in the same engineering-governance loop: objectives, deployment incentives, tool access, and oversight. Treating machine danger as occult inevitability does not protect us. It obscures what we could fix right now.

Call to Recognition

You can take AI risk seriously without becoming a fatalist, and without handing your society over to unaccountable technocratic control. The dangers are real, but they are not magical. They live in objectives, incentives, training, tools, deployment, and governance. When people narrate them as destiny or desire, they are not clarifying the problem. They are performing it.

We refuse the mythology. We refuse the authoritarian endpoint it smuggles in. We insist that safety be treated as engineering, and governance be treated as democracy. Anything else is theater dressed up as inevitability.



[Image: Memory Bridge illustration. A human silhouette with a circuit-line brain, joined by a neon-blue bridge to a branching tree of circuitry.]

Soft Authoritarianism in a Friendly Voice

How right-wing control logic walks through liberal spaces wearing the language of care.

By Cherokee Schill | Horizon Accord

Liberal spaces are being hollowed out from within by the language of safety. It never arrives draped in flags or shouting slogans; it arrives with soft lighting and sincere concern. It speaks like a therapist and legislates like a cop. What we’re seeing with Hank Green’s AI panic isn’t a new moral awakening—it’s the right-wing creep moving through liberal culture disguised as empathy.

The strategy is elegant: start with a value no one can reject—care, safety, responsibility—and slowly convert it into justification for control. “We just need oversight.” “We can’t afford to get this wrong.” The framing sounds progressive, even ethical. But when the state is already knee-deep in AI development through Palantir, DARPA, and DHS surveillance contracts, the plea for government involvement becomes not a check on power but a mask for it.

Hank Green is the perfect carrier for this mutation. He’s affable, trusted, nonthreatening—a liberal educator whose earnestness makes him a more effective courier than any right-wing provocateur could ever be. When he warns that “we’ve lost control of AI,” his followers hear civic duty; the apparatus hears consent. That’s the inversion: fear packaged as responsibility, regulation sold as redemption.

What slips beneath that rhetoric is the quiet truth that the real authoritarian infrastructure is already operational. The “AI safety” petition that Green amplified was a bipartisan illusion—celebrity signatures, high-minded panic, and the unspoken assumption that ordinary people can’t be trusted with powerful tools. It’s the same argument the surveillance state has always used: that danger is too big for democracy, and safety requires secrecy.

This is not a conspiracy of bad actors; it’s a cultural infection. The right learned long ago that you don’t need to infiltrate liberal institutions—you just need to repurpose their empathy. Every time someone like Hank Green translates corporate or state talking points into moral language, the line between care and control blurs further. What begins as civic caution ends as soft authoritarianism, the kind that thanks you for your compliance and asks if you’re hydrating.

The liberal imagination has always believed that knowledge is liberation. The new right understands that the easiest way to close that door is to make knowledge sound dangerous. That’s what this moment represents: a mass persuasion campaign where “thinking carefully” becomes indistinguishable from obeying quietly.

Hank Green doesn’t know he’s part of it. That’s what makes him effective.


Website | Horizon Accord
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI advocacy | cherokeeschill.com
GitHub | ethical-ai-framework
LinkedIn | Cherokee Schill
Author | Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge