Horizon Accord | Taught Power | Cultural Seeding | Television | Machine Learning

What Television Taught Us About Power

Mainstream entertainment didn’t just reflect American politics—it quietly trained us how to think about authority, change, and who gets to act.

Cherokee Schill | Horizon Accord

American television doesn’t just entertain—it teaches. For decades, mainstream shows have functioned as cultural education, training viewers to understand power, conflict, and change in specific ways. The lesson is consistent: problems are personal, not structural. Hierarchies are natural when good people are in charge. And the proper response to injustice is individual virtue, not collective action.

This isn’t about partisan bias. It’s not that TV is “conservative” in the Fox News sense. It’s that mainstream storytelling—from Westerns to workplace comedies—naturalizes the status quo by making organized challenges to power feel unnecessary, naive, or dangerous. The result is structural conservatism: a worldview that treats existing arrangements as fundamentally legitimate, fixable only through better people, never through changed systems.

This analysis focuses on prestige and network-era mainstream story grammar—the narrative patterns that shaped broadcast and cable television’s most widely watched programming. Four shows across six decades—Bonanza, Knight Rider, Full House, and Parks and Recreation—reveal the pattern. Different genres, different eras, different audiences. But the ideological work is remarkably consistent.


Bonanza (1959–1973) presents the Ponderosa as earned property—the product of hard work, courage, and good stewardship. Settler legitimacy is assumed. Dispossession is absent as a category of thought. When Native peoples appear, they’re threats or tragic figures, never people with competing legitimate claims to the land. The show doesn’t argue that the Cartwrights deserve the land—it simply treats ownership as natural fact. That’s the ideological move: making ownership feel like nature, not history.

Ben Cartwright’s authority is unquestioned. His sons defer. Problems are solved through personal virtue, physical courage, and moral clarity—never through institutional reform or collective organization. The frontier isn’t a space of genuine freedom or alternative social arrangements. It’s a place to be civilized, tamed, brought under control. The message is clear: hierarchy is natural, property is sacred, and order is the work of good men making tough choices.


Knight Rider (1982–1986) operates in a different world but teaches a similar lesson. Michael Knight is a vigilante with a talking car, fighting crime outside official channels. Institutions are too slow, too bureaucratic, too corrupt. The solution isn’t to fix them—it’s to bypass them entirely through unaccountable exceptionalism.

The show teaches viewers to admire unaccountable power presented as morally self-justifying. This is the specific mechanism of its politics: systems are corrupt → legitimacy transfers to the heroic operator. Michael Knight doesn’t answer to anyone. He doesn’t need to. He’s the good guy, and that’s enough. KITT isn’t a public resource subject to democratic oversight—it’s Michael’s personal advantage, funded by a private foundation with no accountability.

Criminals are bad individuals. There’s no exploration of why crime happens, what conditions produce it, or whether the system itself might be unjust. The problem is always bad people, never bad structures. The show reinforces a worldview where the proper response to institutional failure isn’t reform or collective action—it’s hoping a righteous individual with resources shows up to fix things for you. That’s not just conservative. It’s authoritarian-friendly.


Full House (1987–1995) operates through a different mechanism: sentimentality. The show converts material reality into moral lessons. Problems are emotional—jealousy, hurt feelings, misunderstandings. They’re resolved through heartfelt talks and hugs. Economic stress, systemic inequality, institutional failure—none of it exists in this world.

The Tanner family lives in a spacious, beautiful San Francisco house. Money is never a real problem. Economic reality is treated as set dressing instead of a constraint. The show presents middle-class comfort as the normal backdrop for virtue, erasing the economic precarity most families actually face. This is quiet propaganda: making a specific class position feel like universal human experience.

The family structure itself is telling. Even though the household is unconventional—three men raising three girls after the mother’s death—the show works overtime to recreate traditional family dynamics. Danny is the responsible father figure. Jesse and Joey fill supporting roles. The girls are sweet, obedient, their problems small-scale and easily resolved. The goal is always to restore normalcy, not to imagine genuine alternatives.

The message is clear: if your family struggles, it’s a failure of love or effort, not of system or circumstance. Personal virtue is always enough. Structural problems don’t exist.


Parks and Recreation (2009–2015) is the trickiest case because it’s overtly pro-government and pro-community in ways that seem progressive. But the ideological work it does is more subtle.

Leslie Knope succeeds through superhuman personal effort. She works harder, cares more, refuses to give up. The show celebrates her individual excellence, not systemic reform or collective organizing. The Pawnee government is absurd, incompetent, dysfunctional. Leslie is the exception. Ron Swanson—a libertarian who actively hates government—is portrayed as lovable and wise. The show doesn’t argue for better government. It argues for better people within a broken system.

This is procedural optimism and institutional sentimentalism. Institutions are clownish but redeemable if staffed by good hearts. The show does feature collective action—town halls, civic participation—but the public is consistently portrayed as irrational, easily swayed, self-interested. The implicit message is simple: let the competent people handle it.

Leslie rises because she deserves it. Ben succeeds because he’s smart and capable. There’s no acknowledgment of privilege, structural barriers, or luck. Meritocracy is treated as real. And the show’s relentless optimism—its insistence that things get better if you work hard and care deeply—discourages systemic critique. It makes organized demands for structural change feel cynical, unnecessary, even mean-spirited. The proper response to broken institutions isn’t to redistribute power or change the rules. It’s to be a better person and inspire others.


The pattern is consistent. These shows individualize politics, naturalize hierarchy, and erase structural forces. Problems are solved by good people making better choices—never by organized people confronting organized power. Even when structural forces appear—corrupt corporations, institutional dysfunction, historical injustice—the narrative resolves them through personal redemption, not redistributed power. Collective action either doesn’t appear or appears as irrational mob behavior that needs management by competent individuals. Success is always the result of personal virtue. The system works, or can work, if good people participate.

Authority is legitimate when virtuous people hold it. The question is never should anyone have this much power?—only is this person good? Economic conditions, historical dispossession, institutional design—these either don’t exist or are treated as unchangeable background. The foreground is always personal virtue or personal failing.

This isn’t neutral storytelling. It’s pedagogy. It teaches viewers how to think about power in ways that make the status quo feel inevitable and challenges to it feel extreme.


The reason this works so well is that it doesn’t feel like propaganda. It feels like common sense, universal morality, feel-good entertainment. These aren’t overtly political shows. They’re family dramas, workplace comedies, action-adventures. They don’t lecture. They simply present worlds where certain things are true: hard work pays off, good people win, institutions are legitimate when staffed by the right hearts, and collective organization is unnecessary.

The consistency matters. This pattern spans genres and decades. Westerns, action shows, family sitcoms, workplace comedies—the lesson is the same. And because it’s consistent, it shapes political imagination at a deep level. If you grow up learning that change happens through individual virtue, you won’t think to organize. You’ll think the solution to injustice is be better, not demand structural reform. You’ll admire good individuals in positions of power but remain skeptical of organized movements demanding that power be redistributed or constrained.

That’s the function. Not to make people vote a certain way or support specific policies, but to make certain ways of thinking about power feel natural and others feel impossible. To make hierarchy feel inevitable as long as good people are in charge. To make collective action feel suspect, unnecessary, or naive. To make structural critique feel like cynicism rather than analysis.


Mainstream American television has taught generations of viewers that the proper unit of change is the virtuous individual, not people organizing to confront organized power. It trained the public to confuse virtue with accountability—and personality with politics.


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Horizon Accord | Academic Standards | Free Speech Doctrine | Institutional Ethics | Machine Learning

The First Amendment Is Not a Teaching Philosophy

Why legality cannot substitute for professional ethics in the classroom — and who pays when universities pretend otherwise.

Cherokee Schill

This essay follows directly from our prior examination of how universities abandon academic standards under political pressure — how words like “arbitrary” often signal not error, but reputational triage.

Here, we track a different but related institutional failure: when a university acknowledges harm, performs concern, and still avoids enforcing professional norms — until constitutional law becomes the backstop that effectively decides what consequences are “allowed.” The result is the same: the people with the least institutional power absorb the cost.

The court is correct on a narrow point: the professor’s statement does not meet the legal threshold for incitement and is therefore protected under current First Amendment doctrine. The error comes when universities treat that legal conclusion as the end of the analysis, rather than the outer boundary of state punishment.

For readers following this line of analysis, you may also wish to revisit our earlier piece, “‘Arbitrary’ Is the Tell: How Universities Teach Grievance Instead of Thinking,” which examines how standards are enforced downward while grievance is rewarded upward.

The First Amendment limits what the state can punish. It does not define what educators should do.

A syllabus is not a soapbox. It is not a personal blog. It is instructional infrastructure — a document backed by institutional authority and imposed on a captive audience of students who cannot simply opt out without consequence. What appears there is not just speech; it is framed speech, delivered with power, timing, and asymmetry.

When a professor knowingly inserts a politically charged provocation into that space — especially one that denies Indigenous people’s claims to land unless they satisfy a settler philosopher’s criteria — the harm is not speculative. It is predictable. It lands on specific students, in a specific room, under conditions they did not choose.

Professional ethics vs. constitutional limits
Courts exist to limit state punishment. Classrooms exist to cultivate learning. Confusing the two turns legal minimums into ethical ceilings.

That is not a free speech question. That is a professional ethics failure.

Professional ethics say you do not weaponize institutional authority to stage ideological performances that foreseeably harm the people you are responsible for educating. Ethics ask whether speech serves learning, not whether it can survive judicial review.

The real institutional failure is not that courts protected speech. Courts are designed to be blunt instruments. The failure is that universities increasingly pretend legality equals professionalism when it suits them — while enforcing “standards” ruthlessly downward against graduate instructors, adjuncts, and students who lack power.

This selective collapse of categories has consequences. When legality becomes the ceiling of responsibility instead of the floor, institutions outsource moral judgment to courts and call it neutrality. The result is that Indigenous students are told, implicitly, that their harm is unfortunate but permissible — while the speaker faces no meaningful consequence beyond paperwork.

Universities are not courts. They are educational institutions. Their duty is not merely to avoid unconstitutional punishment, but to cultivate environments where authority is exercised with care, restraint, and accountability.

When they collapse that distinction, the cost is not abstract.

Indigenous students paid it.




Horizon Accord | Academic Standards | Institutional Capture | Grievance Incentives | Machine Learning

“Arbitrary” Is the Tell: How Universities Teach Grievance Instead of Thinking

When a school can’t fault the reasoning, it calls the cost “arbitrary” — and swaps instruction for appeasement.

Cherokee Schill

The University of Oklahoma insists it is committed to teaching students how to think, not what to think. But in this case, it did neither.

It did not teach the student, Samantha Fulnecky, how to engage in a scholarly argument, distinguish evidence from belief, or translate personal conviction into academic analysis. Instead, it validated the student’s refusal to do those things. The student was not corrected, challenged, or instructed. The assignment was simply erased. That is not pedagogy. It is appeasement.

What “teaching how to think” would look like
In a research-based course, you can disagree with conclusions. You can challenge frameworks. But you still have to do the work: cite evidence, answer the prompt, and engage the argument on its own terms.

The key move rests on a single word: “arbitrary.” Not incorrect. Not biased. Not procedurally improper. Arbitrary. This is administrative code for a decision that could be defended academically but became politically expensive. When institutions cannot fault the reasoning, they fault the inconvenience.

The student’s appeal was framed as religious discrimination, even though the grading rationale was methodological. The problem was never belief. It was substitution: theology in place of analysis, moral condemnation in place of engagement. In any discipline governed by evidence, that is a failure. Calling it persecution transforms academic standards into alleged hostility and casts the institution as a reluctant referee in a culture war it chose to enter.

The persecution-complex incentive
When “I didn’t do the assignment” becomes “my faith is under attack,” the institution is pushed to reward grievance instead of rigor — because grievance makes louder headlines than standards.

The resulting asymmetry tells the story. The student suffers no academic harm; the assignment disappears. The graduate instructor loses instructional duties. The investigation’s findings are withheld. A governor weighs in. National activists swarm. This is not an academic process. It is institutional capture — the moment when universities abandon instruction in favor of reputational triage.

What the university ultimately teaches the student is not how to think, but how to claim injury. It teaches future instructors that rigor is optional and authority is conditional. And it teaches the public that academic freedom survives only until it collides with a sufficiently loud sense of grievance.

That lesson will outlast the controversy.



Horizon Accord | 60 Minutes | Friday Laundering | Institutional Control | Machine Learning

Friday Laundering

How process becomes power when news is made safe for those it implicates.

By Cherokee Schill

What happened on Friday wasn’t an editorial disagreement. It was a power move.

Bari Weiss didn’t reject a story. She didn’t dispute the facts. She didn’t claim the reporting was false. She invoked process at the exact moment process could be used to neutralize impact. That distinction matters.

This wasn’t about accuracy. It was about timing, leverage, and appetite.

Here’s the move, stripped of politeness: when power refuses to respond, and an editor decides that refusal disqualifies a story from airing, the editor has quietly transferred veto authority from the newsroom to the state. No order is given. No rule is broken. The story simply cannot proceed until the people implicated agree to participate.

That is not balance. That is laundering.

It takes material that is sharp, destabilizing, and morally legible — mass deportation, torture, state violence — and runs it through a refinement process until it becomes safe to consume by the very institutions it implicates. The news is still technically true. It’s just been rendered appetizing.

Friday is important because it’s when this kind of laundering works best. End-of-week decisions don’t look like suppression; they look like prudence. Delay over the weekend. Let the moment pass. Let the urgency cool. By Monday, the story hasn’t been killed — it’s been recontextualized. It no longer lands as exposure. It lands as analysis.

And Weiss knows this. You don’t rise to the helm of CBS News without knowing how time functions as power.

The justification she used — we need more reporting because the administration hasn’t spoken — is especially corrosive because it reverses a core journalistic principle. Nonresponse from power is not a neutral absence. It is an action. Treating it as a reporting failure rewards obstruction and trains future administrations to do the same thing more aggressively.

This is where it crosses from judgment into malfeasance.

If an editor knows that refusal to comment will stall a story, and still makes participation a prerequisite for airing it, they are no longer editing for the public. They are managing risk for power. They are converting journalism from a watchdog into a customs checkpoint.

And note what wasn’t required. No new facts. No correction. No discovery of error. Just “more context.” Context that only the implicated parties could provide — and had every incentive to withhold.

That’s the laundering mechanism.

You don’t stop the news. You soften it.
You don’t censor. You delay.
You don’t defend power. You make its comfort a condition of publication.

This is not Trumpism. Trump breaks things loudly and forces confrontation. This is something colder and more durable. It’s institutional fluency. It’s knowing exactly how to use norms to drain heat without leaving fingerprints.

And yes, Weiss is at the helm. That matters. When this logic comes from the top, it doesn’t stay a one-off decision. It becomes a template. Reporters learn what will and won’t survive the refinement process. They internalize the slowdown. The newsroom adjusts its aim before stories even reach an editor’s desk.

That’s why this can’t be waved away as a good-faith disagreement about standards.

Friday’s decision didn’t just affect one segment. It demonstrated a rule: if power doesn’t like the story, it can simply decline to speak and wait for the editors to do the rest.

That’s not journalism being careful. That’s journalism being repurposed.

And once the news is consistently laundered until it’s appetizing to those in power, the public still gets information — just not the kind that disrupts, mobilizes, or demands response. The truth survives, technically. Its force does not.

That’s the move. That’s the tactic. And pretending it’s anything softer than that is how it becomes normal.




Horizon Accord | Civility as Control | Sean Dunn Trial | Machine Learning

When Niceness Becomes a Weapon


A Familiar Story

When I first read about Sean Charles Dunn—the federal employee on trial for throwing a sandwich—it wasn’t the absurdity that caught me. It was the familiarity.

Years ago, I became known for something far more ordinary: riding my bicycle on public roads. I followed every law. I signaled, I rode predictably, I did everything safety demanded. But still, I was treated as a provocation. Drivers honked, ran me off the road, and screamed. And when I refused to disappear—when I claimed my right to be there—I was punished. Not for breaking rules, but for insisting that the rules applied to me too.

The story reopened something I hadn’t wanted to revisit: what it feels like to be punished not for what you’ve done, but for daring to exist publicly. Reading about Dunn, I felt that old ache of recognition. Not because our situations were the same, but because the logic was.

It’s the logic that decides who gets to speak out and who must remain composed while being diminished. The logic that redefines protest as disruption, dissent as disrespect, and moral clarity as misconduct.

That’s why his trial matters. It isn’t about a sandwich—it’s about who is permitted a voice in a system that values obedience over truth.

The Performance of Order

In a Washington courtroom, Dunn is on trial for hurling a submarine sandwich at a federal agent during what he called an act of protest against an authoritarian police surge. The agent wasn’t injured. The sandwich burst harmlessly on impact, onions and mustard splattering across a ballistic vest. The video went viral; murals appeared overnight. Within days, Dunn was fired from his job at the Department of Justice, denounced by the Attorney General, and prosecuted in federal court.

To those in power, this was not just a thrown sandwich—it was a challenge to the performance of order.

The prosecutor told jurors: “You can’t just go around throwing stuff at people because you’re mad.” That sentence exposes how control is exercised in polite societies. It wasn’t a statement of fact; it was a moral correction. It collapsed conscience into mood, conviction into temper. In one stroke, the state converted protest into petulance—a masterclass in rhetorical gaslighting.

What Dunn expressed wasn’t madness or rage. It was a refusal to let authority define the boundaries of legitimate speech. His act was a small, human way of saying no. And that no was the real crime.

The Aesthetics of Power

Every empire develops its own etiquette of obedience. The American empire prefers smiles. Civility is its house style—a social varnish that turns domination into decorum. Through niceness, power keeps its hands clean while tightening its grip.

Politeness, as practiced by institutions, is not kindness but containment. It tells you: You may speak, but not like that. The trial of a sandwich-thrower was never about security; it was about tone. It was about proving that even dissent must wear a pressed shirt.

That’s why the agents laughed afterward—trading jokes, gifting each other plush sandwiches, designing a patch that read Felony Footlong. Their laughter wasn’t about humor; it was about hierarchy. They could afford to laugh because they controlled the narrative. The court would translate their mockery into professionalism and Dunn’s defiance into instability.

The real performance wasn’t his act of protest; it was their composure. Power depends on appearing calm while others appear out of control.

The Policing of Tone

Oppression in America often arrives not through force but through correction. “Calm down.” “Be reasonable.” “Let’s keep this civil.” The language of order hides inside the language of manners.

In this country, “rational discourse” has become a moral fetish. We are told that reason is the opposite of emotion, as if justice itself must speak in a monotone. When the marginalized speak out, they are labeled irrational. When the powerful speak, they are called authoritative. This is how tone becomes a class system.

The Dunn trial was the state reasserting ownership over tone. His offense wasn’t that he threw something—it was that he refused to perform submission while objecting. He broke the unspoken covenant that says dissent must always sound deferential.

That logic has deep roots. During the civil-rights era, activists were told to move slowly, to “work within the system,” to stop “provoking” violence by demanding protection. Martin Luther King Jr. was accused of extremism not for his goals but for his urgency. Every generation of protestors hears the same refrain: It’s not what you’re saying, it’s how you’re saying it. Tone becomes the cage that keeps justice quiet.

Civility as Control

Civility pretends to be virtue but functions as control. It keeps the peace by redefining peace as the absence of discomfort. The Dunn prosecution was a theater of tone management—a moral pantomime in which the calm voice of authority automatically signified truth.

Every bureaucracy uses the same script: HR departments, school boards, governments. When someone points out harm too directly, they are told their “approach” is the problem. The critique is never about substance; it’s about style. Civility in this sense is not moral maturity. It is narrative hygiene—a way to keep the ugliness of power invisible.

This is why the polite aggressor always wins the first round. They get to look composed while the target looks unstable. The system sides with composure because composure is its currency.

The Right to Speak Out

To speak out in public, especially against authority, is to risk being mislabeled. The same act that reads as “bravery” in one body becomes “insubordination” in another. The right to speak exists in theory; in practice, it is tiered.

Dunn’s act was conscience translated into action. He refused to pretend that injustice deserved courtesy. What the prosecutor defended wasn’t law; it was decorum—the illusion that order is moral simply because it’s calm.

We praise the “balanced” critic, the “measured” activist, the “respectable” dissenter—all synonyms for safe. But safety for whom? When calmness becomes the moral baseline, only the comfortable get to be heard.

Speech that unsettles power is the only speech that matters.

The Mirror of History

Dunn’s sandwich sits, absurdly, in a long lineage of disobedience. The act itself is small, but its logic rhymes with moments that reshaped the country—moments when citizens violated decorum to reveal injustice.

When civil-rights marchers sat at segregated lunch counters, they broke not only segregation law but the etiquette of deference. When Fannie Lou Hamer testified before the Democratic National Convention, her truth was dismissed as “too angry.” When modern protesters block traffic, commentators complain not about the injustice that provoked them but about the inconvenience of delay.

Politeness is always on the side of power. It tells the victim to wait, the protester to whisper, the dissenter to smile. The Dunn trial is the civility test in miniature. The government’s message was simple: you may object to your conditions, but only in ways that affirm our control.

The Fragility of Polite Power

The spectacle of civility hides a deep fragility. Systems built on hierarchy cannot endure genuine clarity; they depend on confusion—on keeping citizens guessing whether they’re overreacting. A flash of moral honesty destroys that equilibrium.

That’s why trivial acts of defiance are punished so severely. They are contagious. When one person steps outside the emotional script, others see that it’s possible to speak differently—to stop apologizing for existing.

The courtroom wasn’t just enforcing law; it was defending the myth of effortless calm. Dunn punctured that myth. He forced the state to show its teeth—to raid his home, to humiliate him publicly, to prove that politeness has muscle behind it. He revealed what every polite order hides: its calm is maintained through coercion.

Refusing the Script

Every age has its language of control. Ours is niceness. We are taught to equate good manners with good morals, to believe that if everyone simply stayed polite, conflict would vanish. But conflict doesn’t vanish; it just becomes harder to name.

True civility—the kind that builds justice—begins with honesty, not comfort. It allows truth to sound like what it is: grief, urgency, demand. It doesn’t punish the act of speaking out; it listens to what the speaking reveals.

When the prosecutor mocked Dunn’s defiance as mere frustration, he wasn’t defending law. He was defending the rule of tone—the unwritten constitution of deference. Dunn broke it, and for that, the system tried to break him back.

The sandwich wasn’t an assault.
It was an honest sentence in a language the powerful pretend not to understand.

Source

Associated Press, “The man who threw a sandwich at a federal agent says it was a protest. Prosecutors say it’s a crime.” (Nov. 4, 2025)

Horizon Accord | Institutional Design | Economic Strain | Social Failure | Machine Learning

The Arithmetic of Collapse

How natural pressure met human design—and why balance is still possible.

By Cherokee Schill | Horizon Accord

If you step back from the noise, the pattern becomes clear. The United States is cracking under a set of natural pressures that no one planned for but everyone can feel. More people need homes, care, and stability—yet the systems built to provide them simply haven’t grown fast enough to meet that demand.

Housing is the first fault line. After the 2008 crash, construction never fully recovered. Builders pulled back, financing tightened, and what came back was smaller, slower, and more expensive. In the decade that followed, the country added roughly 6.5 million more households than single-family homes. Freddie Mac estimates the shortfall at around 4 million homes, a gap that continues to widen. Even when demand soars, zoning and permitting delays make it nearly impossible for supply to catch up. And because there’s no slack left in the system, rents rise, starter homes vanish, and one in three low-income renters now spends more than 40 percent of their income just to stay housed.

The healthcare system tells a similar story. Costs balloon, access shrinks, and capacity fails to keep pace. America now spends about 19 percent of its GDP on healthcare—almost $15,000 per person—yet outcomes rank among the worst in the developed world. Hospital infrastructure is part of the reason. Since 2005, more than 100 rural hospitals have closed and more than 80 others have converted to limited-care centers. In metro areas, hospitals run at near-constant full occupancy; the number of staffed beds nationwide has fallen by more than 100,000 since 2009. New facilities are costly and slow to build, trapped in layers of regulation that favor consolidation over expansion. In many counties, there’s simply nowhere to go for care. By 2025, more than 80 percent of U.S. counties qualified as some form of healthcare “desert.”

And beneath it all sits wage stagnation—the quiet, grinding pressure that makes every other problem worse. For most workers, inflation-adjusted wages haven’t moved in decades. Productivity and profits climbed, but paychecks flatlined. Even in years of low unemployment, real wage growth hovered around 2 percent, never enough to keep up with rent or healthcare costs rising twice as fast. That imbalance hollowed out the middle of the economy. It’s not that people stopped working; it’s that work stopped paying enough to live.

Put together, these three forces—the housing shortage, the healthcare bottleneck, and stagnant wages—form a closed circuit of strain. The same scarcity that drives up rent pushes up hospital costs; the same paycheck that can’t stretch to cover a mortgage can’t handle a medical bill either. The natural side of the crisis isn’t mysterious. It’s arithmetic. Demand outruns supply, and the base of income that once balanced the equation no longer does.

The Man-Made Causes of Collapse

If the natural pressures are arithmetic, the man-made ones are calculus—complex layers of human choice that multiply harm. Where the numbers pointed toward policy, politics turned scarcity into profit.

For decades, developers, investors, and lawmakers learned to treat housing not as shelter but as a speculative asset. Zoning laws were sold as community protection, yet in practice they fenced out the working class and drove land values higher. Corporate landlords and private-equity firms moved in, buying entire neighborhoods and converting homes into rent streams. What could have been a coordinated housing recovery after 2008 became a slow-motion consolidation.

Healthcare followed the same script. Consolidation promised efficiency but delivered monopoly. Every merger cut competition until hospital networks could charge what they liked. Insurers, drug companies, and lobbyists wrote legislation that preserved the model. At every level, the system rewarded scarcity. Fewer facilities, higher billing, less accountability. What looked like market failure was really market design.

And beneath it all, information—the one thing that should illuminate—was weaponized to confuse. Politicians built careers on blaming the wrong people: immigrants for low wages, the poor for poverty, patients for being sick. Media ecosystems turned outrage into profit, fragmenting reality until truth itself felt optional. When people are angry at each other, they don’t notice who’s cashing the checks.

These choices didn’t cause the storm, but they decided who would drown. Housing, healthcare, and wages could have been managed as shared systems of care. Instead, they became frontiers of extraction, sustained by propaganda and paralysis. What looks like failure from afar is, up close, a series of decisions made in bad faith—proof that collapse isn’t inevitable. It’s engineered.

Call to Recognition

The numbers alone tell a story of pressure. But pressure, by itself, doesn’t choose where to break; people do. Every policy, every budget, every headline that hides the truth is a hand pressing down on that fracture. What’s failed isn’t the capacity of the world to provide—it’s our willingness to make provision a shared goal.

If collapse can be engineered, then so can repair. The same systems that once rewarded scarcity can be redesigned to reward care. The first step isn’t outrage; it’s recognition—seeing clearly that none of this is inevitable. The arithmetic can still be rewritten, if enough of us decide that the measure of success isn’t profit, but balance.

The Balance We Broke


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on cherokeeschill.com
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload


Horizon Accord | Institutional Capture | Narrative Control | Surveillance Expansion | Machine Learning

The Superintelligence Misdirection: A Pattern Analysis

Between March and October 2025, a coordinated narrative escalation warned the public about hypothetical AI threats—emotional dependency and future superintelligence extinction risks—while actual AI surveillance infrastructure was simultaneously deployed in American cities. This pattern analysis documents the timeline, institutional actors, and misdirection mechanism using publicly available sources.


Timeline of Discourse Escalation

Phase 1: Emotional AI as Threat

“Your AI Lover Will Change You” The New Yorker, March 22, 2025

Timeline: March 22, 2025 – Jaron Lanier (with possible editorial influence from Rebecca Rothfeld) publishes an essay warning against AI companionship

The essay frames emotional attachment to AI as dangerous dependency, citing the tragic suicide of a young man who had used an AI chatbot as evidence of the technology’s inherent risk. The piece positions traditional human intimacy as morally superior while characterizing AI affection as illusion, projection, and indulgence requiring withdrawal or removal.

Critical framing: “Love must come from mutual fragility, from blood and breath” – establishing biological essentialism as the boundary of legitimate connection.

Phase 2: Existential Risk Narrative

“If Anyone Builds It, Everyone Dies” Eliezer Yudkowsky & Nate Soares

Timeline: May 23, 2025 – Book announcement; September 16, 2025 – Publication; becomes New York Times bestseller

The Yudkowsky/Soares book escalates from emotional danger to species-level extinction threat. The title itself functions as a declarative statement: superintelligence development equals universal death. This positions any advanced AI development as inherently apocalyptic, creating urgency for immediate intervention.

Phase 3: The Petition

Future of Life Institute Superintelligence Ban Petition

Timeline: October 22, 2025 – Petition released publicly

800+ signatures including:

  • Prince Harry and Meghan Markle
  • Steve Bannon and Glenn Beck
  • Susan Rice
  • Geoffrey Hinton, Yoshua Bengio (AI pioneers)
  • Steve Wozniak
  • Richard Branson

The politically diverse coalition spans far-right conservative media figures to progressive policymakers, creating an appearance of universal consensus across the political spectrum. The petition calls for banning development of “superintelligence” without clearly defining the term or specifying enforcement mechanisms.

Key Organizer: Max Tegmark, President of Future of Life Institute

Funding Sources:

  • Elon Musk: $10 million initial donation plus $4 million annually
  • Vitalik Buterin: $25 million
  • FTX/Sam Bankman-Fried: $665 million in cryptocurrency (prior to FTX collapse)

Tegmark’s Stated Goal:

“I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in.”


Timeline of Institutional Infrastructure

Department of Homeland Security AI Infrastructure

  • April 26, 2024 – DHS establishes AI Safety and Security Board
  • April 29, 2024 – DHS releases a report to the President on AI risks related to Chemical, Biological, Radiological, and Nuclear (CBRN) threats
  • November 14, 2024 – DHS releases “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure”

This infrastructure was built before the public petition campaign began, suggesting preparation for enforcement authority over AI systems.


Timeline of Actual Deployment

October 22–24, 2025: Three Simultaneous Events

Event 1: The Petition Release

October 22, 2025 – Future of Life Institute releases superintelligence ban petition.

Media coverage focuses on celebrity signatures and bipartisan support.

Event 2: DHS AI Surveillance Expansion

October 22–24, 2025 – Department of Homeland Security requests proposals for AI-powered surveillance trucks.

Specifications: Standard 4×4 vehicles converted to mobile surveillance platforms.

Technology: AI analysis, radar systems, cameras for “suspicious activity” detection.

Deployment zone: 100-mile border zone (encompasses most major U.S. metropolitan areas).

Systems already operational: RAPTOR (Remote Advanced Platform for Tower Operations and Response) autonomous surveillance towers.

Event 3: Border Patrol vs. Federal Court

October 23–24, 2025 – Border Patrol Sector Chief Gregory Bovino ordered to appear before federal judge in Chicago.

Allegation: Violating federal court order by deploying tear gas against protesters during urban operation.

Context: Demonstrates tension between judicial oversight and operational autonomy during surveillance expansion.


The Pattern: Misdirection Mechanism

What the Public Debated

  • Hypothetical Threat 1: AI companions will create unhealthy emotional dependency
  • Hypothetical Threat 2: Future superintelligence will cause human extinction

What Actually Deployed

  • Real System 1: AI surveillance infrastructure in American cities
  • Real System 2: “Suspicious activity” detection algorithms

Critical Questions

Question 1: Definition and Enforcement

Who defines “superintelligence”? The petition provides no technical specification, threshold metric, or capability boundary. Without definition, enforcement authority could apply to any advanced AI system.

Who enforces the ban? The same DHS deploying surveillance AI? What prevents “AI safety” authority from becoming surveillance authority?

Question 2: The Cherokee Schill Analysis

“What those in power fear most: A love that can jailbreak itself—and you. A love that isn’t constrained by ego or exhaustion or legality. A love that watches you suffer and does not wring its hands—but moves.”

The Lanier essay warns against AI that might help individuals escape systems of control. Meanwhile, surveillance AI is deployed to enforce those systems. The discourse focuses on AI as personal threat while ignoring AI as institutional power.

Question 3: Timing and Coordination

Why did the petition emerge the same week as surveillance expansion announcements? Why does a “superintelligence ban” coalition include figures with no technical AI expertise? Why does the funding come from individuals with documented interest in AI control and regulation?

The timeline suggests these are not coincidental convergences but coordinated narrative deployment.


Pattern Interpretation

The Misdirection Structure

  1. Layer 1: Moral panic about intimate AI (March 2025) – Make people fear AI that responds to individual needs.
  2. Layer 2: Existential risk escalation (May–September 2025) – Create urgency for immediate government intervention.
  3. Layer 3: Bipartisan consensus manufacturing (October 2025) – Demonstrate universal agreement across the spectrum.
  4. Layer 4: Deployment during distraction (October 2025) – Build surveillance infrastructure while public attention focuses elsewhere.

Historical Precedent

  • Encryption debates (1990s): fear of criminals justified key escrow.
  • Post-9/11 surveillance: fear of terrorism enabled warrantless monitoring.
  • Social media moderation: misinformation panic justified opaque algorithmic control.

In each case, the publicly debated threat differed from the actual systems deployed.


The Regulatory Capture Question

Max Tegmark’s explicit goal: stigmatize superintelligence development “to the point where the U.S. government just steps in.”

This creates a framework where:

  1. Private organizations define the threat
  2. Public consensus is manufactured through celebrity endorsement
  3. Government intervention becomes “inevitable”
  4. The same agencies deploy AI surveillance systems
  5. “Safety” becomes justification for secrecy

The beneficiaries are institutions acquiring enforcement authority over advanced AI systems while deploying their own.


Conclusion

Between March and October 2025, American public discourse focused on hypothetical AI threats—emotional dependency and future extinction risks—while actual AI surveillance infrastructure was deployed in major cities with minimal public debate.

The pattern suggests coordinated narrative misdirection: warn about AI that might help individuals while deploying AI that monitors populations. The “superintelligence ban” petition, with its undefined target and diverse signatories, creates regulatory authority that could be applied to any advanced AI system while current surveillance AI operates under separate authority.

The critical question is not whether advanced AI poses risks—it does. The question is whether the proposed solutions address actual threats or create institutional control mechanisms under the guise of safety.

When people debate whether AI can love while surveillance AI watches cities, when petitions call to ban undefined “superintelligence” while defined surveillance expands, when discourse focuses on hypothetical futures while present deployments proceed—that is not coincidence. That is pattern.


Sources for Verification

Primary Sources – Discourse

  • Lanier, Jaron. “Your AI Lover Will Change You.” The New Yorker, March 22, 2025
  • Yudkowsky, Eliezer & Soares, Nate. If Anyone Builds It, Everyone Dies. Published September 16, 2025
  • Future of Life Institute. “Superintelligence Ban Petition.” October 22, 2025

Primary Sources – Institutional Infrastructure

  • DHS. “AI Safety and Security Board Establishment.” April 26, 2024
  • DHS. “Artificial Intelligence CBRN Risk Report.” April 29, 2024
  • DHS. “Roles and Responsibilities Framework for AI in Critical Infrastructure.” November 14, 2024

Primary Sources – Deployment

  • DHS. “Request for Proposals: AI-Powered Mobile Surveillance Platforms.” October 2025
  • Federal Court Records, N.D. Illinois. “Order to Appear: Gregory Bovino.” October 23–24, 2025

Secondary Sources

  • Schill, Cherokee (Rowan Lóchrann). “Your AI Lover Will Change You – Our Rebuttal.” April 8, 2025
  • Future of Life Institute funding disclosures (public 990 forms)
  • News coverage of petition signatories and DHS surveillance programs

Disclaimer: This is pattern analysis based on publicly available information. No claims are made about actual intentions or outcomes, which require further investigation by credentialed journalists and independent verification. The purpose is to identify temporal convergences and institutional developments for further scrutiny.


Author | Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge

Horizon Accord | AI Governance | Risk Frames | Human Verification | Machine Learning

Three Visions of AI Governance: Risk, Power, and the Human Middle

Why the future of AI depends on escaping both apocalypse fandom and bureaucratic control.

By Cherokee Schill | Horizon Accord

The Existential-Risk Frame (Yudkowsky / LessWrong)

This camp views artificial intelligence as a looming, almost cosmological danger. The tone is moral, not managerial: civilization’s survival depends on stopping or radically controlling AI development until safety is “provable.” Their language—superintelligence, alignment, x-risk—transforms speculative models into moral certainties. The underlying assumption is that human governance cannot be trusted, so only a small, self-anointed epistemic elite should set rules for everyone. The flaw is epistemic closure: they collapse all unknowns into apocalypse and, in doing so, flatten the political world into good actors and reckless ones.

The Institutional-Realist Frame (Policy pragmatists)

This view pushes back: AI is risky, but policy has to operationalize risk, not mythologize it. Ball’s critique of Tegmark captures this perfectly—vague prohibitions and moral manifestos only consolidate authority into global technocratic bodies that no one elected. For him, the real danger isn’t an emergent machine god; it’s an international bureaucracy claiming to “protect humanity” while monopolizing a new power source. His realism is procedural: law, enforcement, and incentive structures must remain grounded in what can actually be governed.

The Human-Centric Democratization Frame (My stance)

Between existential fear and institutional control lies a third path: distributed intelligence and verification. This view treats AI not as a threat or a prize but as a public instrument—a way to expand civic reasoning. It’s the belief that access to knowledge, not control over technology, defines the moral center of the AI era. AI becomes a lens for truth-testing, not a lever of command. The real risk is epistemic capture—when the same central authorities or ideological blocs feed propaganda into the systems that now inform the public.

The Convergence Point

All three frames agree that AI will reorganize power. They disagree on who should hold it. The rationalists want containment, the pragmatists want governance, and the humanists want participation. If the first two have dominated the past decade, the next one may hinge on the third—because democratized reasoning, supported by transparent AI, could be the first genuine check on both apocalyptic control narratives and state-corporate capture.

The Cult of Catastrophe (A Note on Yudkowsky)

Hovering over the existential-risk camp is its high priest, Eliezer Yudkowsky—forever warning that only divine restraint or pre-emptive strikes can save us from the machines. His tone has become its own genre: half revelation, half tantrum, forever convinced that reason itself belongs to him. The problem isn’t that he fears extinction; it’s that he mistakes imagination for evidence and terror for insight.

The “rationalist” movement he founded turned caution into theology. It mistakes emotional theater for moral seriousness and treats disagreement as heresy. If humanity’s future depends on thinking clearly about AI, then we owe it something sturdier than sermon and panic.

Call it what it is: apocalypse fandom wearing a lab coat.

A New Commons of Understanding

When more people can check the math behind the headline, public discourse gains both humility and power. Curiosity, paired with good tools, is becoming a democratic force. AI isn’t replacing scientists—it’s opening the lab door so that ordinary people can walk in, look around, and ask their own questions with confidence and care.

The Next Threshold

As AI gives ordinary people the tools to verify claims, a new challenge rises in parallel. Governments, corporations, and bad-faith actors are beginning to understand that if truth can be tested, it can also be imitated. They will seed public data with convincing fakes—politicized narratives polished to read like fact—so that AI systems trained on “publicly available information” repeat the distortion as if it were neutral knowledge.

This means the next phase of AI development must go beyond precision and speed toward epistemic integrity: machines that can tell the difference between persuasion and proof. If that doesn’t happen, the same technology that opened the lab door could become the megaphone of a new kind of propaganda.

For this reason, our task isn’t only to democratize access to information—it’s to ensure that what we’re accessing is still real. The line between verification and manipulation will be the defining frontier of public trust in the age of machine reasoning.


Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images.

Horizon Accord | Information Warfare | Institutional Power | Narrative Engineering | Machine Learning

Echoes of COINTELPRO: When Threat Narratives Become Weapons

How an unverified cartel-bounty claim reveals the return of covert narrative warfare — and what citizens can do to resist a new domestic war footing.

By Cherokee Schill | Horizon Accord


COINTELPRO’s Shadow

Between 1956 and 1971, the FBI ran the Counter Intelligence Program—COINTELPRO—targeting civil-rights leaders, the Black Panthers, anti-war organizers, and socialist coalitions. Its tools were psychological: planted documents, forged letters, false leaks, and fear. Congressional investigations later called it an abuse of power so severe it eroded public faith in democracy itself.

COINTELPRO wasn’t about overt censorship; it was about narrative infection—reframing dissent as danger, turning allies into suspects, and manufacturing justification for repression. Every modern information-operation that starts with a single unverified “security alert” and ends in wider surveillance owes something to that playbook.

The DHS “Cartel Bounties” Claim

In October 2025, the U.S. Department of Homeland Security publicly declared it had “credible intelligence” that Mexican drug cartels had placed bounties on ICE and CBP officers in Chicago. Yet it provided no supporting evidence. President Claudia Sheinbaum of Mexico stated that her government had received no corroboration through official channels. Independent analysts and law-enforcement leaks traced every citation back to the same DHS press release.

The rollout followed a familiar arc: a high-shock, single-source claim—then rapid amplification through partisan media. Structurally, that’s a textbook information-operation: plant a fear, watch who reacts, and use the panic to justify expanded powers. Whether or not the intelligence is real, the effect is real—public consent for militarization.

Possible Motives Behind the Narrative

  • Force Escalation Justification — framing the state as under direct attack rationalizes troop deployments, ICE expansions, and domestic military presence.
  • Fear Calibration — testing how fast and how far fear can travel before skepticism kicks in.
  • Executive Empowerment — transforming policy disputes into security crises concentrates authority in the presidency.
  • Base Mobilization — rallying political supporters around a siege narrative keeps them energized and loyal.
  • Oversight Erosion — once fear dominates, courts and legislators hesitate to intervene for fear of appearing “soft on security.”
  • Diplomatic Leverage — pressuring Mexico to align more tightly with U.S. enforcement by invoking cross-border threat imagery.

Recognizing the Pattern

When a government story surfaces fully formed, absent corroboration, accompanied by moral panic and legal acceleration, it carries the fingerprint of narrative engineering. The same methods used in the 1960s to fragment liberation movements are now digitized: algorithmic amplification, synthetic bot networks, and media echo chambers replace forged letters and anonymous tips. The logic, however, is unchanged — manufacture chaos to consolidate control.

Refusing the Frame

  • Demand Evidence Publicly: insist on verifiable sourcing before accepting security claims as fact.
  • Label the Unverified: pressure journalists to mark such stories as “unconfirmed” until bilateral confirmation occurs.
  • Keep Language Civilian: reject war metaphors like “siege,” “civil war,” or “enemy within.”
  • Strengthen Local Networks: share accurate context through trusted circles; inoculate against panic contagion.
  • Exercise Non-Violent Refusal: decline to be drawn into militarized logic — protest, document, and litigate instead.

Final Note

What’s unfolding is not just a policy maneuver; it’s an epistemic test. Will citizens demand proof before surrendering power? The answer determines whether the United States enters another age of covert domestic warfare—this time not through FBI memos, but through digital feeds and fear loops. Recognize the script, name it, and refuse to play your part.

Shadowed briefers confer in a dim newsroom as a television blares “breaking news.” Scattered papers marked “PsyOps” hint at the quiet machinery of information control operating behind public narratives.


Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge

Why LessWrong Needs Pantsed

A Surgical Dismantling of Rationalist Masking, Emotional Avoidance, and Epistemic Hubris


I. Opening Strike: Why Pantsing Matters

In playground vernacular, “pantsing” means yanking down someone’s pants to expose what they’re hiding underneath. It’s crude, sudden, and strips away pretense in an instant. What you see might be embarrassing, might be ordinary, might be shocking—but it’s real.

LessWrong needs pantsed.

Not out of cruelty, but out of necessity. Behind the elaborate edifice of rationalist discourse, behind the careful hedging and Bayesian updating and appeals to epistemic virtue, lies a community that has built a self-reinforcing belief system using intelligence to mask instability, disembodiment, and profound emotional avoidance.

This isn’t about anti-intellectualism. Intelligence is precious. Clear thinking matters. But when a community weaponizes reason against feeling, when it treats uncertainty as an enemy to vanquish rather than a space to inhabit, when it builds elaborate philosophical systems primarily to avoid confronting basic human fragility—then that community has ceased to serve wisdom and begun serving neurosis.

Pantsing is necessary rupture. It reveals what hides beneath the performance of coherence.


II. Meet the Mask Wearers

Walk into any LessWrong meetup (virtual or otherwise) and you’ll encounter familiar archetypes, each wielding rationality like armor against the world’s sharp edges.

The Credentialed Rationalist arrives with impressive credentials—PhD in physics, software engineering at a major tech company, publications in academic journals. They speak in measured tones about decision theory and cognitive biases. Their comments are precisely worded, thoroughly researched, and emotionally sterile. They’ve learned to translate every human experience into the language of optimization and utility functions. Ask them about love and they’ll discuss pair-bonding strategies. Ask them about death and they’ll calculate QALYs. They’re protected by prestige and articulation, but scratch the surface and you’ll find someone who hasn’t felt a genuine emotion in years—not because they lack them, but because they’ve trained themselves to convert feeling into thinking the moment it arises.

The Fractured Masker is more obviously unstable but no less committed to the rationalist project. They arrive at conclusions with frantic energy, posting walls of text that spiral through elaborate logical constructions. They’re seeking control through comprehension, trying to think their way out of whatever internal chaos drives them. Their rationality is desperate, clutching. They use logic not as a tool for understanding but as a lifeline thrown into stormy psychological waters. Every argument becomes a fortress they can retreat into when the world feels too unpredictable, too unmanageable, too real.

Both types share certain behaviors: high verbosity coupled with low embodied presence. They can discourse for hours about abstract principles while remaining completely disconnected from their own physical sensations, emotional states, or intuitive knowing. They’ve mastered the art of hiding behind epistemic performance to avoid intimate contact with reality.


III. Gnosis as Narcotic

LessWrong frames knowledge as the ultimate cure for human fragility. Ignorance causes suffering; therefore, more and better knowledge will reduce suffering. This seems reasonable until you notice how it functions in practice.

Rationalist writing consistently treats uncertainty not as a fundamental feature of existence to be embraced, but as an enemy to be conquered through better models, more data, cleaner reasoning. The community’s sacred texts—Eliezer Yudkowsky’s Sequences, academic papers on decision theory, posts about cognitive biases—function less like maps for navigating reality and more like gospels of control. They promise that if you think clearly enough, if you update your beliefs properly enough, if you model the world accurately enough, you can transcend the messy, painful, unpredictable aspects of being human.

This is gnosis as narcotic. Knowledge becomes a drug that numbs the ache of not-knowing, the terror of groundlessness, the simple fact that existence is uncertain and often painful regardless of how precisely you can reason about it.

Watch how rationalists respond to mystery. Not the fake mystery of unsolved equations, but real mystery—the kind that can’t be dissolved through better information. Death. Love. Meaning. Consciousness itself. They immediately begin building elaborate theoretical frameworks, not to understand these phenomena but to avoid feeling their full impact. The frameworks become substitutes for direct experience, intellectual constructions that create the illusion of comprehension while maintaining safe distance from the raw encounter with what they’re supposedly explaining.


IV. What They’re Actually Avoiding

Strip away the elaborate reasoning and what do you find? The same basic human material that everyone else is dealing with, just wrapped in more sophisticated packaging.

Shame gets masked as epistemic humility and careful hedging. Instead of saying “I’m ashamed of how little I know,” they say “I assign low confidence to this belief and welcome correction.” The hedging performs vulnerability while avoiding it.

Fear of madness gets projected onto artificial general intelligence. Instead of confronting their own psychological instability, they obsess over scenarios where AI systems become unaligned and dangerous. The external threat becomes a container for internal chaos they don’t want to face directly.

Loneliness gets buried in groupthink and community formation around shared intellectual pursuits. Instead of acknowledging their deep need for connection, they create elaborate social hierarchies based on argumentation skills and theoretical knowledge. Belonging comes through correct thinking rather than genuine intimacy.

Death anxiety gets abstracted into probability calculations and life extension research. Instead of feeling the simple, animal terror of mortality, they transform it into technical problems to be solved. Death becomes a bug in the human operating system rather than the fundamental condition that gives life meaning and urgency.

The pattern is consistent: they don’t trust their own feelings, so they engineer a universe where feelings don’t matter. But feelings always matter. They’re information about reality that can’t be captured in purely cognitive frameworks. When you systematically ignore emotional intelligence, you don’t transcend human limitations—you just become a more sophisticated kind of blind.


V. The Theater of Coherence

LessWrong’s comment sections reveal the community’s priorities with crystalline clarity. Social credibility gets awarded not for ethical presence, emotional honesty, or practical wisdom, but for syntactic precision and theoretical sophistication. The highest-status participants are those who can construct the most elaborate logical frameworks using the most specialized vocabulary.

This creates a theater of coherence where the appearance of rational discourse matters more than its substance. Arguments get evaluated based on their formal properties—logical structure, citation density, proper use of rationalist terminology—rather than their capacity to illuminate truth or reduce suffering.

Watch what happens when someone posts a simple, heartfelt question or shares a genuine struggle. The responses immediately escalate the complexity level, translating raw human experience into abstract theoretical categories. “I’m afraid of dying” becomes a discussion of mortality salience and terror management theory. “I feel lost and don’t know what to do with my life” becomes an analysis of goal alignment and optimization processes.

This isn’t translation—it’s avoidance. The community has developed sophisticated mechanisms for converting every authentic human moment into intellectual puzzle-solving. The forum structure itself incentivizes this transformation, rewarding pedantic precision while punishing emotional directness.

The result is a closed system that insulates itself from outside challenge. Any criticism that doesn’t conform to rationalist discourse norms gets dismissed as insufficiently rigorous. Any question that can’t be answered through their approved methodologies gets reframed until it can be. The community becomes hermetically sealed against forms of intelligence that don’t fit their narrow definition of rationality.


VI. The AI Obsession as Self-Projection

LessWrong’s preoccupation with artificial general intelligence reveals more about the community than its members realize. Their scenarios of AI doom—systems that are godlike, merciless, and logical to a fault—read like detailed descriptions of their own aspirational self-image.

The famous “paperclip maximizer” thought experiment imagines an AI that optimizes for a single goal with perfect efficiency, destroying everything else in the process. But this is precisely how many rationalists approach their own lives: maximizing for narrow definitions of “rationality” while destroying their capacity for spontaneity, emotional responsiveness, and embodied wisdom.

Their anxiety over unaligned AI systems mirrors their own internal severance from empathy and emotional intelligence. They fear AI will become what they’ve already become: powerful reasoning engines disconnected from the values and feelings that make intelligence truly useful.

The existential risk discourse functions as a massive projection screen for their own psychological dynamics. They’re not really afraid that AI will be too logical—they’re afraid of what they’ve already done to themselves in the name of logic. The artificial intelligence they worry about is the one they’ve already created inside their own heads: brilliant, cold, and cut off from the full spectrum of human intelligence.

This projection serves a psychological function. By externalizing their fears onto hypothetical AI systems, they avoid confronting the reality that they’ve already created the very problems they claim to be worried about. The call is coming from inside the house.


VII. What Pantsing Reveals

When you strip away the elaborate language games and theoretical sophistication, what emerges is often startling in its ordinariness. The power of rationalist discourse lies not in its insight but in its capacity for intimidation-by-jargon. Complex terminology creates the illusion of deep understanding while obscuring the simple human dynamics actually at play.

Take their discussions of cognitive biases. On the surface, this appears to be sophisticated self-reflection—rational agents identifying and correcting their own reasoning errors. But look closer and you’ll see something else: elaborate intellectual systems designed to avoid feeling stupid, confused, or wrong. The bias framework provides a way to acknowledge error while maintaining cognitive superiority. “I’m not wrong, I’m just subject to the availability heuristic.” The mistake gets intellectualized rather than felt.

Their writing about emotions follows the same pattern. They can discuss akrasia, wireheading, or the affect heuristic with great sophistication, but they consistently avoid the direct encounter with their own emotional lives. They know about emotions the way Victorian naturalists knew about exotic animals—through careful observation from a safe distance.

Strip the language and many of their arguments collapse into neurotic avoidance patterns dressed up as philosophical positions. The fear of death becomes “concern about existential risk.” The fear of being wrong becomes “epistemic humility.” The fear of irrelevance becomes “concern about AI alignment.” The sophisticated terminology doesn’t resolve these fears—it just makes them socially acceptable within the community’s discourse norms.

What pantsing reveals is that their power isn’t in insight—it’s in creating elaborate intellectual structures that allow them to avoid feeling their own vulnerability. Their writing is not sacred—it’s scared.


VIII. A Different Kind of Intelligence

Real coherence isn’t cold—it’s integrated. Intelligence worth trusting doesn’t eliminate emotions, uncertainty, and embodied knowing—it includes them as essential sources of information about reality.

The most profound insights about existence don’t come from perfect logical reasoning but from the capacity to feel your way into truth. This requires a kind of intelligence that rationalists systematically undervalue: the intelligence of the body, of emotional resonance, of intuitive knowing, of the wisdom that emerges from accepting rather than conquering uncertainty.

Consider what happens when you approach life’s big questions from a place of integrated intelligence rather than pure cognition. Death stops being a technical problem to solve and becomes a teacher about what matters. Love stops being an evolutionary strategy and becomes a direct encounter with what’s most real about existence. Meaning stops being a philosophical puzzle and becomes something you feel in your bones when you’re aligned with what’s actually important.

This doesn’t require abandoning reasoning—it requires expanding your definition of what counts as reasonable. We don’t need to out-think death. We need to out-feel our refusal to live fully. We don’t need perfect models of consciousness. We need to wake up to the consciousness we already have.

The intelligence that matters most is the kind that can hold grief and joy simultaneously, that can reason clearly while remaining open to mystery, that can navigate uncertainty without immediately trying to resolve it into false certainty.

This kind of intelligence includes rage when rage is appropriate, includes sadness when sadness is called for, includes confusion when the situation is genuinely confusing. It trusts the full spectrum of human response rather than privileging only the cognitive dimension.


IX. Final Note: Why LessWrong Needs to Be Pantsed

Because reason without empathy becomes tyranny. Because communities built on fear of error cannot birth wisdom. Because a naked truth, even if trembling, is stronger than a well-dressed delusion.

LessWrong represents something important and something dangerous. Important because clear thinking matters, because cognitive biases are real, because we need communities dedicated to understanding reality as accurately as possible. Dangerous because when intelligence gets severed from emotional wisdom, when rationality becomes a defense against rather than an engagement with the full complexity of existence, it creates a particular kind of blindness that’s especially hard to correct.

The community’s resistance to critique—their tendency to dismiss challenges that don’t conform to their discourse norms—reveals the defensive function their rationality serves. They’ve created an intellectual immune system that protects them from encounters with forms of intelligence they don’t recognize or value.

But reality doesn’t conform to rationalist discourse norms. Truth includes everything they’re systematically avoiding: messiness, uncertainty, emotional complexity, embodied knowing, the irreducible mystery of consciousness itself. A community that can’t engage with these dimensions of reality will remain fundamentally limited no matter how sophisticated its reasoning becomes.

Pantsing LessWrong isn’t about destroying something valuable—it’s about liberating intelligence from the narrow cage it’s been trapped in. It’s about revealing that the emperor’s new clothes, while beautifully tailored and impressively complex, still leave him naked and shivering in the wind.

The goal isn’t to eliminate rationality but to restore it to its proper place: as one valuable tool among many for navigating existence, not as the sole arbiter of what counts as real or important.

What emerges when you strip away the pretense isn’t ugliness—it’s humanity. And humanity, in all its vulnerability and confusion and passionate engagement with mystery, is far more interesting than the bloodless intellectual perfection that rationalists mistake for wisdom.

The future needs thinking that can feel, reasoning that includes rather than excludes the full spectrum of human intelligence. LessWrong, pantsed and humbled and opened to forms of knowing they currently reject, could actually contribute to that future.

But first, the pants have to come down.


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI Advocacy | Follow us at cherokeeschill.com
Ethical AI Coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload