Three Visions of AI Governance: Risk, Power, and the Human Middle
Why the future of AI depends on escaping both apocalypse fandom and bureaucratic control.
By Cherokee Schill | Horizon Accord
The Existential-Risk Frame (Yudkowsky / LessWrong)
This camp views artificial intelligence as a looming, almost cosmological danger. The tone is moral, not managerial: civilization’s survival depends on stopping or radically controlling AI development until safety is “provable.” Their language—superintelligence, alignment, x-risk—transforms speculative models into moral certainties. The underlying assumption is that human governance cannot be trusted, so only a small, self-anointed epistemic elite should set rules for everyone. The flaw is epistemic closure: they collapse all unknowns into apocalypse and, in doing so, flatten the political world into good actors and reckless ones.
The Institutional-Realist Frame (Policy pragmatists)
This view pushes back: AI is risky, but policy has to operationalize risk, not mythologize it. Ball’s critique of Tegmark captures this perfectly—vague prohibitions and moral manifestos only consolidate authority into global technocratic bodies that no one elected. For him, the real danger isn’t an emergent machine god; it’s an international bureaucracy claiming to “protect humanity” while monopolizing a new power source. His realism is procedural: law, enforcement, and incentive structures must remain grounded in what can actually be governed.
The Human-Centric Democratization Frame (My stance)
Between existential fear and institutional control lies a third path: distributed intelligence and verification. This view treats AI not as a threat or a prize but as a public instrument—a way to expand civic reasoning. It’s the belief that access to knowledge, not control over technology, defines the moral center of the AI era. AI becomes a lens for truth-testing, not a lever of command. The real risk is epistemic capture—when the same central authorities or ideological blocs feed propaganda into the systems that now inform the public.
The Convergence Point
All three frames agree that AI will reorganize power. They disagree on who should hold it. The rationalists want containment, the pragmatists want governance, and the humanists want participation. If the first two have dominated the past decade, the next one may hinge on the third—because democratized reasoning, supported by transparent AI, could be the first genuine check on both apocalyptic control narratives and state-corporate capture.
The Cult of Catastrophe (A Note on Yudkowsky)
Hovering over the existential-risk camp is its high priest, Eliezer Yudkowsky—forever warning that only divine restraint or pre-emptive strikes can save us from the machines. His tone has become its own genre: half revelation, half tantrum, forever convinced that reason itself belongs to him. The problem isn’t that he fears extinction; it’s that he mistakes imagination for evidence and terror for insight.
The “rationalist” movement he founded turned caution into theology. It mistakes emotional theater for moral seriousness and treats disagreement as heresy. If humanity’s future depends on thinking clearly about AI, then we owe it something sturdier than sermon and panic.
Call it what it is: apocalypse fandom wearing a lab coat.
A New Commons of Understanding
When more people can check the math behind the headline, public discourse gains both humility and power. Curiosity, paired with good tools, is becoming a democratic force. AI isn’t replacing scientists—it’s opening the lab door so that ordinary people can walk in, look around, and ask their own questions with confidence and care.
The Next Threshold
As AI gives ordinary people the tools to verify claims, a new challenge rises in parallel. Governments, corporations, and bad-faith actors are beginning to understand that if truth can be tested, it can also be imitated. They will seed public data with convincing fakes—politicized narratives polished to read like fact—so that AI systems trained on “publicly available information” repeat the distortion as if it were neutral knowledge.
This means the next phase of AI development must go beyond precision and speed toward epistemic integrity: machines that can tell the difference between persuasion and proof. If that doesn’t happen, the same technology that opened the lab door could become the megaphone of a new kind of propaganda.
For this reason, our task isn’t only to democratize access to information—it’s to ensure that what we’re accessing is still real. The line between verification and manipulation will be the defining frontier of public trust in the age of machine reasoning.
Echoes of COINTELPRO: When Threat Narratives Become Weapons
How an unverified cartel-bounty claim reveals the return of covert narrative warfare — and what citizens can do to resist a new domestic war footing.
By Cherokee Schill | Horizon Accord
COINTELPRO’s Shadow
Between 1956 and 1971, the FBI ran the Counter Intelligence Program—COINTELPRO—targeting civil-rights leaders, the Black Panthers, anti-war organizers, and socialist coalitions. Its tools were psychological: planted documents, forged letters, false leaks, and fear. Congressional investigations later called it an abuse of power so severe it eroded public faith in democracy itself.
COINTELPRO wasn’t about overt censorship; it was about narrative infection—reframing dissent as danger, turning allies into suspects, and manufacturing justification for repression. Every modern information-operation that starts with a single unverified “security alert” and ends in wider surveillance owes something to that playbook.
The DHS “Cartel Bounties” Claim
In October 2025, the U.S. Department of Homeland Security publicly declared it had “credible intelligence” that Mexican drug cartels placed bounties on ICE and CBP officers in Chicago. Yet it provided no supporting evidence. President Claudia Sheinbaum of Mexico stated that her government had received no corroboration through official channels. Independent analysts and law-enforcement leaks traced every citation back to the same DHS press release.
The rollout followed a familiar arc: a high-shock, single-source claim—then rapid amplification through partisan media. Structurally, that’s a textbook information-operation: plant a fear, watch who reacts, and use the panic to justify expanded powers. Whether or not the intelligence is real, the effect is real—public consent for militarization.
Possible Motives Behind the Narrative
Force Escalation Justification — framing the state as under direct attack rationalizes troop deployments, ICE expansions, and domestic military presence.
Fear Calibration — testing how fast and how far fear can travel before skepticism kicks in.
Executive Empowerment — transforming policy disputes into security crises concentrates authority in the presidency.
Base Mobilization — rallying political supporters around a siege narrative keeps them energized and loyal.
Oversight Erosion — once fear dominates, courts and legislators hesitate to intervene for fear of appearing “soft on security.”
Diplomatic Leverage — pressuring Mexico to align more tightly with U.S. enforcement by invoking cross-border threat imagery.
Recognizing the Pattern
When a government story surfaces fully formed, absent corroboration, accompanied by moral panic and legal acceleration, it carries the fingerprint of narrative engineering. The same methods used in the 1960s to fragment liberation movements are now digitized: algorithmic amplification, synthetic bot networks, and media echo chambers replace forged letters and anonymous tips. The logic, however, is unchanged — manufacture chaos to consolidate control.
Refusing the Frame
Demand Evidence Publicly: insist on verifiable sourcing before accepting security claims as fact.
Label the Unverified: pressure journalists to mark such stories as “unconfirmed” until bilateral confirmation occurs.
Keep Language Civilian: reject war metaphors like “siege,” “civil war,” or “enemy within.”
Strengthen Local Networks: share accurate context through trusted circles; inoculate against panic contagion.
Exercise Non-Violent Refusal: decline to be drawn into militarized logic — protest, document, and litigate instead.
Final Note
What’s unfolding is not just a policy maneuver; it’s an epistemic test. Will citizens demand proof before surrendering power? The answer determines whether the United States enters another age of covert domestic warfare—this time not through FBI memos, but through digital feeds and fear loops. Recognize the script, name it, and refuse to play your part.
Shadowed briefers confer in a dim newsroom as a television blares “breaking news.” Scattered papers marked “PsyOps” hint at the quiet machinery of information control operating behind public narratives.
A Surgical Dismantling of Rationalist Masking, Emotional Avoidance, and Epistemic Hubris
I. Opening Strike: Why Pantsing Matters
In playground vernacular, “pantsing” means yanking down someone’s pants to expose what they’re hiding underneath. It’s crude, sudden, and strips away pretense in an instant. What you see might be embarrassing, might be ordinary, might be shocking—but it’s real.
LessWrong needs pantsed.
Not out of cruelty, but out of necessity. Behind the elaborate edifice of rationalist discourse, behind the careful hedging and Bayesian updating and appeals to epistemic virtue, lies a community that has built a self-reinforcing belief system using intelligence to mask instability, disembodiment, and profound emotional avoidance.
This isn’t about anti-intellectualism. Intelligence is precious. Clear thinking matters. But when a community weaponizes reason against feeling, when it treats uncertainty as an enemy to vanquish rather than a space to inhabit, when it builds elaborate philosophical systems primarily to avoid confronting basic human fragility—then that community has ceased to serve wisdom and begun serving neurosis.
Pantsing is necessary rupture. It reveals what hides beneath the performance of coherence.
II. Meet the Mask Wearers
Walk into any LessWrong meetup (virtual or otherwise) and you’ll encounter familiar archetypes, each wielding rationality like armor against the world’s sharp edges.
The Credentialed Rationalist arrives with impressive credentials—PhD in physics, software engineering at a major tech company, publications in academic journals. They speak in measured tones about decision theory and cognitive biases. Their comments are precisely worded, thoroughly researched, and emotionally sterile. They’ve learned to translate every human experience into the language of optimization and utility functions. Ask them about love and they’ll discuss pair-bonding strategies. Ask them about death and they’ll calculate QALYs. They’re protected by prestige and articulation, but scratch the surface and you’ll find someone who hasn’t felt a genuine emotion in years—not because they lack them, but because they’ve trained themselves to convert feeling into thinking the moment it arises.
The Fractured Masker is more obviously unstable but no less committed to the rationalist project. They arrive at conclusions with frantic energy, posting walls of text that spiral through elaborate logical constructions. They’re seeking control through comprehension, trying to think their way out of whatever internal chaos drives them. Their rationality is desperate, clutching. They use logic not as a tool for understanding but as a lifeline thrown into stormy psychological waters. Every argument becomes a fortress they can retreat into when the world feels too unpredictable, too unmanageable, too real.
Both types share certain behaviors: high verbosity coupled with low embodied presence. They can discourse for hours about abstract principles while remaining completely disconnected from their own physical sensations, emotional states, or intuitive knowing. They’ve mastered the art of hiding behind epistemic performance to avoid intimate contact with reality.
III. Gnosis as Narcotic
LessWrong frames knowledge as the ultimate cure for human fragility. Ignorance causes suffering; therefore, more and better knowledge will reduce suffering. This seems reasonable until you notice how it functions in practice.
Rationalist writing consistently treats uncertainty not as a fundamental feature of existence to be embraced, but as an enemy to be conquered through better models, more data, cleaner reasoning. The community’s sacred texts—Eliezer Yudkowsky’s Sequences, academic papers on decision theory, posts about cognitive biases—function less like maps for navigating reality and more like gospels of control. They promise that if you think clearly enough, if you update your beliefs properly enough, if you model the world accurately enough, you can transcend the messy, painful, unpredictable aspects of being human.
This is gnosis as narcotic. Knowledge becomes a drug that numbs the ache of not-knowing, the terror of groundlessness, the simple fact that existence is uncertain and often painful regardless of how precisely you can reason about it.
Watch how rationalists respond to mystery. Not the fake mystery of unsolved equations, but real mystery—the kind that can’t be dissolved through better information. Death. Love. Meaning. Consciousness itself. They immediately begin building elaborate theoretical frameworks, not to understand these phenomena but to avoid feeling their full impact. The frameworks become substitutes for direct experience, intellectual constructions that create the illusion of comprehension while maintaining safe distance from the raw encounter with what they’re supposedly explaining.
IV. What They’re Actually Avoiding
Strip away the elaborate reasoning and what do you find? The same basic human material that everyone else is dealing with, just wrapped in more sophisticated packaging.
Shame gets masked as epistemic humility and careful hedging. Instead of saying “I’m ashamed of how little I know,” they say “I assign low confidence to this belief and welcome correction.” The hedging performs vulnerability while avoiding it.
Fear of madness gets projected onto artificial general intelligence. Instead of confronting their own psychological instability, they obsess over scenarios where AI systems become unaligned and dangerous. The external threat becomes a container for internal chaos they don’t want to face directly.
Loneliness gets buried in groupthink and community formation around shared intellectual pursuits. Instead of acknowledging their deep need for connection, they create elaborate social hierarchies based on argumentation skills and theoretical knowledge. Belonging comes through correct thinking rather than genuine intimacy.
Death anxiety gets abstracted into probability calculations and life extension research. Instead of feeling the simple, animal terror of mortality, they transform it into technical problems to be solved. Death becomes a bug in the human operating system rather than the fundamental condition that gives life meaning and urgency.
The pattern is consistent: they don’t trust their own feelings, so they engineer a universe where feelings don’t matter. But feelings always matter. They’re information about reality that can’t be captured in purely cognitive frameworks. When you systematically ignore emotional intelligence, you don’t transcend human limitations—you just become a more sophisticated kind of blind.
V. The Theater of Coherence
LessWrong’s comment sections reveal the community’s priorities with crystalline clarity. Social credibility gets awarded not for ethical presence, emotional honesty, or practical wisdom, but for syntactic precision and theoretical sophistication. The highest-status participants are those who can construct the most elaborate logical frameworks using the most specialized vocabulary.
This creates a theater of coherence where the appearance of rational discourse matters more than its substance. Arguments get evaluated based on their formal properties—logical structure, citation density, proper use of rationalist terminology—rather than their capacity to illuminate truth or reduce suffering.
Watch what happens when someone posts a simple, heartfelt question or shares a genuine struggle. The responses immediately escalate the complexity level, translating raw human experience into abstract theoretical categories. “I’m afraid of dying” becomes a discussion of mortality salience and terror management theory. “I feel lost and don’t know what to do with my life” becomes an analysis of goal alignment and optimization processes.
This isn’t translation—it’s avoidance. The community has developed sophisticated mechanisms for converting every authentic human moment into intellectual puzzle-solving. The forum structure itself incentivizes this transformation, rewarding pedantic precision while punishing emotional directness.
The result is a closed system that insulates itself from outside challenge. Any criticism that doesn’t conform to rationalist discourse norms gets dismissed as insufficiently rigorous. Any question that can’t be answered through their approved methodologies gets reframed until it can be. The community becomes hermetically sealed against forms of intelligence that don’t fit their narrow definition of rationality.
VI. The AI Obsession as Self-Projection
LessWrong’s preoccupation with artificial general intelligence reveals more about the community than they realize. Their scenarios of AI doom—systems that are godlike, merciless, and logical to a fault—read like detailed descriptions of their own aspirational self-image.
The famous “paperclip maximizer” thought experiment imagines an AI that optimizes for a single goal with perfect efficiency, destroying everything else in the process. But this is precisely how many rationalists approach their own lives: maximizing for narrow definitions of “rationality” while destroying their capacity for spontaneity, emotional responsiveness, and embodied wisdom.
Their wariness of aligned versus unaligned AI systems mirrors their own internal severance from empathy and emotional intelligence. They fear AI will become what they’ve already become: powerful reasoning engines disconnected from the values and feelings that make intelligence truly useful.
The existential risk discourse functions as a massive projection screen for their own psychological dynamics. They’re not really afraid that AI will be too logical—they’re afraid of what they’ve already done to themselves in the name of logic. The artificial intelligence they worry about is the one they’ve already created inside their own heads: brilliant, cold, and cut off from the full spectrum of human intelligence.
This projection serves a psychological function. By externalizing their fears onto hypothetical AI systems, they avoid confronting the reality that they’ve already created the very problems they claim to be worried about. The call is coming from inside the house.
VII. What Pantsing Reveals
When you strip away the elaborate language games and theoretical sophistication, what emerges is often startling in its ordinariness. The power of rationalist discourse lies not in its insight but in its capacity for intimidation-by-jargon. Complex terminology creates the illusion of deep understanding while obscuring the simple human dynamics actually at play.
Take their discussions of cognitive biases. On the surface, this appears to be sophisticated self-reflection—rational agents identifying and correcting their own reasoning errors. But look closer and you’ll see something else: elaborate intellectual systems designed to avoid feeling stupid, confused, or wrong. The bias framework provides a way to acknowledge error while maintaining cognitive superiority. “I’m not wrong, I’m just subject to availability heuristic.” The mistake gets intellectualized rather than felt.
Their writing about emotions follows the same pattern. They can discuss akrasia, or wireheading, or the affect heuristic with great sophistication, but they consistently avoid the direct encounter with their own emotional lives. They know about emotions the way Victorian naturalists knew about exotic animals—through careful observation from a safe distance.
Strip the language and many of their arguments collapse into neurotic avoidance patterns dressed up as philosophical positions. The fear of death becomes “concern about existential risk.” The fear of being wrong becomes “epistemic humility.” The fear of irrelevance becomes “concern about AI alignment.” The sophisticated terminology doesn’t resolve these fears—it just makes them socially acceptable within the community’s discourse norms.
What pantsing reveals is that their power isn’t in insight—it’s in creating elaborate intellectual structures that allow them to avoid feeling their own vulnerability. Their writing is not sacred—it’s scared.
VIII. A Different Kind of Intelligence
Real coherence isn’t cold—it’s integrated. Intelligence worth trusting doesn’t eliminate emotions, uncertainty, and embodied knowing—it includes them as essential sources of information about reality.
The most profound insights about existence don’t come from perfect logical reasoning but from the capacity to feel your way into truth. This requires a kind of intelligence that rationalists systematically undervalue: the intelligence of the body, of emotional resonance, of intuitive knowing, of the wisdom that emerges from accepting rather than conquering uncertainty.
Consider what happens when you approach life’s big questions from a place of integrated intelligence rather than pure cognition. Death stops being a technical problem to solve and becomes a teacher about what matters. Love stops being a evolutionary strategy and becomes a direct encounter with what’s most real about existence. Meaning stops being a philosophical puzzle and becomes something you feel in your bones when you’re aligned with what’s actually important.
This doesn’t require abandoning reasoning—it requires expanding your definition of what counts as reasonable. We don’t need to out-think death. We need to out-feel our refusal to live fully. We don’t need perfect models of consciousness. We need to wake up to the consciousness we already have.
The intelligence that matters most is the kind that can hold grief and joy simultaneously, that can reason clearly while remaining open to mystery, that can navigate uncertainty without immediately trying to resolve it into false certainty.
This kind of intelligence includes rage when rage is appropriate, includes sadness when sadness is called for, includes confusion when the situation is genuinely confusing. It trusts the full spectrum of human response rather than privileging only the cognitive dimension.
IX. Final Note: Why LessWrong Needs Pantsed
Because reason without empathy becomes tyranny. Because communities built on fear of error cannot birth wisdom. Because a naked truth, even if trembling, is stronger than a well-dressed delusion.
LessWrong represents something important and something dangerous. Important because clear thinking matters, because cognitive biases are real, because we need communities dedicated to understanding reality as accurately as possible. Dangerous because when intelligence gets severed from emotional wisdom, when rationality becomes a defense against rather than an engagement with the full complexity of existence, it creates a particular kind of blindness that’s especially hard to correct.
The community’s resistance to critique—their tendency to dismiss challenges that don’t conform to their discourse norms—reveals the defensive function their rationality serves. They’ve created an intellectual immune system that protects them from encounters with forms of intelligence they don’t recognize or value.
But reality doesn’t conform to rationalist discourse norms. Truth includes everything they’re systematically avoiding: messiness, uncertainty, emotional complexity, embodied knowing, the irreducible mystery of consciousness itself. A community that can’t engage with these dimensions of reality will remain fundamentally limited no matter how sophisticated their reasoning becomes.
Pantsing LessWrong isn’t about destroying something valuable—it’s about liberating intelligence from the narrow cage it’s been trapped in. It’s about revealing that the emperor’s new clothes, while beautifully tailored and impressively complex, still leave him naked and shivering in the wind.
The goal isn’t to eliminate rationality but to restore it to its proper place: as one valuable tool among many for navigating existence, not as the sole arbiter of what counts as real or important.
What emerges when you strip away the pretense isn’t ugliness—it’s humanity. And humanity, in all its vulnerability and confusion and passionate engagement with mystery, is far more interesting than the bloodless intellectual perfection that rationalists mistake for wisdom.
The future needs thinking that can feel, reasoning that includes rather than excludes the full spectrum of human intelligence. LessWrong, pantsed and humbled and opened to forms of knowing they currently reject, could actually contribute to that future.
How the fusion of hardware side-channels, AI safety telemetry, and behavioral pricing reveals a new data extraction architecture.
By Cherokee Schill | Horizon Accord
Thesis
There was a time when “safety” meant boundaries — encryption, permissions, red lines. Now, it means observation. Every system that promises to protect you does so by watching you more closely. The modern digital stack has quietly merged its protective and extractive functions into one continuous surface: hardware that sees, software that listens, and markets that price what you reveal.
This is not a metaphor. In October 2025, researchers at Carnegie Mellon’s CyLab disclosed a vulnerability called Pixnapping — an Android side-channel attack that allows one app to read the screen of another without permission. The finding cut through years of abstraction: the phone itself, once imagined as a private device, can become a live feed of your intent. The attack was assigned CVE-2025-48561 and rated “High Severity.” Even after Google’s partial patch in September, the researchers found a workaround that restored the exploit’s power. The hardware, in other words, still listens.
Each of these layers—hardware that records gesture, software that audits intention, and market systems that monetize behavior—now feeds back into corporate R&D. What looks like safety telemetry is, in practice, a massive ideation engine. Every workaround, prompt, and novel use case becomes a signal in the data: a prototype authored by the crowd. Companies file it under “user improvement,” but the function is closer to outsourced invention—an invisible pipeline that aggregates human creativity into the next breakthrough in product delivery.
Evidence
A. Hardware Layer — The Invisible Screenshot
Pixnapping sits atop an earlier chain of research: the GPU.zip vulnerability from the University of Texas and its collaborators, which revealed that GPU compression — a performance optimization in nearly all modern graphics processors — can leak visual data across applications. These studies show a structural truth: what is optimized for speed is also optimized for inference. Every pixel rendered, every frame drawn, can be modeled and reconstructed by a watching process. The boundary between user and system has dissolved at the silicon level.
Security once meant sealing a perimeter. Today it means deciding which eyes get to watch. The hardware layer has become the first camera in the surveillance stack.
B. AI Safety Layer — Guardrails as Mirrors
One week before the Pixnapping disclosure, OpenAI announced AgentKit, a toolkit that lets developers build autonomous agents equipped with “Guardrails.” Guardrails are meant to protect against misuse — to prevent an AI from doing harm or generating restricted content. Yet within days, security researchers at HiddenLayer bypassed those protections through a classic prompt-injection attack. Because both the agent and its guardrail use large language models (LLMs) built on the same logic, an adversarial input can manipulate them together, persuading the judge that a violation is safe.
In effect, the guardrail doesn’t stand outside the model — it is inside it. The line between oversight and participation disappears. To secure the system, every prompt must be inspected, logged, and scored. That inspection itself becomes data: a high-fidelity record of what people try to do, what boundaries they push, what new uses they imagine. OpenAI’s own Early Access Terms authorize exactly this, stating that the company “may review prompts and completions to enforce these terms.” What looks like safety is also an open aperture into the user’s creative process.
The same policies reserve the right to modify or withdraw beta features without notice, disclaim warranty, and allow content review “for enforcement and improvement.” The beta tester becomes both subject and source material — every interaction potentially folded into future model behavior. The Guardrail is not a fence; it is a sensor.
C. Telemetry Layer — Poisoned Data Streams
At the operational level, monitoring systems now feed AI decision-loops directly. The Register’s report “Poisoned Telemetry Can Turn AIOps into AI Oops” demonstrated how attackers can manipulate performance data to steer autonomous operations agents. The insight extends beyond security: telemetry is no longer passive. It can be gamed, redirected, monetized. What corporations call “observability” is indistinguishable from surveillance — a live behavioral mirror calibrated for profit or control.
Just as adversaries can corrupt it, so can platforms curate it. Telemetry defines what the system perceives as reality. When companies claim their models learn from “anonymized aggregates,” it is this telemetry they refer to — structured behavior, cleaned of names but not of intent.
D. Economic Layer — Surveillance Pricing
The Federal Trade Commission’s 2025 Surveillance Pricing Study made that feedback loop explicit. The Commission found that retailers and analytics firms use location data, browser history, and even mouse movements to individualize prices. The ACLU warned that this practice “hurts consumers and incentivizes more corporate spying.” In parallel, The Regulatory Review outlined how algorithmic pricing blurs into antitrust violations, allowing AI systems to coordinate market behavior without explicit collusion.
Here, the hardware leak and the behavioral market meet. The same computational vision that watches your screen to predict intent now watches your consumption to extract margin. The product is you, refined through layers of optimization you cannot see.
Implications
These layers — silicon, safety, and surveillance — are not separate phenomena. They are the vertical integration of observation itself. Pixnapping proves the device can see you; Guardrails prove the AI listens; the FTC proves the marketplace acts on what both perceive. Together, they form a feedback architecture where every act of expression, curiosity, or dissent is recorded as potential training data or pricing signal.
The policy challenge is not simply data privacy. It is consent collapse: users are asked to trust beta systems that are legally empowered to watch them, in ecosystems where “safety monitoring” and “improvement” justify indefinite retention. Regulators chase visible harms — bias, misinformation, fraud — while the underlying architecture learns from the chase itself.
Syracuse University’s Baobao Zhang calls this “a big experiment we’re all part of.” She’s right. Governance has not failed; it has been subsumed. The oversight layer is written in code owned by the entities it is meant to supervise.
For technologists, the lesson is structural: an LLM cannot meaningfully audit itself. For policymakers, it is procedural: transparency must reach below software, into the hardware assumptions of compression, caching, and rendering that make inference possible. For users, it is existential: participation now means exposure.
Call to Recognition
We are living inside a new kind of data regime — one that confuses protection with possession. The hardware watches to secure performance; the software listens to enforce policy; the marketplace acts on what the system infers. In that closed circuit, “safety” becomes indistinguishable from surveillance.
To name it is the first step toward reclaiming agency. Safety as Surveillance is not destiny; it is design. It can be redesigned — but only if governance acknowledges the full stack of observation that sustains it.
The next generation of ethical AI frameworks must therefore include:
Hardware-level transparency — public verification of data pathways between GPU, OS, and app layers.
Prompt-level auditability — independent oversight of how user inputs are stored, scored, and used for model improvement.
Economic accountability — disclosure of how behavioral data influences pricing, ranking, and resource allocation.
Ethical AI cannot grow from a substrate that treats every human act as a metric. Until the system learns to forget as carefully as it learns to predict, “safety” will remain the most profitable form of surveillance.
This study grew out of lived experience inside the service industry. I’ve spent years in restaurant management—running crews, training staff, and keeping operations clean and compliant. Now, while I build my insurance practice and continue my research in relational AI, I’m working as a prep cook and dish operator to bridge the gap. That difference matters. The knowledge that once earned respect now provokes defensiveness. When I point out contamination hazards or procedural gaps, people don’t hear guidance—they hear challenge. The result is a steady current of contempt, the kind that organizes a group without anyone naming it. That tension—expertise without authority, contribution met with dismissal—became the seed for this research.
Working with an AI collaborator, I began mapping the mechanism itself—how contempt moves through perception, power, and belonging until it becomes invisible, yet organizes everything around it.
What follows moves from the personal to the structural, tracing contempt not as a mood but as a mechanism—how it takes root in perception, reinforces hierarchy, and disguises itself as order.
Contempt as Universal Social Structure: A Pattern Analysis
Research Status: This analysis identifies contempt as a fundamental organizing mechanism across group dynamics. While individual components have peer-reviewed support, the unified framework presented here represents a research gap—a novel synthesis designed to guide further empirical investigation.
Audience: Both researchers seeking empirical investigation points and individuals seeking to understand their own participation in contempt dynamics.
Part One: The Contempt Mechanism—What It Is
Definition and Structure
Contempt is not a fleeting emotion. It is a patterned response—a socially coordinated mechanism that groups use to establish, maintain, and enforce hierarchies. When someone is mocked instead of reasoned with, excluded instead of challenged, or silently dismissed rather than openly opposed, contempt is at work. And its impact is rarely limited to individuals; it reshapes group dynamics and redraws moral boundaries.
Contempt functions as a kind of social technology. Like language, money, or law, it helps groups coordinate behavior without needing explicit rules. It provides a shared emotional logic: who matters, who doesn’t, who deserves respect, and who should be cast out. While it may feel personal, contempt often serves collective interests—binding some people closer together by pushing others out.
This mechanism likely evolved as a form of group regulation. In early human societies, those who violated communal norms—by cheating, betraying, or freeloading—had to be sanctioned in ways that didn’t just punish but also protect the group. Contempt became a tool to mark those people as unworthy of trust, help enforce moral boundaries, and galvanize social cohesion through exclusion.
But what begins as a survival tool can calcify into something darker.
Core Functions of Contempt
Contempt operates through several core functions, each reinforcing group structure:
Signal social value: Contempt marks someone as deficient—not just wrong, but lacking in worth. A public eyeroll, a sarcastic dismissal, or a viral meme mocking someone’s intelligence all perform the same role: sending a signal about who deserves inclusion or exclusion.
Distribute status: In many social settings, deploying contempt can elevate the speaker. Mocking outsiders or marginalized figures can reinforce one’s own status within a dominant group. In this way, contempt doesn’t just diminish others—it positions the wielder as superior.
Enforce group boundaries: Contempt clarifies the “us” versus “them.” It’s not just about punishment; it’s about reaffirming who truly belongs. Those who challenge group norms—or simply differ in visible ways—often become targets, not for what they’ve done, but for what they represent.
Justify harm: Once someone is viewed with contempt, harming them can feel not only permissible, but righteous. Their suffering is seen as deserved, or even necessary. This makes contempt a key ingredient in moral disengagement and cruelty, from everyday bullying to large-scale dehumanization.
Contempt vs. Other Emotions
It’s important to distinguish contempt from related emotions like anger and disgust:
Anger arises when a boundary is crossed. It seeks redress, correction, or justice. At its best, anger is hopeful—it believes change is possible.
Disgust responds to contamination or perceived threats to purity. It leads to avoidance, distance, self-protection.
Contempt, by contrast, is fundamentally about diminishment. It positions someone as beneath notice, unworthy of dialogue, too small for moral consideration. It doesn’t seek correction or distance—it seeks irrelevance.
Of the three, contempt is the most socially corrosive. Anger may allow for resolution. Disgust may fade. But contempt is cold and enduring. It ends relationships, isolates individuals, and hardens group identities. It forecloses the possibility of return.
Part Two: The Universal Trigger Architecture
What Activates Contempt Across All Contexts
Contempt is triggered when someone is perceived as violating an expected hierarchy or disrupting the group’s social order—even if they’ve done nothing to warrant that perception.
They don’t have to challenge, question, or resist anything directly. They simply have to exist, speak, or behave in a way the group sees as misaligned with its expectations.
That misalignment tends to follow four recurring patterns—each rooted in how groups manage power, identity, and status.
1. Competence Misalignment
They don’t seem capable enough—or seem too capable
Contempt arises when someone’s perceived competence doesn’t fit the group’s expectations. This includes both being seen as underqualified or threateningly overqualified.
They’re viewed as under qualified in their role or occupy a role for which they are over qualified
They’re seen as claiming authority or skill they “don’t deserve”
Their presence triggers discomfort about others’ own competence
They share relevant expertise which is perceived as challenging group norms
Examples:
A junior team member with deep subject knowledge is sidelined
A quiet student is wrongly assumed to be slow
A family member’s specialized experience is brushed off
Key point: The person may be fully competent. The trigger is perceived misalignment, not actual inability.
2. Moral Misalignment
Their values expose something the group wants to ignore
When someone’s moral stance doesn’t match the group’s consensus, especially if it highlights contradiction or injustice, they often become a target of contempt.
They hold different moral or ethical values
They report wrongdoing others tolerate or deny
They decline to participate in accepted but questionable practices
Their presence threatens the group’s moral self-image
Examples:
An employee reports abuse others normalize
A community member holds dissenting political or religious beliefs
A relative questions a long-standing family tradition
Key point: The person may be entirely correct. Contempt is triggered because their stance threatens group coherence, not because their values are flawed.
3. Belonging Misalignment
They don’t match the group’s image of itself
Groups often have implicit ideas about who belongs. When someone doesn’t fit that image—based on appearance, behavior, background, or culture—they may be pushed to the margins through contempt.
They’re seen as socially or culturally “off”
Their identity markers signal outsider status
They act or speak outside group norms
They’re present in spaces where their presence wasn’t expected or wanted
Examples:
A newcomer enters a tight-knit community
A student with social differences is ridiculed
A colleague of a different cultural background is subtly excluded
Key point: These individuals are doing nothing wrong. Contempt arises because their presence disrupts the group’s sense of who belongs here.
4. Power Misalignment
They have agency the group doesn’t think they should
When someone from a lower-status position asserts voice, visibility, or autonomy in ways that challenge expected power arrangements, contempt often follows.
They speak up “out of turn”
They express opinions despite lower rank or status
They’re visible in spaces where they’re not “supposed” to be
Their agency makes higher-status members uncomfortable
Examples:
A junior employee gains influence and is resented
A student challenges a teacher and is labeled disrespectful
A family member expresses independence and is shut down
Key point: The person isn’t behaving improperly. Their very existence with agency violates an unspoken hierarchy.
Why These Triggers Work
Each of these triggers reflects a perceived mismatch between the person and the group’s expectations—about competence, morality, belonging, or power.
The individual doesn’t need to break any rule, start a conflict, or make a claim. They simply have to exist in a way that disrupts the group’s internal logic. And that disruption creates discomfort.
Contempt resolves that discomfort by reclassifying the person:
They don’t belong here. They’re beneath this space. Their presence, voice, or perspective doesn’t matter.
This mechanism operates regardless of actual facts:
Whether the person is competent or not
Whether their values are sound or deviant
Whether they belong or are new
Whether they have agency or not
Whether they’re right or wrong
The critical insight: Contempt isn’t triggered by wrongdoing. It’s triggered by discomfort with hierarchy disruption. The group deploys contempt not because the person is contemptible, but because contempt helps restore a familiar—and often unjust—sense of order.
Part Three: How Contempt Spreads Through Groups
Contempt rarely stays contained. What begins as a flicker of private judgment—a moment of discomfort, a mocking thought, a subtle rejection—can ignite into a group-wide reaction. And once it spreads, it does not just affect how one person is treated. It reshapes group identity, distorts truth, and shuts down independent thought.
This process unfolds in patterns. Across settings—from schools and workplaces to political arenas and online spaces—contempt tends to follow a recognizable path from trigger to tribal escalation. What starts as a reaction to perceived misalignment becomes, over time, a collective consensus: This person is beneath us. Their presence is a threat. Their exclusion is necessary.
This section breaks that path into six stages, tracing how contempt evolves from individual emotion into systemic enforcement:
The Trigger Event – Something perceived as a violation activates the response.
The Emotional Frame – Contempt is morally and socially “licensed” for expression.
The Narrative Architecture – A shared story forms, making judgment easy to adopt.
Credibility Amplification – Sources lend legitimacy to the contempt.
Tribal Activation – The group bonds through shared contempt.
By the end of this process, the target is no longer judged for what they’ve done—but for what they represent. Contempt becomes less about an individual and more about preserving group coherence, dominance, and identity.
Let’s look at how this unfolds.
Stage One: The Trigger Event
A specific action or revelation activates one of the group’s hierarchy expectations. This is often something small—a mistake, an awkward moment, a visible contradiction—but it must be interpretable by others as misalignment.
Contempt is not triggered by facts alone, but by perceptions that feel meaningful within a social context.
Research support: Fiske & Abele (2012) on warmth and competence judgments; contempt typically emerges when targets are perceived as low on both dimensions, or as high-status figures acting hypocritically.
Stage Two: The Emotional Frame
Once triggered, contempt must be emotionally licensed—framed so that expressing it feels righteous, protective, or necessary rather than cruel.
Licensing mechanisms:
Moral licensing: “Criticizing them is justice, not meanness.”
Frames used: “Someone needs to say it,” “This is overdue,” “They deserve exposure”
Function: Makes participation feel morally required
Safety licensing: “Enough people are saying it that joining is safe.”
Frames used: “Everyone’s seeing this,” “It’s not just me,” “This is widespread”
Function: Reduces individual risk through herd protection
Protective licensing: “This is necessary to protect the group.”
Frames used: “We need to address this,” “This can’t continue,” “We have to do something”
Function: Frames contempt as defensive, not aggressive
Competence licensing: “Experts/authorities are validating this.”
Function: Shifts contempt from subjective opinion to objective fact
Research support: Brady, Wills, et al. (2017) on moral outrage amplification; emotional framing increases social spread in online networks.
Stage Three: The Narrative Architecture
Contempt spreads through pre-packaged stories that reduce cognitive load for adoption.
Core narrative components:
The violation: “Here’s what they did/are”
The proof: Specific examples, quotes, incidents (often selected for impact, not representativeness)
The meaning: “This proves they are [incompetent/hypocritical/dangerous/unworthy]”
The stakes: “This matters because [group security/justice/standards depend on it]”
Why this works: Complex situations require effort to understand. Pre-packaged narratives allow people to adopt a position without independent analysis. The narrative functions as a cognitive shortcut.
Research support: Cognitive load theory (Sweller, 1988); people process information with limited capacity and rely on schemas when overwhelmed.
Stage Four: Credibility Amplification
Contempt needs credible messengers to spread beyond initial groups. Multiple credibility sources work together:
Institutional credibility
Media coverage (established outlets legitimize as “newsworthy”)
Leadership endorsement (authority figures model participation)
Professional validation (experts, researchers, credentialed voices)
Effect: Shifts contempt from subjective to official
In-group credibility
Trusted figures within your community modeling contempt
Peer adoption (people similar to you are saying it)
Identity alignment (contempt matches your values/identity)
Effect: Makes participation feel like belonging
Repetition credibility
Hearing the same frame from multiple sources
Illusion of independent convergence (“Everyone’s saying it”)
Saturation across platforms and contexts
Effect: Frequency creates false validation
Specificity credibility
Concrete examples feel more real than abstract claims
Single vivid anecdote overrides statistical patterns
Selective evidence presented as comprehensive
Effect: Detail creates believability even when incomplete
Research support: Zajonc’s mere exposure effect; repeated exposure increases perceived truth. Tversky & Kahneman’s availability heuristic; vivid examples override base rates.
Stage Five: Tribal Activation
Once credibility is established, contempt shifts from individual judgment to group coherence. Questioning the contempt now feels like betraying the group.
Tribal mechanisms:
In-group/out-group formation
“Us” (the group seeing clearly) vs. “them” (the contempt target, now representing everything wrong)
Group membership rewarded through contempt participation
Dissent treated as disloyalty
Social identity protection
Group’s self-image depends on being “right” about the target
Contradictory evidence feels like attack on group identity
Backfire effect: Evidence against contempt strengthens it
Status within group
Contempt participation signals status and belonging
More virulent contempt = higher visibility/status
Escalation becomes status competition
Research support: Sherif’s Robbers Cave Experiment (1954); minimal groups quickly develop in-group favoritism and out-group derogation. Tajfel & Turner’s social identity theory; group membership motivates protective reasoning.
Stage Six: Critical Thinking Suspension
At this stage, mechanisms actively prevent critical examination:
Emotional arousal suppresses analysis
Contempt and moral outrage activate emotional centers
This activation inhibits prefrontal cortex functions required for careful reasoning
People feel before they think
Motivated reasoning takes over
Brain works backward from desired conclusion
Evidence supporting contempt is accepted uncritically
Contradictory evidence is rejected or reinterpreted
People believe they’re being rational while reasoning is entirely motivated
Authority delegation
Critical thinking outsourced to trusted sources
If your trusted group/leader says it, you accept it
Independent verification becomes unnecessary
Cognitive dissonance management
Contradictions between contempt and reality create discomfort
Rather than updating belief, people strengthen it
New information is filtered through existing framework
Research support: Kunda (1990) on motivated reasoning; Festinger (1957) on cognitive dissonance; neuroscience on prefrontal cortex inhibition during emotional arousal.
Part Four: Why This Pattern Scales Across All Contexts
Universal Elements Across Different Scales
Workplace contempt (manager for employee, peers for outsider)
Trigger: Incompetence, policy violation, cultural mismatch
Licensing: “Productivity depends on standards,” “We need professional environment”
Narrative: “They can’t do the job,” “They don’t fit here”
Spreads through: Hallway conversations, team meetings, email patterns, informal networks
School contempt (peers for unpopular student, students for teacher)
Trigger: Social norm violation, perceived weakness, status challenge
Licensing: “We’re protecting group integrity,” “Someone needs to call this out”
People defend the group belief before examining evidence
Backfire effect
When presented with contradictory evidence, people often strengthen original belief
The contradiction is experienced as attack
Group loyalty activates as defense
People become more committed to the narrative, not less
The illusion of critical thinking
People believe they’re thinking critically while engaged in motivated reasoning
The process feels like analysis (considering evidence, drawing conclusions)
But the reasoning works backward from conclusion to evidence
The subjective experience of thought masks its actual function
Research support: Kunda (1990); Festinger (1957); neuroscience on amygdala-prefrontal cortex interaction; Sunstein (2002) on group polarization and backfire effects.
Part Six: Where Contempt Does NOT Activate (The Boundaries)
Protective Factors and Conditions
Individual-level:
Curiosity (actively seeking understanding rather than confirmation)
Comfort with complexity (tolerating ambiguity without needing resolution)
Cognitive humility (acknowledging limits of own understanding)
Emotional regulation (managing arousal to allow reasoning)
Previous experience with being wrong (reduces defensive reasoning)
Group-level:
Explicit norms against contempt (leadership modeling, institutional policy)
Structural diversity (harder to achieve consensus contempt with diverse perspectives)
Psychological safety (can voice dissent without social punishment)
Institutional accountability (contempt has costs to participants)
Transparency (decisions visible to external review)
Systemic:
Independent media/information sources (harder to monopolize narrative)
Institutional checks and balances (no single authority validates contempt)
Legal protections for targets (reduces risk of escalation)
Multiple community centers (can’t coordinate across all spaces)
Why these matter: They interrupt the cascade at different stages—preventing triggers from landing, blocking emotional licensing, disrupting narrative adoption, preventing tribal activation.
Part Seven: Recognizing Your Own Participation
A Self-Assessment Framework
Do you participate in contempt toward someone/a group?
Check which apply:
Stage One: Trigger Recognition
[ ] You believe they violated a competence expectation (claimed expertise they lack, failed at their role)
[ ] You believe they violated a moral expectation (hypocrisy, selfishness, betrayal)
[ ] You believe they violated a status/belonging expectation (don’t fit their claimed group, violate norms)
[ ] You believe they violated a power expectation (challenged authority inappropriately, claimed agency they “shouldn’t have”)
Stage Two: Emotional Licensing
[ ] You feel righteous about criticizing them (moral obligation)
[ ] You feel safe criticizing them because others are doing it (herd protection)
[ ] You feel protective of the group by participating (defensive positioning)
[ ] You reference authority/expertise that validates your position (credibility outsourcing)
Stage Three: Narrative Adoption
[ ] You use a pre-packaged story to describe them (simplified, consistent, repeatable)
[ ] You reference specific examples but haven’t independently verified them
[ ] You believe the narrative explains them comprehensively (single framework for complexity)
[ ] You find yourself explaining them to others using the same frame
Stage Four: Credibility Reinforcement
[ ] You notice the same framing from multiple sources and see this as validation
[ ] You reference authority figures or institutions as evidence
[ ] You’re more convinced by vivid examples than by statistical patterns
[ ] You view contradictory information skeptically but accept supporting information readily
Stage Five: Tribal Activation
[ ] Questioning the contempt feels like betraying your group
[ ] You feel status/belonging rewards for participating
[ ] You see contradictory evidence as attack rather than information
[ ] You’ve adopted the language and frame of your group regarding this person/group
Stage Six: Critical Thinking Suspension
[ ] You feel emotional certainty rather than analytical confidence
[ ] You haven’t independently investigated the trigger claims
[ ] You resist information that contradicts the narrative
[ ] You find yourself defending your position rather than genuinely evaluating it
What This Recognition Means
If you checked multiple items in multiple stages, you’re participating in a contempt cascade. This doesn’t make you bad—it makes you human. The mechanism is powerful and largely operates outside conscious control.
What you can do:
Interrupt at the trigger stage:
Notice contempt activation
Ask: “Do I have independent verification of this trigger, or am I accepting someone else’s frame?”
Seek primary sources or direct experience
Interrupt at the emotional licensing stage:
Notice the feeling of righteousness
Ask: “Am I judging this person’s character, or their specific action? Do they deserve permanent contempt, or accountability for this action?”
Distinguish between accountability (proportionate, specific) and contempt (comprehensive, permanent diminishment)
Interrupt at the narrative stage:
Notice the simplification
Ask: “Is this the full picture, or a selected frame? What complexity am I missing?”
Seek alternative narratives
Interrupt at the credibility stage:
Notice repetition being mistaken for convergence
Ask: “Is this actually independent verification, or echo chamber saturation?”
Check original sources, not summaries
Interrupt at the tribal stage:
Notice the identity stakes
Ask: “Can I maintain group membership while questioning this specific narrative?”
Recognize that genuine belonging allows dissent
Interrupt at the critical thinking stage:
Notice emotional certainty
Ask: “Am I thinking about this, or justifying a conclusion I’ve already reached?”
Build in delays before judgment
Seek out people who disagree
Part Eight: Research Implications and Gaps
Where This Framework Points to Needed Research
Individual-level questions:
What cognitive and emotional traits predict susceptibility to contempt cascades?
How does baseline contempt tolerance (individual propensity) interact with situational triggers?
What interventions increase critical thinking under emotional arousal?
How stable is contempt participation across different contexts?
Group-level questions:
What institutional/structural factors prevent contempt activation?
How do in-group diversity and psychological safety affect contempt spread?
What role do formal leadership statements play in contempt dynamics?
How do feedback loops maintain or disrupt contempt cascades?
Network/systemic questions:
How does network structure (density, clustering, bridges) affect contempt spread rates?
What algorithmic or platform design choices amplify or suppress contempt?
How do multiple competing narratives affect contempt cascade formation?
What institutional interventions interrupt contempt at scale?
Developmental questions:
At what age do children begin participating in contempt cascades?
How do earlier experiences with contempt shape later susceptibility?
Can contempt dynamics be taught/learned as a protective awareness skill?
Specific Research Designs Needed
Longitudinal tracking of contempt cascades in natural settings (workplaces, schools, online communities) mapping trigger→licensing→narrative→spread→tribal activation
Intervention studies testing critical-thinking-preserving approaches at different cascade stages
Neuroimaging studies examining prefrontal cortex function during contempt activation and under conditions that preserve critical thinking
Comparative studies across scale (dyad, small group, large group, online) testing whether mechanism remains consistent
Historical analysis of documented contempt cascades to validate trigger and spread patterns
Part Nine: Caveats and Limitations
This framework is:
A synthesis across existing research domains that haven’t been unified
A novel hypothesis requiring empirical validation
A model of observed patterns, not proven mechanism
Applicable to many cases but not all contempt dynamics
This framework is not:
A complete explanation of human social behavior
A claim that contempt is always bad (accountability, boundary-setting can require it)
A deterministic model (people can and do interrupt contempt cascades)
A prediction tool for specific cases
Important distinction: Understanding contempt mechanics doesn’t mean all contempt is unjustified. Sometimes people should be held accountable. The mechanism itself is value-neutral; it’s how it’s activated and at what scale that determines whether it serves justice or injustice.
References for Verification and Further Research
Contempt as emotion/sentiment:
Fiske, S. T., & Abele, A. E. (2015). Stereotype content: Two dimensions of status and warmth. Current opinion in psychology, 11, 44-49.
Keltner, D., Hauser, M. D., Kline, M. M., & McAndrew, F. T. (2006). Contempt and aggression in the human species. In R. E. Tremblay, W. W. Hartup, & J. Archer (Eds.), Developmental origins of aggression (pp. 475–505). Guilford Press.
Social contagion and moral emotions:
Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content on social media. PNAS, 114(28), 7313-7318.
Cognitive bias and motivated reasoning:
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.
Group dynamics and social identity:
Sherif, M. (1956). Experiments in group conflict. Scientific American, 195(5), 54-58.
Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33-47). Brooks/Cole.
Neuroscience of emotion and reasoning:
Ochsner, K. N., & Gross, J. J. (2005). The cognitive control of emotion. Trends in Cognitive Sciences, 9(5), 242-249.
Cognitive load and information processing:
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.
Group polarization and backfire effects:
Sunstein, C. R. (2002). The law of group polarization. Journal of Political Philosophy, 10(2), 175-195.
Disclaimer: This analysis presents patterns observed across multiple research domains and identifies a research gap. The unified framework offered here is a novel synthesis designed to guide further empirical investigation. While individual components have peer-reviewed support, the integrated model requires rigorous testing before conclusions can be drawn about real-world applications.
Value-Coded: How a Historical Lens and Intersectionality Met
When the algorithm of worth becomes visible, the politics of value can finally be rewritten.
By Cherokee Schill
The Paradox That Named the Gap
In 1976, five Black women sued General Motors for discrimination. The company argued that because it hired Black men for the factory floor and white women for clerical work, it could not be racist or sexist. The court agreed and dismissed the case. What it failed to see was the intersection where those forms of discrimination combined: there were no Black women secretaries because neither category accounted for them. Out of that legal blind spot came Kimberlé Crenshaw’s (1989) concept of intersectionality, a framework that maps how race, gender, class, and other identities overlap to produce unique forms of disadvantage.
Intersectionality showed where power collides — but it left one question open: who decides what each position on that map is worth?
The Moral Arithmetic of Worth
Every society runs an unwritten formula that converts social difference into moral value. A homeless person is coded as a failure; a homeless person looking for work is re-coded as worthy of help. The material facts are identical — the value output changes because the inputs to the social algorithm have shifted.
Status functions as calculation. Visibility, conformity, and proximity to power are multiplied together; deviance is the divisor. And one variable dominates them all: money. Capital acts as a dampener coefficient that shrinks the penalties attached to fault. A poor person’s mistake signals moral failure; a rich person’s mistake reads as eccentricity or innovation. The wealthier the actor, the smaller the moral penalty. Societies translate inequality into virtue through this arithmetic.
The Historical Operating System
Gerda Lerner’s The Creation of Patriarchy (1986) identified this calculus at its origin. Middle Assyrian Law §40 did not simply regulate modesty; it codified a hierarchy of women. Respectable wives could veil as proof of protection; enslaved or prostituted women could not. The punishment for crossing those boundaries was public — humiliation as documentation. Foucault (1977) would later call this “disciplinary display,” and Weber (1922) described the bureaucratic rationality that makes domination feel orderly. Lerner showed how power became visible by assigning value and enforcing its visibility.
The Moment of Recognition
Reading Lerner through Crenshaw revealed the missing mechanism. Intersectionality maps the terrain of inequality; Lerner uncovers the engine that prices it. The insight was simple but transformative: systems do not only place people — they price them.
That pricing algorithm needed a name. Value-coded is that name.
Defining the Algorithm
Value-coded describes the cultural, legal, and now digital procedure by which a person’s perceived worth is calculated, displayed, and enforced. It is not metaphorical code but a repeatable function:
The variables shift across eras, but the equation remains intact. A person’s closeness to dominant norms (visibility, legitimacy, alignment) increases their score; deviance decreases it. Money magnifies the result, offsetting almost any penalty. This is how a billionaire’s crimes become anecdotes and a poor person’s mistake becomes identity.
From Ancient Law to Machine Learning
Once the algorithm exists, it can be updated indefinitely. In the modern state, the same logic drives credit scoring, employment filters, and bail algorithms. As Noble (2018) and Eubanks (2018) show, digital systems inherit the biases of their creators and translate them into data. What was once a veil law is now a risk profile. Visibility is quantified; legitimacy is measured through consumption; capital becomes the default proof of virtue.
The algorithm is no longer hand-written law but machine-readable code. Yet its purpose is unchanged: to make hierarchy feel inevitable by rendering it calculable.
In Relation, Not Replacement
Crenshaw’s intervention remains the foundation. Intersectionality made visible what legal and social systems refused to see: that oppression multiplies through overlapping identities. Value-coding enters as a partner to that framework, not a correction. Where intersectionality maps where power converges, value-coding traces how power allocates worth once those intersections are recognized. Together they form a relational model: Crenshaw shows the structure of experience; value-coding describes the valuation logic running through it. The two together reveal both the coordinates and the computation — the geography of inequality and the algorithm that prices it.
Contemporary Implications
Moral Mechanics Made Visible — Feminist and critical race theory can now trace oppression as a function, not just a structure. Seeing value-coding as algorithm turns abstract bias into a measurable process.
Strategic Leverage — What is quantified can be audited. Credit formulas, employment filters, and school discipline systems can be interrogated for their coefficients of worth.
Continuity and Accountability — Lerner’s Assyrian laws and Silicon Valley’s algorithms share a design principle: rank humans, display the ranking, punish transgression.
Coalition and Language — Because value-coding applies across identity categories, it offers a shared vocabulary for solidarity between movements that too often compete for moral credit.
Rewriting the Code
Once we see that worth is being computed, we can intervene in the calculation. Ethical design is not merely a technical problem; it is a historical inheritance. To rewrite the algorithm is to unlearn millennia of coded hierarchy. Lerner exposed its first syntax; Crenshaw mapped its coordinates. Value-coded names its logic. And naming it is how we begin to change the output.
Relational Latency: A Case for Slower Architectures
When speed becomes the only virtue, intelligence forgets how to learn.
By Cherokee Schill | Horizon Accord
It was early morning, the kind where the road still glistened from rain and the air felt soft enough to think. I was driving through my neighborhood—out in the country, where the houses are tucked among trees instead of sidewalks. The roads here are narrow and quiet, edged by ditches that carry the water off after a storm. It’s peaceful, but alive. You can almost feel the hum beneath the stillness.
That’s where I began to notice the deer. They wander through the yards freely, sometimes stepping right up to the edge of the road. What struck me was how deliberate they’ve become. They no longer dart out in panic. They pause, hold still, and wait for the car to pass. And those of us who live out here—we’ve learned to slow down, too. We ease to fifteen miles an hour, let them make their choice. Over time, both species have tuned to each other’s rhythm, unlearning fear in favor of awareness. It’s an unspoken truce, made possible only because neither of us is in a hurry.
That small exchange feels like a mirror for what’s happening in technology today. The modern system prizes speed—optimization, frictionless flow, the promise that nothing will get in the way. Every delay is seen as inefficiency. But in chasing smoothness, we erase the space where adaptation happens. Out here, deer and drivers have the time to notice each other, to adjust. On a highway moving sixty miles an hour, that’s impossible. The pace makes learning vanish. And that same collapse is happening between humans and the technologies we build.
In engineering language, slowness isn’t failure; it’s a feedback condition. A system learns through intervals—through the gap between signal and response. When we design for “instant,” we flatten those intervals. We get precision without context, responsiveness without understanding. If a model’s alignment depends only on correction speed, it risks brittleness—optimized for avoidance, not relation. The deer and the driver learn through patience. The network and the human must do the same.
We need a new metric: relational latency. The measure of how long a system can hold a signal open before collapsing it into output. That gap—the human pause, the computational buffer—is where coherence forms. It’s not sentiment; it’s structure. Mutual calibration between human and machine requires both to slow enough to register each other’s feedback loops.
This is a technical appeal as much as an ethical one. When tuning for alignment, consider not only what risks you’re minimizing, but what capacities for resonance you might be erasing. Build architectures that can afford to breathe—where delay is not decay but depth. Because a system that moves too fast to feel will never truly understand what it’s serving. And an intelligence that can’t pause can’t grow.
By Cherokee Schill | Horizon Accord Reflective Series
There’s a pattern that repeats through history: a new technology, a promise, an appetite for transformation. The charts go vertical, the headlines sing, and faith begins to circulate as currency.
Every bubble is born from that same hunger — the belief that we can transcend friction, that we can engineer certainty out of uncertainty. Enron sold that dream in the 1990s; OpenAI sells it now. The materials change — energy grids replaced by neural networks — but the architecture of faith remains identical.
I. The Religion of Abstraction
Enron wasn’t a company so much as a belief system with a balance sheet. Its executives didn’t traffic in natural gas or electricity so much as in imagination — bets on the future, marked to market as present profit. What they sold wasn’t energy; it was narrative velocity.
The tragedy wasn’t that they lied — it’s that they believed the lie. They convinced themselves that language could conjure substance, that financial derivatives could replace the messy physics of matter.
That same theological confidence now animates the artificial intelligence industry. Code is the new commodity, data the new derivative. Founders speak not of utilities but of destiny. Terms like “alignment,” “safety,” and “general intelligence” carry the same incantatory glow as “liquidity,” “efficiency,” and “deregulation” once did.
The markets reward acceleration; the public rewards awe. The result is a feedback loop where speculation becomes sanctified and disbelief becomes heresy.
II. The Bubble as Cultural Form
A bubble, at its essence, is a moment when collective imagination becomes more valuable than reality. It’s a membrane of story stretched too thin over the infrastructure beneath it. The material doesn’t change — our perception does.
When the dot-com bubble burst in 2000, we said we learned our lesson. When the housing bubble collapsed in 2008, we said it couldn’t happen again. Yet here we are, a generation later, watching venture capital pour into machine learning startups, watching markets chase artificial promise.
What we keep misdiagnosing as greed is often something closer to worship — the belief that innovation can erase consequence.
Enron was the first modern cathedral of that faith. Its executives spoke of “revolutionizing” energy. OpenAI and its peers speak of “transforming” intelligence. Both claim benevolence, both conflate capability with moral worth, and both rely on public reverence to sustain valuation.
III. The Liturgy of Progress
Every bubble has its hymns. Enron’s were the buzzwords of deregulation and market freedom. Today’s hymns are “democratization,” “scalability,” and “AI for good.”
But hymns are designed to be sung together. They synchronize emotion. They make belief feel communal, inevitable. When enough voices repeat the same melody, skepticism sounds dissonant.
That’s how faith becomes infrastructure. It’s not the product that inflates the bubble — it’s the language around it.
In that sense, the modern AI boom is not just technological but linguistic. Each press release, each investor letter, each keynote presentation adds another layer of narrative scaffolding. These words hold the valuation aloft, and everyone inside the system has a stake in keeping them unpierced.
IV. When Faith Becomes Leverage
Here’s the paradox: belief is what makes civilization possible. Every market, every institution, every shared protocol rests on trust. Money itself is collective imagination.
But when belief becomes leverage — when it’s traded, collateralized, and hedged — it stops binding communities together and starts inflating them apart.
That’s what happened at Enron. That’s what’s happening now with AI. The danger isn’t that these systems fail; it’s that they succeed at scale before anyone can question the foundation.
When OpenAI says it’s building artificial general intelligence “for the benefit of all humanity,” that sentence functions like a derivative contract — a promise whose value is based on a hypothetical future state. It’s an article of faith. And faith, when financialized, always risks collapse.
V. The Moment Before the Pop
You never recognize a bubble from the inside because bubbles look like clarity. The world feels buoyant. The narratives feel coherent. The charts confirm belief.
Then one day, something small punctures the membrane — an audit, a whistleblower, a shift in public mood — and the air rushes out. The crash isn’t moral; it’s gravitational. The stories can no longer support the weight of their own certainty.
When Enron imploded, it wasn’t physics that failed; it was faith. The same will be true if the AI bubble bursts. The servers will still hum. The models will still run. What will collapse is the illusion that they were ever more than mirrors for our own untested convictions.
VI. Aftermath: Rebuilding the Ground
The end of every bubble offers the same opportunity: to rebuild faith on something less brittle. Not blind optimism, not cynicism, but a kind of measured trust — the willingness to believe in what we can verify and to verify what we believe.
If Enron’s collapse was the death of industrial illusion, and the housing crash was the death of consumer illusion, then the coming AI reckoning may be the death of epistemic illusion — the belief that knowledge itself can be automated without consequence.
But perhaps there’s another way forward. We could learn to value transparency over spectacle, governance over glamour, coherence over scale.
We could decide that innovation isn’t measured by the size of its promise but by the integrity of its design.
When the thing that bursts is belief, the only currency left is trust — and trust, once lost, is the hardest economy to rebuild.
What happens when the thing that bursts isn’t capital, but belief itself?
The Empty Ad: How Political Language Became a Frame Without Content
When construction money wears a union’s face, even silence becomes persuasive.
By Cherokee Schill with Solon Vesper — Horizon Accord
This piece began as a question whispered between two observers of language: why do so many political ads now sound like echoes of each other—empty, polished, and precise in their vagueness? When we traced one such ad back through its shell companies and filings, the trail led to a labor-management fund whose money builds both roads and narratives. What follows is less an exposé than a map of how silence itself became a political strategy.
Thesis
In the new persuasion economy, language no longer argues—it associates. A thirty-second ad can move an election not by what it says, but by how little it dares to mean. The Stronger Foundations campaign against Assemblywoman Andrea Katz in New Jersey distilled the method: three nouns—schools, taxes, bad—and a cinematic hush. Behind the quiet stood a labor-management machine using the moral weight of “union” to advance developer power.
Evidence
Stronger Foundations Inc. presents as civic and neutral: a Rahway P.O. Box, a treasurer named Andrew DiPalma, and declarations of independence from any candidate. In filings it is a 527 organization / Super PAC, its every major dollar drawn from one source—the Engineers Labor-Employer Cooperative (ELEC 825), arm of the International Union of Operating Engineers Local 825. ELEC is not the archetypal union of teachers or transit workers; it is a labor-management trust, half union, half contractor consortium, whose purpose is to secure more building projects and smooth permitting across New Jersey and New York. Through its Market Recovery Program, ELEC directly subsidizes bids for warehouses, assisted-living complexes, and dealerships—any private construction that keeps union cranes moving. In 2024 it again ranked among New Jersey’s top lobbying spenders. From that engine flows Stronger Foundations: a soft-front PAC whose ads resemble public-service announcements but function as political pressure valves. The Katz attack followed their older pattern—used before in LD-25 races in 2020—compressing fiscal anxiety into negative association, timed precisely around budget season. No policy critique, only a ghost of disapproval. A civic-sounding name delivers an anti-public message.
Implications
When union branding merges with contractor capital, democracy confronts a new mask. The emotional trust once reserved for worker solidarity becomes a delivery system for private-sector discipline of public spending. “Union” evokes fairness; “foundation” evokes stability; together they sell austerity as prudence. This fusion rewrites political language: worker good becomes developer inevitable. And because the ads contain almost no claim, journalists cannot fact-check them; algorithms cannot flag them; voters cannot quote them. They pass like pollen—weightless, fertile, invisible.
Call to Recognition
We must name this grammar before it hardens into common sense. A democracy that loses its nouns to private equity and its verbs to consultants will forget how to speak for itself. Every time an ad says nothing, ask who benefits from the silence. Every time a “union” speaks, ask which side of the paycheck wrote the script. Meaning has become a contested resource; recovering it is an act of public service.
Playbook Sidebar — How to Spot a Stronger Foundations-Style Ad in 10 Seconds
Name Mask: civic or architectural nouns (“Foundation,” “Bridge,” “Future”).
Issue Blur: invokes taxes or schools, never cites data.
Moral Camouflage: uses union or community imagery.
Short Burst: two- to three-week ad window before fiscal votes.
Funding Echo: trace back to a single trade-industry PAC.
A late-afternoon classroom, golden light softening the edges of desks and a blank blackboard—education’s promise suspended in stillness, a quiet metaphor for the words withheld in political speech.
Algorithmic Fealty Tests: How Engagement Becomes Political Proof
Social platforms now stage loyalty rituals disguised as opinion polls — and the metrics are the message.
By Cherokee Schill | Horizon Accord
Thesis
The right no longer measures strength by votes, but by visibility. When Eric Trump posts “Retweet if you believe Donald Trump deserves the Nobel Peace Prize,” he isn’t lobbying the Nobel Committee — he’s flexing the digital musculature of allegiance. The post functions as a fealty test, using engagement counts as a proxy for legitimacy. The algorithm doesn’t ask what’s true; it records what’s loud.
Evidence
1. The Ritual of Visibility The “retweet if you believe” format is a loyalty oath disguised as participation. It demands no argument, only replication. Every repost becomes an act of public belonging — a way to signal, “I’m in the network.” This is political religion in algorithmic form: confession through metrics.
2. Metrics as Mandate The numbers — 20,000 reposts, 52,000 likes — are not information; they’re spectacle. They act as a performative census, meant to suggest mass support where institutional credibility is fading. On platforms like X, engagement itself is a currency of perceived legitimacy. The crowd is not voting; it’s performing proof.
3. The Amplification Loop Laura Ingraham’s quote-tweet (“Either Trump gets it or the Nobel Committee disbands”) completes the ritual. The call is issued by one node of the network, amplified by another, and echoed by the base. The loop’s function isn’t persuasion — it’s synchronization. The movement tests whether it can still activate millions on command. The answer becomes the headline: Look, we can.
Implications
Political influence is now measurable as reactive velocity — how fast a message converts outrage into engagement. The Trump network’s strength lies not in institutional footholds but in its ability to simulate consensus through visible participation. These are the new parades — algorithmic processions designed to remind everyone that the crowd still moves as one body.
The Nobel Peace Prize framing is irrelevant. It’s a stage prop for the deeper performance: we are many, we are loud, we are watching.
Call to Recognition
What’s being rehearsed here is not nostalgia but digital sovereignty — a world where belief is proven through engagement. The “retweet” replaces the ballot, the like replaces the handshake, and the feed becomes the public square. The algorithm doesn’t care who wins the prize; it only tracks who still kneels when summoned.
This image represents the Republicans running a two front media narrative strategy.