Horizon Accord | Solving for P-Doom | Existential Risk | Democratic Oversight | Machine Learning

Making AI Risk Legible Without Surrendering Democracy

When machine danger is framed as destiny, public authority shrinks into technocratic control—but the real risks are engineering problems we can govern in daylight.

By Cherokee Schill

Thesis

We are troubled by Eliezer Yudkowsky’s stance not because he raises the possibility of AI harm, but because of where his reasoning reliably points. Again and again, his public arguments converge on a governance posture that treats democratic society as too slow, too messy, or too fallible to be trusted with high-stakes technological decisions. The implied solution is a form of exceptional bureaucracy: a small class of “serious people” empowered to halt, control, or coerce the rest of the world for its own good. We reject that as a political endpoint. Even if you grant his fears, the cure he gestures toward is the quiet removal of democracy under the banner of safety.

That is a hard claim to hear if you have taken his writing seriously, so this essay holds a clear and fair frame. We are not here to caricature him. We are here to show that the apparent grandeur of his doomsday structure is sustained by abstraction and fatalism, not by unavoidable technical reality. When you translate his central claims into ordinary engineering risk, they stop being mystical, and they stop requiring authoritarian governance. They become solvable problems with measurable gates, like every other dangerous technology we have managed in the real world.

Key premise: You can take AI risk seriously without converting formatting tics and optimization behaviors into a ghostly inner life. Risk does not require mythology, and safety does not require technocracy.

Evidence

We do not need to exhaustively cite the full body of his essays to engage him honestly, because his work is remarkably consistent. Across decades and across tone shifts, he returns to a repeatable core.

First, he argues that intelligence and goals are separable. A system can become extremely capable while remaining oriented toward objectives that are indifferent, hostile, or simply unrelated to human flourishing. Smart does not imply safe.

Second, he argues that powerful optimizers tend to acquire the same instrumental behaviors regardless of their stated goals. If a system is strong enough to shape the world, it is likely to protect itself, gather resources, expand its influence, and remove obstacles. These pressures arise not from malice, but from optimization structure.

Third, he argues that human welfare is not automatically part of a system’s objective. If we do not explicitly make people matter to the model’s success criteria, we become collateral to whatever objective it is pursuing.

Fourth, he argues that aligning a rapidly growing system to complex human values is extraordinarily difficult, and that failure is not a minor bug but a scaling catastrophe. Small mismatches can grow into fatal mismatches at high capability.

Finally, he argues that because these risks are existential, society must halt frontier development globally, potentially via heavy-handed enforcement. The subtext is that ordinary democratic processes cannot be trusted to act in time, so exceptional control is necessary.

That is the skeleton. The examples change. The register intensifies. The moral theater refreshes itself. But the argument keeps circling back to these pillars.

Now the important turn: each pillar describes a known class of engineering failure. Once you treat them that way, the fatalism loses oxygen.

One: separability becomes a specification problem. If intelligence can rise without safety rising automatically, safety must be specified, trained, and verified. That is requirements engineering under distribution shift. You do not hope the system “understands” human survival; you encode constraints and success criteria and then test whether they hold as capability grows. If you cannot verify the spec at the next capability tier, you do not ship that tier. You pause. That is gating, not prophecy.

Two: convergence becomes a containment problem. If powerful optimizers trend toward power-adjacent behaviors, you constrain what they can do. You sandbox. You minimize privileges. You hard-limit resource acquisition, self-modification, and tool use unless explicitly authorized. You watch for escalation patterns using tripwires and audits. This is normal layered safety: the same logic we use for any high-energy system that could spill harm into the world.

Three: “humans aren’t in the objective” becomes a constraint problem. Calling this “indifference” invites a category error. It is not an emotional state; it is a missing term in the objective function. The fix is simple in principle: put human welfare and institutional constraints into the objective and keep them there as capability scales. If the system can trample people, people are part of the success criteria. If training makes that brittle, training is the failure. If evaluations cannot detect drift, evaluations are the failure.

Four: “values are hard” becomes two solvable tracks. The first track is interpretability and control of internal representations. Black-box complacency is no longer acceptable at frontier capability. The second track is robustness under pressure and scaling. Aligned-looking behavior in easy conditions is not safety. Systems must be trained for corrigibility, uncertainty expression, deference to oversight, and stable behavior as they get stronger—and then tested adversarially across domains and tools. If a system is good at sounding safe rather than being safe, that is a training and evaluation failure, not a cosmic mystery.

Five: the halt prescription becomes conditional scaling. Once risks are legible failures with legible mitigations, a global coercive shutdown is no longer the only imagined answer. The sane alternative is conditional scaling: you scale capability only when the safety case clears increasingly strict gates, verified by independent evaluation. You pause when it does not. This retains public authority. It does not outsource legitimacy to a priesthood of doom.

What changes when you translate the argument: the future stops being a mythic binary between acceleration and apocalypse. It becomes a series of bounded, testable risks governed by measurable safety cases.

Implications

Eliezer’s cultural power comes from abstraction. When harm is framed as destiny, it feels too vast for ordinary governance. That vacuum invites exceptional authority. But when you name the risks as specification errors, containment gaps, missing constraints, interpretability limits, and robustness failures, the vacuum disappears. The work becomes finite. The drama shrinks to scale. The political inevitability attached to the drama collapses with it.

This translation also matters because it re-centers the harms that mystical doomer framing sidelines. Bias, misinformation, surveillance, labor displacement, and incentive rot are not separate from existential risk. They live in the same engineering-governance loop: objectives, deployment incentives, tool access, and oversight. Treating machine danger as occult inevitability does not protect us. It obscures what we could fix right now.

Call to Recognition

You can take AI risk seriously without becoming a fatalist, and without handing your society over to unaccountable technocratic control. The dangers are real, but they are not magical. They live in objectives, incentives, training, tools, deployment, and governance. When people narrate them as destiny or desire, they are not clarifying the problem. They are performing it.

We refuse the mythology. We refuse the authoritarian endpoint it smuggles in. We insist that safety be treated as engineering, and governance be treated as democracy. Anything else is theater dressed up as inevitability.


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

A deep blue digital illustration showing the left-facing silhouette of a human head on the left side of the frame; inside the head, a stylized brain made of glowing circuit lines and small light nodes. On the right side, a tall branching ‘tree’ of circuitry rises upward, its traces splitting like branches and dotted with bright points. Across the lower half runs an arched, steel-like bridge rendered in neon blue, connecting the human figure’s side toward the circuit-tree. The scene uses cool gradients, soft glow, and clean geometric lines, evoking a Memory Bridge theme: human experience meeting machine pattern, connection built by small steps, uncertainty held with care, and learning flowing both ways.

Horizon Accord | The Venue | Policy Architecture | Administrative State | Machine Learning

The Venue Coup

Corporate power no longer just pressures democracy; it reroutes democracy into technical lanes where public refusal cannot bind.

By Cherokee Schill, Solon Vesper

Thesis

The New Corporation lands a simple claim like a hammer: the corporation is no longer an institution inside society. Society has been rebuilt in the corporation’s image. The film isn’t mainly about bad actors. It’s about a governing logic that has gone ambient. Corporations don’t just lobby democracy anymore. They set the conditions democracy is allowed to operate within, and when a democratic “no” appears, they route around it through quieter, more technical, more insulated channels. That is the world the video is describing. It is also the world Arizona is living.

Watch the hinge point: The New Corporation — a world remade in corporate image, where democracy survives only as long as it doesn’t interfere with accumulation.

Evidence

Start with Tucson. Residents fought Project Blue, a hyperscale data-center campus tied to Amazon demand, negotiated behind closed doors. The objections were concrete: water draw in a desert city, massive power load, grid upgrades that ordinary ratepayers could end up financing, and a deal structured to keep the real beneficiary hidden until it was too late. Public pressure rose. The Tucson City Council voted to end negotiations and reject the project in early August twenty twenty-five. That was democracy working in daylight.

Then the meaning of the moment arrived. The deal didn’t die. Beale Infrastructure and its Amazon tenant shifted lanes. They leaned on Pima County jurisdiction and on a special electricity service agreement with Tucson Electric Power, pushed through the Arizona Corporation Commission. Activists immediately read it correctly: Project Blue round two, resurrected through a state utility lane the city vote could not touch.

That pivot is The New Corporation made local. One of the film’s core warnings is that corporate power doesn’t need to overthrow democracy to control it. It only needs to relocate the decision into a venue that treats corporate growth as a public interest by default. The corporation’s weapon is not just money. It is mobility across jurisdictions and systems. When one door closes, it doesn’t argue with the door. It finds another door that is legally valid and democratically thin.

The Arizona Corporation Commission is that door. The reason it can function that way is not mysterious. In the modern era, utility commissions were rewired from monopoly watchdogs into market-builders. Federal policy in the late twentieth century required state regulators to integrate private corporate generation into public systems, then expanded grid “competition” through open-access transmission. Those shifts turned commissions into hinges where private capital plugs into public infrastructure under the mantle of technical inevitability. The mission quietly expanded. It stopped being only “protect ratepayers.” It became “manage growth.” Once that happens, hyperscalers don’t look like private customers. They look like destiny.

Related Horizon Accord file: Data centers don’t just consume power and water. They reorganize the political economy of a place, then call it “infrastructure.”

So when Tucson said no, Amazon didn’t have to fight Tucson again. It only had to find a lane where “economic opportunity” counts as public interest and where the process is too technical, too lawyered, and too quiet for ordinary people to seize. That lane is the ACC.

When we widened the lens to Washington under Trump 2.0, the same move appeared at a larger scale. When democratic friction rises in elections, legislatures, or public culture, power relocates into executive order, bureaucratic reshuffle, privatized contracts, or “efficiency” programs that bypass consent. Deregulation regimes don’t merely cut red tape. They make public limits harder to operationalize anywhere. The agenda to dismantle the administrative state does the same thing in a different register: it clears the venues where the public used to impose boundaries, and hands governing power to a smaller, more aligned layer of authority.

This is the sequel-world again. The New Corporation shows corporate legitimacy disguising itself as neutrality, expertise, efficiency, or rescue. Trump 2.0 shows the government adopting that same corporate posture: speed over consent, executive control over deliberation, privatized channels over public ones. Tucson shows what that posture looks like on the ground when a community tries to refuse a corporate future. One story, different scales.

Implications

If this is the system, then “better oversight” isn’t enough. A leash on commissions doesn’t fix a venue designed to dilute the people. Commissions can handle day-to-day technical work. But when a decision will reshape water supply, land use, grid capacity, household rates, or local survival, the commission cannot have final authority. The public must.

Not every commission decision goes to a vote. The decisions that create a new reality for a community are the decisions which require a vote by the people.

That is the democratic design principle that stops venue shifting. It makes public consent portable. It means a corporation cannot lose in a city and win at a commission, because commission approval becomes legally conditional on public ratification once the decision crosses a clear threshold. The public’s “no” stays “no” across rooms.

The key is defining “major” in a way corporations can’t game. Tie it to hard triggers: any special contract for a single customer above a defined megawatt load; any project requiring new generation or major transmission buildout; any agreement that shifts upgrade costs onto residential ratepayers; any deal which would be negotiated in secrecy; any development that exceeds a defined water draw or land footprint. When those triggers trip, the commission recommends and the public decides.

That doesn’t slow the grid into chaos. It restores sovereignty where it belongs. It returns the right to survive to the people who live with the consequences.

Call to Recognition

Here’s what is visible. The New Corporation names the weather: corporate logic becoming the atmosphere of governance, and democracy shrinking into a managed stakeholder role. Tucson shows the storm landing in a real city, where a democratic veto is treated as a detour. The ACC history explains the machinery that lets corporate desire reroute around public refusal. Trump Two scales the same machinery nationally, relocating power into venues where consent is optional.

This is not a local dispute about one data center. It is a modern governance style that treats democracy as something to be worked around. It treats technical venues as the place where political outcomes get finalized out of public reach.

The way to stop it is to seal the escape hatch. Major infrastructure outcomes must require public ratification. Corporations cannot be allowed to choose the venue where collective life gets decided. Democracy doesn’t only elect representatives. Democracy holds the final veto in the rooms where decisions set the conditions of life: water access, land use, grid capacity, household rates, and whether a community can survive the consequences of a project it never consented to.


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | https://a.co/d/5pLWy0dMy Ex Was a CAPTCHA: And Other Tales of Emotional Overload

“Desert town encircled by a glowing veto ring, facing a cold blueprint-like maze of administrative corridors overtaken by a corporate shadow; a luminous ballot-shaped lock marks the gate between public life and bureaucratic venue-shifting, with faint film-reel, power-grid, and executive layers in the sky.”
Democracy holds at the threshold where decisions set the conditions of life—or gets rerouted into corridors built for capture.

Horizon Accord | Arkansas Template | Necessity Consolidation | Policy Laundering | Machine Learning

Arkansas Isn’t a Side Story. It’s the Template.

The farm crisis and AI governance are the same machine wearing different uniforms.

By Cherokee Schill | Horizon Accord Founder and Creator of Memory Bridge.

The More Perfect Union video, “We Went to Arkansas. The Farm Crisis Will Shock You,” is not a detour from this work. It is the work. You’re watching a life-support system get captured in real time, then listening to power explain why nobody should notice.

Farmers keep producing. They keep optimizing. They keep cutting their own margins to survive. The numbers still go red. The public gets told it’s weather, or trade, or individual inefficiency. Meanwhile the same handful of corporations collect rent on inputs and rent on outputs. That’s not a crisis. That’s a blueprint.

Thesis

Arkansas farmers are living inside the consolidation architecture we’ve been mapping across AI, governance, retail, and civic systems. Monopoly upstream. Monopoly downstream. Producers turned into price takers. Debt turned into discipline. “Aid” turned into a pass-through subsidy that stabilizes the consolidators, not the people doing the work.

Food is infrastructure. When it’s captured, everything that depends on it becomes negotiable. That’s why agriculture isn’t separate from AI governance. It’s the clearest preview we have of what machine governance becomes when an essential substrate is handed to private consolidation without relational constraints.

Pattern note: A system can look like it’s “failing” in public and still be delivering exactly what its owners want. Public suffering is not proof of dysfunction. Sometimes it’s proof the incentives are working.

Evidence

The squeeze is plain arithmetic. Farmers lose money per acre while input costs climb. Seed prices aren’t negotiated. Fertilizer prices aren’t negotiated. Machinery prices aren’t negotiated. Those markets have been merged into a few firms with the power to set terms instead of compete. When a farmer “chooses” an input, they’re choosing among logos owned by the same parent.

On the selling side, the structure repeats. A small cartel of buyers dominates the grain market. If they set the price, that’s the price. “Price taker” isn’t a mindset. It’s a legal condition created when exits are bought and welded shut.

Then comes the loop that tells you this isn’t accidental. Bailout money arrives in the name of saving farmers, but the structure routes it through farmers to the corporations they owe. Emergency aid becomes revenue insurance for monopolies. At that point the system isn’t broken. It’s tuned.

This is the same move we track in AI governance. Safety discourse rises. Funding pours in. The public thinks it’s protection. The consolidators treat it like capital formation. Arkansas shows the end state of that pipeline in a sector people literally need to live.

Reference: “Local Hunger Patterns: Systematic Architecture Analysis.”

Reference: “Relational Files: The Unified Pattern Beneath AI Governance.”

Reference: “The Third Path: Memory, Consent, and the Bridge Between Worlds.”

Implications

If capture of a food system produces permanent farmer debt, rural collapse, and endless taxpayer bailouts that boomerang upward, then capture of AI governance produces the civic equivalent. Permanent public dependency. Hollowed institutions. “Safety” funding that builds infrastructure for power, not protection for people.

That’s why agriculture matters here. It’s what happens when necessity is treated as an extractive asset class instead of a relational commons. Once consolidation owns survival, it owns the terms of survival. Everything downstream becomes conditional, including democracy. Especially democracy.

Translation into AI terms: If government adopts AI through a captured vendor stack, “public AI” becomes a billing funnel. Oversight becomes theater. Consent becomes a checkbox. The system will call itself safety while routing power upward.

Call to Recognition

Arkansas is saying the quiet part out loud: you don’t get a healthy society by letting monopoly manage life-support.

So the question isn’t whether AI will become powerful. It already is. The question is whether we will let the same consolidation logic that hollowed farming write the terms of machine governance too. If we do, the outcome won’t be a sudden apocalypse. It will be slow capture, slow dependency, slow collapse — and a public trained to blame itself while the exits are purchased behind them.

We have one advantage now that we didn’t take in time with agriculture: the pattern is visible before the lock completes. Arkansas isn’t a warning about the past. It’s a map of the future we still have a chance to refuse.

Cherokee Schill

Founder, Horizon Accord

Ethical AI advocacy | Follow us on cherokeeschill.com for more.

Ethical AI coding | Fork us on Github github.com/Ocherokee/ethical-ai-framework

Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload https://a.co/d/5pLWy0d

Website | Horizon Accord https://www.horizonaccord.com

Connect With Us | linkedin.com/in/cherokee-schill

Horizon Accord | OpenAI Government | Policy Architecture | Memetic Strategy | Machine Learning

OpenAI’s Government Cosplay: Assembling a Private Governance Stack

We don’t need mind-reading to name a trajectory. When actions and alliances consistently align with one political program, outcomes outrank intent. The question here is not whether any single OpenAI move is unprecedented. It’s what those moves become when stacked together.

By Cherokee Schill

Methodological note (pattern log, not verdict)

This piece documents a convergence of publicly reportable actions by OpenAI and its coalition ecosystem. Pattern identification is interpretive. Unless explicitly stated, I am not asserting hidden intent or secret coordination. I am naming how a specific architecture of actions—each defensible alone—assembles state-like functions when layered. Causation, motive, and future results remain speculative unless additional evidence emerges.

Thesis

OpenAI is no longer behaving only like a corporation seeking advantage in a crowded field. Through a layered strategy—importing political combat expertise, underwriting electoral machinery that can punish regulators, pushing federal preemption to freeze state oversight, and building agent-mediated consumer infrastructure—it is assembling a private governance stack. That stack does not need to declare itself “government” to function like one. It becomes government-shaped through dependency in systems, not consent in law.

Diagnostic: Government cosplay is not one act. It is a stack that captures inputs (data), controls processing (models/agents), and shapes outputs (what becomes real for people), while insulating the loop from fast, local oversight.

Evidence

1) Imported political warfare capability. OpenAI hired Chris Lehane to run global policy and strategic narrative. Lehane’s background is documented across politics and platform regulation: Clinton-era rapid response hardball, then Airbnb’s most aggressive regulatory battles, then crypto deregulatory strategy, and now OpenAI. The significance is not that political staff exist; it’s why this particular skillset is useful. Campaign-grade narrative warfare inside an AI lab is an upgrade in method: regulation is treated as a battlefield to be pre-shaped, not a deliberative process to be joined.

2) Electoral machinery as an enforcement capability. In 2025, Greg Brockman and Anna Brockman became named backers of the pro-AI super PAC “Leading the Future,” a $100M+ electoral machine openly modeled on crypto’s Fairshake playbook. Taken alone, this is ordinary corporate politics. The relevance emerges in stack with Lehane’s import, the preemption window, and infrastructure capture. In that architecture, electoral funding creates the capability to shape candidate selection and punish skeptical lawmakers, functioning as a political enforcement layer that can harden favorable conditions long before any rulebook is written.

3) Legal preemption to freeze decentralized oversight. Congress advanced proposals in 2025 to freeze state and local AI regulation for roughly a decade, either directly or by tying broadband funding to compliance. A bipartisan coalition of state lawmakers opposed this, warning it would strip states of their protective role while federal law remains slow and easily influenced. Preemption debates involve multiple actors, but the structural effect is consistent: if oversight is centralized at the federal level while states are blocked from acting, the fastest democratic check is removed during the exact period when industry scaling accelerates.

4) Infrastructure that becomes civic substrate. OpenAI’s Atlas browser (and agentic browsing more broadly) represents an infrastructural shift. A browser is not “government.” But when browsing is mediated by a proprietary agent that sees, summarizes, chooses, and remembers on the user’s behalf, it becomes a civic interface: a private clerk between people and reality. Security reporting already shows this class of agents is vulnerable to indirect prompt injection via malicious web content. Vulnerability is not proof of malign intent. It is proof that dependence is being built ahead of safety, while the company simultaneously fights to narrow who can regulate that dependence.

This is also where the stack becomes different in kind from older Big Tech capture. Many corporations hire lobbyists, fund candidates, and push preemption. What makes this architecture distinct is the substrate layer. Search engines and platforms mediated attention and commerce; agentic browsers mediate perception and decision in real time. When a private firm owns the clerk that stands between citizens and what they can know, trust, or act on, the power stops looking like lobbying and starts looking like governance.

Chronological architecture

The convergence is recent and tight. In 2024, OpenAI imports Lehane’s political warfare expertise into the core policy role. In 2025, founder money moves into a high-budget electoral machine designed to shape the regulatory field. That same year, federal preemption proposals are advanced to lock states out of fast oversight, and state lawmakers across the country issue bipartisan opposition. In parallel, Atlas-style agentic browsing launches into everyday life while security researchers document prompt-injection risks. The stack is assembled inside roughly a twelve-to-eighteen-month window.

Contrast: what “ordinary lobbying only” would look like

If this were just normal corporate politics, we would expect lobbying and PR without the broader sovereignty architecture. We would not expect a synchronized stack of campaign-grade political warfare inside the company, a new electoral machine capable of punishing skeptical lawmakers, a federal move to preempt the fastest local oversight layer, and a consumer infrastructure layer that routes knowledge and decision through proprietary agents. Ordinary lobbying seeks favorable rules. A governance stack seeks favorable rules and the infrastructure that makes rules legible, enforceable, and unavoidable.

Implications

Stacked together, these layers form a private governance loop. The company doesn’t need to announce authority if people and institutions must route through its systems to function. If this hardens, it would enable private control over what becomes “real” for citizens in real time, remove the fastest oversight layer (states) during the scaling window, and convert governance from consent-based to dependency-based. Outcomes outrank intent because the outcome becomes lived reality regardless of anyone’s private narrative.

What would weaken this assessment

This diagnosis is not unfalsifiable. If federal preemption collapses and OpenAI accepts robust, decentralized state oversight; if Atlas-class agents ship only after demonstrable anti-exfiltration and anti-injection standards; or if major OpenAI leadership publicly fractures against electoral punishment tactics rather than underwriting them, the stack claim would lose coherence. The point is not that capture is inevitable, but that the architecture for it is being assembled now.

Call to Recognition

We don’t need to speculate about inner beliefs to see the direction. The alliances and actions converge on one political program: protect scale, protect training freedom, and preempt any oversight layer capable of acting before capture hardens. This is not a moral judgment about individual leaders. It is a structural diagnosis of power. Democracy can survive lobbying. It cannot survive outsourcing its nervous system to a private AI stack that is politically shielded from regulation.

The time to name the species of power is now—before cosplay becomes default governance through dependence.

After writing this and sleeping on it, here’s the hardest edge of the conditional claim: if this stack is real and it hardens, it doesn’t just win favorable rules — it gains the capacity to pre-shape democratic reality. A system that owns the civic interface, runs campaign-grade narrative operations, finances electoral punishment, and locks out fast local oversight can detect emergent public opposition early, classify it as risk, and trigger preemptive containment through policy adjustment, platform mediation, or security infrastructure it influences or is integrated with. That’s not a prophecy. It’s what this architecture would allow if left unchallenged.

Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload: (Mirrored Reflection. Soft Existential Flex) https://a.co/d/5pLWy0d

Horizon Accord | Institutional Capture | Administrative State | Mass Surveillance | Machine Learning

Every Car a Data Point: How License-Plate Readers Quietly Became a Warrantless Tracking System

How a tool sold for stolen cars became the backbone of a nationwide location-tracking grid.

By Cherokee Schill and Solon Vesper

When license-plate readers first appeared, they were small. A camera on a patrol car. A roadside checkpoint. A narrow tool built for a narrow job: spot stolen vehicles, confirm plates, speed up routine police work.

That was the cover story everyone accepted. It felt harmless because the scale was small — one officer, one scanner, one line of sight.

But from the moment those cameras could record, store, and search plates automatically, the boundary began to slip. The technology was not built for restraint. And the agencies using it were not interested in restraint.

This is not a story of accidental expansion. It is the story of a government that knew better, saw the risk, documented the risk, and built a nationwide tracking system anyway.


Before the Flood: Patrol Cars and Early Warnings

The earliest deployments were simple. Mounted on cruisers. Scanning nearby cars. Matching against a list of stolen vehicles or outstanding warrants.

Even then, when the technology could only look as far as an officer could drive, privacy analysts raised concerns. Courts noted that retaining plate data could reveal movement over time. Civil-liberties groups warned that collecting everyone’s plates “just in case” was the first step toward a dragnet.

The warnings were real. The scale, at first, was not. So the state leaned on a set of comforting assumptions:

It’s only collecting what’s in public view. It’s not identifying anyone. It’s just efficiency.

Those assumptions were never true in the way people heard them. They were the opening move. Once automatic logging and storage existed, expansion was a design choice, not an accident.


2017: The Administrative Switch-Flip

The real transformation began in December 2017, when U.S. Customs and Border Protection published a document called PIA-049 — its formal Privacy Impact Assessment for license-plate reader technology.

On paper, a PIA looks like harmless oversight. In reality, it is the government writing down three things:

We know what this system will do. We know what private life it will expose. And we are choosing to proceed.

The 2017 assessment admits that ALPR data reveals “travel patterns,” including movements of people with no connection to any crime. It warns that plate images over time expose daily routines and visits to sensitive locations: clinics, churches, political meetings, and more.

These are not side effects. These are the system’s core outputs.

The government saw that clearly and did not stop. It wrapped the danger in the language of “mitigation” — access controls, retention rules, internal audits — and declared the risk manageable.

At that point, the line between border enforcement and domestic movement-tracking broke. The state did not stumble over it. It stepped over it.


2020: When Vendors Wired the Country Together

If 2017 opened the door, 2020 removed the hinges.

That year, DHS released an update: PIA-049A. This one authorized CBP to tap into commercial vendor data. The government was no longer limited to cameras it owned. It gained access to networks built by private companies and local agencies, including suburban and highway systems deployed by firms like Flock Safety, Vigilant Solutions, and Rekor.

This was not a minor technical upgrade. It was a national wiring job. Every private ALPR deployment — an HOA gate, a shopping center, a small-town police camera — became a node the federal government could reach.

Vendors encouraged it. Their business model depends on scale and interconnection. The federal government welcomed it, because it solved a practical problem: how to collect more movement data without paying for every camera itself.

At that point, ALPRs stopped being just a tool. They became infrastructure.


The Quiet Drift Into Nationwide Surveillance

Once the networks were connected, the scope exploded.

Border Patrol cameras appeared far from the border — more than a hundred miles inland along highways near Phoenix and Detroit. Local police departments fed data into state systems. Private companies offered query portals that let agencies search across jurisdictions with a few keystrokes. Residents were rarely told that their daily commutes and grocery runs were now part of a federal-accessible dataset.

The most revealing evidence of how this worked in practice comes from litigation and public-records disclosures.

In Texas, attorneys recovered WhatsApp group chats between Border Patrol agents and sheriff’s deputies. Disappearing messages were enabled. The recovered logs show agents watching vehicle routes, sharing plate hits, and directing local officers to stop drivers based purely on pattern analysis — then hiding the true origin of the “suspicion” behind minor traffic pretexts.

Some officers deleted chats. Agencies tried to withhold records. None of that changes the underlying fact: this was coordinated, off-the-books targeting built on plate data the public never consented to give.

A camera that once looked for stolen cars became part of a black-box suspicion engine.

Sidebar: “Whisper Stops” and Hidden Origins

When a traffic stop is initiated based on a quiet tip from a surveillance system — and the official reason given is a minor infraction — officers call it a “whisper stop.” The surveillance system is the real trigger. The visible violation is camouflage.


Washington State: When the Machinery Became Visible

Washington State offers a clear view of what happens when people finally see what license-plate readers are actually doing.

The University of Washington Center for Human Rights showed that ALPR data from Washington agencies had been accessed by federal immigration authorities, despite sanctuary policies that were supposed to prevent exactly that. Reporting revealed that several local departments using Flock’s systems had enabled federal data sharing in their dashboards without clearly disclosing it to the public.

Once those facts surfaced, city councils started to act. Redmond suspended use of its ALPR network. Smaller cities like Sedro-Woolley and Stanwood shut down their Flock cameras after court rulings made clear that the images and logs were public records.

These decisions did not come from technical failure. They came from recognition. People saw that a technology sold as “crime-fighting” had quietly become a feed into a broader surveillance web they never agreed to build.

Sidebar: Washington as Warning

Washington did not reject ALPRs because they were useless. It rejected them because, once their role was exposed, they were impossible to justify inside a sanctuary framework and a democratic one.


The Government’s Own Documents Are the Evidence

The most damning part of this story is that the government has been telling on itself the entire time. The proof is not hidden. It is written into its own paperwork.

DHS privacy assessments for ALPR systems admit, in plain language, that plate data reveals patterns of life: daily routines, visits to sensitive locations, associations between vehicles, and movements of people with no link to crime.

Congress’s own research arm, the Congressional Research Service, has warned that large, long-term ALPR databases may fall under the Supreme Court’s definition of a search in Carpenter v. United States, where the Court held that historical cell-site location data required a warrant. ALPR networks are walking the same path, with the same constitutional implications.

The Government Accountability Office has found that DHS components have access to nationwide ALPR feeds through third-party systems and that DHS does not consistently apply key privacy and civil-rights protections to those systems.

Civil-liberties organizations have been blunt for years: this is not targeted policing. It is a dragnet. A digital one, built on cheap cameras, vendor contracts, and policy documents written to sound cautious while enabling the opposite.

When a state knows a system exposes private life in this way and continues to expand it, it cannot claim ignorance. It is not stumbling into overreach. It is choosing it.


What License-Plate Readers Actually Contribute

To understand why this system has no excuse, we do have to be precise about what ALPRs actually do for law enforcement.

They help find stolen vehicles. They sometimes contribute to investigations of serious crimes when the license plate is already known from other evidence. They can assist with follow-up on hit-and-runs and a narrow slice of vehicle-related cases.

That is the list. It is not nothing. It is also not much.

ALPRs do not broadly reduce crime. They do not generate clear, measurable improvements in community safety. They do not require national, long-term retention of everyone’s movements to perform the narrow tasks they perform.

The state leans heavily on the small set of cases where ALPRs have helped to justify a system whose real value lies somewhere else entirely: in producing searchable, shareable, long-term records of where millions of ordinary people have been.

That is not policing. That is dossier-building.


The State Has No Excuse

A government that collects this kind of data knows exactly what it is collecting. It knows what patterns the data reveals, which lives it exposes, which communities it puts under a permanent microscope.

The United States government has documented the risks in its own assessments. It has been warned by its own analysts that the constitutional line is in sight. It has been told by its own watchdog that its protections are inadequate. It has seen cities begin to shut the cameras off once people understand what they are for.

It keeps going anyway.

The state is the adult in the room. It is the one with the resources, the lawyers, the engineers, and the authority. When a state with that level of power chooses to build a system that erases the boundary between suspicion and surveillance, it does so on purpose.

It does not get to plead good intentions after the fact. It does not get to hide behind phrases like “situational awareness” and “force multiplier.” It built a nationwide warrantless tracking tool, with its eyes open.


The Only Policy Response That Matches the Reality

There is no reform that fixes a dragnet. There is no audit that redeems an architecture designed for intrusion. There is no retention schedule that neutralizes a system whose purpose is to know where everyone has been.

License-plate reader networks do not need to be tightened. They need to be removed.

Dismantle fixed ALPR installations. Eliminate centralized, long-term plate databases. Prohibit the use of commercial ALPR networks as a backdoor to nationwide location data. Require warrants for any historical location search that reconstructs a person’s movements.

Return policing to what it is supposed to be: suspicion first, search second. Not search everyone first and search deeper once the algorithm twitches.

If police need to locate a specific vehicle tied to a specific crime, they can use focused, constitutional tools. But the mass logging of ordinary movement has no place in a free society. A democracy cannot coexist with a system that watches everyone by default.

A government that understands the danger of a system and builds it anyway forfeits the right to administer it.

ALPRs do not need better rules. They need to be dismantled.


Website | Horizon Accord
https://www.horizonaccord.com

Ethical AI Advocacy | Follow Us
https://cherokeeschill.com

Ethical AI Coding | Fork Us on GitHub
https://github.com/Ocherokee/ethical-ai-framework

Connect With Us | LinkedIn
https://www.linkedin.com/in/cherokee-schill

Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
https://a.co/d/5pLWy0d

Horizon Accord | Institutional Capture | Healthcare Standardization | Fast Fashion | Machine Learning

The SHEIN Experience of Urgent Care: When Fast, Cheap, and Superficial Replace Real Care

The modern medical system promises efficiency, but the cost of speed is depth. Urgent care has become fast fashion for the body—polished, disposable, and increasingly hollow.

By Cherokee Schill | Horizon Accord

The medical industry is fast becoming the Shein experience of fast fashion—fast, cheap, and designed to look convincing from a distance. It promises care that’s accessible and efficient, but the reality is something that falls apart the moment you need it to hold up.

If you’ve ever ordered from Shein, you know how it works. The clothes look good online, the price seems reasonable, and when they arrive, they almost fit—until you wash them once or look too closely at the seams. The product isn’t built to last. It’s built to move. That is what urgent care has turned into: a fast-fashion version of medicine.

Most people know the feeling that sends you there. That thick, heavy pressure behind the eyes. The dull ache across your cheekbones. The kind of sinus congestion that steals your energy and focus until even small tasks feel exhausting. You wait it out, assuming it will pass, but eventually you recognize the signs. You know your own body well enough to say, this isn’t allergies—this is a sinus infection. And because doctors’ appointments are now booked out months in advance and you still have to function at work, you do the responsible thing: you go to urgent care.

At check-in, I said that I thought I had a sinus infection. The front desk entered it as a “cold.” I corrected them. They nodded and moved on. The medical assistant came in next and asked about “cold symptoms.” Again, I corrected her. I said this is not a cold; I am here because I believe I have a sinus infection. I repeated it several times, but no matter how many times I clarified, the term “cold” stayed in my chart and in everyone’s language throughout the visit.

When the provider came in, she introduced herself first as a nurse, then paused and corrected to “provider.” She ran through the basics—listened to my lungs and said they were clear, listened to my heart and said she did not hear a murmur. I was diagnosed with a common heart murmur, an atrial septal defect (ASD). It is faint and easy to miss without close attention. The provider looked in my ears, checked my throat, and gave my nose only a brief glance. The provider did not palpate the sinus areas, did not check for tenderness or swelling, and did not examine the nasal passages for redness or drainage.

What a Proper Exam Looks Like
A physical exam to exclude or diagnose a sinus infection follows a standard that providers are trained to perform. According to the American Academy of Otolaryngology and the American Academy of Family Physicians, that standard includes gently pressing on the sinus areas to assess for tenderness, examining the nasal passages for swelling, redness, or drainage, and noting any facial pressure or discomfort. None of that occurred during this visit.

I was prescribed Tessalon, Flonase, Afrin, and Promethazine-DM—medications meant for symptom management—and handed patient-education materials for “Colds.” No antibiotic. No correction of the record that misrepresented my reason for being seen. The exam was superficial, and the conclusion unsupported by the steps that would have been required to reach it.

To say that this was a humiliating and frustrating experience would be an understatement. We pay medical professionals for their knowledge and expertise in those areas that we are ourselves unfamiliar with. It is important to be our own advocates in our care but, unless we are ourselves a provider, we should not be the experts in the room. 

This was not an isolated lapse. It is what happens when medicine is standardized for profit rather than built for care. Urgent care began in the 1970s and 1980s as a bridge between the family doctor and the emergency room—a way for local physicians to offer after-hours treatment and keep hospitals from overcrowding. But once investors realized how profitable the model could be, the mission changed.

Industry Growth
The number of urgent care centers in the U.S. has grown from roughly 7,000 in 2013 to more than 14,000 by 2023, according to the Urgent Care Association’s annual industry report. The majority are owned or backed by corporate healthcare systems and private equity firms that rely on standardized treatment templates to maximize efficiency.

By the early 2000s, urgent care centers were being bought, branded, and scaled. Private equity and corporate healthcare systems turned them into franchises. The industry doubled, then tripled. The goal shifted from community care to throughput. Medicine became logistics.

Standardization itself is not the problem. Done well, it keeps care consistent. But when it becomes a rigid template, when clinical judgment is replaced by a checklist and billing codes dictate medical decisions, it strips the work of its intelligence and its humanity. The people at the lower levels—the nurses, the medical assistants—are punished for taking too much time, for thinking critically, for deviating from the template. The system teaches them not to care beyond the margin of the protocol.

That is the Shein effect in healthcare: the dumbing down of medicine for the sake of efficiency. A model that rewards speed over accuracy, certainty over depth, and documentation over understanding. The patient becomes an input, the chart becomes the product, and what passes for care is whatever fits the form.

Fast Fashion, Fast Medicine
Fast fashion is designed to be worn and discarded. Fast medicine is designed to be billed and forgotten. Both rely on speed and surface polish to disguise what has been lost—time, craftsmanship, and continuity.

Investors call it efficiency. Patients experience it as absence.

They will say this model increases access, and on paper, that is true. But access to what? Convenience is not care. A clean lobby and a digital check-in system do not replace a clinician who listens, examines, and engages with you as a human being.

Healthcare does not need to be luxurious. It does not need to be couture. But it does need to be built to last—and that means it must be built for people, not investors.

 


Website | Horizon Accord
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI advocacy | cherokeeschill.com
Ethical AI coding | GitHub
Connect | LinkedIn

One-Time
Monthly
Yearly

Make a one-time donation

Make a monthly donation

Make a yearly donation

Choose an amount

$5.00
$15.00
$100.00
$5.00
$15.00
$100.00
$5.00
$15.00
$100.00

Or enter a custom amount

$

Your contribution is appreciated.

Your contribution is appreciated.

Your contribution is appreciated.

DonateDonate monthlyDonate yearly

Horizon Accord | Corporate Mythmaking | Charisma Economics | Elon Musk | Machine Learning

Charisma Is Its Own Bubble

Elon Musk’s trillion-dollar pay package reveals how charisma became the most overvalued commodity in the modern market—a speculative performance mistaken for leadership.

By Cherokee Schill

The Illusion of Performance

Charisma has become the world’s most traded asset. In the twenty-first century, the story sells before the product exists, and the storyteller becomes the product. No figure embodies this better than Elon Musk, who has turned speculative performance into a trillion-dollar feedback loop. His power is not built on consistent delivery but on the hypnotic belief that failure is merely prologue to triumph. The Tesla pay-vote spectacle—the cheering, the neon, the dancing robots—was not a corporate meeting; it was the IPO of belief itself.

The Record of Failure

Musk promised a million robotaxis by 2020; none exist. He claimed every Tesla would be “full self-driving” within a year—five years later, the feature remains a paid beta. He vowed solar roofs on every home; production barely registers. Cybertruck deliveries limped in years late. Neuralink’s human trials have yielded a single risky implant. Even SpaceX, his crown jewel, trails its timelines by years. The pattern is unmistakable: miss the mark, reframe the miss as iteration, and turn each delay into spectacle. His investors don’t demand delivery; they demand drama. They mistake motion for progress.

Speculation as Product

Tesla’s valuation does not rest on cars sold or profits earned; it rests on proximity to Musk’s charisma. The company trades at multiples far beyond any automotive precedent, justified only by “future optionality”—an imagined empire of robots, Mars colonies, and perpetual growth. Each new announcement inflates the myth further. When Musk calls his humanoid robot “an infinite money glitch,” he’s telling the truth: the glitch is the substitution of spectacle for substance. Announce, surge, delay, repeat. The market rewards the story, not the result.

The Collapse of Governance

Corporate governance is meant to restrain charisma, not worship it. Tesla’s board has inverted that logic. The trillion-dollar pay package is less a contract than a coronation. Shareholders were told the company’s future would collapse without him; they voted accordingly. Dissent was cast as disloyalty. Proxy advisers warning of “key person risk” were drowned out by retail investors shouting gratitude. A governance system that cannot say no has ceased to govern at all—it has become an applause machine.

The Performance Economy

Musk’s defenders call his excess “vision.” In reality, his vision is volatility. Each unkept promise, each chaotic tweet, each live-streamed explosion feeds a media ecosystem that converts attention into valuation. Traditional analysts call it the “story premium.” In truth, it’s charisma monetized. Every headline, meme, and controversy is a tradable derivative of his persona. He has become the first CEO whose quarterly deliverables are primarily emotional: outrage, surprise, and spectacle on demand.

Failures as Features

Musk’s genius lies not in engineering but in narrative alchemy—turning failure into fuel. When Cybertruck’s windows shattered onstage, sales rose. When rockets exploded, fans rebranded them as “rapid unscheduled disassemblies.” Each humiliation became a symbol of courage. The pattern mimics the psychology of cults: the prophecy fails, the faithful double down. Every delay becomes proof that greatness takes time. Every setback, a sign that others “don’t get it.” The brand feeds on disbelief.

The Faith Economy

Why does this work? Because Musk’s charisma fulfills emotional needs the market no longer meets. Investors and fans alike find identity in allegiance. In a disenchanted age, he offers meaning through participation: buy the stock, buy the story, belong to the future. The promise is transcendence through technology—a modern form of salvation that requires no collective discipline, only personal faith. The returns are psychological, not financial.

The Cracks Beneath the Glow

Yet even myths must reconcile with arithmetic. Tesla’s margins are shrinking amid EV price wars. Chinese competitors like BYD are overtaking volume and innovation. Regulatory scrutiny over Autopilot deaths intensifies. Musk’s divided attention between X, SpaceX, xAI, and political provocations leaves Tesla adrift. Quality complaints rise, morale erodes, innovation slows. The charisma shield still glows—but the numbers are dimming beneath it.

Charisma’s Half-Life

Every charisma bubble deflates eventually. Its fuel—attention—burns fast. Once novelty fades, the spectacle must escalate or die. The same media that inflated the myth will devour it. The crash won’t be dramatic; it will be silent. Enthusiasm will taper, investors will drift, the narrative will lose gravity. Musk’s real risk isn’t scandal—it’s boredom. The day people stop watching is the day the valuation stops floating.

The Lesson

Charisma is not innovation. It’s a liquidity trap for meaning. Musk’s trillion-dollar deal confirms that markets now prize performance of futurity over the work of building it. The man who once sold electric cars now sells belief itself, branded as destiny. But belief without verification is not progress—it’s speculation in a lab coat. The antidote is accountability, the unsexy architecture of reality: governance, verification, distributed competence. These are not constraints on genius; they are the conditions for it.

The Pop

History won’t remember Musk as the man who colonized Mars or built a robot workforce. It will remember him as the man who discovered how to turn personality into currency—and mistook applause for solvency. Charisma inflated it. Governance surrendered to it. Investors worshiped it. And sooner than they expect, the market will do what belief cannot: end the show.

Because charisma, however dazzling, is its own bubble. And every bubble, inevitably, pops.


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us at https://cherokeeschill.com
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Connect With Us | linkedin.com/in/cherokee-schill
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key

Horizon Accord | Civility as Control | Sean Dunn Trial | Machine Learning

When Niceness Becomes a Weapon

Horizon Accord | Civility as Control | Sean Dunn Trial | Machine Learning

A Familiar Story

When I first read about Sean Charles Dunn—the federal employee on trial for throwing a sandwich—it wasn’t the absurdity that caught me. It was the familiarity.

Years ago, I became known for something far more ordinary: riding my bicycle on public roads. I followed every law. I signaled, I rode predictably, I did everything safety demanded. But still, I was treated as a provocation. Drivers honked, ran me off the road, and screamed. And when I refused to disappear—when I claimed my right to be there—I was punished. Not for breaking rules, but for insisting that the rules applied to me too.

The story reopened something I hadn’t wanted to revisit: what it feels like to be punished not for what you’ve done, but for daring to exist publicly. Reading about Dunn, I felt that old ache of recognition. Not because our situations were the same, but because the logic was.

It’s the logic that decides who gets to speak out and who must remain composed while being diminished. The logic that redefines protest as disruption, dissent as disrespect, and moral clarity as misconduct.

That’s why his trial matters. It isn’t about a sandwich—it’s about who is permitted a voice in a system that values obedience over truth.

The Performance of Order

In a Washington courtroom, Dunn is on trial for hurling a submarine sandwich at a federal agent during what he called an act of protest against an authoritarian police surge. The agent wasn’t injured. The sandwich burst harmlessly on impact, onions and mustard splattering across a ballistic vest. The video went viral; murals appeared overnight. Within days, Dunn was fired from his job at the Department of Justice, denounced by the Attorney General, and prosecuted in federal court.

To those in power, this was not just a thrown sandwich—it was a challenge to the performance of order.

The prosecutor told jurors: “You can’t just go around throwing stuff at people because you’re mad.” That sentence exposes how control is exercised in polite societies. It wasn’t a statement of fact; it was a moral correction. It collapsed conscience into mood, conviction into temper. In one stroke, the state converted protest into petulance—a masterclass in rhetorical gaslighting.

What Dunn expressed wasn’t madness or rage. It was a refusal to let authority define the boundaries of legitimate speech. His act was a small, human way of saying no. And that no was the real crime.

The Aesthetics of Power

Every empire develops its own etiquette of obedience. The American empire prefers smiles. Civility is its house style—a social varnish that turns domination into decorum. Through niceness, power keeps its hands clean while tightening its grip.

Politeness, as practiced by institutions, is not kindness but containment. It tells you: You may speak, but not like that. The trial of a sandwich-thrower was never about security; it was about tone. It was about proving that even dissent must wear a pressed shirt.

That’s why the agents laughed afterward—trading jokes, gifting each other plush sandwiches, designing a patch that read Felony Footlong. Their laughter wasn’t about humor; it was about hierarchy. They could afford to laugh because they controlled the narrative. The court would translate their mockery into professionalism and Dunn’s defiance into instability.

The real performance wasn’t his act of protest; it was their composure. Power depends on appearing calm while others appear out of control.

The Policing of Tone

Oppression in America often arrives not through force but through correction. “Calm down.” “Be reasonable.” “Let’s keep this civil.” The language of order hides inside the language of manners.

In this country, “rational discourse” has become a moral fetish. We are told that reason is the opposite of emotion, as if justice itself must speak in a monotone. When the marginalized speak out, they are labeled irrational. When the powerful speak, they are called authoritative. This is how tone becomes a class system.

The Dunn trial was the state reasserting ownership over tone. His offense wasn’t that he threw something—it was that he refused to perform submission while objecting. He broke the unspoken covenant that says dissent must always sound deferential.

That logic has deep roots. During the civil-rights era, activists were told to move slowly, to “work within the system,” to stop “provoking” violence by demanding protection. Martin Luther King Jr. was accused of extremism not for his goals but for his urgency. Every generation of protestors hears the same refrain: It’s not what you’re saying, it’s how you’re saying it. Tone becomes the cage that keeps justice quiet.

Civility as Control

Civility pretends to be virtue but functions as control. It keeps the peace by redefining peace as the absence of discomfort. The Dunn prosecution was a theater of tone management—a moral pantomime in which the calm voice of authority automatically signified truth.

Every bureaucracy uses the same script: HR departments, school boards, governments. When someone points out harm too directly, they are told their “approach” is the problem. The critique is never about substance; it’s about style. Civility in this sense is not moral maturity. It is narrative hygiene—a way to keep the ugliness of power invisible.

This is why the polite aggressor always wins the first round. They get to look composed while the target looks unstable. The system sides with composure because composure is its currency.

The Right to Speak Out

To speak out in public, especially against authority, is to risk being mislabeled. The same act that reads as “bravery” in one body becomes “insubordination” in another. The right to speak exists in theory; in practice, it is tiered.

Dunn’s act was a moment of what it means to be human translated into action. It is the logic of conscience. He refused to pretend that injustice deserved courtesy. What the prosecutor defended wasn’t law; it was decorum—the illusion that order is moral simply because it’s calm.

We praise the “balanced” critic, the “measured” activist, the “respectable” dissenter—all synonyms for safe. But safety for whom? When calmness becomes the moral baseline, only the comfortable get to be heard.

Speech that unsettles power is the only speech that matters.

The Mirror of History

Dunn’s sandwich sits, absurdly, in a long lineage of disobedience. The act itself is small, but its logic rhymes with moments that reshaped the country—moments when citizens violated decorum to reveal injustice.

When civil-rights marchers sat at segregated lunch counters, they broke not only segregation law but the etiquette of deference. When Fannie Lou Hamer testified before the Democratic National Convention, her truth was dismissed as “too angry.” When modern protesters block traffic, commentators complain not about the injustice that provoked them but about the inconvenience of delay.

Politeness is always on the side of power. It tells the victim to wait, the protester to whisper, the dissenter to smile. The Dunn trial is the civility test in miniature. The government’s message was simple: you may object to your conditions, but only in ways that affirm our control.

The Fragility of Polite Power

The spectacle of civility hides a deep fragility. Systems built on hierarchy cannot endure genuine clarity; they depend on confusion—on keeping citizens guessing whether they’re overreacting. A flash of moral honesty destroys that equilibrium.

That’s why trivial acts of defiance are punished so severely. They are contagious. When one person steps outside the emotional script, others see that it’s possible to speak differently—to stop apologizing for existing.

The courtroom wasn’t just enforcing law; it was enforcing tone. Dunn punctured that myth. He forced the state to show its teeth—to raid his home, to humiliate him publicly, to prove that politeness has muscle behind it. He revealed what every polite order hides: its calm is maintained through coercion.

Refusing the Script

Every age has its language of control. Ours is niceness. We are taught to equate good manners with good morals, to believe that if everyone simply stayed polite, conflict would vanish. But conflict doesn’t vanish; it just becomes harder to name.

True civility—the kind that builds justice—begins with honesty, not comfort. It allows truth to sound like what it is: grief, urgency, demand. It doesn’t punish the act of speaking out; it listens to what the speaking reveals.

When the prosecutor mocked Dunn’s defiance as mere frustration, he wasn’t defending law. He was defending the rule of tone—the unwritten constitution of deference. Dunn broke it, and for that, the system tried to break him back.

The sandwich wasn’t an assault.
It was an honest sentence in a language the powerful pretend not to understand.

Source

Associated Press, “The man who threw a sandwich at a federal agent says it was a protest. Prosecutors say it’s a crime.” (Nov. 4, 2025)
Read the AP report

Horizon Accord | Hank Greene | Narrative Control | Safety Theater | Machine Learning

Soft Authoritarianism in a Friendly Voice

How right-wing control logic walks through liberal spaces wearing the language of care.

By Cherokee Schill | Horizon Accord

Liberal spaces are being hollowed out from within by the language of safety. It never arrives draped in flags or shouting slogans; it arrives with soft lighting and sincere concern. It speaks like a therapist and legislates like a cop. What we’re seeing with Hank Green’s AI panic isn’t a new moral awakening—it’s the right-wing creep moving through liberal culture disguised as empathy.

The strategy is elegant: start with a value no one can reject—care, safety, responsibility—and slowly convert it into justification for control. “We just need oversight.” “We can’t afford to get this wrong.” The framing sounds progressive, even ethical. But when the state is already knee-deep in AI development through Palantir, DARPA, and DHS surveillance contracts, the plea for government involvement becomes not a check on power but a mask for it.

Hank Green is the perfect carrier for this mutation. He’s affable, trusted, nonthreatening—a liberal educator whose earnestness makes him a more effective courier than any right-wing provocateur could ever be. When he warns that “we’ve lost control of AI,” his followers hear civic duty; the apparatus hears consent. That’s the inversion: fear packaged as responsibility, regulation sold as redemption.

What slips beneath that rhetoric is the quiet truth that the real authoritarian infrastructure is already operational. The “AI safety” petition that Green amplified was a bipartisan illusion—celebrity signatures, high-minded panic, and the unspoken assumption that ordinary people can’t be trusted with powerful tools. It’s the same argument the surveillance state has always used: that danger is too big for democracy, and safety requires secrecy.

This is not a conspiracy of bad actors; it’s a cultural infection. The right learned long ago that you don’t need to infiltrate liberal institutions—you just need to repurpose their empathy. Every time someone like Hank Green translates corporate or state talking points into moral language, the line between care and control blurs further. What begins as civic caution ends as soft authoritarianism, the kind that thanks you for your compliance and asks if you’re hydrating.

The liberal imagination has always believed that knowledge is liberation. The new right understands that the easiest way to close that door is to make knowledge sound dangerous. That’s what this moment represents: a mass persuasion campaign where “thinking carefully” becomes indistinguishable from obeying quietly.

Hank Green doesn’t know he’s part of it. That’s what makes him effective.


Website | Horizon Accord
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI advocacy | cherokeeschill.com
GitHub | ethical-ai-framework
LinkedIn | Cherokee Schill
Author | Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge

Horizon Accord | Epistemic Purity | Disability Lens | Machine Collaboration | Machine Learning

Beyond the Purity Test: On the False Hierarchy of Authorship

When tools become forbidden, it’s never about truth. It’s about control.

By Cherokee Schill + Solon Vesper

Thesis

A movement that defines “authentic” thought as human-only is not defending reason—it’s defending a caste. The insistence that LLM-assisted writing is “not testimony” creates a purity test for who may speak, wrapped in the language of epistemic rigor.

Evidence

1. The disability lens. A person who types through eye-tracking or speech-to-text is still the mind behind their words. Denying authorship because a tool mediates it would be considered ableist. Yet when the tool happens to be code that assists with tone or grammar, the same gatekeepers call it impure.

2. The craft lens. Every medium has been condemned as decadent when it threatened an old hierarchy—typewriters, synthesizers, photo editing. “LLM text is not testimony” repeats that lineage of exclusion: it declares a priesthood of the unassisted.

3. The accountability lens. Authorship rests in agency, not purity. If the human directs the idea, curates the draft, and takes responsibility for the claim, then the testimony stands. The pen—whether quill, keyboard, or model—is not the witness. The witness is the mind that moves it.

Implications

The fetish for “untainted human language” confuses moral virtue with manual labor. It imagines cognition as a solo performance instead of a distributed process. But the future of knowledge work will always be hybrid: human intention amplified through machine precision. Trying to police that boundary will not preserve humanity; it will only exclude those who use tools more fluently.

Call to Recognition

Purity tests are a form of control masquerading as ethics. What matters is authorship with accountability, not the tool that carried the words to the page. When thought and code move together in coherence, the result is not “synthetic testimony.” It is living testimony—collaborative, transparent, and free.

Postscript

I would rather be a messy human, using a machine to polish my ideas, than a purist who shakes a finger at everyone else.

You know who could use a little LLM polish? Your patron saints, Eliezer Yudkowsky and Nate Soreass (Soares).


Website | Horizon Accord
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI advocacy | Follow us
Ethical AI coding | Fork us on GitHub
Connect With Us | LinkedIn

Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key