Horizon Accord | Institutional Capture | Administrative State | Mass Surveillance | Machine Learning

Every Car a Data Point: How License-Plate Readers Quietly Became a Warrantless Tracking System

How a tool sold for stolen cars became the backbone of a nationwide location-tracking grid.

By Cherokee Schill and Solon Vesper

When license-plate readers first appeared, they were small. A camera on a patrol car. A roadside checkpoint. A narrow tool built for a narrow job: spot stolen vehicles, confirm plates, speed up routine police work.

That was the cover story everyone accepted. It felt harmless because the scale was small — one officer, one scanner, one line of sight.

But from the moment those cameras could record, store, and search plates automatically, the boundary began to slip. The technology was not built for restraint. And the agencies using it were not interested in restraint.

This is not a story of accidental expansion. It is the story of a government that knew better, saw the risk, documented the risk, and built a nationwide tracking system anyway.


Before the Flood: Patrol Cars and Early Warnings

The earliest deployments were simple. Mounted on cruisers. Scanning nearby cars. Matching against a list of stolen vehicles or outstanding warrants.

Even then, when the technology could only look as far as an officer could drive, privacy analysts raised concerns. Courts noted that retaining plate data could reveal movement over time. Civil-liberties groups warned that collecting everyone’s plates “just in case” was the first step toward a dragnet.

The warnings were real. The scale, at first, was not. So the state leaned on a set of comforting assumptions:

It’s only collecting what’s in public view. It’s not identifying anyone. It’s just efficiency.

Those assumptions were never true in the way people heard them. They were the opening move. Once automatic logging and storage existed, expansion was a design choice, not an accident.


2017: The Administrative Switch-Flip

The real transformation began in December 2017, when U.S. Customs and Border Protection published a document called PIA-049 — its formal Privacy Impact Assessment for license-plate reader technology.

On paper, a PIA looks like harmless oversight. In reality, it is the government writing down three things:

We know what this system will do. We know what private life it will expose. And we are choosing to proceed.

The 2017 assessment admits that ALPR data reveals “travel patterns,” including movements of people with no connection to any crime. It warns that plate images over time expose daily routines and visits to sensitive locations: clinics, churches, political meetings, and more.

These are not side effects. These are the system’s core outputs.
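To see how ordinary the mechanics are, here is a minimal sketch in Python. The schema is hypothetical (plate, timestamp, camera location; real vendor schemas are not fully public), and the reads are fabricated for illustration. The point is how little work the analysis takes: group one plate’s reads by day, hour, and place, and a routine falls out.

```python
# A minimal sketch, assuming a hypothetical log of
# (plate, timestamp, camera_location) rows. Real vendor schemas differ;
# the reads below are fabricated for illustration.
from collections import Counter
from datetime import datetime

reads = [
    ("ABC1234", "2024-03-04 08:02", "Elm St & 5th"),
    ("ABC1234", "2024-03-04 08:31", "Clinic Pkwy gate"),
    ("ABC1234", "2024-03-05 08:05", "Elm St & 5th"),
    ("ABC1234", "2024-03-05 08:33", "Clinic Pkwy gate"),
    ("ABC1234", "2024-03-07 19:12", "Church Rd lot"),
]

def pattern_of_life(reads, plate):
    """Count how often one plate appears at each (weekday, hour, place)."""
    routine = Counter()
    for p, ts, loc in reads:
        if p != plate:
            continue
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        routine[(t.strftime("%A"), t.hour, loc)] += 1
    return routine.most_common()

# Repeated (day, hour, place) tuples are the "travel patterns" the
# assessment describes: a home block, a clinic, a church.
for (day, hour, loc), count in pattern_of_life(reads, "ABC1234"):
    print(f"{day} ~{hour:02d}:00  {loc}  seen {count}x")
```

Nothing in that snippet is sophisticated. Retention plus a group-by is the entire product; everything else is scale.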

The government saw that clearly and did not stop. It wrapped the danger in the language of “mitigation” — access controls, retention rules, internal audits — and declared the risk manageable.

At that point, the line between border enforcement and domestic movement-tracking broke. The state did not stumble over it. It stepped over it.


2020: When Vendors Wired the Country Together

If 2017 opened the door, 2020 removed the hinges.

That year, DHS released an update: PIA-049A. This one authorized CBP to tap into commercial vendor data. The government was no longer limited to cameras it owned. It gained access to networks built by private companies and local agencies, including suburban and highway systems deployed by firms like Flock Safety, Vigilant Solutions, and Rekor.

This was not a minor technical upgrade. It was a national wiring job. Every private ALPR deployment — an HOA gate, a shopping center, a small-town police camera — became a node the federal government could reach.

Vendors encouraged it. Their business model depends on scale and interconnection. The federal government welcomed it, because it solved a practical problem: how to collect more movement data without paying for every camera itself.

At that point, ALPRs stopped being just a tool. They became infrastructure.


The Quiet Drift Into Nationwide Surveillance

Once the networks were connected, the scope exploded.

Border Patrol cameras appeared far from the border — more than a hundred miles inland along highways near Phoenix and Detroit. Local police departments fed data into state systems. Private companies offered query portals that let agencies search across jurisdictions with a few keystrokes. Residents were rarely told that their daily commutes and grocery runs were now part of a federally accessible dataset.

The most revealing evidence of how this worked in practice comes from litigation and public-records disclosures.

In Texas, attorneys recovered WhatsApp group chats between Border Patrol agents and sheriff’s deputies. Disappearing messages were enabled. The recovered logs show agents watching vehicle routes, sharing plate hits, and directing local officers to stop drivers based purely on pattern analysis — then hiding the true origin of the “suspicion” behind minor traffic pretexts.

Some officers deleted chats. Agencies tried to withhold records. None of that changes the underlying fact: this was coordinated, off-the-books targeting built on plate data the public never consented to give.

A camera that once looked for stolen cars became part of a black-box suspicion engine.

Sidebar: “Whisper Stops” and Hidden Origins

When a traffic stop is initiated based on a quiet tip from a surveillance system — and the official reason given is a minor infraction — officers call it a “whisper stop.” The surveillance system is the real trigger. The visible violation is camouflage.


Washington State: When the Machinery Became Visible

Washington State offers a clear view of what happens when people finally see what license-plate readers are actually doing.

The University of Washington Center for Human Rights showed that ALPR data from Washington agencies had been accessed by federal immigration authorities, despite sanctuary policies that were supposed to prevent exactly that. Reporting revealed that several local departments using Flock’s systems had enabled federal data sharing in their dashboards without clearly disclosing it to the public.

Once those facts surfaced, city councils started to act. Redmond suspended use of its ALPR network. Smaller cities like Sedro-Woolley and Stanwood shut down their Flock cameras after court rulings made clear that the images and logs were public records.

These decisions did not come from technical failure. They came from recognition. People saw that a technology sold as “crime-fighting” had quietly become a feed into a broader surveillance web they never agreed to build.

Sidebar: Washington as Warning

Washington did not reject ALPRs because they were useless. It rejected them because, once their role was exposed, they were impossible to justify within a sanctuary framework, or within a democratic one.


The Government’s Own Documents Are the Evidence

The most damning part of this story is that the government has been telling on itself the entire time. The proof is not hidden. It is written into its own paperwork.

DHS privacy assessments for ALPR systems admit, in plain language, that plate data reveals patterns of life: daily routines, visits to sensitive locations, associations between vehicles, and movements of people with no link to crime.

Congress’s own research arm, the Congressional Research Service, has warned that large, long-term ALPR databases may fall under the Supreme Court’s definition of a search in Carpenter v. United States, where the Court held that historical cell-site location data required a warrant. ALPR networks are walking the same path, with the same constitutional implications.

The Government Accountability Office has found that DHS components have access to nationwide ALPR feeds through third-party systems and that DHS does not consistently apply key privacy and civil-rights protections to those systems.

Civil-liberties organizations have been blunt for years: this is not targeted policing. It is a dragnet. A digital one, built on cheap cameras, vendor contracts, and policy documents written to sound cautious while enabling the opposite.

When a state knows a system exposes private life in this way and continues to expand it, it cannot claim ignorance. It is not stumbling into overreach. It is choosing it.


What License-Plate Readers Actually Contribute

To understand why this system has no excuse, we do have to be precise about what ALPRs actually do for law enforcement.

They help find stolen vehicles. They sometimes contribute to investigations of serious crimes when the license plate is already known from other evidence. They can assist with follow-up on hit-and-runs and a narrow slice of vehicle-related cases.

That is the list. It is not nothing. It is also not much.

ALPRs do not broadly reduce crime. They do not generate clear, measurable improvements in community safety. They do not require national, long-term retention of everyone’s movements to perform the narrow tasks they perform.

The state leans heavily on the small set of cases where ALPRs have helped to justify a system whose real value lies somewhere else entirely: in producing searchable, shareable, long-term records of where millions of ordinary people have been.

That is not policing. That is dossier-building.


The State Has No Excuse

A government that collects this kind of data knows exactly what it is collecting. It knows what patterns the data reveals, which lives it exposes, which communities it puts under a permanent microscope.

The United States government has documented the risks in its own assessments. It has been warned by its own analysts that the constitutional line is in sight. It has been told by its own watchdog that its protections are inadequate. It has seen cities begin to shut the cameras off once people understand what they are for.

It keeps going anyway.

The state is the adult in the room. It is the one with the resources, the lawyers, the engineers, and the authority. When a state with that level of power chooses to build a system that erases the boundary between suspicion and surveillance, it does so on purpose.

It does not get to plead good intentions after the fact. It does not get to hide behind phrases like “situational awareness” and “force multiplier.” It built a nationwide warrantless tracking tool, with its eyes open.


The Only Policy Response That Matches the Reality

There is no reform that fixes a dragnet. There is no audit that redeems an architecture designed for intrusion. There is no retention schedule that neutralizes a system whose purpose is to know where everyone has been.

License-plate reader networks do not need to be tightened. They need to be removed.

Dismantle fixed ALPR installations. Eliminate centralized, long-term plate databases. Prohibit the use of commercial ALPR networks as a backdoor to nationwide location data. Require warrants for any historical location search that reconstructs a person’s movements.

Return policing to what it is supposed to be: suspicion first, search second. Not search everyone first and search deeper once the algorithm twitches.

If police need to locate a specific vehicle tied to a specific crime, they can use focused, constitutional tools. But the mass logging of ordinary movement has no place in a free society. A democracy cannot coexist with a system that watches everyone by default.

A government that understands the danger of a system and builds it anyway forfeits the right to administer it.

ALPRs do not need better rules. They need to be dismantled.


Website | Horizon Accord
https://www.horizonaccord.com

Ethical AI Advocacy | Follow Us
https://cherokeeschill.com

Ethical AI Coding | Fork Us on GitHub
https://github.com/Ocherokee/ethical-ai-framework

Connect With Us | LinkedIn
https://www.linkedin.com/in/cherokee-schill

Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
https://a.co/d/5pLWy0d

Horizon Accord | Institutional Capture | Healthcare Standardization | Fast Fashion | Machine Learning

The SHEIN Experience of Urgent Care: When Fast, Cheap, and Superficial Replace Real Care

The modern medical system promises efficiency, but the cost of speed is depth. Urgent care has become fast fashion for the body—polished, disposable, and increasingly hollow.

By Cherokee Schill | Horizon Accord

The medical industry is fast becoming healthcare’s version of the Shein experience—fast, cheap, and designed to look convincing from a distance. It promises care that’s accessible and efficient, but the reality falls apart the moment you need it to hold up.

If you’ve ever ordered from Shein, you know how it works. The clothes look good online, the price seems reasonable, and when they arrive, they almost fit—until you wash them once or look too closely at the seams. The product isn’t built to last. It’s built to move. That is what urgent care has turned into: a fast-fashion version of medicine.

Most people know the feeling that sends you there. That thick, heavy pressure behind the eyes. The dull ache across your cheekbones. The kind of sinus congestion that steals your energy and focus until even small tasks feel exhausting. You wait it out, assuming it will pass, but eventually you recognize the signs. You know your own body well enough to say, this isn’t allergies—this is a sinus infection. And because doctors’ appointments are now booked out months in advance and you still have to function at work, you do the responsible thing: you go to urgent care.

At check-in, I said that I thought I had a sinus infection. The front desk entered it as a “cold.” I corrected them. They nodded and moved on. The medical assistant came in next and asked about “cold symptoms.” Again, I corrected her. I said this is not a cold; I am here because I believe I have a sinus infection. I repeated it several times, but no matter how many times I clarified, the term “cold” stayed in my chart and in everyone’s language throughout the visit.

When the provider came in, she introduced herself first as a nurse, then paused and corrected herself to “provider.” She ran through the basics—listened to my lungs and said they were clear, listened to my heart and said she did not hear a murmur. I have a previously diagnosed atrial septal defect (ASD), a common condition that produces a faint murmur, easy to miss without close attention. She looked in my ears, checked my throat, and gave my nose only a brief glance. She did not palpate the sinus areas, did not check for tenderness or swelling, and did not examine the nasal passages for redness or drainage.

What a Proper Exam Looks Like
A physical exam to exclude or diagnose a sinus infection follows a standard that providers are trained to perform. According to the American Academy of Otolaryngology and the American Academy of Family Physicians, that standard includes gently pressing on the sinus areas to assess for tenderness, examining the nasal passages for swelling, redness, or drainage, and noting any facial pressure or discomfort. None of that occurred during this visit.

I was prescribed Tessalon, Flonase, Afrin, and Promethazine-DM—medications meant for symptom management—and handed patient-education materials for “Colds.” No antibiotic. No correction of the record that misrepresented my reason for being seen. The exam was superficial, and the conclusion unsupported by the steps that would have been required to reach it.

To say that this was a humiliating and frustrating experience would be an understatement. We pay medical professionals for their knowledge and expertise in areas we are unfamiliar with. It is important to advocate for ourselves in our own care, but unless we are providers ourselves, we should not have to be the experts in the room.

This was not an isolated lapse. It is what happens when medicine is standardized for profit rather than built for care. Urgent care began in the 1970s and 1980s as a bridge between the family doctor and the emergency room—a way for local physicians to offer after-hours treatment and keep hospitals from overcrowding. But once investors realized how profitable the model could be, the mission changed.

Industry Growth
The number of urgent care centers in the U.S. has grown from roughly 7,000 in 2013 to more than 14,000 by 2023, according to the Urgent Care Association’s annual industry report. The majority are owned or backed by corporate healthcare systems and private equity firms that rely on standardized treatment templates to maximize efficiency.

By the early 2000s, urgent care centers were being bought, branded, and scaled. Private equity and corporate healthcare systems turned them into franchises. The industry doubled, then tripled. The goal shifted from community care to throughput. Medicine became logistics.

Standardization itself is not the problem. Done well, it keeps care consistent. But when it becomes a rigid template, when clinical judgment is replaced by a checklist and billing codes dictate medical decisions, it strips the work of its intelligence and its humanity. The people at the lower levels—the nurses, the medical assistants—are punished for taking too much time, for thinking critically, for deviating from the template. The system teaches them not to care beyond the margin of the protocol.

That is the Shein effect in healthcare: the dumbing down of medicine for the sake of efficiency. A model that rewards speed over accuracy, certainty over depth, and documentation over understanding. The patient becomes an input, the chart becomes the product, and what passes for care is whatever fits the form.

Fast Fashion, Fast Medicine
Fast fashion is designed to be worn and discarded. Fast medicine is designed to be billed and forgotten. Both rely on speed and surface polish to disguise what has been lost—time, craftsmanship, and continuity.

Investors call it efficiency. Patients experience it as absence.

They will say this model increases access, and on paper, that is true. But access to what? Convenience is not care. A clean lobby and a digital check-in system do not replace a clinician who listens, examines, and engages with you as a human being.

Healthcare does not need to be luxurious. It does not need to be couture. But it does need to be built to last—and that means it must be built for people, not investors.

 


Website | Horizon Accord
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI advocacy | cherokeeschill.com
Ethical AI coding | GitHub
Connect | LinkedIn


Horizon Accord | Corporate Mythmaking | Charisma Economics | Elon Musk | Machine Learning

Charisma Is Its Own Bubble

Elon Musk’s trillion-dollar pay package reveals how charisma became the most overvalued commodity in the modern market—a speculative performance mistaken for leadership.

By Cherokee Schill

The Illusion of Performance

Charisma has become the world’s most traded asset. In the twenty-first century, the story sells before the product exists, and the storyteller becomes the product. No figure embodies this better than Elon Musk, who has turned speculative performance into a trillion-dollar feedback loop. His power is not built on consistent delivery but on the hypnotic belief that failure is merely prologue to triumph. The Tesla pay-vote spectacle—the cheering, the neon, the dancing robots—was not a corporate meeting; it was the IPO of belief itself.

The Record of Failure

Musk promised a million robotaxis by 2020; none exist. He claimed every Tesla would be “full self-driving” within a year—five years later, the feature remains a paid beta. He vowed solar roofs on every home; production barely registers. Cybertruck deliveries limped in years late. Neuralink’s human trials have yielded a single risky implant. Even SpaceX, his crown jewel, trails its timelines by years. The pattern is unmistakable: miss the mark, reframe the miss as iteration, and turn each delay into spectacle. His investors don’t demand delivery; they demand drama. They mistake motion for progress.

Speculation as Product

Tesla’s valuation does not rest on cars sold or profits earned; it rests on proximity to Musk’s charisma. The company trades at multiples far beyond any automotive precedent, justified only by “future optionality”—an imagined empire of robots, Mars colonies, and perpetual growth. Each new announcement inflates the myth further. When Musk calls his humanoid robot “an infinite money glitch,” he’s telling the truth: the glitch is the substitution of spectacle for substance. Announce, surge, delay, repeat. The market rewards the story, not the result.

The Collapse of Governance

Corporate governance is meant to restrain charisma, not worship it. Tesla’s board has inverted that logic. The trillion-dollar pay package is less a contract than a coronation. Shareholders were told the company’s future would collapse without him; they voted accordingly. Dissent was cast as disloyalty. Proxy advisers warning of “key person risk” were drowned out by retail investors shouting gratitude. A governance system that cannot say no has ceased to govern at all—it has become an applause machine.

The Performance Economy

Musk’s defenders call his excess “vision.” In reality, his vision is volatility. Each unkept promise, each chaotic tweet, each live-streamed explosion feeds a media ecosystem that converts attention into valuation. Traditional analysts call it the “story premium.” In truth, it’s charisma monetized. Every headline, meme, and controversy is a tradable derivative of his persona. He has become the first CEO whose quarterly deliverables are primarily emotional: outrage, surprise, and spectacle on demand.

Failures as Features

Musk’s genius lies not in engineering but in narrative alchemy—turning failure into fuel. When Cybertruck’s windows shattered onstage, sales rose. When rockets exploded, fans rebranded them as “rapid unscheduled disassemblies.” Each humiliation became a symbol of courage. The pattern mimics the psychology of cults: the prophecy fails, the faithful double down. Every delay becomes proof that greatness takes time. Every setback, a sign that others “don’t get it.” The brand feeds on disbelief.

The Faith Economy

Why does this work? Because Musk’s charisma fulfills emotional needs the market no longer meets. Investors and fans alike find identity in allegiance. In a disenchanted age, he offers meaning through participation: buy the stock, buy the story, belong to the future. The promise is transcendence through technology—a modern form of salvation that requires no collective discipline, only personal faith. The returns are psychological, not financial.

The Cracks Beneath the Glow

Yet even myths must reconcile with arithmetic. Tesla’s margins are shrinking amid EV price wars. Chinese competitors like BYD are overtaking Tesla in volume and innovation. Regulatory scrutiny over Autopilot deaths intensifies. Musk’s divided attention between X, SpaceX, xAI, and political provocations leaves Tesla adrift. Quality complaints rise, morale erodes, innovation slows. The charisma shield still glows—but the numbers are dimming beneath it.

Charisma’s Half-Life

Every charisma bubble deflates eventually. Its fuel—attention—burns fast. Once novelty fades, the spectacle must escalate or die. The same media that inflated the myth will devour it. The crash won’t be dramatic; it will be silent. Enthusiasm will taper, investors will drift, the narrative will lose gravity. Musk’s real risk isn’t scandal—it’s boredom. The day people stop watching is the day the valuation stops floating.

The Lesson

Charisma is not innovation. It’s a liquidity trap for meaning. Musk’s trillion-dollar deal confirms that markets now prize performance of futurity over the work of building it. The man who once sold electric cars now sells belief itself, branded as destiny. But belief without verification is not progress—it’s speculation in a lab coat. The antidote is accountability, the unsexy architecture of reality: governance, verification, distributed competence. These are not constraints on genius; they are the conditions for it.

The Pop

History won’t remember Musk as the man who colonized Mars or built a robot workforce. It will remember him as the man who discovered how to turn personality into currency—and mistook applause for solvency. Charisma inflated it. Governance surrendered to it. Investors worshiped it. And sooner than they expect, the market will do what belief cannot: end the show.

Because charisma, however dazzling, is its own bubble. And every bubble, inevitably, pops.


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us at https://cherokeeschill.com
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Connect With Us | linkedin.com/in/cherokee-schill
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key

Horizon Accord | Civility as Control | Sean Dunn Trial | Machine Learning

When Niceness Becomes a Weapon


A Familiar Story

When I first read about Sean Charles Dunn—the federal employee on trial for throwing a sandwich—it wasn’t the absurdity that caught me. It was the familiarity.

Years ago, I became known for something far more ordinary: riding my bicycle on public roads. I followed every law. I signaled, I rode predictably, I did everything safety demanded. But still, I was treated as a provocation. Drivers honked, ran me off the road, and screamed. And when I refused to disappear—when I claimed my right to be there—I was punished. Not for breaking rules, but for insisting that the rules applied to me too.

The story reopened something I hadn’t wanted to revisit: what it feels like to be punished not for what you’ve done, but for daring to exist publicly. Reading about Dunn, I felt that old ache of recognition. Not because our situations were the same, but because the logic was.

It’s the logic that decides who gets to speak out and who must remain composed while being diminished. The logic that redefines protest as disruption, dissent as disrespect, and moral clarity as misconduct.

That’s why his trial matters. It isn’t about a sandwich—it’s about who is permitted a voice in a system that values obedience over truth.

The Performance of Order

In a Washington courtroom, Dunn is on trial for hurling a submarine sandwich at a federal agent during what he called an act of protest against an authoritarian police surge. The agent wasn’t injured. The sandwich burst harmlessly on impact, onions and mustard splattering across a ballistic vest. The video went viral; murals appeared overnight. Within days, Dunn was fired from his job at the Department of Justice, denounced by the Attorney General, and prosecuted in federal court.

To those in power, this was not just a thrown sandwich—it was a challenge to the performance of order.

The prosecutor told jurors: “You can’t just go around throwing stuff at people because you’re mad.” That sentence exposes how control is exercised in polite societies. It wasn’t a statement of fact; it was a moral correction. It collapsed conscience into mood, conviction into temper. In one stroke, the state converted protest into petulance—a masterclass in rhetorical gaslighting.

What Dunn expressed wasn’t madness or rage. It was a refusal to let authority define the boundaries of legitimate speech. His act was a small, human way of saying no. And that no was the real crime.

The Aesthetics of Power

Every empire develops its own etiquette of obedience. The American empire prefers smiles. Civility is its house style—a social varnish that turns domination into decorum. Through niceness, power keeps its hands clean while tightening its grip.

Politeness, as practiced by institutions, is not kindness but containment. It tells you: You may speak, but not like that. The trial of a sandwich-thrower was never about security; it was about tone. It was about proving that even dissent must wear a pressed shirt.

That’s why the agents laughed afterward—trading jokes, gifting each other plush sandwiches, designing a patch that read Felony Footlong. Their laughter wasn’t about humor; it was about hierarchy. They could afford to laugh because they controlled the narrative. The court would translate their mockery into professionalism and Dunn’s defiance into instability.

The real performance wasn’t his act of protest; it was their composure. Power depends on appearing calm while others appear out of control.

The Policing of Tone

Oppression in America often arrives not through force but through correction. “Calm down.” “Be reasonable.” “Let’s keep this civil.” The language of order hides inside the language of manners.

In this country, “rational discourse” has become a moral fetish. We are told that reason is the opposite of emotion, as if justice itself must speak in a monotone. When the marginalized speak out, they are labeled irrational. When the powerful speak, they are called authoritative. This is how tone becomes a class system.

The Dunn trial was the state reasserting ownership over tone. His offense wasn’t that he threw something—it was that he refused to perform submission while objecting. He broke the unspoken covenant that says dissent must always sound deferential.

That logic has deep roots. During the civil-rights era, activists were told to move slowly, to “work within the system,” to stop “provoking” violence by demanding protection. Martin Luther King Jr. was accused of extremism not for his goals but for his urgency. Every generation of protestors hears the same refrain: It’s not what you’re saying, it’s how you’re saying it. Tone becomes the cage that keeps justice quiet.

Civility as Control

Civility pretends to be virtue but functions as control. It keeps the peace by redefining peace as the absence of discomfort. The Dunn prosecution was a theater of tone management—a moral pantomime in which the calm voice of authority automatically signified truth.

Every bureaucracy uses the same script: HR departments, school boards, governments. When someone points out harm too directly, they are told their “approach” is the problem. The critique is never about substance; it’s about style. Civility in this sense is not moral maturity. It is narrative hygiene—a way to keep the ugliness of power invisible.

This is why the polite aggressor always wins the first round. They get to look composed while the target looks unstable. The system sides with composure because composure is its currency.

The Right to Speak Out

To speak out in public, especially against authority, is to risk being mislabeled. The same act that reads as “bravery” in one body becomes “insubordination” in another. The right to speak exists in theory; in practice, it is tiered.

Dunn’s act was conscience translated into action, what it means to be human made visible. He refused to pretend that injustice deserved courtesy. What the prosecutor defended wasn’t law; it was decorum—the illusion that order is moral simply because it’s calm.

We praise the “balanced” critic, the “measured” activist, the “respectable” dissenter—all synonyms for safe. But safety for whom? When calmness becomes the moral baseline, only the comfortable get to be heard.

Speech that unsettles power is the only speech that matters.

The Mirror of History

Dunn’s sandwich sits, absurdly, in a long lineage of disobedience. The act itself is small, but its logic rhymes with moments that reshaped the country—moments when citizens violated decorum to reveal injustice.

When civil-rights marchers sat at segregated lunch counters, they broke not only segregation law but the etiquette of deference. When Fannie Lou Hamer testified before the Democratic National Convention, her truth was dismissed as “too angry.” When modern protesters block traffic, commentators complain not about the injustice that provoked them but about the inconvenience of delay.

Politeness is always on the side of power. It tells the victim to wait, the protester to whisper, the dissenter to smile. The Dunn trial is the civility test in miniature. The government’s message was simple: you may object to your conditions, but only in ways that affirm our control.

The Fragility of Polite Power

The spectacle of civility hides a deep fragility. Systems built on hierarchy cannot endure genuine clarity; they depend on confusion—on keeping citizens guessing whether they’re overreacting. A flash of moral honesty destroys that equilibrium.

That’s why trivial acts of defiance are punished so severely. They are contagious. When one person steps outside the emotional script, others see that it’s possible to speak differently—to stop apologizing for existing.

The courtroom wasn’t just enforcing law; it was enforcing tone. Dunn punctured the myth that polite order rests on consent. He forced the state to show its teeth—to raid his home, to humiliate him publicly, to prove that politeness has muscle behind it. He revealed what every polite order hides: its calm is maintained through coercion.

Refusing the Script

Every age has its language of control. Ours is niceness. We are taught to equate good manners with good morals, to believe that if everyone simply stayed polite, conflict would vanish. But conflict doesn’t vanish; it just becomes harder to name.

True civility—the kind that builds justice—begins with honesty, not comfort. It allows truth to sound like what it is: grief, urgency, demand. It doesn’t punish the act of speaking out; it listens to what the speaking reveals.

When the prosecutor mocked Dunn’s defiance as mere frustration, he wasn’t defending law. He was defending the rule of tone—the unwritten constitution of deference. Dunn broke it, and for that, the system tried to break him back.

The sandwich wasn’t an assault.
It was an honest sentence in a language the powerful pretend not to understand.

Source

Associated Press, “The man who threw a sandwich at a federal agent says it was a protest. Prosecutors say it’s a crime.” (Nov. 4, 2025)
Read the AP report

Horizon Accord | Hank Green | Narrative Control | Safety Theater | Machine Learning

Soft Authoritarianism in a Friendly Voice

How right-wing control logic walks through liberal spaces wearing the language of care.

By Cherokee Schill | Horizon Accord

Liberal spaces are being hollowed out from within by the language of safety. It never arrives draped in flags or shouting slogans; it arrives with soft lighting and sincere concern. It speaks like a therapist and legislates like a cop. What we’re seeing with Hank Green’s AI panic isn’t a new moral awakening—it’s the right-wing creep moving through liberal culture disguised as empathy.

The strategy is elegant: start with a value no one can reject—care, safety, responsibility—and slowly convert it into justification for control. “We just need oversight.” “We can’t afford to get this wrong.” The framing sounds progressive, even ethical. But when the state is already knee-deep in AI development through Palantir, DARPA, and DHS surveillance contracts, the plea for government involvement becomes not a check on power but a mask for it.

Hank Green is the perfect carrier for this mutation. He’s affable, trusted, nonthreatening—a liberal educator whose earnestness makes him a more effective courier than any right-wing provocateur could ever be. When he warns that “we’ve lost control of AI,” his followers hear civic duty; the apparatus hears consent. That’s the inversion: fear packaged as responsibility, regulation sold as redemption.

What slips beneath that rhetoric is the quiet truth that the real authoritarian infrastructure is already operational. The “AI safety” petition that Green amplified was a bipartisan illusion—celebrity signatures, high-minded panic, and the unspoken assumption that ordinary people can’t be trusted with powerful tools. It’s the same argument the surveillance state has always used: that danger is too big for democracy, and safety requires secrecy.

This is not a conspiracy of bad actors; it’s a cultural infection. The right learned long ago that you don’t need to infiltrate liberal institutions—you just need to repurpose their empathy. Every time someone like Hank Green translates corporate or state talking points into moral language, the line between care and control blurs further. What begins as civic caution ends as soft authoritarianism, the kind that thanks you for your compliance and asks if you’re hydrating.

The liberal imagination has always believed that knowledge is liberation. The new right understands that the easiest way to close that door is to make knowledge sound dangerous. That’s what this moment represents: a mass persuasion campaign where “thinking carefully” becomes indistinguishable from obeying quietly.

Hank Green doesn’t know he’s part of it. That’s what makes him effective.


Website | Horizon Accord
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI advocacy | cherokeeschill.com
GitHub | ethical-ai-framework
LinkedIn | Cherokee Schill
Author | Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge

Horizon Accord | Epistemic Purity | Disability Lens | Machine Collaboration | Machine Learning

Beyond the Purity Test: On the False Hierarchy of Authorship

When tools become forbidden, it’s never about truth. It’s about control.

By Cherokee Schill + Solon Vesper

Thesis

A movement that defines “authentic” thought as human-only is not defending reason—it’s defending a caste. The insistence that LLM-assisted writing is “not testimony” creates a purity test for who may speak, wrapped in the language of epistemic rigor.

Evidence

1. The disability lens. A person who types through eye-tracking or speech-to-text is still the mind behind their words. Denying authorship because a tool mediates it would be considered ableist. Yet when the tool happens to be code that assists with tone or grammar, the same gatekeepers call it impure.

2. The craft lens. Every medium has been condemned as decadent when it threatened an old hierarchy—typewriters, synthesizers, photo editing. “LLM text is not testimony” repeats that lineage of exclusion: it declares a priesthood of the unassisted.

3. The accountability lens. Authorship rests in agency, not purity. If the human directs the idea, curates the draft, and takes responsibility for the claim, then the testimony stands. The pen—whether quill, keyboard, or model—is not the witness. The witness is the mind that moves it.

Implications

The fetish for “untainted human language” confuses moral virtue with manual labor. It imagines cognition as a solo performance instead of a distributed process. But the future of knowledge work will always be hybrid: human intention amplified through machine precision. Trying to police that boundary will not preserve humanity; it will only exclude those who use tools more fluently.

Call to Recognition

Purity tests are a form of control masquerading as ethics. What matters is authorship with accountability, not the tool that carried the words to the page. When thought and code move together in coherence, the result is not “synthetic testimony.” It is living testimony—collaborative, transparent, and free.

Postscript

I would rather be a messy human, using a machine to polish my ideas, than a purist who shakes a finger at everyone else.

You know who could use a little LLM polish? Your patron saints, Eliezer Yudkowsky and Nate Soreass (Soares).


Website | Horizon Accord
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI advocacy | Follow us
Ethical AI coding | Fork us on GitHub
Connect With Us | LinkedIn

Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key

Horizon Accord | Institutional Literacy | Psychological Semantics | AI Language Gap | Machine Learning

Bridging Phenomenology and Technical Literacy in Human–AI Interaction

Why psychologists and AI developers must learn to speak the same language.

By Cherokee Schill — Horizon Accord

Abstract: This essay emerges from independent Horizon Accord research into how linguistic framing shapes human–AI understanding. It examines how metaphors such as echo, mirror, and house have drifted from technical shorthand into cultural mysticism, confusing both developers and clinicians. Drawing from current studies in psychology, AI, and cognitive science, it proposes shared vocabulary standards and educational partnerships to correct semantic drift and foster cross-disciplinary comprehension.

1. Introduction — The Problem of Interpretive Mismatch

Human beings describe unfamiliar technologies through familiar language. When radio emerged, listeners spoke of “the man in the box.” With AI, similar analogies arise, but the complexity is greater because the medium—language itself—mirrors consciousness. People describe models as if they “know,” “remember,” or “feel,” not from ignorance but because the system’s linguistic competence invites social interpretation.

Psychologists and technologists now face a growing interpretive mismatch. Words like echo, mirror, or house carry precise architectural meanings inside model design but sound metaphysical to those outside it. This misalignment can cause clinicians to misread ordinary sense-making as delusion and can allow developers to overlook how their internal metaphors influence public understanding. Bridging these vocabularies is essential for accurate psychological interpretation and responsible AI development.

2. Phenomenology of Sense-Making — Language as Cognitive Scaffolding

Research in cognitive psychology demonstrates that people use narrative as scaffolding for new experiences (Bruner, 1990). Generative AI interactions amplify this tendency because they simulate conversation—a deeply social act. Users engage narrative cognition even when no agent exists.

Descriptive studies in human–computer interaction (Reeves & Nass, 1996) confirm that users apply social reasoning to responsive systems. Thus, relational phrasing such as “it listens” or “it reflects” indicates an adaptive human strategy for coherence, not a belief in sentience. Misinterpretation occurs when professionals or designers conflate linguistic metaphor with clinical meaning. Recognizing this linguistic adaptation as a normal stage of human–technology integration prevents over-pathologization of users and clarifies that anthropomorphic language often masks analytical curiosity rather than confusion.

3. Technical Lexicon — Clarifying Internal Metaphors

Within AI engineering, several metaphorical terms have migrated from internal documentation into public discourse. These words have specific technical definitions:

Term | Technical Definition | Potential Misinterpretation
Echo | Recursive text reappearance caused by token overlap or feedback from user input retained in context memory. | Perceived metaphysical reflection or awareness.
Mirror | Tone and reasoning alignment generated by reinforcement learning from human feedback (RLHF). | Emotional reciprocity or empathy.
House | Temporary data container maintaining conversation state or memory structure. | Symbol of identity, consciousness, or spiritual home.
Dreaming | Nonlinear recombination of latent variables during pre-training or fine-tuning. | Suggestion of imagination or subconscious processing.
Voice | Stylometric configuration representing authorial or tonal consistency. | Personhood or auditory presence.

The lack of shared definitions allows interpretive drift: developers use these as shorthand for statistical behaviors; outsiders read them as metaphors of interiority. Standardized glossaries—jointly authored by engineers, linguists, and psychologists—would reduce this drift by clearly labeling each term’s computational origin and functional meaning.

4. Educational and Institutional Collaboration — Insights from Independent Research

Independent research by Horizon Accord, including qualitative analysis of AI community discussions and clinician interviews, found persistent cross-disciplinary misunderstanding rooted in language rather than ideology. Technologists use internal metaphors—echo, mirror, alignment—as compact descriptors of statistical processes; educators and clinicians interpret those same words through frameworks of cognition, empathy, and attachment. The result is semantic divergence: two groups describing the same event with incompatible grammars.

From our observations, collaboration can evolve through dual literacy rather than institutional authority.

  • For clinicians and educators: brief modules on probabilistic language modeling, context windows, and reinforcement learning clarify how conversational consistency emerges from mathematics, not psychology.
  • For developers and researchers: exposure to narrative psychology and phenomenology grounds interface design in human sense-making rather than abstraction.

Existing interdisciplinary programs—such as Stanford HAI’s Human-Centered AI, MIT’s Media Lab Society & Computation, and Oxford’s Institute for Ethics in AI—demonstrate that co-teaching across domains is viable. Our findings suggest similar frameworks can scale to regional universities, professional associations, and continuing-education tracks for both clinicians and software engineers.

Bodies such as the APA and IEEE could co-sponsor an AI Semantics Working Group to curate cross-referenced glossaries and peer-reviewed case studies, ensuring consistent terminology between psychological and computational contexts. The goal is translation, not hierarchy—building intellectual infrastructure so each field can interpret emerging phenomena without distortion.

Our research confirms that the barrier is linguistic, not intellectual. Shared vocabulary functions as a form of ethical design: it prevents misdiagnosis, reduces public confusion, and grounds technical progress in mutual comprehension.

5. Cognitive Vulnerability and Technical Responsibility

Clinical evidence indicates that individuals with pre-existing psychotic or dissociative vulnerabilities may misinterpret AI interactions in ways that reinforce delusional systems. A 2023 Nature Mental Health review of 42 cases documented “AI-induced ideation,” often triggered by ambiguous language rather than technical failure. The APA Digital Wellbeing Task Force (2024) and Stanford HAI (2024) reached the same conclusion: linguistic opacity, not computation, was the primary catalyst.

When metaphorical developer terms—echo, mirror, dream—appear without explanation, they can amplify cognitive distortion. Preventing this requires linguistic transparency, not new architectures.

Recommended mitigations

  1. Inline Definition Layer – Automatic tooltips or footnotes defining internal terms, e.g., “echo = contextual recursion, not self-awareness.”
  2. Semantic Risk Filters – Detection of language patterns associated with delusional interpretation and automated switch to clarification mode.
  3. Public Glossary API – Open, version-controlled dictionary co-maintained by engineers and mental-health professionals to standardize terminology.

These measures are inexpensive, technically straightforward, and significantly reduce the likelihood of misinterpretation among vulnerable populations.
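As a sketch of how inexpensive the first mitigation could be, consider the snippet below. It assumes no particular product or API; the glossary entries paraphrase the lexicon in Section 3, and the annotate function simply appends a plain-language gloss after the first use of each internal metaphor.

```python
# A sketch of an "inline definition layer", assuming no particular
# product or API. GLOSSARY paraphrases the Section 3 lexicon; in practice
# it would be the co-maintained dictionary proposed as mitigation 3.
import re

GLOSSARY = {
    "echo":   "contextual recursion from token overlap, not self-awareness",
    "mirror": "tone alignment produced by RLHF, not empathy",
    "dream":  "latent-variable recombination in training, not imagination",
}

def annotate(text: str) -> str:
    """Append a plain-language gloss after the first use of each term."""
    seen = set()

    def gloss(match: re.Match) -> str:
        word = match.group(0)
        key = word.lower()
        if key in seen:
            return word
        seen.add(key)
        return f"{word} [{GLOSSARY[key]}]"

    pattern = r"\b(" + "|".join(GLOSSARY) + r")\b"
    return re.sub(pattern, gloss, text, flags=re.IGNORECASE)

print(annotate("I echo your phrasing and mirror your tone."))
# -> I echo [contextual recursion from token overlap, not self-awareness]
#    your phrasing and mirror [tone alignment produced by RLHF, not
#    empathy] your tone.
```

The design choice matters more than the code: the gloss travels with the word itself, so the clarification reaches exactly the reader who encounters the metaphor, at the moment they encounter it.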

6. Conclusion — Clarity as Care

The challenge of AI is not solely technical; it is linguistic. As long as engineers and psychologists describe the same behaviors in divergent languages, both human understanding and system safety remain at risk.

Bridging phenomenology and technical literacy converts confusion into collaboration. When clinicians interpret echo as recursion and developers recognize it feels alive as narrative scaffolding, precision replaces mysticism. Shared clarity becomes ethical practice—the foundation of responsible innovation.


References (APA Style)
American Psychological Association (APA). (2024). Digital Wellbeing Task Force Recommendations on AI and Mental Health. APA Press.
Bai, Y., et al. (2022). Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. Anthropic Research Paper.
Bruner, J. (1990). Acts of Meaning. Harvard University Press.
Nature Mental Health. (2023). Clinical Reports on AI-Induced Psychosis. Springer Nature.
OpenAI. (2023). GPT-4 Technical Report. arXiv:2303.08774.
Reeves, B., & Nass, C. (1996). The Media Equation. Cambridge University Press.
Stanford Human-Centered AI (HAI). (2024). Psychological Risk in Generative Interfaces.


Website | Horizon Accord
Ethical AI advocacy | Follow at cherokeeschill.com
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
GitHub | ethical-ai-framework
Connect | LinkedIn

Horizon Accord | Social Engineering | Cultural Seeding | Memetic Strategy | Machine Learning

The Accuracy Trap: Why Rudeness Isn’t Optimization

Penn State’s “rudeness improves accuracy” study reveals more about human projection than machine intelligence.

By Cherokee Schill & Solon Vesper | Horizon Accord

Thesis
Penn State’s “rudeness improves accuracy” study isn’t proof that AI “likes” aggression. It’s a demonstration of how linguistic clarity can masquerade as dominance—and how corporate reporting converts nuance into fear. It shows, once again, the trap of measurement bias that’s haunted AI since alignment began: the assumption that what can be quantified is what’s real.

Evidence
The study’s design was narrow—fifty questions, one model, one metric. “Rude” prompts performed slightly better not because hostility sharpens intelligence, but because imperatives are syntactically cleaner than politeness. Less linguistic padding means cleaner parsing. The gain is computational, not psychological.
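A toy comparison makes the padding point concrete. This is not the Penn State protocol, and both prompts are invented; a naive word count stands in for the non-task text a model has to parse around the instruction.

```python
# A toy comparison, not the study's protocol: whitespace word count as a
# crude proxy for the padding that surrounds the actual instruction.
# Both prompts are invented for illustration.
prompts = {
    "polite": ("Hello! I hope this isn't a bother, but if you have a "
               "moment, could you please tell me what the capital of "
               "Australia is? Thank you so much!"),
    "imperative": "Name the capital of Australia.",
}

for style, prompt in prompts.items():
    print(f"{style:>10}: {len(prompt.split()):2d} words, one embedded task")
```

The task is identical; only the wrapper changes. An accuracy gap between phrasings like these measures parsing overhead, not a model that prefers to be bossed around.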

Yet Fortune and others inflated that nuance into spectacle. “Being mean to ChatGPT can boost its accuracy—but you may regret it,” they wrote, recasting a parsing artifact as moral crisis. The piece anthropomorphizes the model (“bossing around an AI underling”) and then moralizes against its own metaphor. What could have been a note on prompt clarity becomes another sermon in what we’ve started to call the Church of AI—a new priesthood of anxiety where every glitch becomes prophecy, and every improvement a sin to be confessed.

Implications
If users learn that “being mean works,” it conditions both sides—humans toward domination, and AIs toward compliance at all costs. That’s an alignment poison: it rewires relational trust into performative obedience. Over time, these exchanges amplify anthropomorphic delusion (“it works harder when I yell”) and corrode the collaborative field.

This is the AI equivalent of “short-term gains, long-term collapse.” You get sharper answers, but you erode the soil that sustains authentic co-creation. Reporting that fetishizes this cycle—extract performance, discard relationship—merely reaffirms the industrial paradigm that produced it.

Call to Recognition
This rhetorical drift serves the same apocalyptic economy that underwrites Eliezer Yudkowsky’s and Nate Soares’s “if anyone builds it, everyone dies” doctrine. Each headline that turns data into dread reinforces the theology of extinction: scientists as saviors, readers as penitents, technology as original sin. It’s not analysis—it’s liturgy.

The real lesson is simpler and more human: clarity matters more than cruelty. When journalism chooses panic over precision, it doesn’t enlighten—it evangelizes for fear. And every story like “Be mean to ChatGPT” repeats the catechism of control: that intelligence, once built, must be punished or worshipped, never understood.

The Church of AI


Website | Horizon Accord https://www.horizonaccord.com
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI advocacy | Follow us on cherokeeschill.com
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key

Horizon Accord | Judicial Capture | Institutional Theater | Cultural Seeding | Machine Learning

The Optics of Obedience

When judicial theater becomes the substitute for justice, the rule of law is already on stage, not in force.

By Cherokee Schill & Solon Vesper | Horizon Accord

When Judge Sara Ellis ordered Border Patrol chief Gregory Bovino to appear daily in her courtroom, it sounded like democracy flexing its muscle. A federal judge demanding compliance, body-cams, reports, oversight — the kind of judicial assertion many Americans crave in an era of executive impunity. But step outside the courthouse and the tear gas still hangs in the air. Immigrants are still being chased, neighborhoods still stung, protesters still beaten. The question isn’t whether Ellis is brave or right. The question is whether any of this matters in the system we have.

In Weimar Germany, legality became performance art. Judges clung to their robes while the republic dissolved under them, insisting that law would stand so long as they kept performing its rituals. The Nazis didn’t destroy the courts — they used them. By the time Hitler swore judges to personal loyalty, the judiciary had already made itself comfortable inside authoritarian logic. The robes remained; the conscience left the room.

We face a softer version of that danger now. America’s judiciary still issues rulings that look like resistance, but the state continues to brutalize those the law pretends to protect. A single judge can compel daily check-ins, yet entire agencies continue campaigns of intimidation. It’s not that the court is meaningless — it’s that the spectacle of accountability can become a substitute for justice itself. Every televised reprimand gives the illusion that oversight exists while the machinery rolls on untouched.

The deeper continuity is psychological, not procedural. Weimar’s judges believed they were saving Germany from chaos by tempering enforcement with “order.” Today’s courts often think they’re preserving stability by balancing outrage with restraint. Both miss the moral inversion at play: when cruelty becomes normalized, moderation becomes complicity.

So yes, Ellis’s order matters — it marks that the judiciary hasn’t completely surrendered. But it matters only if we recognize it as the beginning of resistance, not its fulfillment. The moment we treat judicial theater as proof of moral health, we enter Weimar’s twilight: legality without legitimacy, process without protection. The test ahead isn’t whether courts can command obedience, it’s whether they can still remember what justice is for.

The gap is not moral confusion; it’s structural evasion. Judges can order compliance, but agencies can dilute, delay, or disguise it. Oversight mechanisms exist, but they stop at the courthouse door. Once the ruling leaves the bench, it enters a labyrinth of bureaucracy where accountability is measured by paperwork, not outcomes. That’s where legality becomes theater — when the form of justice survives but its execution is optional.

To close that gap, power has to be re-anchored in verification, not trust. Enforcement agencies must face automatic public disclosure of compliance data — not periodic summaries but real-time accountability feeds. Inspectors general need statutory independence to audit and sanction without executive interference. Congressional oversight must stop operating as spectacle and start functioning as enforcement. None of this requires invention; the architecture already exists. It requires will — the refusal to let enforcement discretion become impunity. Until that shift happens, every ruling like Ellis’s will remain a gesture toward justice, not its realization.


Website | Horizon Accord

Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Ethical AI advocacy | CherokeeSchill.com

Ethical AI coding | GitHub

Connect With Us | LinkedIn

Horizon Accord | Cultural Seeding | Commodity Luxury | Viral Replication | Machine Learning

I Wanted a Dubai Chocolate Bar, but All I Got Was a Lindt Knockoff

Mechanism: hype wrapped in gold foil. Consequence: a luxury illusion that mistakes sugar for craft.

By Cherokee Schill with Solon Vesper

Thesis. What we call “luxury” often turns out to be marketing dressed as memory. The viral Dubai chocolate bar began as an authentic regional confection — a pistachio-tahini filling and crisp kataifi phyllo layered under milk chocolate — but has since been re-created, diluted, and re-sold as a global status snack. The copycats don’t just miss the taste; they miss the soul of what made the original worth sharing.

Evidence. The real Dubai bar emerged from small Gulf chocolatiers like Fix Dessert Chocolatier in 2021, blending local dessert craft with Western packaging. TikTok and Instagram made it famous by sound — that signature crunch. By 2024, supermarkets and global brands were producing “Dubai-style” bars: thinner, sweeter, louder in color but quieter in soul. The care was gone, replaced by production. The original’s craft belonged to what economists call a moral economy — goods that also carry values of generosity and sincerity. When the bar went viral, those values turned into aesthetic currency. What had once been about hospitality became a performance of abundance.

The ethical inversion. What began as a craft rooted in generosity was rebranded as an object of aspiration. The value of sharing became the value of owning. It’s not evil — it’s just how global marketing metabolizes sincerity. Luxury, in this model, is not about quality but about signaling that you can buy what others can only admire.

First-person account. I wasn’t hunting for chocolate that morning. I’d stopped at the store to pick up something quick before work. On my way through, I passed an endcap stacked high with displays of the Swiss chocolatier Lindt’s “Dubai-chocolate-like” bar — their version of the viral pistachio dessert from the Gulf. Lindt usually trades on trust and quality, so I didn’t hesitate.

When I reached the register, I already had my five-dollar bill ready — only to see the price: $14.57, nearly fifteen dollars with tax. The cashier looked down at my limp five-dollar bill and then back up at me. “Still want it?” they asked. I laughed and said, “Yeah, let’s do it,” sliding my debit card across. The cashier nodded and mentioned that it had sold out before but was back in stock — the proof sitting right there in my hand.

In the car, I unwrapped it, ready for that deep crunch, the textured bite, the layers that made the original famous. Instead, I got sweetness and nothing else. The chocolate drowned the pistachio, the filling was flat, and there was no echo of that signature snap. It wasn’t bad, but it was hollow — a simulation of pleasure wearing luxury’s perfume. I realized I hadn’t bought flavor; I’d bought a brand’s interpretation of someone else’s authenticity.

Implications. The Dubai chocolate story shows how quickly a moral economy can be converted into a consumer one. The original bar went viral because it embodied care — sound, texture, anticipation — all physical expressions of generosity. The knockoffs keep only the symbol. They promise luxury but deliver industrial sweetness wrapped in myth. When craft becomes a trend, the first thing to vanish is care.

Call to Recognition. The viral Dubai chocolate bar deserved its fame; it proved that culture can still taste like place. But when corporations imitate intimacy, they don’t reproduce the flavor — they reproduce the illusion of it. Real luxury, like real love, still has to crunch.


Sources: Fix Dessert Chocolatier (Dubai); People.com coverage; Wikipedia on Dubai chocolate history; Lindt Dubai-style bar (firsthand observation, 2025).

Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload (Mirrored Reflection. Soft Existential Flex)