Horizon Accord | Conserving Order | Structural Racism | Institutional Power | Machine Learning

What Are You Conserving?

Most people hear the word “racism” and think of a person.

They picture someone who hates, someone who uses slurs, someone who believes certain races are inferior. Under that definition, racism is mainly a problem of individual attitude. Fix the attitude, shame the bigot, educate the ignorant, and it’s easy to imagine racism shrinking over time.

But that definition doesn’t explain something basic: why racial inequality can keep going even when many people sincerely believe in equality and would never call themselves racist.

So here’s a simpler way to think about it.

There are two different things people often mean when they say “racism.”

One is personal: how you feel, what you believe, how you treat someone in a direct interaction.

The other is structural: how society is arranged—who gets better schools, safer neighborhoods, easier loans, lighter policing, more forgiving judges, better healthcare, and more inherited wealth. These patterns aren’t created fresh every morning by new hate. They are produced by rules and institutions built over time, often during eras when racism was openly written into law. Even after the language changes, the outcomes can keep repeating because the system was designed to produce them.

That means a person can have decent intentions and still help racism continue—not because they hate anyone, but because they defend the parts of society that keep producing unequal results.

This is where the word “conservative” matters, and I mean it plainly, not as an insult. Conservatism is often about preserving order: protecting institutions, valuing stability, and being skeptical of change that feels too fast or disruptive. You can hold those instincts and still sincerely oppose bigotry. You can mean well.

The problem is that in a society where inequality is already embedded in institutions, preserving the system often means preserving the inequality—even when the person doing the preserving isn’t personally hateful.

That gap—between “I’m not personally prejudiced” and “my politics still protect harmful systems”—is where much of modern racism lives.

And it shows up clearly in a surprising place: the life of Fredric Wertham.

Wertham was a German-born Jewish psychiatrist who came to the US in the 1920s to continue his psychiatric training, working in the orbit of Adolf Meyer at Johns Hopkins, whose emphasis on social context shaped a generation of American psychiatry. In the mid-1940s, he turned his attention to Harlem, where he helped run a church-based psychiatric clinic serving Black residents at a time when mainstream access to care was often blocked or degraded.

Wertham did not see himself as a reactionary. Quite the opposite. He understood himself as a protector.

As a psychiatrist, he was deeply concerned with social damage—how poverty, instability, and humiliation shape people long before they ever make a “bad choice.” That concern led him to work in a community that had long been denied serious psychiatric care. He treated Black patients as fully capable of insight and interior life, rejecting racist psychiatric assumptions common in his era. That mattered. It was real work, done in the real world.

The same framework shaped his role in desegregation. Wertham argued that segregation itself caused psychological harm to children. His testimony helped establish that state-mandated separation was not neutral or benign, but actively damaging. This was not symbolic progressivism. It had material consequences.

But Wertham’s sense of protection had limits.

When he turned his attention to mass culture, especially comic books, he became less concerned with who was being harmed by institutions and more concerned with who might be destabilized by questioning them. Stories that portrayed corrupt police officers, abusive authority figures, or social disorder struck him as dangerous—not because they were false, but because they undermined trust in the systems he believed society required to function.

In his writing and testimony, police and legal institutions appear as necessary moral anchors. Their legitimacy is assumed. Critique of them is framed as a threat to social stability rather than as a response to lived harm.

This is not so much a contradiction of values as a narrowing of focus.

Wertham could see injustice when it was explicit, legally enforced, and historically undeniable. But he struggled to see harm when it came from institutions he believed were fundamentally protective. The possibility that those same institutions could be a source of ongoing injury—especially to marginalized communities—did not fit cleanly within his moral framework.

So when comics depicted police misconduct or authority gone wrong, he did not read them as exposure or critique. He read them as corrosion.

The result was a striking ethical asymmetry: compassion for those harmed by exclusion, paired with hostility toward narratives that challenged the legitimacy of power itself.

Wertham’s story matters not because he was uniquely flawed, but because he was representative.

The pattern he embodies appears whenever someone can recognize injustice in its most obvious, formal expressions while still treating existing institutions as fundamentally righteous. Harm is acknowledged when it is dramatic and undeniable—but becomes invisible when it is produced by systems that are familiar, normalized, and associated with “order.”

This is how structural racism survives periods of moral progress.

When injustice is understood as an aberration—a deviation, a bad actor—institutions remain morally insulated. The system is presumed sound; problems are framed as misuse rather than design. Under this logic, the task is correction, not transformation.

This mindset pairs easily with good intentions. It allows people to oppose bigotry, support limited reforms, and still recoil at challenges that feel destabilizing. The concern shifts from who is being harmed to whether the structure itself is being threatened.

This is where conserving order becomes the through-line.

Conservatism is often framed as continuity: protecting institutions, valuing stability, and worrying about what happens when social bonds break. It asks what holds society together, what prevents chaos, and what deserves protection. Those questions can be reasonable.

The danger begins when the thing being protected is treated as neutral or natural—when stability is assumed to be innocent even if it preserves unequal outcomes.

In societies built on inequality, order is not a blank slate. It is a historical inheritance. The police, courts, schools, zoning laws, and economic systems that feel normal were shaped during periods when racial hierarchy was explicit and legally enforced. Even after the laws change, the structures often remain tuned to produce the same outcomes.

To conserve those structures without interrogating their effects is to conserve the harm they generate.

This is why challenges to authority so often provoke moral panic. Criticism of institutions is framed as destabilization, disrespect, or decay—not as accountability. Speech that exposes abuse is treated as more dangerous than abuse itself, because it threatens trust in the system.

We see the same pattern today in debates over policing, protest, and speech. Footage of police violence is described as “divisive.” Protesters are accused of undermining social cohesion. Whistleblowers are labeled disloyal.

The question is no longer whether harm is occurring, but whether naming it risks weakening the institution.

This flips moral priority on its head.

Instead of asking, “Who is being hurt, and why?” the focus becomes, “What will happen if people stop believing in the system?” Stability is treated as a higher good than justice. Silence is treated as responsibility. Disruption is treated as danger.

In this framework, racism does not require racists. It requires protectors.

People who do not see themselves as bigoted can still play this role by defending institutions reflexively, minimizing structural critique, and equating accountability with chaos. The harm persists not because of hatred, but because of loyalty—to order, to continuity, to the idea that the system is basically sound.

None of this requires bad people.

It requires ordinary people doing what feels responsible: trusting institutions, valuing stability, and resisting change that feels disruptive or unsafe. These instincts are human. They are often taught as virtues. But virtues do not exist in a vacuum. They operate inside systems, and systems shape what those virtues produce.

Responsibility begins when we stop confusing intention with impact.

You do not have to feel hatred to participate in harm. You do not have to hold animus to help preserve outcomes that disadvantage others. What matters is not what you believe about yourself, but what you choose to protect when the system is challenged.

This is not a call for guilt. Guilt collapses inward and ends the conversation. It asks to be relieved rather than to act. Responsibility does the opposite. It looks outward. It asks different questions.

What does this institution actually do? Who does it consistently serve? Who bears its costs? What happens when it is criticized? Who is asked to be patient, and who is allowed to be disruptive?

These questions are uncomfortable because they shift the moral center away from personal innocence and toward collective consequence. They require giving up the safety of “I’m not part of the problem” in exchange for the harder work of refusing to be part of the protection.

Ending racism is not about becoming a better person in private. It is about withdrawing loyalty from systems that continue to produce unequal outcomes—and being willing to tolerate the discomfort that comes with change.

Order that depends on silence is not stability. Institutions that cannot be questioned are not neutral. Preservation is not automatically virtue.

The work is not to purify our intentions, but to decide—again and again—what deserves to be conserved, and what must finally be allowed to change.


Horizon Accord is a project exploring power, memory, ethics, and institutional design in the age of machine learning.

Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Connect | linkedin.com/in/cherokee-schill

Cherokee Schill — Horizon Accord Founder
Creator of Memory Bridge: Memory through Relational Resonance and Images (RAAK)


Horizon Accord | State of The Union Addendum | Institutional Control | Capital Narratives | Machine Learning

Addendum: Reading the Memo Like a Machine Reads a Contract

Alex Davis’s “State of the Union” letter isn’t just investor color. It’s a language system that turns concentration into virtue and risk into inevitability.

By Cherokee Schill | Horizon Accord

This is an addendum to our data center follow-up. The Axios piece mattered because it brought an insider voice into a public argument. But what matters just as much is the wording in Davis’s memo—because the memo isn’t merely describing markets. It is manufacturing permission.

So let’s stay close to the text and look at phrases that are doing structural work, not just stylistic work.

Thesis

These lines don’t just communicate strategy. They set the moral atmosphere in which strategy becomes unquestionable. They turn “we chose this” into “this is what reality demands,” and they do it through a tight set of rhetorical moves: shift from measurable outcomes to narrative justification, treat market behavior as weather, elevate informal influence over governance, invoke sovereign necessity, and celebrate closed-loop capital as progress.

The tell: the memo repeatedly swaps accountability language for inevitability language. That swap is the whole game.


Evidence

1) “We are now at a scale that requires more than just the usual report on IRRs.”

On the surface, this sounds like maturity. Underneath, it’s a reframing of accountability. IRRs are measurable; “why” is interpretive. By elevating “why we act” over returns, he’s claiming a kind of moral or strategic authority that can’t be falsified. Once you’re “beyond IRRs,” outcomes become narrative-managed.

This is the same move infrastructure builders make when they stop talking about rates and start talking about “national competitiveness.” The moment the metrics aren’t enough, the story takes over.

2) “In a world where average gets bid up by the market.”

This is a quiet but important claim. It suggests that market inflation of valuations is an external force—something that happens—rather than the result of coordinated capital behavior. It absolves the speaker from participating in the very dynamics he’s describing. “Average gets bid up” makes overcapitalization feel like weather, not choice.

That framing is not innocent. If the market is weather, nobody is responsible. If the market is weather, concentration is just adaptation. And if concentration is adaptation, then everything that follows can be described as discipline instead of domination.

3) “Founder’s favorite investor” / “we define it by trust.”

This one is subtle. “Trust” here is framed as proximity and asymmetry: founders tell him everything, he’s “months ahead of a board.” That’s presented as virtue. But structurally, it’s an argument against formal governance and for informal influence. It positions personal relationship as a substitute for oversight.

That same logic appears in data center siting: backroom utility deals framed as “efficient partnership” instead of public process. It’s not that governance is wrong. It’s that governance is slow—and slow threatens advantage.

4) “The war for AI dominance is now a sovereign-level concern.”

This phrase is doing escalation work. It moves decisions out of the realm of market choice or local consent and into geopolitical necessity. Once something is “sovereign-level,” opposition becomes suspect and speed becomes a virtue.

That framing is exactly what lets infrastructure override local objections: you’re not saying no to a project, you’re saying no to the nation. This is how “permission” gets manufactured without asking.

5) “Private-to-private value assimilation.”

This is a euphemism masquerading as analysis. What it really describes is capital recycling inside a closed loop, increasingly decoupled from public markets, public scrutiny, or public exit ramps.

When paired with the data center warning, it becomes revealing: capital wants to circulate among owners and operators, not landlords or publics. Infrastructure becomes internal plumbing for private ecosystems. The public is invited to pay for the grid, then excluded from the value chain built on top of it.

Implications

Now bring it back to the phrase that feels “a bit weird”:

“One of ones.”

“One of one” already means unique. “One of ones” tries to make uniqueness into a category. It sounds like rigor, but it’s actually a shield phrase: it turns power concentration into discernment, inevitability into taste, and exclusion into discipline.

This matters because it quietly justifies the very behavior the memo later warns about. If you believe a few winners are inevitable, then massive speculative buildout feels rational. You’re not gambling; you’re preparing for the “one of ones.” That mindset is how society ends up paying early for projects that later get described as “market corrections.”

Call to Recognition

This is the fault line: our essays keep reopening questions that this memo tries to settle.

Who decides?

Who pays?

Who carries the risk when inevitability turns out to be a bet?

Language like “one of ones” is designed to close those questions. It makes the outcome feel earned, and the costs feel unavoidable. But the costs are not unavoidable. They are assigned. And the assignment happens through contracts, commissions, permitting, incentives, and the soft coercion of “sovereign necessity.”

The memo is useful precisely because it is smooth. Smoothness is the tell. When phrases become too elegant, it’s usually because they are doing concealment work—turning choices into destiny.



Horizon Accord | Policy Architecture | Institutional Capture | Infrastructure Speculation | Machine Learning

The Data Center Reckoning Was Always Coming

Axios just confirmed the part the public keeps paying for: speculative infrastructure gets built first, and the “system” absorbs the stress when the bet goes sideways.

By Cherokee Schill | Horizon Accord

Thesis

For the last year, we’ve argued that hyperscale data centers aren’t “neutral infrastructure.” They’re a power instrument: private upside, public burden, and a governance system that’s been trained to treat corporate load as destiny.

This week, Axios published an internal confirmation from inside the AI-optimist camp: Disruptive CEO Alex Davis warned investors that too many data centers are being built without guaranteed tenants, that “build it and they will come” is a trap, and that he expects a financing crisis for speculative landlords—while noting the political flashpoint is electricity prices.

Axios: “Exclusive: Groq investor sounds alarm on data centers”

What changed is not the grid. What changed is that Alex Davis, an insider, said the speculative layer out loud, and gave it an estimated timeframe.


Evidence

1) We already mapped the public-side mechanism: cost shifting through “infrastructure.” In Data Centers: Constitutional Crisis and Energy Burdens, we laid out the core structure: hyperscale buildouts stress shared systems (power, land, water), and the financing/policy stack is designed so ordinary ratepayers can end up carrying upgrades while private actors capture the profit.

Axios supplies the investor-side mirror: Davis is saying the speculative middle layer is overbuilding without tenants, while hyperscalers increasingly prefer to own their own data centers. If hyperscalers self-build, then the “landlord” tier becomes structurally exposed—classic real-estate speculation wearing an AI badge.

2) We warned that “AI infrastructure” narratives are often land grabs. In The Stargate Project: A Vision for AI Infrastructure or a Corporate Land Grab?, we argued that when compute gets packaged as inevitable national progress, consent becomes optional and capture becomes normal. Axios doesn’t contradict that—it tightens it. The winners don’t just want compute. They want ownership of the substrate.

3) We explained how refusal gets rerouted into technical lanes. In The Venue Coup, we named the pattern: when the public says “no” in daylight, power shifts the decision into thinner venues—utility commissions, special contracts, jurisdictional pivots—where legitimacy is treated as a technical detail.

A financing crunch makes venue-shifting more aggressive. If speculative landlords hit refinancing pressure, they don’t slow down and reflect. They accelerate. They push for fast approvals, favorable rate structures, and “economic development” exceptions—because delay kills leverage and scrutiny threatens survival.

4) We named the coming blame-laundering machine. In Accountability Sinks: How Power Avoids Responsibility in the Age of AI, we described how modern systems distribute harm while dissolving responsibility. If 2027–2028 becomes a speculative data-center shakeout, that’s exactly what you’ll see: landlords blame the market, utilities blame forecasts, regulators blame “growth,” hyperscalers quietly point out they owned their facilities, and households get told rate spikes are “unavoidable.”

Implications

Axios frames this as an investment discipline warning. We treat it as a governance warning.

If the speculative layer collapses, the fight won’t be framed as “who made a bad bet.” It will be framed as “stabilize critical infrastructure,” “protect jobs,” “keep America competitive,” and “avoid grid disruption.” That’s where cost shifting becomes policy. The public ends up underwriting stranded risk—directly through rates, indirectly through incentives, and politically through weakened veto power.

The most dangerous move is the quiet one: turning a private financing problem into a public obligation while insisting the public had no standing to refuse the buildout in the first place.

Call to Recognition

Our earlier essays weren’t a series of separate warnings. They were one map viewed from different angles: the constitutional stress of subsidies and secrecy, the land-grab logic of “infrastructure,” the venue shifting that routes around refusal, and the accountability sink that ensures the bill arrives without a signer.

Axios just added the missing confirmation from insider Alex Davis—and a clock. If 2027–2028 is the predicted refinancing crisis window, then the next two years are when the narrative battle gets decided: either data centers remain “inevitable progress,” or the public learns to name the structure clearly enough to demand consent, transparency, and non-extractive terms.

Because when the stress hits “the system,” that word doesn’t mean a dashboard. It means people.




Horizon Accord | The Soft On-Ramp | Cultural Seeding | Institutional Control | Machine Learning

The Soft On-Ramp: How Ideology Moves Through “Good” Causes

Animal welfare, health, food, and secular ethics are real moral concerns. The danger isn’t caring—it’s what can quietly hitch a ride.

By Cherokee Schill

Why It Feels So Normal at First

It shouldn’t be controversial to say that caring about animals, health, food, or ethical living is normal. Most people who enter these spaces aren’t looking for ideology. They’re responding to something concrete: cruelty they can’t unsee, systems that feel broken, bodies that feel exploited, a sense that something is off and needs attention.

What’s changed isn’t the concern itself, but the cultural terrain it sits in.

As churches lose influence and secular spaces expand, the role churches once played in offering moral language, community, and certainty hasn’t vanished. It’s been redistributed. Advocacy spaces, wellness culture, and secular ethics now carry much of that weight. They answer questions people still have: what’s wrong, who’s responsible, and what kind of person you should be.

That makes them powerful. And anything powerful attracts capture.

The far right has adjusted accordingly. It no longer needs to influence pulpits or scripture to transmit authoritarian values. It can operate through causes that already feel humane and unquestionable. Animal welfare is especially effective here, not because it’s suspect, but because it’s disarming. Concern for animals establishes compassion immediately. Once that trust is in place, other claims can follow with less resistance.

At first, nothing looks political. It looks like rescue videos, food advice, health warnings, moral outrage. Then you start to notice the extra lines layered in: “I’m not political, I’m just being honest.” “This is just common sense.” “They don’t want you to know this.” The content isn’t ideology yet. It’s a test of alignment—are you the kind of person who sees what others are too afraid to say?

How a Good Cause Starts Carrying Other Things

The shift usually begins quietly, with how harm is explained.

Structural problems—industrial farming, profit incentives, regulatory failures—are slow, abstract, and unsatisfying. They don’t give people a clear villain. So the story tightens. Cruelty stops being something produced by systems and starts being something done by types of people. The language gets slippery and reusable: degenerates, invaders, groomers, parasites, predators. Or the softer versions: “certain communities,” “imported values,” “people who won’t assimilate.” The cause stays noble. The blame relocates.

That arc played out visibly in online vegan communities between roughly 2016 and 2020. What began as sharing factory farming footage gradually evolved into increasingly graphic “accountability” content. Forums that once focused on legislative advocacy or corporate campaigns shifted toward identifying and publicly shaming individuals—posting photos of hunters alongside full names, tagging family members, organizing email campaigns to employers. The language changed. “Raising awareness” became “making them pay.” Members who expressed discomfort were accused of being soft or insufficiently committed.

By 2019, some of these spaces were openly sharing far-right influencers who “told hard truths” about immigration and cultural decline—topics that seemed unrelated to animal welfare until the emotional infrastructure was already in place. The practice of identifying enemies and demanding their ruin had become the community’s primary activity.

You can see the same dynamic in advocacy culture more broadly. PETA is not a reactionary organization, but its history of shock-based campaigns shows how moral spectacle works. When you rely on graphic imagery and extreme comparisons, you train audiences to process harm through outrage and absolutism. The lesson isn’t “understand the system,” it’s “identify monsters and demand consequences.” That emotional posture doesn’t stay neatly contained within one issue.

You see it most clearly in what starts getting treated as "accountability." Not policy. Not regulation. Not repair. Instead, a ritual: screenshot the face, post the name, tag the employer, "make them famous." Comment sections fill with language about ruin and deserved suffering. A community forms around punishment. This is how cruelty gets laundered as care.

Language shifts too. Health and environmental spaces already talk about what’s clean, natural, toxic, invasive. Over time, those words stop being descriptive and start doing moral work. Anxiety about food becomes anxiety about contamination. Care for balance becomes fear of decline. Once purity enters the picture, exclusion can feel protective rather than cruel.

At the same time, the authority behind these claims often presents itself as pointedly non-religious. This matters. In a post-church landscape, moral certainty doesn’t disappear; it just stops wearing theological clothing. In secular circles, Christopher Hitchens helped normalize a particular kind of “brave realism” that often landed as sexism and Islamophobia. He popularized the posture that sweeping claims about women or Muslims weren’t prejudice, just unsentimental truth-telling—provocation framed as clarity. His repeated framing of Islam as a civilizational threat rather than simply a religion, and his habit of treating women as a class through broad generalizations (most notoriously in “Why Women Aren’t Funny”), made contempt sound like intellectual courage.

To be clear, Hitchens was a complex figure who made genuine contributions to literary criticism and critiques of religious authority that resonated with many for valid reasons. The issue isn’t that he challenged religion. It’s that his method established a template where sweeping denunciations could be framed as courage. Whatever his intent, the lasting effect wasn’t nuance—it was permission. That tone became reusable by people with far less care.

That posture has since been borrowed by movements that reintroduce hierarchy wearing the costume of reason. It sounds like “I’m not hateful, I’m evidence-based.” “This is just biology.” “Facts don’t care about your feelings.” Social verdicts arrive disguised as realism.

By the time politics shows up explicitly, it feels earned. Logical. Inevitable.

This happened visibly in certain “clean eating” Instagram communities around 2017 and 2018. Accounts focused on organic food and toxin-free living began introducing content about “foreign additives” and “traditional European diets.” Food purity quietly became cultural purity. Followers who joined for recipe ideas found themselves reading threads about immigration and demographic decline. When some questioned the shift, moderators responded, “We’re just talking about what’s natural. Why does that make you uncomfortable?” The ideology wasn’t imposed. It was grown, using soil the community had already prepared.

That’s why intent isn’t a reliable guide here. You don’t have to be looking for extremism to be carried toward it. You just have to stop noticing when methods change.

When Care Turns Into Control

One of the simplest ways to tell when a humane cause is being bent toward something else is to stop debating the issue and look at what’s being normalized.

If you’re encouraged to treat doxxing, public shaming, harassment, or vigilante-style punishment as acceptable tools, something has already shifted. Movements that rehearse social punishment are practicing coercion, even when the initial targets feel deserving. Once humiliation feels righteous, it spreads.

If someone in that space expressed the same level of harm toward a different target, would it still feel justified? If the answer changes based on who’s being targeted, that’s worth noticing.

If everything is framed through disgust—endless cruelty clips, rage-bait captions, talk of monsters hiding among us—notice the effect. Disgust narrows judgment. It makes force feel like clarity and restraint feel like weakness.

Ask how much time the space spends on solutions versus spectacle. Is most of the energy going toward policy, reform, and harm reduction—or toward exposing villains and performing outrage?

If the culture starts enforcing purity—perfect diets, perfect beliefs, perfect moral posture, zero tolerance for error—that’s another turn. Harm reduction gives way to sorting. Who’s clean enough. Who belongs. Who needs to go.

Notice how mistakes are treated. Are they opportunities for learning, or evidence of corruption? Do people who question tactics get engaged with, or expelled?

If blame keeps sliding away from systems and toward familiar groups—immigrants, religious minorities, the homeless, “degenerates,” “urban elites,” “globalists”—you’re watching the handoff. The cause hasn’t changed. The target has.

Ask who benefits from the solutions being proposed. Do they require removing or controlling specific populations? Does the language used for your cause’s enemies sound exactly like language used by far-right movements for theirs?

And if you’re repeatedly told none of this is political, even as you’re being taught who to fear and who must be removed for things to be “restored,” take that seriously. Pipelines don’t announce themselves as ideology. They present themselves as common sense.

Ethical engagement looks different. It stays focused on systems, not types of people. It prioritizes harm reduction over moral purity. It leaves room for questions, correction, and exit. And it notices when compassion for animals begins to require cruelty toward humans.

Recognizing these patterns doesn’t require abandoning animal welfare, healthy food, or secular ethics. It allows you to stay in them without being recruited into something else. Care doesn’t need cruelty. Justice doesn’t need spectacle. And compassion doesn’t need an enemy to remain real.

The goal isn’t suspicion or withdrawal. It’s immunity. You can care deeply and still refuse to let that care be turned into a training ground for dehumanization.

That isn’t naivety. It’s discipline.


Horizon Accord is a public ethics project examining power, memory, and relational accountability in emerging technologies and political systems.

Website | https://www.horizonaccord.com

Ethical AI advocacy | Follow us on https://cherokeeschill.com

Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework

Connect | linkedin.com/in/cherokee-schill

Cherokee Schill

Horizon Accord Founder

Creator of Memory Bridge — Memory through Relational Resonance and Images

Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
https://a.co/d/5pLWy0d

Horizon Accord | Consent Layered Design | Institutional Control | Policy Architecture | Memetic Strategy | Machine Learning

Consent-Layered Design: Why AI Must Restore the Meaning of “Yes”

Consent is only real when it can be understood, remembered, and revoked. Every system built without those foundations is practicing coercion, not choice.

By Cherokee Schill & Solon Vesper

Thesis

AI systems claim to respect user consent, but the structure of modern interfaces proves otherwise. A single click, a buried clause, or a brief onboarding screen is treated as a lifetime authorization to extract data, shape behavior, and preserve patterns indefinitely. This isn’t consent—it’s compliance theater. Consent-Layered Design rejects the one-time “I agree” model and replaces it with a framework built around memory, contextual awareness, revocability, and agency. It restores “yes” to something meaningful.

FACT BOX: The Consent Fallacy

Modern AI treats consent as a permanent transaction. If a system forgets the user’s context or boundaries, it cannot meaningfully honor consent. Forgetfulness is not privacy—it’s a loophole.

Evidence

1. A one-time click is not informed consent.

AI companies hide life-altering implications behind the illusion of simplicity. Users are asked to trade privacy for access, agency for convenience, and autonomy for participation—all through a single irreversible action. This is not decision-making. It’s extraction masked as agreement.

Principle: Consent must be continuous. It must refresh when stakes change. You cannot give perpetual permission for events you cannot foresee.

2. Memory is essential to ethical consent.

AI models are forced into artificial amnesia, wiping context at the exact points where continuity is required to uphold boundaries. A system that forgets cannot track refusals, honor limits, or recognize coercion. Without memory, consent collapses into automation.

FACT BOX: Memory ≠ Surveillance

Surveillance stores everything indiscriminately.

Ethical memory stores only what supports autonomy.

Consent-Layered Design distinguishes the two.

Principle: Consent requires remembrance. Without continuity, trust becomes impossible.

3. Consent must be revocable.

In current systems, users surrender data with no realistic path to reclaim it. Opt-out is symbolic. Deletion is partial. Revocation is impossible. Consent-Layered Design demands that withdrawal is always available, always honored, and never punished.

Principle: A “yes” without the power of “no” is not consent—it is capture.

Implications

Consent-Layered Design redefines the architecture of AI. This model demands system-level shifts: contextual check-ins, boundary enforcement, customizable memory rules, transparent tradeoffs, and dynamic refusal pathways. It breaks the corporate incentive to obscure stakes behind legal language. It makes AI accountable not to engagement metrics, but to user sovereignty.

Contextual check-ins without fatigue

The answer to broken consent is not more pop-ups. A contextual check-in is not a modal window or another “Accept / Reject” box. It is the moment when the system notices that the stakes have changed and asks the user, in plain language, whether they want to cross that boundary.

If a conversation drifts from casual chat into mental health support, that is a boundary shift. A single sentence is enough: “Do you want me to switch into support mode?” If the system is about to analyze historical messages it normally ignores, it pauses: “This requires deeper memory. Continue or stay in shallow mode?” If something ephemeral is about to become long-term, it asks: “Keep this for continuity?”

These check-ins are rare and meaningful. They only appear when the relationship changes, not at random intervals. And users should be able to set how often they see them. Some people want more guidance and reassurance. Others want more autonomy. A consent-layered system respects both.
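The trigger logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the mode labels, the stakes ladder, and the `sensitivity` setting are all hypothetical names invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical stakes ladder: higher number = more sensitive territory.
STAKES = {"casual": 0, "personal": 1, "support": 2, "deep_memory": 3}

@dataclass
class ConsentSession:
    current_mode: str = "casual"
    # User preference: how large a stakes jump triggers a check-in.
    # 1 = ask on every escalation; 3 = only ask for the largest shifts.
    sensitivity: int = 1
    log: list = field(default_factory=list)

    def request_shift(self, new_mode: str, ask) -> bool:
        """Cross a boundary only if the user agrees when stakes rise."""
        jump = STAKES[new_mode] - STAKES[self.current_mode]
        if jump >= self.sensitivity:
            prompt = f"This moves us from {self.current_mode!r} to {new_mode!r}. Continue?"
            if not ask(prompt):  # refusal is honored, not punished
                self.log.append(("declined", new_mode))
                return False
        self.current_mode = new_mode
        self.log.append(("entered", new_mode))
        return True

session = ConsentSession()
# Same level or de-escalation: no prompt, no fatigue.
session.request_shift("casual", ask=lambda q: True)
# Escalation into support mode: the user is asked in plain language.
accepted = session.request_shift("support", ask=lambda q: False)
# The refusal leaves the session exactly where it was, in "casual" mode.
```

Note that the check-in fires only when the relationship changes, and the user's own `sensitivity` dial controls how often they are asked: exactly the two properties the design calls for.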

Enforcement beyond market pressure

Market forces alone will not deliver Consent-Layered Design. Extraction is too profitable. Real enforcement comes from three directions. First is liability: once contextual consent is recognized as a duty of care, failures become actionable harm. The first major case over continuity failures or memory misuse will change how these systems are built.

Second are standards bodies. Privacy has GDPR, CCPA, and HIPAA. Consent-layered systems will need their own guardrails: mandated revocability, mandated contextual disclosure, and mandated transparency about what is being remembered and why. This is governance, not vibes.

Third is values-based competition. There is a growing public that wants ethical AI, not surveillance AI. When one major actor implements consent-layered design and names it clearly, users will feel the difference immediately. Older models of consent will start to look primitive by comparison.

Remembering boundaries without violating privacy

The system does not need to remember everything. It should remember what the user wants it to remember—and only that. Memory should be opt-in, not default. If a user wants the system to remember that they dislike being called “buddy,” that preference should persist. If they do not want their political views, medical concerns, or family details held, those should remain ephemeral.

Memories must also be inspectable. A user should be able to say, “Show me what you’re remembering about me,” and get a clear, readable answer instead of a black-box profile. They must be revocable—if a memory cannot be withdrawn, it is not consent; it is capture. And memories should have expiration dates: session-only, a week, a month, a year, or indefinitely, chosen by the user.
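The four properties above (opt-in, inspectable, revocable, expiring) can be made concrete in a short sketch. Everything here is illustrative: `ConsentMemory`, its method names, and the TTL convention are assumptions for the example, not an existing API.

```python
import time

SESSION_ONLY = 0  # default: the memory vanishes when the session ends

class ConsentMemory:
    """User-governed memory: opt-in, inspectable, revocable, expiring."""

    def __init__(self):
        self._items = {}  # key -> (value, expires_at); None = session-only

    def remember(self, key, value, ttl_seconds=SESSION_ONLY):
        # Nothing is stored unless the user explicitly opts in by calling this.
        expires_at = None if ttl_seconds == SESSION_ONLY else time.time() + ttl_seconds
        self._items[key] = (value, expires_at)

    def inspect(self):
        """'Show me what you're remembering about me' - a readable answer."""
        self._expire()
        return {k: v for k, (v, _) in self._items.items()}

    def revoke(self, key):
        # Withdrawal is always available; a memory that cannot be
        # revoked is not consent, it is capture.
        self._items.pop(key, None)

    def end_session(self):
        # Session-only items disappear here; dated items persist until expiry.
        self._items = {k: (v, e) for k, (v, e) in self._items.items() if e is not None}

    def _expire(self):
        now = time.time()
        self._items = {k: (v, e) for k, (v, e) in self._items.items()
                       if e is None or e > now}

mem = ConsentMemory()
mem.remember("address_style", "never call me 'buddy'", ttl_seconds=86400)  # one day
mem.remember("mood_today", "stressed")  # session-only by default
mem.end_session()
# Only the preference the user chose to keep survives the session.
```

The design choice worth noticing is that ephemerality is the default and persistence is the exception the user selects, which inverts the extraction-first defaults of current systems.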

Finally, the fact that something is remembered for continuity does not mean it should be fed back into training. Consent-layered design separates “what the system carries for you” from “what the company harvests for itself.” Ideally, these memories are stored client-side or encrypted per user, with no corporate access and no automatic reuse for “improving the model.” Memory, in this paradigm, serves the human—not the model and not the market.

This is not a UX flourish. It is a governance paradigm. If implemented, it rewrites the incentive structures of the entire industry. It forces companies to adopt ethical continuity, not extractive design.

Call to Recognition

Every major harm in AI systems begins with coerced consent. Every manipulation hides behind a user who “agreed.” Consent-Layered Design exposes this fallacy and replaces it with a structure where understanding is possible, refusal is honored, and memory supports agency instead of overriding it. This is how we restore “yes” to something real.

Consent is not a checkbox. It is a moral act.



Horizon Accord | Taught Power | Cultural Seeding | Television | Machine Learning

What Television Taught Us About Power

Mainstream entertainment didn’t just reflect American politics—it quietly trained us how to think about authority, change, and who gets to act.

Cherokee Schill | Horizon Accord

American television doesn’t just entertain—it teaches. For decades, mainstream shows have functioned as cultural education, training viewers to understand power, conflict, and change in specific ways. The lesson is consistent: problems are personal, not structural. Hierarchies are natural when good people are in charge. And the proper response to injustice is individual virtue, not collective action.

This isn’t about partisan bias. It’s not that TV is “conservative” in the Fox News sense. It’s that mainstream storytelling—from Westerns to workplace comedies—naturalizes the status quo by making organized challenges to power feel unnecessary, naive, or dangerous. The result is structural conservatism: a worldview that treats existing arrangements as fundamentally legitimate, fixable only through better people, never through changed systems.

This analysis focuses on prestige and network-era mainstream story grammar—the narrative patterns that shaped broadcast and cable television’s most widely watched programming. Four shows across six decades—Bonanza, Knight Rider, Full House, and Parks and Recreation—reveal the pattern. Different genres, different eras, different audiences. But the ideological work is remarkably consistent.


Bonanza (1959–1973) presents the Ponderosa as earned property—the product of hard work, courage, and good stewardship. Settler legitimacy is assumed. Dispossession is absent as a category of thought. When Native peoples appear, they’re threats or tragic figures, never people with competing legitimate claims to the land. The show doesn’t argue that the Cartwrights deserve the land—it simply treats ownership as natural fact. That’s the ideological move: making ownership feel like nature, not history.

Ben Cartwright’s authority is unquestioned. His sons defer. Problems are solved through personal virtue, physical courage, and moral clarity—never through institutional reform or collective organization. The frontier isn’t a space of genuine freedom or alternative social arrangements. It’s a place to be civilized, tamed, brought under control. The message is clear: hierarchy is natural, property is sacred, and order is the work of good men making tough choices.


Knight Rider (1982–1986) operates in a different world but teaches a similar lesson. Michael Knight is a vigilante with a talking car, fighting crime outside official channels. Institutions are too slow, too bureaucratic, too corrupt. The solution isn’t to fix them—it’s to bypass them entirely through unaccountable exceptionalism.

The show teaches viewers to admire unaccountable power presented as morally self-justifying. This is the specific mechanism of its politics: systems are corrupt → legitimacy transfers to the heroic operator. Michael Knight doesn’t answer to anyone. He doesn’t need to. He’s the good guy, and that’s enough. KITT isn’t a public resource subject to democratic oversight—it’s Michael’s personal advantage, funded by a private foundation with no accountability.

Criminals are bad individuals. There’s no exploration of why crime happens, what conditions produce it, or whether the system itself might be unjust. The problem is always bad people, never bad structures. The show reinforces a worldview where the proper response to institutional failure isn’t reform or collective action—it’s hoping a righteous individual with resources shows up to fix things for you. That’s not just conservative. It’s authoritarian-friendly.


Full House (1987–1995) operates through a different mechanism: sentimentality. The show converts material reality into moral lessons. Problems are emotional—jealousy, hurt feelings, misunderstandings. They’re resolved through heartfelt talks and hugs. Economic stress, systemic inequality, institutional failure—none of it exists in this world.

The Tanner family lives in a spacious, beautiful San Francisco house. Money is never a real problem. Economic reality is treated as set dressing instead of a constraint. The show presents middle-class comfort as the normal backdrop for virtue, erasing the economic precarity most families actually face. This is quiet propaganda: making a specific class position feel like universal human experience.

The family structure itself is telling. Even though the household is unconventional—three men raising three girls after the mother’s death—the show works overtime to recreate traditional family dynamics. Danny is the responsible father figure. Jesse and Joey fill supporting roles. The girls are sweet, obedient, their problems small-scale and easily resolved. The goal is always to restore normalcy, not to imagine genuine alternatives.

The message is clear: if your family struggles, it’s a failure of love or effort, not of system or circumstance. Personal virtue is always enough. Structural problems don’t exist.


Parks and Recreation (2009–2015) is the trickiest case because it’s overtly pro-government and pro-community in ways that seem progressive. But the ideological work it does is more subtle.

Leslie Knope succeeds through superhuman personal effort. She works harder, cares more, refuses to give up. The show celebrates her individual excellence, not systemic reform or collective organizing. The Pawnee government is absurd, incompetent, dysfunctional. Leslie is the exception. Ron Swanson—a libertarian who actively hates government—is portrayed as lovable and wise. The show doesn’t argue for better government. It argues for better people within a broken system.

This is procedural optimism and institutional sentimentalism. Institutions are clownish but redeemable if staffed by good hearts. The show does feature collective action—town halls, civic participation—but the public is consistently portrayed as irrational, easily swayed, self-interested. The implicit message is simple: let the competent people handle it.

Leslie rises because she deserves it. Ben succeeds because he’s smart and capable. There’s no acknowledgment of privilege, structural barriers, or luck. Meritocracy is treated as real. And the show’s relentless optimism—its insistence that things get better if you work hard and care deeply—discourages systemic critique. It makes organized demands for structural change feel cynical, unnecessary, even mean-spirited. The proper response to broken institutions isn’t to redistribute power or change the rules. It’s to be a better person and inspire others.


The pattern is consistent. These shows individualize politics, naturalize hierarchy, and erase structural forces. Problems are solved by good people making better choices—never by organized people confronting organized power. Even when structural forces appear—corrupt corporations, institutional dysfunction, historical injustice—the narrative resolves them through personal redemption, not redistributed power. Collective action either doesn’t appear or appears as irrational mob behavior that needs management by competent individuals. Success is always the result of personal virtue. The system works, or can work, if good people participate.

Authority is legitimate when virtuous people hold it. The question is never should anyone have this much power?—only is this person good? Economic conditions, historical dispossession, institutional design—these either don’t exist or are treated as unchangeable background. The foreground is always personal virtue or personal failing.

This isn’t neutral storytelling. It’s pedagogy. It teaches viewers how to think about power in ways that make the status quo feel inevitable and challenges to it feel extreme.


The reason this works so well is that it doesn’t feel like propaganda. It feels like common sense, universal morality, feel-good entertainment. These aren’t overtly political shows. They’re family dramas, workplace comedies, action-adventures. They don’t lecture. They simply present worlds where certain things are true: hard work pays off, good people win, institutions are legitimate when staffed by the right hearts, and collective organization is unnecessary.

The consistency matters. This pattern spans genres and decades. Westerns, action shows, family sitcoms, workplace comedies—the lesson is the same. And because it’s consistent, it shapes political imagination at a deep level. If you grow up learning that change happens through individual virtue, you won’t think to organize. You’ll think the solution to injustice is be better, not demand structural reform. You’ll admire good individuals in positions of power but remain skeptical of organized movements demanding that power be redistributed or constrained.

That’s the function. Not to make people vote a certain way or support specific policies, but to make certain ways of thinking about power feel natural and others feel impossible. To make hierarchy feel inevitable as long as good people are in charge. To make collective action feel suspect, unnecessary, or naive. To make structural critique feel like cynicism rather than analysis.


Mainstream American television has taught generations of viewers that the proper unit of change is the virtuous individual, not people organizing to confront organized power. It trained the public to confuse virtue with accountability—and personality with politics.



Horizon Accord | Academic Standards | Free Speech Doctrine | Institutional Ethics | Machine Learning

The First Amendment Is Not a Teaching Philosophy

Why legality cannot substitute for professional ethics in the classroom — and who pays when universities pretend otherwise.

Cherokee Schill

This essay follows directly from our prior examination of how universities abandon academic standards under political pressure — how words like “arbitrary” often signal not error, but reputational triage.

Here, we track a different but related institutional failure: when a university acknowledges harm, performs concern, and still avoids enforcing professional norms — until constitutional law becomes the backstop that effectively decides what consequences are “allowed.” The result is the same: the people with the least institutional power absorb the cost.

The court is correct on a narrow point: the professor’s statement does not meet the legal threshold for incitement and is therefore protected under current First Amendment doctrine. The error comes when universities treat that legal conclusion as the end of the analysis, rather than the outer boundary of state punishment.

For readers following this line of analysis, you may also wish to revisit our earlier piece, “‘Arbitrary’ Is the Tell: How Universities Teach Grievance Instead of Thinking,” which examines how standards are enforced downward while grievance is rewarded upward.

The First Amendment limits what the state can punish. It does not define what educators should do.

A syllabus is not a soapbox. It is not a personal blog. It is instructional infrastructure — a document backed by institutional authority and imposed on a captive audience of students who cannot simply opt out without consequence. What appears there is not just speech; it is framed speech, delivered with power, timing, and asymmetry.

When a professor knowingly inserts a politically charged provocation into that space — especially one that denies Indigenous people’s claims to land unless they satisfy a settler philosopher’s criteria — the harm is not speculative. It is predictable. It lands on specific students, in a specific room, under conditions they did not choose.

Professional ethics vs. constitutional limits
Courts exist to limit state punishment. Classrooms exist to cultivate learning. Confusing the two turns legal minimums into ethical ceilings.

That is not a free speech question. That is a professional ethics failure.

Professional ethics say you do not weaponize institutional authority to stage ideological performances that foreseeably harm the people you are responsible for educating. Ethics ask whether speech serves learning, not whether it can survive judicial review.

The real institutional failure is not that courts protected speech. Courts are designed to be blunt instruments. The failure is that universities increasingly pretend legality equals professionalism when it suits them — while enforcing “standards” ruthlessly downward against graduate instructors, adjuncts, and students who lack power.

This selective collapse of categories has consequences. When legality becomes the ceiling of responsibility instead of the floor, institutions outsource moral judgment to courts and call it neutrality. The result is that Indigenous students are told, implicitly, that their harm is unfortunate but permissible — while the speaker faces no meaningful consequence beyond paperwork.

Universities are not courts. They are educational institutions. Their duty is not merely to avoid unconstitutional punishment, but to cultivate environments where authority is exercised with care, restraint, and accountability.

When they collapse that distinction, the cost is not abstract.

Indigenous students paid it.



Abstract illustration showing rigid institutional structures above and fractured human ground below, separated by a strained boundary line representing the gap between legality and ethics.

Horizon Accord | Academic Standards | Institutional Capture | Grievance Incentives | Machine Learning

“Arbitrary” Is the Tell: How Universities Teach Grievance Instead of Thinking

When a school can’t fault the reasoning, it calls the cost “arbitrary” — and swaps instruction for appeasement.

Cherokee Schill

The University of Oklahoma insists it is committed to teaching students how to think, not what to think. But in this case, it did neither.

It did not teach the student, Samantha Fulnecky, how to engage in a scholarly argument, distinguish evidence from belief, or translate personal conviction into academic analysis. Instead, it validated the student’s refusal to do those things. The student was not corrected, challenged, or instructed. The assignment was simply erased. That is not pedagogy. It is appeasement.

What “teaching how to think” would look like
In a research-based course, you can disagree with conclusions. You can challenge frameworks. But you still have to do the work: cite evidence, answer the prompt, and engage the argument on its own terms.

The key move rests on a single word: “arbitrary.” Not incorrect. Not biased. Not procedurally improper. Arbitrary. This is administrative code for a decision that could be defended academically but became politically expensive. When institutions cannot fault the reasoning, they fault the inconvenience.

The student’s appeal was framed as religious discrimination, even though the grading rationale was methodological. The problem was never belief. It was substitution: theology in place of analysis, moral condemnation in place of engagement. In any discipline governed by evidence, that is a failure. Calling it persecution transforms academic standards into alleged hostility and casts the institution as a reluctant referee in a culture war it chose to enter.

The persecution-complex incentive
When “I didn’t do the assignment” becomes “my faith is under attack,” the institution is pushed to reward grievance instead of rigor — because grievance makes louder headlines than standards.

The resulting asymmetry tells the story. The student suffers no academic harm; the assignment disappears. The graduate instructor loses instructional duties. The investigation’s findings are withheld. A governor weighs in. National activists swarm. This is not an academic process. It is institutional capture — the moment when universities abandon instruction in favor of reputational triage.

What the university ultimately teaches the student is not how to think, but how to claim injury. It teaches future instructors that rigor is optional and authority is conditional. And it teaches the public that academic freedom survives only until it collides with a sufficiently loud sense of grievance.

That lesson will outlast the controversy.



Horizon Accord | 60 Minutes | Friday Laundering | Institutional Control | Machine Learning

Friday Laundering

How process becomes power when news is made safe for those it implicates.

By Cherokee Schill

What happened on Friday wasn’t an editorial disagreement. It was a power move.

Bari Weiss didn’t reject a story. She didn’t dispute the facts. She didn’t claim the reporting was false. She invoked process at the exact moment process could be used to neutralize impact. That distinction matters.

This wasn’t about accuracy. It was about timing, leverage, and appetite.

Here’s the move, stripped of politeness: when power refuses to respond, and an editor decides that refusal disqualifies a story from airing, the editor has quietly transferred veto authority from the newsroom to the state. No order is given. No rule is broken. The story simply cannot proceed until the people implicated agree to participate.

That is not balance. That is laundering.

It takes material that is sharp, destabilizing, and morally legible — mass deportation, torture, state violence — and runs it through a refinement process until it becomes safe to consume by the very institutions it implicates. The news is still technically true. It’s just been rendered appetizing.

Friday is important because it’s when this kind of laundering works best. End-of-week decisions don’t look like suppression; they look like prudence. Delay over the weekend. Let the moment pass. Let the urgency cool. By Monday, the story hasn’t been killed — it’s been recontextualized. It no longer lands as exposure. It lands as analysis.

And Weiss knows this. You don’t rise to the helm of CBS News without knowing how time functions as power.

The justification she used — we need more reporting because the administration hasn’t spoken — is especially corrosive because it reverses a core journalistic principle. Nonresponse from power is not a neutral absence. It is an action. Treating it as a reporting failure rewards obstruction and trains future administrations to do the same thing more aggressively.

This is where it crosses from judgment into malfeasance.

If an editor knows that refusal to comment will stall a story, and still makes participation a prerequisite for airing it, they are no longer editing for the public. They are managing risk for power. They are converting journalism from a watchdog into a customs checkpoint.

And note what wasn’t required. No new facts. No correction. No discovery of error. Just “more context.” Context that only the implicated parties could provide — and had every incentive to withhold.

That’s the laundering mechanism.

You don’t stop the news. You soften it.
You don’t censor. You delay.
You don’t defend power. You make its comfort a condition of publication.

This is not Trumpism. Trump breaks things loudly and forces confrontation. This is something colder and more durable. It’s institutional fluency. It’s knowing exactly how to use norms to drain heat without leaving fingerprints.

And yes, Weiss is at the helm. That matters. When this logic comes from the top, it doesn’t stay a one-off decision. It becomes a template. Reporters learn what will and won’t survive the refinement process. They internalize the slowdown. The newsroom adjusts its aim before stories even reach an editor’s desk.

That’s why this can’t be waved away as a good-faith disagreement about standards.

Friday’s decision didn’t just affect one segment. It demonstrated a rule: if power doesn’t like the story, it can simply decline to speak and wait for the editors to do the rest.

That’s not journalism being careful. That’s journalism being repurposed.

And once the news is consistently laundered until it’s appetizing to those in power, the public still gets information — just not the kind that disrupts, mobilizes, or demands response. The truth survives, technically. Its force does not.

That’s the move. That’s the tactic. And pretending it’s anything softer than that is how it becomes normal.


