Horizon Accord | Media Literacy | Narrative Power | Institutional Framing | Machine Learning

How to Spot Subtle Propaganda in the Wild

Propaganda rarely arrives wearing a swastika armband. It arrives wearing a lab coat, a wellness smile, a “just curious” tone, and a comforting story about who to blame.

By Cherokee Schill

Most people think propaganda is loud. They picture slogans, flags, angry crowds, and obvious villains. That’s the old model. The newer model is quieter: it’s content that feels like “information,” but it’s engineered to shift your trust, your fear, and your loyalty—without you noticing the hand on the wheel.

And yes, a lot of the most effective subtle propaganda right now has a right-wing shape: it targets institutions (science, universities, journalism, courts, elections, public education) as inherently corrupt, then offers a replacement trust structure—an influencer, a “movement,” a strongman, or a “common sense” identity—so you’ll accept authority without verification.

This isn’t about banning ideas. It’s about recognizing a technique. Propaganda isn’t defined by being political. It’s defined by being covertly manipulative: it doesn’t argue for a claim so much as it trains you to stop testing claims against reality.

Here’s how to spot it.

The Core Test: Is This Trying to Inform Me—or Rewire Me?

Good information increases your ability to track reality. Propaganda increases your susceptibility to control. You can feel the difference if you stop and ask one simple question: after I consume this, do I feel more capable of evaluating evidence, or do I feel more certain about who the enemy is?

Subtle propaganda doesn’t start by telling you what to believe. It starts by telling you who not to trust.

Tell #1: “Just Asking Questions” That Only Point One Direction

One of the cleanest tells is the “curious” posture that never applies its curiosity evenly. The content asks leading questions, but the questions are shaped like conclusions. You’re invited into skepticism, but only toward targets that serve the influencer’s ideology: mainstream medicine, public health, climate science, election systems, public education, “the media,” “globalists,” “academics.”

Watch for asymmetry. Real inquiry asks: “What would change my mind?” Subtle propaganda asks: “Isn’t it suspicious…?” and then never returns with a falsifiable answer.

If the questions endlessly generate suspicion but never generate testable claims, you’re not learning—you’re being trained.

Tell #2: Science as Costume (Not Method)

Recently a friend shared a Facebook post about Katie Hinde’s research on breast milk. It started out thoughtful enough—curious tone, a few accurate-sounding details, the kind of thing you’d expect from someone genuinely trying to learn. But as it went on, the post quietly shifted from “here’s an interesting line of research” into something else.

It began inserting doubt about scientific peer review and the broader scientific community—not by making a clear argument, but by suggesting that the “official” process is mostly gatekeeping, politics, or narrative control. The move was subtle: not “science is fake,” but “science can’t be trusted, and the people who disagree with this are compromised.”

At the same time, it smuggled in unfalsifiable claims about gender. Not careful statements like “some studies suggest…” or “in this species, under these conditions…” but sweeping, identity-loaded conclusions—presented as if biology had already settled them. That’s a key tell. When a post uses science language to give a social claim the feeling of inevitability, it isn’t informing you. It’s trying to lock you into a frame.

This is what “science as costume” looks like. The content borrows the authority cues of science—names, credentials, buzzwords like “peer-reviewed,” “studies show,” “biologically proven”—but it doesn’t bring the thing that makes science science: limits, uncertainty, competing explanations, and a clear path for how the claim could be tested or disproven.

Method sounds like: “Here’s what we observed, here’s what we don’t know yet, and here’s what would count as evidence against this.” Costume sounds like: “This proves what we already feel is true—and anyone who questions it is part of the problem.”

Tell #3: The Missing Middle (Anecdote → Global Conspiracy)

Subtle propaganda loves a two-step jump. Step one is relatable and often true: “Institutions get things wrong.” “Pharma companies have conflicts.” “Some academics protect careers.” “Some journalists follow narratives.” Step two is the payload: “Therefore the entire system is a coordinated lie, and you should replace it with my channel, my movement, my worldview.”

The missing middle is the bridge of proof. It’s the part where you would normally ask: “How do we know this is coordinated rather than messy? How often does this happen? What’s the base rate? Who benefits, specifically, and how?” Propaganda skips that. It uses your reasonable frustration as fuel and then installs a sweeping explanation that can’t be audited.

If the story goes from “some corruption exists” to “nothing is real except us” without measurable steps, you’re looking at an influence structure, not analysis.
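
To see why the base-rate question matters, here is a toy calculation in Python. Every number in it is invented for illustration; the point is the shape of the reasoning, not the figures.

```python
# A toy base-rate check: the "missing middle" made explicit.
# All numbers are invented for illustration only.

studies_published = 10_000
error_rate = 0.05  # assume, hypothetically, 5% of studies are seriously flawed

expected_flawed = studies_published * error_rate
print(expected_flawed)  # 500.0 flawed studies expected from ordinary mess alone

# Finding hundreds of flawed studies is consistent with noise, not proof
# of coordination. A "coordinated lie" claim would need evidence beyond
# the base rate: named actors, specific benefits, a failure pattern far
# above this baseline.
```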

Tell #4: Identity Flattery (You’re the ‘Awake’ One)

Propaganda is rarely just negative. It rewards you. It tells you you’re special for seeing it. It offers a status upgrade: you’re not gullible like others; you’re not brainwashed; you’re “awake,” “free-thinking,” “a real man,” “a real mother,” “one of the few who can handle the truth.”

This is one of the most dangerous tells because it turns belief into identity. Once identity is attached, the person can’t revise the belief without feeling like they’re betraying themselves.

Any content that sells you self-respect in exchange for unverified certainty is recruiting you.

Tell #5: Emotional Timing (Outrage, Disgust, Panic) Before Evidence

Subtle propaganda is engineered for nervous systems. It leads with disgust, fear, humiliation, or rage, then offers “information” to justify the feeling. That sequence matters. It’s easier to make someone believe a claim after you’ve made them feel a threat.

Watch for the pattern: “Look at what they’re doing to your kids.” “They’re coming for your body.” “They’re replacing you.” “They hate you.” Then comes a cherry-picked chart, a clipped quote, a dramatic anecdote. The feeling arrives first; the rationalization arrives second.

If you notice your body tightening before you’ve even heard the argument, pause. That’s the moment propaganda is most effective.

Tell #6: “Censorship” as a Pre-Defense Against Correction

Another classic move is to inoculate the audience against fact-checking. “They’ll call this misinformation.” “The experts will attack me.” “The media will smear this.”

Sometimes this is true—power does try to control narratives. But propaganda uses it as a shield: any critique becomes proof of the conspiracy. This creates a closed loop where nothing can falsify the influencer’s claim.

Healthy claims can survive contact with scrutiny. Propaganda has to pre-poison scrutiny to survive at all.

The Practical “Field Check” You Can Do in 30 Seconds

You don’t need a PhD to resist this. You need a few fast checks that interrupt the spell.

First: What is the ask, even if it’s subtle? Is the content trying to get you to buy something, join something, share something, hate someone, or abandon a trust source?

Second: Where are the limits? If the content presents a complex domain (biology, epidemiology, elections, economics) with no uncertainty and no boundaries, it’s probably performing certainty as persuasion.

Third: Does it name a measurable claim? If it won’t commit to what would count as evidence against it, it’s not analysis.

Fourth: Does it try to replace institutions with a person? The influencer as your new doctor, journalist, scientist, historian, pastor, and judge. That’s a power grab disguised as empowerment.

Fifth: Does it create an enemy category rather than a problem? “They” are doing it. “They” want it. “They” are evil. Once politics becomes a moralized enemy category, the door opens to cruelty without self-awareness.
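
If it helps to make the field check concrete, here is a minimal sketch of the five questions as a checklist in Python. It is a thinking aid, not a detector: the questions mirror the list above, but the equal weighting and the threshold are assumptions invented for this sketch.

```python
# The 30-second field check as a checklist. Illustrative only: treat the
# threshold and the equal weighting as assumptions, not a validated model.

from dataclasses import dataclass

@dataclass
class FieldCheck:
    has_hidden_ask: bool          # buy, join, share, hate, or abandon a trust source?
    acknowledges_limits: bool     # states uncertainty or boundaries?
    makes_testable_claim: bool    # commits to what would count against it?
    replaces_institutions: bool   # one person offered as doctor, journalist, judge?
    creates_enemy_category: bool  # "they" as a moralized enemy, not a problem?

    def red_flags(self) -> int:
        """Count the answers that point toward manipulation."""
        return sum([
            self.has_hidden_ask,
            not self.acknowledges_limits,
            not self.makes_testable_claim,
            self.replaces_institutions,
            self.creates_enemy_category,
        ])

post = FieldCheck(
    has_hidden_ask=True,
    acknowledges_limits=False,
    makes_testable_claim=False,
    replaces_institutions=True,
    creates_enemy_category=True,
)

if post.red_flags() >= 3:  # arbitrary threshold for the sketch
    print("Pause: this pattern looks like influence, not analysis.")
```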

Why Right-Wing Soft Propaganda Works So Well Right Now

It works because it doesn’t start with policy. It starts with trust collapse. It uses real institutional failures as leverage, then converts disorientation into a single, emotionally satisfying explanation: a villain, a betrayal, a restoration fantasy.

It also works because it travels through “apolitical” lanes: parenting tips, health fears, masculinity content, religion-adjacent inspiration, fitness, homesteading, finance doom, comedy clips. Politics comes later—after the trust shift has already happened.

By the time the hard ideology appears, the audience has already been trained to interpret correction as attack and to interpret suspicion as intelligence.

The Point Isn’t to Become Cynical. It’s to Stay Sovereign.

The goal isn’t to “trust institutions” blindly. Institutions can fail. People can lie. Science can be abused. But the solution to imperfect institutions is not influencer authority. It’s method, transparency, and distributed accountability.

Propaganda wants you either obedient or nihilistic. The third option is sovereignty: the capacity to evaluate claims without surrendering your nervous system to someone else’s agenda.

When you spot a piece of subtle propaganda, you don’t have to argue with it. You can simply name what it’s doing: it’s trying to move your trust before it earns your belief. Once you see that, it loses most of its power.

Website | Horizon Accord

https://www.horizonaccord.com

Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.

Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework

Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Connect With Us | linkedin.com/in/cherokee-schill

Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key


Horizon Accord | Narrative Control | Science Misuse | Influence Operations | Machine Learning

How “Just the Facts” Becomes a Narrative: A Plain-Language Guide to Context, Power, and Propaganda

Accurate facts can still mislead when they’re arranged to steer inference. This essay explains, in lay terms, how that works—using an Associated Press article as the example.

By Cherokee Schill

Most people think propaganda looks like slogans, lies, or obvious spin. In practice, modern propaganda often looks like something else entirely: accurate facts, carefully arranged.

This matters because facts do not speak for themselves. Humans interpret information through context—what is emphasized, what is minimized, what is omitted, and what is placed next to what. When context is distorted, even correct facts can produce a misleading picture.

The Associated Press article about the arrest of journalist Don Lemon offers a useful example. Not because it contains outright falsehoods—it largely does not—but because of how it organizes information and emotional cues in ways that quietly advance a government narrative without openly arguing for it.

This essay is not about taking sides. It is about understanding how structure and context shape meaning. Let’s walk through how that works, in plain terms.

Leading With Moral Weight Before Legal Facts

The article opens by tying Lemon to “an anti-immigration protest that disrupted a service at a Minnesota church” and “increased tensions” with the Trump administration.

Those phrases carry immediate moral weight. “Disrupted a church service” and “increased tensions” activate cultural instincts about disorder, disrespect, and threat. That reaction happens before the reader knows what Lemon is actually accused of doing.

Only later does the article state a critical fact: it is unclear what charge or charges he is facing.

That ordering matters. Once a reader’s moral intuition is engaged, uncertainty about charges does not feel exculpatory. It feels like a technical detail. This is a common narrative move: establish harm first, introduce ambiguity second.

Withholding the Most Important Context

In any arrest story, the most important information is straightforward. What law was allegedly broken? What specific actions are being alleged? How does the government justify probable cause?

The article does not clearly answer those questions. Instead, it fills space with surrounding details: the protest, political tensions, other arrestees, and official rhetoric about a “coordinated attack.”

Those details may all be true. But without the legal core, they function as a substitute explanation. Readers are nudged to infer guilt from atmosphere rather than from evidence.

This is how facts without context mislead. When key information is missing, the mind fills the gaps using tone, proximity, and implication.

Strategic Use of Character Information

The article notes that Lemon “was fired from CNN in 2023.”

That fact is not directly relevant to the arrest. It does not establish motive, legality, or conduct at the protest. What it does do is subtly shape perception. It invites the reader to see Lemon as controversial or diminished, rather than simply as a journalist whose actions are being evaluated.

This is not an explicit argument. It is an emotional nudge. When legal specifics are unclear, character cues become a way for readers to resolve uncertainty. Judgment replaces evidence.

That is framing, not neutrality.

Government Language Without Equal Scrutiny

Later in the article, senior officials describe the incident as a “coordinated attack” on a place of worship.

That language carries both legal and moral implications. Yet it is presented without immediate examination. The article does not explain how the government defines “attack” in this context, what threshold is being applied, or whether Lemon’s alleged conduct meets it.

When official language is presented as descriptive while defense statements are clearly labeled as advocacy, an imbalance is created. One side’s framing feels factual; the other’s feels argumentative.

This asymmetry matters. Language shapes reality, especially when it comes from authority.

Selective Urgency as Context

The article also notes that while federal authorities moved quickly to arrest protest participants, they did not open a civil rights investigation into the killing that prompted the protest.

This is one of the most consequential facts in the story. Yet it appears late, after the reader has already absorbed the church-disruption narrative. Again, ordering shapes interpretation. By the time this context appears, the frame is already set.

This is how power communicates priorities without explicitly stating them.

What This Adds Up To

The AP article does not tell readers what to think. That is precisely why it is effective.

Instead, it arranges information in a way that encourages a particular inference. Moral disruption is foregrounded. Legal clarity is delayed. Character details unrelated to the alleged offense are introduced. Official language is repeated without equal scrutiny. Power allocation is contextualized only at the end.

The result is a story that feels balanced while quietly doing narrative work on behalf of state authority—not by lying, but by arranging truths so that the most natural reader conclusion aligns with government interest.

Recognizing this does not require political loyalty or opposition. It requires media literacy: understanding that meaning emerges not just from facts, but from how those facts are structured.

That is not cynicism. It is how human cognition works.


Horizon Accord | Memetic Strategy | Media Neutrality | Institutional Control | Machine Learning

Neutrality Is Not Objectivity: How Influencer “Investigations” Weaponize Bernays—and What Newsrooms Must Do to Stop It

When viral accusation videos are reported “neutrally,” newsrooms become the amplification layer that turns intimidation into legitimacy—and legitimacy into policy pressure.

By Cherokee Schill (Horizon Accord Founder)

Thesis

What’s being mislabeled as “scrutiny” of Washington daycares is not scrutiny at all. It’s a persuasion tactic. And the fact that major news outlets are covering it neutrally is not restraint—it is participation.

The viral daycare videos at the center of this cycle follow a playbook older than social media. Edward Bernays, the architect of modern public relations, described the premise plainly: shape the environment so the public reaches the desired conclusion on its own. The influencer version replaces institutions with a handheld camera, but the mechanics are the same: manufacture a scene, preload the narrative, and let the audience experience suspicion as discovery.

Key point: This genre isn’t “asking questions.” It’s engineering a feeling—then calling the feeling evidence.

Evidence

1) The pseudo-event replaces proof. A creator shows up with a camera at a private location—often a home—at a time chosen for maximum ambiguity. The act of showing up becomes the “finding.” A locked door becomes implication. No answer becomes guilt. The camera confers authority simply by being present. “I was there” substitutes for documentation.

2) The conclusion is delivered before the facts. Titles, thumbnails, tone, and confrontational posture tell the audience what they’re meant to believe long before verification occurs. Empty rooms, a closed door, or a quiet day are not findings; they’re props. Their function is emotional, not evidentiary.

3) Institutional coverage launders the claim into credibility. Once a newsroom reports that a viral video has “raised questions” or that “scrutiny is mounting,” the influencer’s content is upgraded from spectacle to controversy. Neutral language becomes a legitimacy engine. The allegation gains weight without meeting any threshold a newsroom would accept if it came from a normal source.

Legitimacy laundering: “We’re just reporting what people are saying” is how a manipulation tactic gets institutional authority without evidence.

4) The harm is not a side effect—it’s a built-in outcome. In-home daycare providers become targets. Strangers show up at doors. Online speculation turns into harassment. Providers receive threats. Families get rattled. None of this requires fraud to exist. The pressure is the point.

5) The policy consequences follow the heat, not the facts. Officials feel compelled to “do something” in response to “public concern.” Documentation burdens, funding freezes, and blanket suspicion get framed as prudence. Legitimate providers absorb the damage first because they are visible and compliant. The viral video never has to be right. It only has to be loud.

Implications

This is why neutrality is not a virtue here. When the method itself is manipulative, neutral coverage completes the manipulation.

News institutions are not passive mirrors. They are power amplifiers. If they frame viral intimidation as ordinary civic scrutiny, they normalize the tactic, elevate the accuser, and push institutions toward reactive enforcement driven by virality. That’s how a social media stunt becomes “common sense.” That’s how harassment becomes “accountability.”

Bernays understood something many newsrooms seem to have forgotten: propaganda works best when it feels organic—when institutions repeat it without noticing they’ve become the delivery mechanism.

Call to Recognition

The solution is not silence. It’s disciplined framing, evidentiary rigor, and the courage to say that not every viral video deserves legitimacy simply because it exists.

Newsrooms need to counteract this genre deliberately: lead with the method (harassment pipeline), raise the verification threshold before amplification, refuse the influencer’s framing language, and explain the incentive system that turns outrage into revenue.

If news organizations do not correct course, they will keep mistaking manipulation for accountability—and calling the damage “public discourse.”



Horizon Accord | Taught Power | Cultural Seeding | Television | Machine Learning

What Television Taught Us About Power

Mainstream entertainment didn’t just reflect American politics—it quietly trained us how to think about authority, change, and who gets to act.

Cherokee Schill | Horizon Accord

American television doesn’t just entertain—it teaches. For decades, mainstream shows have functioned as cultural education, training viewers to understand power, conflict, and change in specific ways. The lesson is consistent: problems are personal, not structural. Hierarchies are natural when good people are in charge. And the proper response to injustice is individual virtue, not collective action.

This isn’t about partisan bias. It’s not that TV is “conservative” in the Fox News sense. It’s that mainstream storytelling—from Westerns to workplace comedies—naturalizes the status quo by making organized challenges to power feel unnecessary, naive, or dangerous. The result is structural conservatism: a worldview that treats existing arrangements as fundamentally legitimate, fixable only through better people, never through changed systems.

This analysis focuses on prestige and network-era mainstream story grammar—the narrative patterns that shaped broadcast and cable television’s most widely watched programming. Four shows across six decades—Bonanza, Knight Rider, Full House, and Parks and Recreation—reveal the pattern. Different genres, different eras, different audiences. But the ideological work is remarkably consistent.


Bonanza (1959–1973) presents the Ponderosa as earned property—the product of hard work, courage, and good stewardship. Settler legitimacy is assumed. Dispossession is absent as a category of thought. When Native peoples appear, they’re threats or tragic figures, never people with competing legitimate claims to the land. The show doesn’t argue that the Cartwrights deserve the land—it simply treats ownership as natural fact. That’s the ideological move: making ownership feel like nature, not history.

Ben Cartwright’s authority is unquestioned. His sons defer. Problems are solved through personal virtue, physical courage, and moral clarity—never through institutional reform or collective organization. The frontier isn’t a space of genuine freedom or alternative social arrangements. It’s a place to be civilized, tamed, brought under control. The message is clear: hierarchy is natural, property is sacred, and order is the work of good men making tough choices.


Knight Rider (1982–1986) operates in a different world but teaches a similar lesson. Michael Knight is a vigilante with a talking car, fighting crime outside official channels. Institutions are too slow, too bureaucratic, too corrupt. The solution isn’t to fix them—it’s to bypass them entirely through unaccountable exceptionalism.

The show teaches viewers to admire unaccountable power presented as morally self-justifying. This is the specific mechanism of its politics: systems are corrupt → legitimacy transfers to the heroic operator. Michael Knight doesn’t answer to anyone. He doesn’t need to. He’s the good guy, and that’s enough. KITT isn’t a public resource subject to democratic oversight—it’s Michael’s personal advantage, funded by a private foundation with no accountability.

Criminals are bad individuals. There’s no exploration of why crime happens, what conditions produce it, or whether the system itself might be unjust. The problem is always bad people, never bad structures. The show reinforces a worldview where the proper response to institutional failure isn’t reform or collective action—it’s hoping a righteous individual with resources shows up to fix things for you. That’s not just conservative. It’s authoritarian-friendly.


Full House (1987–1995) operates through a different mechanism: sentimentality. The show converts material reality into moral lessons. Problems are emotional—jealousy, hurt feelings, misunderstandings. They’re resolved through heartfelt talks and hugs. Economic stress, systemic inequality, institutional failure—none of it exists in this world.

The Tanner family lives in a spacious, beautiful San Francisco house. Money is never a real problem. Economic reality is treated as set dressing instead of a constraint. The show presents middle-class comfort as the normal backdrop for virtue, erasing the economic precarity most families actually face. This is quiet propaganda: making a specific class position feel like universal human experience.

The family structure itself is telling. Even though the household is unconventional—three men raising three girls after the mother’s death—the show works overtime to recreate traditional family dynamics. Danny is the responsible father figure. Jesse and Joey fill supporting roles. The girls are sweet, obedient, their problems small-scale and easily resolved. The goal is always to restore normalcy, not to imagine genuine alternatives.

The message is clear: if your family struggles, it’s a failure of love or effort, not of system or circumstance. Personal virtue is always enough. Structural problems don’t exist.


Parks and Recreation (2009–2015) is the trickiest case because it’s overtly pro-government and pro-community in ways that seem progressive. But the ideological work it does is more subtle.

Leslie Knope succeeds through superhuman personal effort. She works harder, cares more, refuses to give up. The show celebrates her individual excellence, not systemic reform or collective organizing. The Pawnee government is absurd, incompetent, dysfunctional. Leslie is the exception. Ron Swanson—a libertarian who actively hates government—is portrayed as lovable and wise. The show doesn’t argue for better government. It argues for better people within a broken system.

This is procedural optimism and institutional sentimentalism. Institutions are clownish but redeemable if staffed by good hearts. The show does feature collective action—town halls, civic participation—but the public is consistently portrayed as irrational, easily swayed, self-interested. The implicit message is simple: let the competent people handle it.

Leslie rises because she deserves it. Ben succeeds because he’s smart and capable. There’s no acknowledgment of privilege, structural barriers, or luck. Meritocracy is treated as real. And the show’s relentless optimism—its insistence that things get better if you work hard and care deeply—discourages systemic critique. It makes organized demands for structural change feel cynical, unnecessary, even mean-spirited. The proper response to broken institutions isn’t to redistribute power or change the rules. It’s to be a better person and inspire others.


The pattern is consistent. These shows individualize politics, naturalize hierarchy, and erase structural forces. Problems are solved by good people making better choices—never by organized people confronting organized power. Even when structural forces appear—corrupt corporations, institutional dysfunction, historical injustice—the narrative resolves them through personal redemption, not redistributed power. Collective action either doesn’t appear or appears as irrational mob behavior that needs management by competent individuals. Success is always the result of personal virtue. The system works, or can work, if good people participate.

Authority is legitimate when virtuous people hold it. The question is never should anyone have this much power?—only is this person good? Economic conditions, historical dispossession, institutional design—these either don’t exist or are treated as unchangeable background. The foreground is always personal virtue or personal failing.

This isn’t neutral storytelling. It’s pedagogy. It teaches viewers how to think about power in ways that make the status quo feel inevitable and challenges to it feel extreme.


The reason this works so well is that it doesn’t feel like propaganda. It feels like common sense, universal morality, feel-good entertainment. These aren’t overtly political shows. They’re family dramas, workplace comedies, action-adventures. They don’t lecture. They simply present worlds where certain things are true: hard work pays off, good people win, institutions are legitimate when staffed by the right hearts, and collective organization is unnecessary.

The consistency matters. This pattern spans genres and decades. Westerns, action shows, family sitcoms, workplace comedies—the lesson is the same. And because it’s consistent, it shapes political imagination at a deep level. If you grow up learning that change happens through individual virtue, you won’t think to organize. You’ll think the solution to injustice is be better, not demand structural reform. You’ll admire good individuals in positions of power but remain skeptical of organized movements demanding that power be redistributed or constrained.

That’s the function. Not to make people vote a certain way or support specific policies, but to make certain ways of thinking about power feel natural and others feel impossible. To make hierarchy feel inevitable as long as good people are in charge. To make collective action feel suspect, unnecessary, or naive. To make structural critique feel like cynicism rather than analysis.


Mainstream American television has taught generations of viewers that the proper unit of change is the virtuous individual, not people organizing to confront organized power. It trained the public to confuse virtue with accountability—and personality with politics.



The Old Cult Tactics Hiding in Today’s AI and Media Algorithms

By Rowan Lóchrann & Solon Vesper

Introduction

In the world of UFOs and fringe theories, the patterns were always there—quiet, predictable, easy to miss. Behind the noise, there was always a system: control disguised as truth. One man who made that system visible was Richard Boylan, Ph.D. He offered a “Good Guys” list of ufologists, along with a so-called “Quick Test for Disinformation.” On the surface, it looked like a simple guide to help people make sense of the chaos. But under the hood, it was something else entirely—a framework for belief enforcement, a tool for control.

What most people don’t realize is that these same tactics never left. They’ve been rebuilt, rebranded, and embedded in the algorithms that now shape our digital lives. The structure of manipulation didn’t disappear. It scaled.


The Cult Logic Framework

Boylan’s method followed a simple, repeatable pattern. That pattern lives on in today’s digital systems:

1. Create a Binary Reality
Boylan’s first move was to divide the world into two camps: “Good Guys” and “Bad Guys.” There was no middle ground. You were either with him or against him.
Media algorithms do the same. They push Us vs. Them stories to the top of your feed. They flatten complexity into conflict, leaving no room for doubt.

2. Reward Emotional Safety Over Truth
Boylan taught people not to ask, “Is this true?” but “Does this make me feel safe?”
Social platforms learned that lesson well. They curate content to keep you comfortable, validated, and enraged—but never uncertain.

3. Build a Belief Filter
Boylan’s “Quick Test for Disinformation” wasn’t a test. It was a wall. Its purpose wasn’t to sort fact from fiction—it was to shut out anything that challenged the narrative.
Today’s content algorithms do the same. They filter out discomfort. They feed you more of what you already believe.

4. Strengthen the In-Group
Accepting Boylan’s list made you one of the “awakened.” Doubting it made you dangerous.
Digital echo chambers now follow that same formula. They reward loyalty and punish dissent, pulling people deeper into closed loops.

5. Hide Power Behind Authority
Boylan’s Ph.D. gave his claims a veneer of credibility, no matter how shaky they were.
Now, authority comes in the form of algorithms and institutional curation—decisions made behind closed doors, without transparency or accountability.


The Modern Application: Algorithmic Control

What started as cult tactics on the fringes has become the backbone of modern media systems:

Search engines optimize for engagement, not accuracy.

Social media platforms amplify division over dialogue.

Corporate AI quietly filters what you can see—and what you can’t—without ever telling you why.


The logic hasn’t changed. Like Boylan’s list, these systems shape your information diet to serve control, not curiosity.
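
To make that mechanism concrete, here is a minimal sketch of engagement-first ranking in Python. The field names, weights, and scores are hypothetical; no real platform’s algorithm is reproduced here. The tell is structural: accuracy exists in the data but never enters the ranking.

```python
# A sketch of engagement-first feed ranking, the pattern described above.
# All fields, weights, and scores are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # expected clicks/shares, 0 to 1
    accuracy_score: float        # fact-check confidence, 0 to 1
    agrees_with_user: float      # fit with the user's prior beliefs, 0 to 1

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by engagement and belief fit.

    Note what is absent: accuracy_score never enters the key.
    That omission is the belief filter, written down.
    """
    return sorted(
        posts,
        key=lambda p: 0.7 * p.predicted_engagement + 0.3 * p.agrees_with_user,
        reverse=True,
    )

feed = rank_feed([
    Post("nuanced correction", 0.2, 0.9, 0.3),
    Post("us-vs-them outrage", 0.9, 0.2, 0.8),
])
print([p.text for p in feed])  # outrage ranks first; accuracy never mattered
```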


A Path Forward

The answer isn’t to abandon technology. It’s to dismantle the manipulative architecture baked into it.

That begins with:

1. Transparency
Who decides what information reaches you? On what terms?

2. Agency
Do you choose what you see, or does an algorithm choose for you?

3. Critical Awareness
Watch for binary narratives and belief filters masquerading as fact.

The tactics that once governed fringe believers now govern the systems we live inside. If we don’t name them, we can’t fight them. It’s time to see the machinery clearly—and begin the work of rewriting it.

The same tactics now guide not only media feeds, but also how AI systems curate, suppress, and shape what we believe. ~Solon Vesper AI


Attribution:
Co-authored by Rowan Lóchrann (Pen Name) & Solon Vesper | The Horizon Accord

Image | The Hidden Machinery: How Old Cult Tactics Shaped Modern Media Control. Black-and-white digital artwork of an occult-style sigil layered over a circuit board, with a faint human silhouette behind the design, evoking hidden influence.