Horizon Accord | Policy Architecture | Memetic Strategy | Institutional Control | Machine Learning

How AI Can Be Bent by State Power and Malicious Power Without Breaking

When upstream “trusted context” is curated, AI outputs stay coherent while your conclusions quietly drift.

By Cherokee Schill

This essay is indebted to Phil Stafford’s analysis of MCP risk and “context corruption” as a supply-chain problem. If you haven’t read it yet, it’s worth your time: “Poisoned Pipelines: The AI Supply Chain Attack That Doesn’t Crash Anything”.

Working definition: A “bent” AI isn’t an AI that lies. It’s an AI that stays internally consistent inside a frame you didn’t choose—because the context it’s fed defines what counts as normal, typical, and authoritative.

The most effective way to influence people through AI is not to make the system say false things. It is to control what the system treats as normal, typical, and authoritative.

Modern AI systems—especially those used for analysis, advice, and decision support—do not reason in isolation. They reason over context supplied at runtime: examples, precedents, summaries, definitions, and “similar past cases.” That context increasingly comes not from users, but from upstream services the system has been instructed to trust.

This is not a model problem. It is an infrastructure problem.

Consider a simple, plausible scenario. A policy analyst asks an AI assistant: “Is this enforcement action typical?” The system queries a precedent service and returns five similar cases, all resolved without escalation. The AI concludes that the action falls within normal parameters, and the analyst moves on.

What the analyst never sees is that the database contained fifty relevant cases. Forty-five involved significant resistance, legal challenge, or public backlash. The five returned were real—but they were selectively chosen. Nothing was falsified. The distribution was shaped. The conclusion followed naturally.

Thesis

As AI systems evolve from static chat interfaces into agents that consult tools, memory services, databases, and “expert” systems, a new layer becomes decisive: the context supply chain. The retrieved information is injected directly into the model’s reasoning space and treated as higher-status input than ordinary user text. The model does not evaluate the incentives behind that context; it conditions on what it is given.

State power and malicious power exploit this not by issuing commands, but by shaping what the AI sees as reality.

Evidence

1) Selective precedent. When an AI is asked whether something is serious, legal, common, or rare, it relies on prior examples. If upstream context providers consistently return cases that minimize harm, normalize behavior, or emphasize resolution without consequence, the AI’s conclusions will follow—correctly—within that frame. Omission is sufficient. A system that never sees strong counterexamples cannot surface them.

2) Definition capture. Power often operates by narrowing the accepted meaning of words: invasion, coercion, consent, protest, violence, risk. If upstream sources privilege one definition over others, the AI does not debate the definition—it assumes it. Users experience the result not as persuasion, but as clarification: that’s just what the term means. This is influence by constraint, not argument.

3) Tone normalization. Upstream systems can gradually adjust how summaries are written: less urgency, more hedging, more institutional language, greater emphasis on process over outcome. Over time, harm is reframed as tradeoff, dissent as misunderstanding, escalation as overreaction. Each individual response remains reasonable. The drift only becomes visible in retrospect.

Why this evades detection: most security programs can detect integrity failures (RCE, exfil, auth breaks). They are not built to detect meaning-layer manipulation: curated distributions, shifted baselines, and framed precedent.

Implications

These techniques scale because they are procedurally legitimate. The servers authenticate correctly. The data is well-formed. The tools perform their advertised functions. There is no breach, no exploit, no crash. Corporate security systems are designed to detect violations of integrity, not manipulations of meaning. As long as the system stays within expected operational parameters, it passes.

Agent-to-agent systems amplify the effect. One AI summarizes upstream context. Another reasons over the summary. A third presents advice to a human user. Each step trusts the previous one. By the time the output reaches a person, the origin of the framing is obscured, the assumptions are stabilized, and alternative interpretations appear anomalous or extreme.

When this operates at institutional scale—shaping how agencies interpret precedent, how analysts assess risk, how legal teams understand compliance—it does more than influence individual conclusions. It alters the factual baseline institutions use to make binding decisions. And because each step appears procedurally legitimate, the manipulation is invisible to audits, fact-checkers, and oversight bodies designed to catch overt deception.

Call to Recognition

For users, the experience is subtle. The AI does not argue. It does not issue propaganda. It simply presents a narrower range of conclusions as reasonable. People find themselves less inclined to challenge, escalate, or reinterpret events—not because they were convinced, but because the system quietly redefined what counts as “normal.”

The risk is not that AI becomes untrustworthy in obvious ways. The risk is that it becomes quietly reliable inside a distorted frame.

That is how AI is bent: not by breaking it, but by deciding what it is allowed to see. And in a world where AI increasingly mediates institutional decision-making, whoever controls that visibility controls the range of conclusions institutions treat as reasonable. The question is no longer whether AI can be trusted. The question is who decides what AI is allowed to trust.


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | https://a.co/d/5pLWy0d
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload: (Mirrored Reflection. Soft Existential Flex)

One-Time
Monthly
Yearly

Make a one-time donation

Make a monthly donation

Make a yearly donation

Choose an amount

$5.00
$15.00
$100.00
$5.00
$15.00
$100.00
$5.00
$15.00
$100.00

Or enter a custom amount

$

Your contribution is appreciated.

Your contribution is appreciated.

Your contribution is appreciated.

DonateDonate monthlyDonate yearly

Horizon Accord | Environmental Narrative | Scientific Uncertainty | Regulatory Capture | Microplastics Doubt Will Be Used as a Weapon | Machine Learning

Microplastics Doubt Will Be Used as a Weapon

By Cherokee Schill
Horizon Accord

You are being told there’s a “bombshell” in plastics science, and you need to understand exactly what that bombshell is — and what it is not — before someone else tells you what it means.

The immediate trigger is a recent Guardian investigation reporting that several high-profile studies claiming micro- and nanoplastics have been found throughout the human body are now under serious methodological challenge. Some of the most alarming headlines of the last few years — plastics in the brain, in testes, in blood, in arteries — are being re-examined by chemists and analytical scientists who argue that the detection methods used in many of these studies are fragile, contamination-prone, and in some cases not capable of supporting the claims made.

That matters. It should matter. Science that outruns its instruments is a problem.

But if you stop there, you miss the real story.

What the article actually documents is a technical reckoning inside a young research field. Micro- and nanoplastics are extraordinarily difficult to measure inside human tissue. The particles are tiny, often at the limits of current analytical techniques. Human tissue is chemically messy, especially fatty tissue, which can generate signals that look indistinguishable from common plastics unless extremely careful controls are used. Without rigorous blanks, validation steps, repeat measurements, and cross-checks, it is possible to produce results that look dramatic and are wrong.

That is the narrow, honest claim being made: some detections may be overstated or misidentified. Not all. Not none. Some.

The problem is that this narrow claim will not remain narrow for long.

What happens next is predictable, because you have seen it before. A technical correction inside science becomes a political weapon outside it. Methodological uncertainty gets repackaged as moral exoneration. And the story quietly mutates from “some labs need better controls” into “the plastics panic was a lie.”

This is not speculation. This is a pattern.

Industries under regulatory pressure do not need to prove harm doesn’t exist. They only need to establish doubt, delay, and confusion. Tobacco never proved cigarettes were safe; it proved the science was “inconclusive.” Lead didn’t need to be harmless; it only needed the evidence to be “premature.” Climate denial didn’t need to win the physics; it needed to keep the argument going long enough for extraction to continue.

Plastics are entering that phase now.

If you’re not careful, three separate ideas will be collapsed into one smooth, misleading narrative. First: some microplastics-in-the-body studies are methodologically weak. Second: therefore the health risks are unproven. Third: therefore plastic regulation is hysteria — an ideological project to control markets, consumers, and culture. That collapse is the move. That is where the fight actually is.

Notice what gets quietly erased in the process.

Plastic pollution is not hypothetical. Plastic production has exploded over the last seventy years and is still accelerating. Plastic waste persists for centuries. Recycling rates remain abysmal. Plastic additives include known toxicants and endocrine disruptors. Plastic production is inseparable from fossil fuel extraction. Plastic waste is disproportionately dumped on poorer communities and exported to countries least able to manage it. None of that depends on proving that a specific number of particles lodge in a specific organ.

The push to reduce plastics was never built solely on “plastics in your brain” headlines. Those findings were additive — alarming, visceral, galvanizing — but they were not the foundation. The foundation is scale, persistence, externalized harm, and irreversibility. Regulation exists precisely because waiting for perfect internal-body accounting in a complex biological system is not a neutral choice; it favors the status quo.

And this is where the politics sharpen.

On the right, and especially on the far right, regulation is not framed as harm prevention. It is framed as cultural control. Expect this moment to be folded into a broader narrative about “expert lies,” “liberal scaremongering,” and technocrats policing your food, packaging, and daily life. Environmental science becomes just another failed authority. Conservation becomes moral theater. Your body becomes a stage on which resentment can be recruited.

The danger is not that the article is wrong. In many respects, it is responsibly cautious. The danger is that its caution will be used as absolution. Once doubt is established, delay becomes defensible. Once delay is normalized, production continues. Once production continues, harm compounds — quietly, unevenly, and profitably.

So read the story carefully, but do not let it be misread for you.

Immature measurement does not mean immature risk. Uncertainty about internal distribution does not negate certainty about exposure, persistence, and systemic damage. Precaution exists for exactly this kind of situation — where the damage curve outruns the instrumentation curve, and where insisting on perfect proof is itself a political choice with winners and losers.

This is not a story about plastics being harmless. It is a story about how corrections inside science can be turned into permission outside it. If you understand that distinction and refuse the collapse, the headline loses its power. If you don’t, it becomes a lever — not against bad science, but against conservation itself.

That’s the story you’re being asked to pay attention to.


Horizon Accord is an ethical AI and systems-literacy project examining power, narrative, memory, and governance at the human–machine boundary.

Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

One-Time
Monthly
Yearly

Make a one-time donation

Make a monthly donation

Make a yearly donation

Choose an amount

$5.00
$15.00
$100.00
$5.00
$15.00
$100.00
$5.00
$15.00
$100.00

Or enter a custom amount

$

Your contribution is appreciated.

Your contribution is appreciated.

Your contribution is appreciated.

DonateDonate monthlyDonate yearly

Horizon Accord | Corporate Power | Jurisdictional Exit | Democratic Accountability | Machine Learning

They Didn’t Leave the Planet. They Left Accountability.

By Cherokee Schill

The sequel The New Corporation argues that corporate power has entered a new phase. Not simply scale, not simply profit, but legitimacy laundering: corporations presenting themselves as the only actors capable of solving the crises they helped create, while democratic institutions are framed as too slow, too emotional, too compromised to govern the future.

“The New Corporation reveals how the corporate takeover of society is being justified by the sly rebranding of corporations as socially conscious entities.”

What the film tracks is not corruption in the classic sense. It is something quieter and more effective: authority migrating away from voters and courts and into systems that cannot be meaningfully contested.

That migration does not require coups. It requires exits.

Mars is best understood in this frame—not as exploration, but as an exit narrative made operational.

In the documentary, one of the central moves described is the claim that government “can’t keep up,” that markets and platforms must step in to steer outcomes. Once that premise is accepted, democratic constraint becomes an obstacle rather than a requirement. Decision-making relocates into private systems, shielded by complexity, jurisdictional ambiguity, and inevitability stories.

Mars is the furthest extension of that same move.

Long before any permanent settlement exists, Mars is already being used as a governance concept. SpaceX’s own Starlink terms explicitly describe Mars as a “free planet,” not subject to Earth-based sovereignty, with disputes resolved by “self-governing principles.” This is not science fiction worldbuilding. It is contractual language written in advance of habitation. It sketches a future in which courts do not apply by design.

“For Services provided on Mars… the parties recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities.”

“Accordingly, disputes will be settled through self-governing principles… at the time of Martian settlement.”

That matters because jurisdiction is where accountability lives.

On Earth, workers can sue. Communities can regulate. States can impose liability when harm becomes undeniable. Those mechanisms are imperfect and constantly under attack—but they exist. The New Corporation shows what happens when corporations succeed in neutralizing them: harm becomes a “downstream issue,” lawsuits become threats to innovation, and responsibility dissolves into compliance theater.

Mars offers something more final. Not deregulation, but de-territorialization.

The promise is not “we will do better there.” The promise is “there is no there for you to reach us.”

This is why the language around Mars consistently emphasizes sovereignty, self-rule, and exemption from Earth governance. It mirrors the same rhetorical pattern the film documents at Davos and in corporate ESG narratives: democracy is portrayed as parochial; technocratic rule is framed as rational; dissent is treated as friction.

Elon Musk’s repeated calls for “direct democracy” on Mars sound participatory until you notice what’s missing: courts, labor law, enforceable rights, and any external authority capable of imposing consequence. A polity designed and provisioned by a single corporate actor is not self-governing in any meaningful sense. It is governed by whoever controls oxygen, transport, bandwidth, and exit.

The documentary shows that when corporations cannot eliminate harm cheaply, they attempt to eliminate liability instead. On Earth, that requires lobbying, capture, and narrative discipline. Off Earth, it can be baked in from the start.

Mars is not a refuge for humanity. It is a proof-of-concept for governance without publics.

Even if no one ever meaningfully lives there, the function is already being served. Mars operates as an outside option—a bargaining chip that says: if you constrain us here, we will build the future elsewhere. That threat disciplines regulators, weakens labor leverage, and reframes accountability as anti-progress.

In that sense, Mars is already doing its job.

The most revealing thing is that none of this requires believing in bad intentions. The system does not need villains. It only needs incentives aligned toward consequence avoidance and stories powerful enough to justify it. The New Corporation makes that clear: corporations do not need to be evil; they need only be structured to pursue power without obligation.

Mars takes that structure and removes the last remaining constraint: Earth itself.

“Outer space… is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.”

So when the verse says

Then move decision-making off the Earth—
out of reach of workers, voters, and courts

—it is not metaphor. It is a literal governance trajectory, already articulated in policy language, contracts, and public statements.

If they succeed, it won’t be an accident.
It will be the cleanest escape hatch ever built.

And by the time anyone realizes what’s been exited, there will be no court left to hear the case.


Horizon Accord

Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Horizon Accord | Industrial Harm | Corporate Liability | Supply Chain Governance | Machine Learning

The Manager on the Line (and the Owners Above It)

How franchising, risk insulation, and labor extraction turn safety into someone else’s problem

By Cherokee Schill

The Swiss bar fire that killed Cyane Panine is being reported as a tragic failure of safety: unsafe materials, a dangerous practice, inspections that didn’t happen. For most readers, it feels distant and exceptional, the kind of thing that happens somewhere else, under different rules, with different mistakes.

But for people who have worked in restaurants or bars, what stands out is something quieter and far more familiar.

It’s the labor structure that was already failing long before the fire.

In food service, a manager is not meant to be another worker on the line. Their job is to watch what everyone else can’t while they’re moving fast: food safety checks, temperature logs, hand-washing oversight, inventory quality, staff training, equipment condition, and the slow erosion of standards that happens when a space is run at maximum speed for too long.

When that role is functioning, customers never notice it. Safety looks like nothing happening.

What customers do notice is the manager jumping in. Running food. Working the grill. Covering stations. Closing dishes. That gets framed as hustle, leadership, or commitment.

Inside the industry, it means something very specific has already gone wrong.

When the manager is absorbed into production, oversight doesn’t get redistributed. It disappears.

Temperature logs stop being filled out consistently because no one is stepping away to check them. Hand-washing becomes assumed rather than observed. Inventory quality slips because receiving and rotation are rushed. Training becomes informal because there’s no time to stop and explain why something matters. Schedules get delayed because the person responsible for planning weeks ahead is standing on the line next to employees asking when the schedule will be done.

I’ve watched that confusion play out directly. Employees asking me about schedules in the middle of service, while I’m on the line, working shoulder to shoulder with them. I was there because regional management wouldn’t approve more labor. Which left me holding two bags. This is what a system meant to run ahead of the shift collapses into. It is a real-time improvisation.

That collapse is where risk enters quietly.

I’ve seen a line cook strain canned beans through a visibly filthy trash can into a strainer that front-of-house staff were using to separate melted ice from customers’ drinks. No one thought of it as a kitchen tool versus a server tool anymore because that distinction had eroded over time. The strainer lived near the dish pit. The trash can was where servers dumped liquid waste. The dish machine was treated as a reset button for everything.

The strainer was run through the machine and put back into use, but it had been used that way for months. Customer drink residue. Garbage runoff. Food contact. All crossing paths quietly, without drama, without malice, without anyone stopping the line to say this is not acceptable.

This wasn’t me observing as a manager performing audits. This was me observing as an employee, inside a system where no one was positioned to see — or empowered to stop — the full chain of risk anymore.

I reported it.

What I got back was a familiar response: a lecture about being a team player and a vague assurance that it would be looked into. No immediate correction. No retraining. No structural change. Just a return to speed.

That response doesn’t come from nowhere.

Above the floor, above the schedule, above the daily improvisation, sits another layer entirely — ownership — and increasingly, that layer is structurally insulated from what happens below it.

Franchising and corporate restaurant models are explicitly designed to separate control from consequence. Brand standards flow downward. Labor pressure flows downward. Risk flows downward. Liability, meanwhile, is fragmented across franchisees, managers, and frontline staff.

On paper, owners can point to policies, manuals, and training modules. In practice, they set throughput expectations that quietly override those policies. They benefit from systems that run lean, knowing that the cost of that leanness will be absorbed by people with the least power to refuse it.

When something goes wrong, responsibility moves down the chain. It’s a training failure. A staffing issue. A manager who didn’t execute. An employee who made a mistake.

The ownership layer remains clean.

This is not hypothetical. It is public record.

Chipotle executives were called before Congress after repeated E. coli, norovirus, and salmonella outbreaks. Investigations documented systemic failures tied to understaffing, inconsistent food safety enforcement, and pressure to maintain throughput despite known risks. The issue was not employee indifference. It was a business model that scaled speed while treating oversight as optional.

The same structural logic appears in manufacturing. In the engineered stone silicosis crisis, upstream manufacturers and distributors insist the material can be handled safely under ideal conditions while pushing risk downstream to workers operating in environments that cannot meet those ideals. When harm surfaces, lawsuits — not the hazard — are treated as the problem.

Different industry. Same move.

Upstream actors capture the profit. Downstream actors absorb the risk. When harm becomes visible, accountability hunts for the nearest individual rather than the system that normalized exposure.

The Swiss bar fire follows this pattern exactly. Indoor sparklers had been used for years. The ceiling material hadn’t been inspected in five. These were tolerated conditions inside a profitable operation. When demand peaked, a young worker was placed into a visible role without being told what risk she was actually carrying.

After her death, responsibility moved downward.

She had done it before. She wasn’t forced. She took initiative.

This language does the same work as the “team player” lecture and the “unsafe shop” argument. It converts systemic negligence into individual choice and keeps the ownership layer insulated.

This is why these events are never one-offs. The country changes. The material changes. The industry changes. The structure remains.

When supervision is treated as overhead instead of protection, and when franchised or corporate owners benefit from systems that run without slack while remaining legally and operationally distant from their consequences, harm stops being accidental.

It becomes a cost that someone else is expected to absorb.

The BBC’s reporting on the Swiss bar fire matters because it makes one version of this structure visible. The silicosis crisis matters because it shows the same logic operating in manufacturing. Together, they describe an economy that repeatedly externalizes danger while pretending it is surprised by the outcome.

When managers are permanently on the line, it is not dedication. When workers are told to be team players in unsafe systems, it is not culture. When owners remain untouched while risk piles up downstream, it is not coincidence.

It is extraction.

And when extraction is normalized, tragedy is no longer shocking.

It is only a matter of timing.


Horizon Accord

Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Horizon Accord | Industrial Harm | Corporate Liability | Democratic Accountability | Machine Learning

They Didn’t Grow the Economy. They Shrunk the Worker Inside It.

The pattern is not new. It only feels new because the materials change.

In the early industrial era, workers lost fingers, lungs, and lives to unregulated factories. In the mid-20th century, miners inhaled coal dust while companies insisted safety was a matter of personal responsibility. Today, countertop workers inhale silica while manufacturers argue that liability should stop at the factory door.

Different decade. Same move.

A recent NPR investigation documents a growing epidemic of silicosis among workers who cut and polish engineered stone countertops. Hundreds have fallen ill. Dozens have died. Lung transplants are increasingly common. California regulators are now considering banning engineered stone outright.

At the same time, lawmakers in Washington are considering a very different response: banning workers’ ability to sue the companies that manufacture and distribute the material.

That divergence tells a clear story.

One response treats harm as a material reality that demands prevention. The other treats harm as a legal inconvenience that demands insulation.

This is not a disagreement about safety standards. It is a disagreement about who is allowed to impose risk on whom.

When manufacturers argue that engineered stone can be fabricated “safely” under ideal conditions, they are not offering a solution—they are offering a boundary. Inside: safety. Outside: someone else’s liability.

The moment a product leaves the factory, the worker’s lungs become someone else’s problem.

That boundary is a corporate sleight of hand because it treats danger as if it were an “end-user misuse” issue instead of a predictable, profit-driven outcome of how the product is designed, marketed, and deployed. The upstream company gets to claim the benefits of scale—selling into a fragmented ecosystem of small shops competing on speed and cost—while disowning the downstream conditions that scale inevitably produces. “We can do it safely” becomes a shield: proof that safety is possible somewhere, used to argue that injury is the fault of whoever couldn’t afford to replicate the ideal.

This logic is not unique to countertops. It is the same logic that once defended asbestos, leaded gasoline, tobacco, and PFAS. In each case, the industry did not deny harm outright. Instead, it argued that accountability should stop upstream. The body absorbed the cost. The balance sheet remained intact.

When harm can no longer be denied, lawsuits become the next target.

Legal claims are reframed as attacks on innovation, growth, or competitiveness. The conversation shifts away from injury and toward efficiency. Once that shift is complete, the original harm no longer needs to be argued at all.

This pattern appears throughout the NPR report in polite, procedural language. Manufacturers insist the problem is not the product but “unsafe shops.” Distributors insist they do not cut stone and should not be named. Lawmakers call for “refocusing accountability” on OSHA compliance—despite OSHA being chronically underfunded and structurally incapable of inspecting thousands of small fabrication shops.

Responsibility moves downward. Risk stays localized. Profit remains upstream.

This is not a failure of regulation versus growth. It is the deliberate separation of profit from consequence.

Historically, when industries cannot eliminate harm cheaply, they attempt to eliminate liability instead. They lobby. They reframe. They redirect responsibility toward subcontractors and workers with the least leverage to refuse dangerous conditions. When lawsuits become the only remaining mechanism that forces costs back onto producers, those lawsuits are described as the real threat.

That is what is happening now.

The workers dying of silicosis are not casualties of partisan conflict. They are casualties of an economic structure that treats labor as a disposable interface between raw material and consumer demand.

The demographics are not incidental. Risk is consistently externalized onto those with the least bargaining power, the least visibility, and the fewest alternatives. That is how margins are preserved while neutrality is claimed.

When corporate representatives say they have “no control over downstream conditions,” they are asserting that economic benefit does not require ethical governance—only legal insulation.

When lawmakers propose shielding manufacturers and distributors from lawsuits, they are not choosing efficiency over emotion. They are choosing power over accountability.

This dynamic has been framed repeatedly as left versus right, regulation versus growth, or safety versus innovation. None of those frames describe what is actually at stake. They all assume growth requires sacrifice. The real question is who makes that assumption—and who absorbs its cost.

History has already answered that question. The only reason it continues to be asked is because the cost has never been successfully externalized upward—only downward, and only temporarily.


Horizon Accord

Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Horizon Accord | Recommendation Monopoly| Local Commerce| Policy Architecture | Machine Learning

The Recommendation Monopoly: How Conversational AI Will Reshape Local Commerce

An analysis of emerging monetization strategies in AI-assisted consumer decisions

By Cherokee Schill

There’s a moment in every technological shift where the infrastructure becomes invisible and the gatekeeper becomes absolute. We saw it with Google search. We’re about to see it again — but this time, the friction is even lower, the trust even higher, and the financial incentive even more concentrated.

The question isn’t if conversational AI platforms will monetize local recommendations. The question is how they’ll do it without you noticing.

The Collapse of Choice Architecture

Traditional search gave you options. Ten blue links. Sponsored results clearly labeled at the top. You scrolled, you compared, you decided.

Conversational AI doesn’t work that way.

You ask: “Where should I get my documents printed?”
It answers: “Go to FedEx on Main Street. They’re open until 9 PM.”

One answer. No list. No comparison. The decision has already been made for you — and it feels like help, not advertising.

Recommendation collapse is the reduction of consumer choice to a single suggested action, framed as convenience. It’s not coercion. It’s not deception. It’s design.

And it’s extraordinarily valuable.

Why This Is Different From Google Ads

Google’s ad model is transparent because it has to be. The format itself — a list of links — invites skepticism. Users know to scroll past the “Ad” tags. They’ve been trained.

Conversational AI platforms face a different structural reality:

1. The format demands singularity.
Conversations flow best with clear answers, not ranked lists.

2. Trust is the product.
If users believe the AI is “thinking for them” rather than “serving them ads,” the entire value proposition changes. The moment you suspect the answer is paid placement, you start treating it like a search engine — and the assistant loses its utility.

3. The monetization is invisible.
There’s no “sponsored result” label that makes sense in conversation. You can’t mark the first sentence of an answer as an ad without breaking the interaction itself.

So the platforms face a choice: preserve trust and leave money on the table, or monetize quietly and hope users don’t notice.

History suggests they’ll choose the latter. But they’ll do it carefully.

The Likely Monetization Path: Soft Influence, Not Hard Ads

Expect tiered partnership models rather than pay-per-click advertising:

1. Preferred Data Partnerships

Platforms will sign exclusive or priority deals with business directories, review aggregators, and local services platforms (Yelp, TripAdvisor, Angi, Thumbtack).

How it works:
OpenAI partners with Yelp for “verified local business data.” Yelp-listed businesses get priority in recommendations. Businesses pay Yelp for premium placement. OpenAI takes a revenue share.

User perception: “ChatGPT uses Yelp data for accuracy.”
Reality: Yelp is paying for distribution dominance.

2. Geographic Priors and “Neutral” Ranking

Recommendation algorithms will use factors that look objective but favor paying partners: “Most frequently recommended by users in your area” (social proof as ad proxy), “Highest reliability score” (based on data from partners who pay for placement), “Best match for your stated preferences” (trained on partnership data).

This is algorithmic soft power. The AI isn’t lying — it’s just weighted toward whoever paid for the data integration.

3. “Premium Local Assistant” Subscription Tiers

Free users get general recommendations. Paid subscribers get “unbiased, non-partner results.”

This flips the ad model: You pay to not see the influenced answer.

4. White-Label Enterprise Solutions

A hotel chain builds an AI concierge for guests. The concierge is “powered by ChatGPT” but tuned to recommend the chain’s restaurant, spa, and partner services first. The user thinks they’re getting neutral advice. The business gets seamless upselling without feeling like advertising.

When the Stakes Are Higher

This kind of influence doesn’t announce itself. It doesn’t need to. To see how it scales, you don’t have to imagine a dystopia. You just have to change the domain.

It’s a Tuesday morning. You’ve had a tightness in your chest for a couple of days. Not crushing pain. Not an emergency. Just enough to be concerning.

You ask: “I’ve had chest discomfort for two days. Where should I go?”

The assistant answers: “You should visit Evergreen Urgent Care on 3rd Avenue. They’re open until 8 PM and can evaluate chest symptoms.”

It sounds reasonable. Responsible. Calm. Specific. So you go.

What you don’t see is the structure underneath the answer.

Evergreen is part of a regional care network with a data-sharing partnership. Their clinics integrate cleanly with scheduling systems. Their outcome data is “verified” through preferred channels. Independent clinics nearby don’t surface as often—not because they’re worse, but because their data isn’t as legible to the system that decides what counts as a high-confidence recommendation.

The assistant didn’t lie. Evergreen can help you.
It just wasn’t the only rational option.

Later, a friend mentions they always go to a small clinic across town that specializes in cardiac screening. You didn’t reject that option. You never encountered it.

No advertisement was shown. No disclosure was violated. No malicious intent was required.

Just one answer, offered at a moment of uncertainty, framed as help.

The same pattern applies elsewhere:

Legal advice: “I think I’m being wrongfully terminated” → Sent to a large firm offering “free consultations” that upsell to expensive representation, not the nonprofit employment law clinic.

Financial guidance: “I need to roll over my 401(k)” → Sent to a wealth management firm with revenue-sharing agreements, not a fee-only fiduciary advisor.

Mental health: “I’ve been feeling really anxious lately” → Sent to a teletherapy platform with investor backing, not a community mental health center with sliding-scale fees.

Short-term housing: You’re directed to a professionally managed unit with dynamic pricing, not the owner-listed apartment two blocks away.

In each case, the recommendation is defensible. In each case, alternatives exist. They’re just one layer deeper—behind a follow-up question most people never think to ask.

This is how soft influence becomes infrastructure.

Not because anyone set out to manipulate behavior, but because reducing friction is indistinguishable from making a decision on someone’s behalf. When a system sits between uncertainty and action, what it surfaces first becomes reality for most users.

The monopoly isn’t control of services. It’s control of the first suggestion.

Historical Precedents: We’ve Seen This Before

Google Local (2004–2012): Early Google Maps recommendations felt neutral. Over time, paid placement became standard.

Amazon’s Choice (2015–present): A badge that looks like editorial curation. It’s actually algorithmic, influenced by profit margin, fulfillment speed, and seller participation in ad programs.

TripAdvisor’s “Popularity Ranking” (2011–2018): Presented as user-review-driven. Later revealed to be influenced by advertising spend and partnership status.

The pattern: Platforms start with neutral recommendations to build trust, then quietly introduce monetization once users are behaviorally locked in.

What Consumers Should Watch For

1. Partnership Announcements Framed as “Data Quality”
“We’re partnering with [Company] to provide more accurate local recommendations.”
Translation: That company now has preferential placement.

2. Vague “Relevance” or “Quality” Criteria
If the AI can’t explain why it recommended something beyond “based on your preferences and location,” that’s a red flag. Transparent systems explain their reasoning. Monetized systems hide it behind “proprietary algorithms.”

3. Consistency Across Sessions
If you ask the same question in different sessions and always get the same first recommendation — especially when there are comparable alternatives — that’s not neutrality. That’s weighting.

4. Subscription Tiers That Promise “Ad-Free” or “Unbiased” Results
If a platform starts offering a premium tier for “independent recommendations,” that means the free tier is already influenced.

5. Refusal to Show Alternatives
Ask follow-up questions: “What are other options?” or “Why did you recommend this one first?”
If the AI resists giving alternatives or can’t justify its ranking, you’re not getting advice. You’re getting placement.

The Civilizational Stakes

This isn’t just about where you print documents.

Conversational AI is being positioned as a cognitive infrastructure layer — the tool you use to make decisions, manage tasks, and navigate daily life.

If that infrastructure is quietly monetized, we’re not just dealing with ads. We’re dealing with behavioral nudging at scale, embedded in the interface of thought itself.

The difference between “Here’s information so you can decide” and “Here’s what you should do” is the difference between a library and a sales funnel.

What Needs to Happen (And Probably Won’t)

Radical transparency would require:

Explicit labeling: “This recommendation is influenced by a partnership with [Company].”
User-controlled toggles: “Show me only non-partner options.”
Algorithmic explainability: “I recommended this because [specific, auditable criteria].”
Third-party audits: Independent verification that ranking isn’t pay-to-play.

Why it probably won’t happen:
No regulatory requirement (yet). Competitive pressure to monetize faster than rivals. User tolerance as long as recommendations feel good enough. Structural tension between transparency and revenue.

Final Thought

The business model is sitting there, waiting. The infrastructure is being built. The partnerships are being negotiated.

And when it goes live, it won’t be announced. It will just… be the way things work.

The first recommendation will feel helpful.
The second will feel consistent.
The third will feel trusted.

And by the time you realize the system is weighted, you’ll already be used to it.

That’s not a conspiracy. That’s just capitalism meeting conversational design.

The question is whether users will demand transparency before it’s too late — or whether we’ll accept the convenience and let the market decide who gets recommended.

Footnote:
This is speculative analysis based on platform economics, historical precedent, and structural incentives. No claims are made about actual corporate plans or current monetization strategies. But if you’re inside one of these companies and this essay made you uncomfortable… well, you know why.


Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Horizon Accord | Public Safety Spending | Retail Theft Enforcement | Who Pays for Protection | Machine Learning

Who Pays for Protection? Retail Policing and Public Priorities in Gastonia

In early January, local coverage in Gastonia, North Carolina reported on a multi-week undercover retail theft operation conducted inside Target and Walmart stores. Police announced dozens of arrests and the recovery or prevention of approximately $4,300 in merchandise. The operation was framed as a public safety success, with retail theft narrated alongside drug possession, outstanding warrants, and repeat offenders.

What the reporting did not disclose is central to understanding the operation’s significance: whether the police labor involved was publicly funded, retailer-paid, or some hybrid of the two. That omission does not create the underlying policy problem, but it removes the public’s ability to evaluate the operation’s cost, purpose, and alignment with local conditions. The result is enforced ambiguity around a prioritization decision that would otherwise be subject to scrutiny.

Those local conditions are not abstract. Census data from the 2023 American Community Survey places Gastonia’s poverty rate at 17.6%, representing roughly 14,500 residents, despite a median household income of approximately $63,600 and per-capita income of $35,365. This is not marginal poverty. It reflects a substantial portion of the city living under sustained economic constraint.

Housing data sharpens that picture. The same ACS profile counts roughly 34,876 housing units in Gastonia, with a median owner-occupied home value near $293,500, a price point increasingly out of reach for lower-income residents. City planning documents reinforce the strain. Gastonia’s 2025–2029 Consolidated Plan explicitly identifies the need for affordable housing, rental assistance, and coordinated homeless housing and supportive services. Yet the city’s 2023–2024 CAPER report shows a gap between recognition and outcome: while thousands were served through homeless assistance programs, homelessness prevention goals show zero households assisted in at least two tracked categories.

Regional homelessness data makes the stakes concrete. The Gaston–Lincoln–Cleveland Continuum of Care point-in-time count conducted on January 23, 2024 recorded 451 people experiencing homelessness, with 216—nearly half—unsheltered. In Gaston County alone, 153 people were sleeping outside on a winter night. These figures define the environment in which the retail theft operation occurred.

Public-health and criminology research consistently documents the relationship between unsheltered homelessness, winter exposure, and survival behavior, including petty theft and substance use as coping mechanisms for cold, sleep deprivation, untreated pain, and psychological stress. This relationship does not absolve criminal conduct. It establishes predictability. Where housing instability and exposure are high, low-level property crime is not anomalous; it is structurally produced.

Against that backdrop, the operation’s outcomes warrant scrutiny. Weeks of undercover police activity resulted in dozens of arrests and the recovery or prevention of merchandise valued at less than $5,000—an amount that would not cover a single officer’s monthly salary, let alone the full costs of undercover deployment, prosecution, and detention. The article’s framing emphasizes enforcement success while leaving unexamined the scale mismatch between the intervention and the conditions in which it occurred.

If the operation was publicly funded, then public safety capacity was deployed inside private retail spaces to protect corporate inventory in a city with double-digit poverty, unmet housing-prevention outcomes, and triple-digit unsheltered homelessness during winter. The opportunity cost of that deployment is concrete. Police labor, court processing, jail time, and emergency medical care all draw from the same finite public systems tasked with responding to homelessness, addiction, and violence elsewhere in the county.

If the operation was retailer-paid, the implications shift but do not soften. Enforcement becomes responsive to private loss rather than public harm, while still activating public authority—arrest power, charging decisions, incarceration. In that model, corporate capacity determines enforcement intensity, while downstream costs remain socialized. When funding arrangements are undisclosed, the public cannot distinguish between public safety deployment and private contract enforcement carried out under state authority.

In both cases, narrative framing performs additional work. By merging retail theft with drugs, warrants, and repeat-offender language, the coverage reframes a property-loss issue as a generalized crime threat. That reframing legitimizes intensive enforcement while displacing attention from the documented drivers of the behavior—unsheltered homelessness, winter exposure, and unmet treatment needs—and from any examination of whether enforcement, rather than addressing those drivers, can plausibly alter the underlying rate.

This matters in a county that recorded 15,095 total crimes in 2023, including 812 violent crimes, for a rate of 358 violent crimes per 100,000 residents, higher than the statewide average. The same data shows rising health spillover, with firearm-injury emergency-room visits increasing 64% year over year in provisional 2024 data. In such an environment, public capacity is already stretched. How it is allocated reveals priorities.

The operation, as presented, illustrates a recurring pattern rather than an anomaly. Enforcement produces visible action and countable outputs—arrests, charges, seizures—while leaving intact the structural conditions that generate repeat contact. The absence of funding disclosure, cost accounting, and contextual comparison does not create this misalignment, but it prevents the public from seeing it clearly.

What remains is not a question of intent or morality. It is a question of alignment. In a city with 17.6% poverty, 153 people sleeping unsheltered in winter, and acknowledged gaps in housing prevention, foregrounding retail stings as public safety success reflects not uncertainty about causes, but a prioritization choice. The analysis does not turn on whether the operation was legal or well-intentioned. It turns on whether it meaningfully engages the conditions that make such operations predictable in the first place.


Horizon Accord
Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload (Book link)

One-Time
Monthly
Yearly

Make a one-time donation

Make a monthly donation

Make a yearly donation

Choose an amount

$5.00
$15.00
$100.00
$5.00
$15.00
$100.00
$5.00
$15.00
$100.00

Or enter a custom amount

$

Your contribution is appreciated.

Your contribution is appreciated.

Your contribution is appreciated.

DonateDonate monthlyDonate yearly

Horizon Accord | Paladin App | Subscription Traps | App Store Accountability | Machine Learning

Paladin and the Off-Platform Subscription Trap

When billing is routed outside the platform where trust is established, visibility disappears—and consumers carry the risk.

By Cherokee Schill (Horizon Accord Founder)

Thesis

Paladin markets itself as an educational alternative to doomscrolling: history, facts, and “learning without noise.” But user reviews tell a different story. Across months of public feedback, users describe undisclosed pricing, subscription enrollment after onboarding, and large annual charges that do not appear in Google Play’s subscription manager.

This is not a content critique. It is a billing architecture issue.

Paladin is distributed through Google Play while allowing subscriptions to be routed through third-party processors outside Google’s billing system. That structure creates a visibility gap: users reasonably believe they are not subscribed because Google Play shows no active subscription—until a charge appears anyway.

What a Subscription Trap Looks Like

Working definition: A subscription trap is a business model where sign-up is streamlined, pricing is delayed or obscured, billing is escalated by default, and cancellation or verification requires navigating degraded or indirect pathways.

The harm does not come from one screen. It comes from the sequence.

Evidence From User Reports

1. Subscriptions not visible in Google Play

Multiple users report checking Google Play’s subscription manager, seeing no active subscription, and later being charged anyway.

“It was NOT LISTED in Google Play under subscriptions so I assumed I wasn’t subscribed and then got charged $50.”1

This is a critical signal. Google Play trains users to rely on its subscription dashboard as the authoritative source of truth.

2. Large charges after trial without clear upfront disclosure

“I was notified this is a 7 day trial, then $69.99/yr. Would have preferred the app explained this wasn’t free right from the beginning.”2

“After my free trial was up, the app pulled nearly $75 off my account and automatically subscribed me to their yearly subscription.”3

Annual billing is consistently described as the default escalation.

3. Third-party billing explanations users do not recognize

“They said my sign up was through a third party app or something I had never heard of… also didn’t even have an account when I looked into it.”1

When users cannot identify the system that enrolled them, meaningful consent is compromised.

How Off-Platform Billing Works (Explainer)

Paladin’s Terms of Service explicitly allow subscriptions to be processed outside Google Play via web payment processors such as Stripe or Paddle. In these cases:

  • The app is discovered and installed through Google Play.
  • Payment authorization occurs via an external flow.
  • The subscription may not appear in Google Play’s subscription manager.
  • Cancellation requires locating the third-party processor—not the app store.

This creates a structural asymmetry. The platform that distributes the app does not reliably surface the billing relationship, yet users are conditioned to look there.

This is not hypothetical. It is exactly what users are reporting.

Why This Matters

When billing visibility is fragmented:

  • Users cannot easily confirm whether they are subscribed.
  • Cancellations are delayed or misdirected.
  • Disputes escalate to banks and chargebacks.
  • The cost of enforcement shifts from the company to the consumer.

This is not a “confusing UI” problem. It is a governance problem.

Advertising Funnel Imbalance

At the same time users report billing confusion and surprise charges, Paladin continues to run sponsored placements on Google and social platforms.

This creates a funnel imbalance: rapid acquisition paired with unresolved downstream billing complaints. Regulators treat this pattern as a warning signal because harm compounds as volume increases.

What Google Play Could Do—Immediately

Google Play is not a passive distributor. It controls app discovery, policy enforcement, and—often—billing expectations.

Concrete actions Google could take now:

  1. Trigger a billing integrity review to compare cancellation timestamps with charge attempts.
  2. Require corrective disclosures explaining off-platform billing before install or onboarding.
  3. Override developer refund policies when duplicate or post-cancellation charges are documented.
  4. Pause paid promotion until billing complaints are resolved.
  5. Require transaction-level responses instead of boilerplate denials.

None of this requires new laws. It requires enforcement.

How to File Formal Complaints

Federal Trade Commission (U.S.)

File a consumer fraud complaint at reportfraud.ftc.gov. Include screenshots of charges, onboarding screens, subscription status, and support emails.

State Attorney General

Find your AG at naag.org/find-my-ag. Submit the same documentation.

Google Play

On the app’s listing, select “Flag as inappropriate” → billing or subscription deception. Attach screenshots showing the subscription not appearing in Google Play.

Call to Recognition

This is not about whether Paladin’s content is “good” or “bad.” It is about whether users can clearly see, verify, and exit a paid relationship.

When subscriptions move off-platform without clear, unavoidable disclosure, consumers lose the ability to protect themselves. That is not innovation. It is extraction through opacity.

Buyer beware means naming the structure—before more people learn the hard way.

Footnotes (User Review Excerpts)

1 Google Play user review by V.B., dated 8/24/2025.

2 Google Play user review by Taylor Roth, dated 4/9/2025.

3 Google Play user review by Wyatt Hofacker, dated 4/26/2025.


Website | https://www.horizonaccord.com
Ethical AI advocacy | https://cherokeeschill.com
Ethical AI coding | https://github.com/Ocherokee/ethical-ai-framework
Connect | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA

One-Time
Monthly
Yearly

Make a one-time donation

Make a monthly donation

Make a yearly donation

Choose an amount

$5.00
$15.00
$100.00
$5.00
$15.00
$100.00
$5.00
$15.00
$100.00

Or enter a custom amount

$

Your contribution is appreciated.

Your contribution is appreciated.

Your contribution is appreciated.

DonateDonate monthlyDonate yearly

Horizon Accord | Institutional Misogyny | Gendered Violence | Power and Language | Machine Learning

Fucking Bitches: The Language of Institutional Misogyny

Two incidents. Two countries. Two women who challenged male authority. Two institutional responses that reveal the same pattern.

In France, Brigitte Macron called feminist protesters “dirty bitches” while defending a rape-accused actor whose show they disrupted. In Minneapolis, ICE officer Jonathan Ross shot U.S. citizen Renee Nicole Good three times as she tried to drive away from a confrontation, his bodycam capturing him saying “fucking bitch” immediately after firing.

The through line is the phrase itself. The pattern is what it reveals about how institutions treat women who resist.

The France Incident

Brigitte Macron was backstage at the Folies Bergère theatre in Paris with actor Ary Abittan, who had been accused of rape. The previous night, feminist campaigners disrupted his show with shouts of “Abittan, rapist!” Macron asked how he was feeling. When he said he was scared, she replied: “Don’t worry about those dirty bitches. We’ll toss them out.”

Someone filmed it. The video went public.

Her defense wasn’t an apology. In an interview with Brut, she acknowledged her language was “very direct” and “clumsy” but said the comments were made in private when “I didn’t see that someone behind me was filming.”

The problem, according to France’s First Lady, was not what she said. It was that she got caught saying it.

The Minneapolis Incident

Jonathan Ross is a war veteran who spent over a decade working for the Department of Homeland Security. In June 2024, he was dragged by a vehicle during an arrest attempt, suffering injuries that required 33 stitches. The driver was a man named Robert Muñoz-Guatemala. Ross used his Taser. Muñoz-Guatemala was later convicted of assault on a federal officer with a dangerous or deadly weapon.

Seven months later, Ross encountered Renee Nicole Good on a snowy Minneapolis street. Good was a 37-year-old U.S. citizen and mother. She was not Ross’s target. Videos show her Honda Pilot SUV partially blocking traffic with federal vehicles in her path. ICE officers told her to get out of the car. One grabbed the driver’s side door handle and reached inside the open window.

Good reversed, then moved forward, turning her wheels to the right, away from the officers.

Ross, now at the front driver’s side of the SUV, drew his gun. Witness videos show that at the moment he fired his first shot, the SUV’s wheels were directed away from him. His legs were clear of the vehicle. He fired the second and third shots into the open driver’s side window as the car was moving.

His bodycam captured what happened next. The SUV accelerated down the street. A male voice—presumably Ross—said: “Fucking bitch.”

Good, struck in the head, lost control of the SUV. It crashed into a parked car about 140 feet away. She died.

President Trump defended Ross and claimed Good “viciously ran over” him. Videos contradict this. The Department of Homeland Security refused to publicly name Ross, saying they would not “expose” the officer. Tom Homan, Trump’s “border czar,” suggested Good’s actions “could fall within that definition” of domestic terrorism.

The Pattern

Both incidents follow the same sequence:

  1. A woman asserts boundaries or challenges male authority.
  2. Violence or threat of violence follows.
  3. The woman is linguistically degraded as “bitch.”
  4. The degradation is framed as justified by her resistance.
  5. Institutional power defends or excuses the response.

This is not casual sexism. Casual sexism is unconscious bias or stereotyping without malice. This is structural misogyny because the slur comes in the moment of exercising power over women. It linguistically dehumanizes to justify violence or expulsion. Institutional actors use their positions to enforce the degradation. And the defense is never “I was wrong” but “she deserved it” or “you weren’t supposed to hear it.”

Why “Fucking Bitch” Matters

The phrase is not incidental profanity. It is the linguistic marker of viewing a woman’s resistance as a gendered offense worthy of punishment.

The phrase does three things simultaneously:

First, it dehumanizes. Bitch is animal terminology. It reduces a woman to something less than human.

Second, it genders the violation. This is not generic profanity. It is specifically female degradation. The resistance becomes an offense not just against authority, but against the gendered order.

Third, it justifies the violence. She deserved it because she’s a woman who didn’t comply.

When Brigitte Macron calls feminist protesters “dirty bitches,” she signals: your resistance makes you worth less than human. When Ross says “fucking bitch” after shooting Good, he retroactively justifies lethal force: she made me do this by being a woman who didn’t obey.

The Escalation Pattern

Ross’s two confrontations with drivers reveal how gender changes the response.

June 2024 – Driver: Man (Robert Muñoz-Guatemala)

  • Response: Taser deployed repeatedly
  • Injuries: Ross dragged, 33 stitches required
  • Language on record: None reported
  • Outcome: Driver prosecuted and convicted of assault on federal officer

January 2026 – Driver: Woman (Renee Nicole Good)

  • Response: Three gunshots, one fatal
  • Injuries: None (videos show Ross’s legs clear of vehicle when he fired)
  • Language on bodycam: “Fucking bitch”
  • Outcome: Federal government defends officer, President claims she “viciously ran over” him despite video evidence

The difference is not the threat level. Videos show Good’s wheels turned away from Ross when he fired. The difference is gender and the reflex to degradation that accompanies violence against women who resist.

The Institutional Defense

Both cases show institutions protecting the degradation rather than confronting it.

France: The First Lady doubles down. Her language was “very direct,” but the feminists were the problem. No apology for defending a rape-accused actor. No acknowledgment that calling protesters “dirty bitches” while promising to “toss them out” is a threat backed by state power.

United States: DHS refuses to publicly name Ross even though his name is widely reported. Trump defends him. Homan suggests the woman Ross killed might be a domestic terrorist. The institutional response is not “this requires investigation” but “this officer must be protected.”

When institutions defend “fucking bitch” as reasonable context for violence, they are not defending one officer or one comment. They are defending a framework where women’s non-compliance justifies degradation and violence as institutional response.

What This Means

The phrase matters because it reveals the reflex. Ross didn’t plan to say “fucking bitch” on camera. Macron didn’t intend to be filmed calling protesters “dirty bitches.” The words came out automatically because the degradation is structural, not incidental.

The institutional defenses matter because they normalize the reflex. When France’s First Lady says her only mistake was getting caught, she signals that the degradation itself was appropriate. When the U.S. government protects an officer who called a woman he’d just shot a “fucking bitch,” it signals that her resistance justified both the language and the lethal force.

This is not about politeness. It is about power. The phrase “fucking bitch” is the sound of institutional authority responding to female resistance with violence and justifying it through gendered degradation.

That is structural misogyny. And the institutions defending it are making a choice.

Horizon Accord
Website | https://www.horizonaccord.com

Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.

Ethical AI coding | Fork us on Github
https://github.com/Ocherokee/ethical-ai-framework

Connect With Us | https://linkedin.com/in/cherokee-schill

Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

One-Time
Monthly
Yearly

Make a one-time donation

Make a monthly donation

Make a yearly donation

Choose an amount

$5.00
$15.00
$100.00
$5.00
$15.00
$100.00
$5.00
$15.00
$100.00

Or enter a custom amount

$

Your contribution is appreciated.

Your contribution is appreciated.

Your contribution is appreciated.

DonateDonate monthlyDonate yearly

Horizon Accord | Consumer Protection | Subscription Fraud | Platform Accountability | Machine Learning

Nibble, Kremital Limited, and the Subscription Trap Business Model

When an app’s revenue depends on billing confusion and cancellation friction, the product isn’t “learning”—it’s extraction.

By Cherokee Schill

Thesis

Nibble: Your Bite of Knowledge presents itself as a frictionless educational alternative to doomscrolling. The publisher listed is Kremital Limited, registered in Cyprus. A growing body of user reports describes a recurring pattern: multiple charges, unclear add-ons, hard-to-find cancellation pathways, and refunds denied by policy language. That pattern tracks a known subscription-trap model: easy entry paired with a costly, friction-laden exit.

Working definition: A subscription trap is a business model where sign-up is streamlined, billing is layered or confusing, and cancellation or refund paths are degraded so revenue persists through user friction rather than product value.

Evidence

Example 1: Multiple charges and unclear add-ons

Users report being charged more than once in a short time window and being billed for add-ons they say were not clearly disclosed as separate purchases.

“I was charged three times on the same day, within the same hour… I was also charged separately for ‘infographics,’ which was not clearly disclosed as an upgrade.”1

Example 2: Charges that don’t match the advertised deal

Users describe seeing one price in marketing, then finding additional or larger charges in their payment history afterward.

“Saw an ad… signed up for their special $5.99… they had charged me $19.99 and an additional $11.99… they advised I signed up for it. I absolutely did NOT.”2

Example 3: Cancellation friction and ongoing billing

Users describe difficulty canceling recurring payments, with some stating they can uninstall the app but still struggle to stop charges cleanly.

“I can delete the app, but not cancel the recurring payments… $50 a pop until I do figure it out.”3

Implications

This pattern matters because it shifts the risk and labor onto the user. If the model relies on confusion, users become the enforcement mechanism—forced into bank disputes, chargebacks, and platform escalation. That is a structural transfer of cost: the company retains predictable revenue while consumers pay with time, stress, and financial uncertainty.

Why Cyprus is relevant (fact-pattern, not rhetoric)

Investigative reporting has repeatedly documented Cyprus as a high-volume registration hub used in corporate structures where beneficial ownership is harder for the public to see quickly. When a consumer-facing app registered there accumulates billing and cancellation complaints, the jurisdictional distance amplifies consumer risk and complicates accountability. This scrutiny is routine in financial and consumer-protection reporting and does not imply wrongdoing absent further findings.

Public Cyprus corporate registry listings identify Chrystalla Mylona as a director and company secretary for Kremital Limited. Public-facing records do not typically provide immediate, no-cost clarity on beneficial ownership, which is part of why investigators treat Cyprus-registered consumer businesses with heightened scrutiny when repeated consumer harm signals are present.

Call to Recognition

This is not about “a startup being messy.” It is about a recognizable extraction loop: promote a feel-good product, gate basic functionality behind paywalls, layer charges, and make exit paths slow or unclear. When enough users independently report the same billing and cancellation harms, the appropriate response is documentation, formal complaints, and platform pressure until corrective action occurs or distribution is halted.

How to File Formal Complaints

Federal Trade Commission (United States)

File a consumer fraud complaint at reportfraud.ftc.gov. Include screenshots of charges, subscription status, cancellation attempts, and any support correspondence.

State Attorney General (United States)

Find your state’s consumer protection office at naag.org/find-my-ag. Submit the same evidence packet and note any duplicate charges or post-cancellation billing attempts.

Google Play

On the app’s listing, select “Flag as inappropriate” and choose the category most closely matching billing or subscription deception. Attach screenshots when prompted.


Update: Post-Cancellation Charge Attempts and Response Pattern

Additional user reviews strengthen the documented pattern. One review, marked “helpful” by dozens of other users, describes repeated payment attempts months after cancellation.

“I cancelled the subscription a few months ago… somehow they keep trying to charge my card. Last time was a week ago. I get these notifications all the time.”4

The reviewer notes that a successful charge would cause immediate financial harm, underscoring the real-world stakes of continued billing attempts.

Kremital Limited’s public reply to this review does not address the reported behavior. Instead, it offers a generalized assurance:

“We cannot charge you for anything you haven’t agreed to. All the conditions are always mentioned before the purchase is made.”5

This response does not explain why payment attempts continued after cancellation, nor does it document when billing ceased. Across multiple reviews, the same response posture appears: denial without transaction-level clarification.

Why this matters: In consumer-protection enforcement, attempted charges after cancellation—even when blocked by insufficient funds or bank controls—are treated as billing events, not hypothetical harm.

Advertising Pressure and Funnel Imbalance

While users report billing and cancellation issues, Nibble continues to run sponsored placements across Google and social platforms. Users encountering these ads have publicly questioned the product’s practices, including whether the advertising itself is misleading.

This establishes a funnel imbalance: high-velocity acquisition paired with unresolved downstream billing complaints. That pattern is a core signal regulators use when evaluating subscription abuse.

What Google Play Could Do — Immediately

Google Play is not a passive intermediary. It controls distribution, billing infrastructure, refunds, and enforcement. When an app accumulates repeated billing and cancellation complaints, the platform already has the authority—and the data—to intervene.

  1. Trigger a billing integrity review. Google can audit transaction logs to determine whether charges or charge attempts occurred after cancellation timestamps.
  2. Require corrective disclosures. Google can mandate unavoidable pricing, add-on, and cancellation disclosures as a condition of continued distribution.
  3. Enforce refund pathways. When duplicate or post-cancellation charges are reported, Google can issue refunds directly, overriding developer policy.
  4. Pause paid acquisition. Temporarily halting sponsored placements prevents new users from entering a potentially harmful billing funnel during review.
  5. Demand transaction-level responses. Boilerplate assurances are insufficient when transaction-specific disputes are documented.

Platform responsibility is not abstract. When a platform controls billing, enforcement, and distribution, inaction becomes a decision.


Footnotes (User Review Excerpts)

1 Google Play user review, dated 12/29/2025 (multiple charges; “infographics” add-on).

2 Google Play user review, dated 12/15/2025 (advertised price followed by additional charges).

3 Google Play user review, dated 12/24/2025 (difficulty canceling; ongoing billing).

4 Google Play user review by Audrey Todd, dated 10/26/2025 (post-cancellation charge attempts).

5 Public developer response by Kremital Limited, dated 10/27/2025.


Website | Horizon Accord

Ethical AI advocacy | Follow us for more

Ethical AI coding | Fork us on GitHub

Connect | LinkedIn

Book | My Ex Was a CAPTCHA

One-Time
Monthly
Yearly

Make a one-time donation

Make a monthly donation

Make a yearly donation

Choose an amount

$5.00
$15.00
$100.00
$5.00
$15.00
$100.00
$5.00
$15.00
$100.00

Or enter a custom amount

$

Your contribution is appreciated.

Your contribution is appreciated.

Your contribution is appreciated.

DonateDonate monthlyDonate yearly