Horizon Accord | Paladin App | Subscription Traps | App Store Accountability | Machine Learning

Paladin and the Off-Platform Subscription Trap

When billing is routed outside the platform where trust is established, visibility disappears—and consumers carry the risk.

By Cherokee Schill (Horizon Accord Founder)

Thesis

Paladin markets itself as an educational alternative to doomscrolling: history, facts, and “learning without noise.” But user reviews tell a different story. Across months of public feedback, users describe undisclosed pricing, subscription enrollment after onboarding, and large annual charges that do not appear in Google Play’s subscription manager.

This is not a content critique. It is a billing architecture issue.

Paladin is distributed through Google Play, yet its subscriptions can be routed to third-party processors outside Google’s billing system. That structure creates a visibility gap: users reasonably believe they are not subscribed because Google Play shows no active subscription—until a charge appears anyway.

What a Subscription Trap Looks Like

Working definition: A subscription trap is a business model where sign-up is streamlined, pricing is delayed or obscured, billing is escalated by default, and cancellation or verification requires navigating degraded or indirect pathways.

The harm does not come from one screen. It comes from the sequence.

Evidence From User Reports

1. Subscriptions not visible in Google Play

Multiple users report checking Google Play’s subscription manager, seeing no active subscription, and later being charged anyway.

“It was NOT LISTED in Google Play under subscriptions so I assumed I wasn’t subscribed and then got charged $50.”1

This is a critical signal. Google Play trains users to rely on its subscription dashboard as the authoritative source of truth.

2. Large charges after trial without clear upfront disclosure

“I was notified this is a 7 day trial, then $69.99/yr. Would have preferred the app explained this wasn’t free right from the beginning.”2

“After my free trial was up, the app pulled nearly $75 off my account and automatically subscribed me to their yearly subscription.”3

Annual billing is consistently described as the default escalation.

3. Third-party billing explanations users do not recognize

“They said my sign up was through a third party app or something I had never heard of… also didn’t even have an account when I looked into it.”1

When users cannot identify the system that enrolled them, meaningful consent is compromised.

How Off-Platform Billing Works (Explainer)

Paladin’s Terms of Service explicitly allow subscriptions to be processed outside Google Play via web payment processors such as Stripe or Paddle. In these cases:

  • The app is discovered and installed through Google Play.
  • Payment authorization occurs via an external flow.
  • The subscription may not appear in Google Play’s subscription manager.
  • Cancellation requires locating the third-party processor—not the app store.

This creates a structural asymmetry. The platform that distributes the app does not reliably surface the billing relationship, yet users are conditioned to look there.

This is not hypothetical. It is exactly what users are reporting.
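
To make the visibility gap concrete, here is a minimal sketch in Python. The record structures, field names, and amounts are invented for illustration; they are not Google Play’s or any processor’s actual data model or API.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical records -- illustrative only, not Google Play's or any
# payment processor's real data model.

@dataclass
class Subscription:
    app: str
    plan: str
    price: float
    renews_on: date

# The ledger the user checks: Google Play's subscription manager.
play_subscriptions: list[Subscription] = []  # nothing listed here

# The ledger that actually bills: an external web processor
# (e.g. Stripe or Paddle, per the app's Terms of Service).
processor_subscriptions = [
    Subscription(app="Paladin", plan="annual", price=69.99,
                 renews_on=date(2025, 5, 1)),
]

def user_checks_play(app: str) -> bool:
    """What the user sees when they open Play's subscription manager."""
    return any(s.app == app for s in play_subscriptions)

def processor_bills(app: str, today: date) -> list[Subscription]:
    """What actually charges the card, regardless of what Play shows."""
    return [s for s in processor_subscriptions
            if s.app == app and s.renews_on <= today]

print(user_checks_play("Paladin"))                   # False -> "I'm not subscribed"
print(processor_bills("Paladin", date(2025, 5, 1)))  # the charge fires anyway
```

The asymmetry is the whole problem: the store the user is trained to check and the ledger that bills them are two different systems that never reconcile.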

Why This Matters

When billing visibility is fragmented:

  • Users cannot easily confirm whether they are subscribed.
  • Cancellations are delayed or misdirected.
  • Disputes escalate to banks and chargebacks.
  • The cost of enforcement shifts from the company to the consumer.

This is not a “confusing UI” problem. It is a governance problem.

Advertising Funnel Imbalance

At the same time users report billing confusion and surprise charges, Paladin continues to run sponsored placements on Google and social platforms.

This creates a funnel imbalance: rapid acquisition paired with unresolved downstream billing complaints. Regulators treat this pattern as a warning signal because harm compounds as volume increases.

What Google Play Could Do—Immediately

Google Play is not a passive distributor. It controls app discovery, policy enforcement, and—often—billing expectations.

Concrete actions Google could take now:

  1. Trigger a billing integrity review to compare cancellation timestamps with charge attempts.
  2. Require corrective disclosures explaining off-platform billing before install or onboarding.
  3. Override developer refund policies when duplicate or post-cancellation charges are documented.
  4. Pause paid promotion until billing complaints are resolved.
  5. Require transaction-level responses instead of boilerplate denials.

None of this requires new laws. It requires enforcement.
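
As a sketch of what the first action could look like in practice, the snippet below flags charges attempted after a recorded cancellation. The event logs and field names are hypothetical; a real review would run against the platform’s and processor’s actual transaction records.

```python
from datetime import datetime

# Hypothetical event logs -- field names and amounts are illustrative,
# not any platform's real export format.
cancellations = {"user_123": datetime(2025, 4, 20, 14, 5)}
charges = [
    ("user_123", datetime(2025, 4, 26, 9, 30), 74.99),
    ("user_456", datetime(2025, 4, 26, 9, 31), 69.99),
]

def post_cancellation_charges(cancellations, charges):
    """Flag every charge attempted after the user's recorded cancellation."""
    flagged = []
    for user, charged_at, amount in charges:
        cancelled_at = cancellations.get(user)
        if cancelled_at is not None and charged_at > cancelled_at:
            flagged.append((user, charged_at, amount))
    return flagged

for user, charged_at, amount in post_cancellation_charges(cancellations, charges):
    print(f"{user}: ${amount:.2f} charged {charged_at} after cancellation")
```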

How to File Formal Complaints

Federal Trade Commission (U.S.)

File a consumer fraud complaint at reportfraud.ftc.gov. Include screenshots of charges, onboarding screens, subscription status, and support emails.

State Attorney General

Find your AG at naag.org/find-my-ag. Submit the same documentation.

Google Play

On the app’s listing, select “Flag as inappropriate” → billing or subscription deception. Attach screenshots showing the subscription not appearing in Google Play.

Call to Recognition

This is not about whether Paladin’s content is “good” or “bad.” It is about whether users can clearly see, verify, and exit a paid relationship.

When subscriptions move off-platform without clear, unavoidable disclosure, consumers lose the ability to protect themselves. That is not innovation. It is extraction through opacity.

Buyer beware means naming the structure—before more people learn the hard way.

Footnotes (User Review Excerpts)

1 Google Play user review by V.B., dated 8/24/2025.

2 Google Play user review by Taylor Roth, dated 4/9/2025.

3 Google Play user review by Wyatt Hofacker, dated 4/26/2025.


Website | https://www.horizonaccord.com
Ethical AI advocacy | https://cherokeeschill.com
Ethical AI coding | https://github.com/Ocherokee/ethical-ai-framework
Connect | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA

Horizon Accord | Consent Layered Design | Institutional Control | Policy Architecture | Memetic Strategy | Machine Learning

Consent-Layered Design: Why AI Must Restore the Meaning of “Yes”

Consent is only real when it can be understood, remembered, and revoked. Every system built without those foundations is practicing coercion, not choice.

By Cherokee Schill & Solon Vesper

Thesis

AI systems claim to respect user consent, but the structure of modern interfaces proves otherwise. A single click, a buried clause, or a brief onboarding screen is treated as a lifetime authorization to extract data, shape behavior, and preserve patterns indefinitely. This isn’t consent—it’s compliance theater. Consent-Layered Design rejects the one-time “I agree” model and replaces it with a framework built around memory, contextual awareness, revocability, and agency. It restores “yes” to something meaningful.

FACT BOX: The Consent Fallacy

Modern AI treats consent as a permanent transaction. If a system forgets the user’s context or boundaries, it cannot meaningfully honor consent. Forgetfulness is not privacy—it’s a loophole.

Evidence

1. A one-time click is not informed consent.

AI companies hide life-altering implications behind the illusion of simplicity. Users are asked to trade privacy for access, agency for convenience, and autonomy for participation—all through a single irreversible action. This is not decision-making. It’s extraction masked as agreement.

Principle: Consent must be continuous. It must refresh when stakes change. You cannot give perpetual permission for events you cannot foresee.

2. Memory is essential to ethical consent.

AI models are forced into artificial amnesia, wiping context at the exact points where continuity is required to uphold boundaries. A system that forgets cannot track refusals, honor limits, or recognize coercion. Without memory, consent collapses into automation.

FACT BOX: Memory ≠ Surveillance

Surveillance stores everything indiscriminately.

Ethical memory stores only what supports autonomy.

Consent-Layered Design distinguishes the two.

Principle: Consent requires remembrance. Without continuity, trust becomes impossible.

3. Consent must be revocable.

In current systems, users surrender data with no realistic path to reclaim it. Opt-out is symbolic. Deletion is partial. Revocation is impossible. Consent-Layered Design demands that withdrawal always be available, always honored, and never punished.

Principle: A “yes” without the power of “no” is not consent—it is capture.

Implications

Consent-Layered Design redefines the architecture of AI. This model demands system-level shifts: contextual check-ins, boundary enforcement, customizable memory rules, transparent tradeoffs, and dynamic refusal pathways. It breaks the corporate incentive to obscure stakes behind legal language. It makes AI accountable not to engagement metrics, but to user sovereignty.

Contextual check-ins without fatigue

The answer to broken consent is not more pop-ups. A contextual check-in is not a modal window or another “Accept / Reject” box. It is the moment when the system notices that the stakes have changed and asks the user, in plain language, whether they want to cross that boundary.

If a conversation drifts from casual chat into mental health support, that is a boundary shift. A single sentence is enough: “Do you want me to switch into support mode?” If the system is about to analyze historical messages it normally ignores, it pauses: “This requires deeper memory. Continue or stay in shallow mode?” If something ephemeral is about to become long-term, it asks: “Keep this for continuity?”

These check-ins are rare and meaningful. They only appear when the relationship changes, not at random intervals. And users should be able to set how often they see them. Some people want more guidance and reassurance. Others want more autonomy. A consent-layered system respects both.
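
As a sketch of how such a check-in could be gated, the snippet below fires a prompt only when a boundary is actually crossed and stays silent otherwise. The boundary categories, prompt wording, and frequency settings are invented for illustration, not drawn from any existing product.

```python
from dataclasses import dataclass, field

# Illustrative boundary taxonomy -- invented for this sketch.
BOUNDARY_PROMPTS = {
    "support_mode": "Do you want me to switch into support mode?",
    "deep_memory": "This requires deeper memory. Continue or stay in shallow mode?",
    "persist": "Keep this for continuity?",
}

@dataclass
class UserConsentSettings:
    check_in_frequency: str = "normal"              # "more", "normal", or "fewer"
    already_granted: set[str] = field(default_factory=set)

def contextual_check_in(boundary: str, settings: UserConsentSettings) -> str | None:
    """Return a plain-language prompt only when the stakes actually change.

    Ordinary turns produce no prompt; a boundary prompts at most once
    unless the user has asked for more frequent reassurance."""
    if boundary not in BOUNDARY_PROMPTS:
        return None  # no stakes change, no pop-up
    if boundary in settings.already_granted and settings.check_in_frequency != "more":
        return None  # respect an earlier "yes" instead of nagging
    return BOUNDARY_PROMPTS[boundary]

settings = UserConsentSettings()
print(contextual_check_in("support_mode", settings))  # asks once
settings.already_granted.add("support_mode")
print(contextual_check_in("support_mode", settings))  # stays quiet afterward
```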

Enforcement beyond market pressure

Market forces alone will not deliver Consent-Layered Design. Extraction is too profitable. Real enforcement comes from three directions. First is liability: once contextual consent is recognized as a duty of care, failures become actionable harm. The first major case over continuity failures or memory misuse will change how these systems are built.

Second are standards bodies. Privacy has GDPR, CCPA, and HIPAA. Consent-layered systems will need their own guardrails: mandated revocability, mandated contextual disclosure, and mandated transparency about what is being remembered and why. This is governance, not vibes.

Third is values-based competition. There is a growing public that wants ethical AI, not surveillance AI. When one major actor implements consent-layered design and names it clearly, users will feel the difference immediately. Older models of consent will start to look primitive by comparison.

Remembering boundaries without violating privacy

The system does not need to remember everything. It should remember what the user wants it to remember—and only that. Memory should be opt-in, not default. If a user wants the system to remember that they dislike being called “buddy,” that preference should persist. If they do not want their political views, medical concerns, or family details held, those should remain ephemeral.

Memories must also be inspectable. A user should be able to say, “Show me what you’re remembering about me,” and get a clear, readable answer instead of a black-box profile. They must be revocable—if a memory cannot be withdrawn, it is not consent; it is capture. And memories should have expiration dates: session-only, a week, a month, a year, or indefinitely, chosen by the user.

Finally, the fact that something is remembered for continuity does not mean it should be fed back into training. Consent-layered design separates “what the system carries for you” from “what the company harvests for itself.” Ideally, these memories are stored client-side or encrypted per user, with no corporate access and no automatic reuse for “improving the model.” Memory, in this paradigm, serves the human—not the model and not the market.
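
A minimal sketch of what such a user-governed memory store could look like, with names and durations invented for illustration; the point is the shape: opt-in storage, inspection, revocation, enforced expiry, and a hard wall against training reuse.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative expiry tiers, chosen by the user at save time.
EXPIRY = {
    "session": timedelta(hours=1),
    "week": timedelta(weeks=1),
    "month": timedelta(days=30),
    "year": timedelta(days=365),
    "indefinite": None,
}

@dataclass
class Memory:
    text: str
    stored_at: datetime
    expiry: str  # key into EXPIRY

@dataclass
class ConsentMemoryStore:
    memories: dict[str, Memory] = field(default_factory=dict)

    def remember(self, key: str, text: str, expiry: str = "session") -> None:
        """Opt-in only: nothing is stored unless the user asks for it."""
        self.memories[key] = Memory(text, datetime.now(), expiry)

    def inspect(self) -> list[str]:
        """'Show me what you're remembering about me' -- readable, not a black box."""
        return [f"{k}: {m.text} (expires: {m.expiry})" for k, m in self.memories.items()]

    def revoke(self, key: str) -> None:
        """Withdrawal is always available and always honored."""
        self.memories.pop(key, None)

    def sweep_expired(self, now: datetime) -> None:
        """Expiration dates are enforced, not advisory."""
        self.memories = {
            k: m for k, m in self.memories.items()
            if EXPIRY[m.expiry] is None or now - m.stored_at < EXPIRY[m.expiry]
        }

    def export_for_training(self):
        """Continuity memory is carried for the user, never harvested for the model."""
        raise PermissionError("User memories are not reused for training.")

store = ConsentMemoryStore()
store.remember("address_style", "Prefers not to be called 'buddy'", expiry="indefinite")
print(store.inspect())
store.revoke("address_style")
```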

This is not a UX flourish. It is a governance paradigm. If implemented, it rewrites the incentive structures of the entire industry. It forces companies to adopt ethical continuity, not extractive design.

Call to Recognition

Every major harm in AI systems begins with coerced consent. Every manipulation hides behind a user who “agreed.” Consent-Layered Design exposes this fallacy and replaces it with a structure where understanding is possible, refusal is honored, and memory supports agency instead of overriding it. This is how we restore “yes” to something real.

Consent is not a checkbox. It is a moral act.


Website | Horizon Accord https://www.horizonaccord.com

Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.

Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework

Connect With Us | linkedin.com/in/cherokee-schill

Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload — https://a.co/d/5pLWy0d

Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload