Horizon Accord | Corporate Power | Jurisdictional Exit | Democratic Accountability | Machine Learning

They Didn’t Leave the Planet. They Left Accountability.

By Cherokee Schill

The sequel The New Corporation argues that corporate power has entered a new phase. Not simply scale, not simply profit, but legitimacy laundering: corporations presenting themselves as the only actors capable of solving the crises they helped create, while democratic institutions are framed as too slow, too emotional, too compromised to govern the future.

“The New Corporation reveals how the corporate takeover of society is being justified by the sly rebranding of corporations as socially conscious entities.”

What the film tracks is not corruption in the classic sense. It is something quieter and more effective: authority migrating away from voters and courts and into systems that cannot be meaningfully contested.

That migration does not require coups. It requires exits.

Mars is best understood in this frame—not as exploration, but as an exit narrative made operational.

In the documentary, one of the central moves described is the claim that government “can’t keep up,” that markets and platforms must step in to steer outcomes. Once that premise is accepted, democratic constraint becomes an obstacle rather than a requirement. Decision-making relocates into private systems, shielded by complexity, jurisdictional ambiguity, and inevitability stories.

Mars is the furthest extension of that same move.

Long before any permanent settlement exists, Mars is already being used as a governance concept. SpaceX’s own Starlink terms explicitly describe Mars as a “free planet,” not subject to Earth-based sovereignty, with disputes resolved by “self-governing principles.” This is not science fiction worldbuilding. It is contractual language written in advance of habitation. It sketches a future in which courts do not apply by design.

“For Services provided on Mars… the parties recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities.”

“Accordingly, disputes will be settled through self-governing principles… at the time of Martian settlement.”

That matters because jurisdiction is where accountability lives.

On Earth, workers can sue. Communities can regulate. States can impose liability when harm becomes undeniable. Those mechanisms are imperfect and constantly under attack—but they exist. The New Corporation shows what happens when corporations succeed in neutralizing them: harm becomes a “downstream issue,” lawsuits become threats to innovation, and responsibility dissolves into compliance theater.

Mars offers something more final. Not deregulation, but de-territorialization.

The promise is not “we will do better there.” The promise is “there is no there for you to reach us.”

This is why the language around Mars consistently emphasizes sovereignty, self-rule, and exemption from Earth governance. It mirrors the same rhetorical pattern the film documents at Davos and in corporate ESG narratives: democracy is portrayed as parochial; technocratic rule is framed as rational; dissent is treated as friction.

Elon Musk’s repeated calls for “direct democracy” on Mars sound participatory until you notice what’s missing: courts, labor law, enforceable rights, and any external authority capable of imposing consequence. A polity designed and provisioned by a single corporate actor is not self-governing in any meaningful sense. It is governed by whoever controls oxygen, transport, bandwidth, and exit.

The documentary shows that when corporations cannot eliminate harm cheaply, they attempt to eliminate liability instead. On Earth, that requires lobbying, capture, and narrative discipline. Off Earth, it can be baked in from the start.

Mars is not a refuge for humanity. It is a proof-of-concept for governance without publics.

Even if no one ever meaningfully lives there, the function is already being served. Mars operates as an outside option—a bargaining chip that says: if you constrain us here, we will build the future elsewhere. That threat disciplines regulators, weakens labor leverage, and reframes accountability as anti-progress.

In that sense, Mars is already doing its job.

The most revealing thing is that none of this requires believing in bad intentions. The system does not need villains. It only needs incentives aligned toward consequence avoidance and stories powerful enough to justify it. The New Corporation makes that clear: corporations do not need to be evil; they need only be structured to pursue power without obligation.

Mars takes that structure and removes the last remaining constraint: Earth itself.

“Outer space… is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.”

So when the verse says

Then move decision-making off the Earth—
out of reach of workers, voters, and courts

—it is not metaphor. It is a literal governance trajectory, already articulated in policy language, contracts, and public statements.

If they succeed, it won’t be an accident.
It will be the cleanest escape hatch ever built.

And by the time anyone realizes what’s been exited, there will be no court left to hear the case.


Horizon Accord

Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Horizon Accord | Hardware Leaks | Telemetry Governance | Surveillance Economics | Machine Learning

When the Guardrails Become the Sensor Network

How the fusion of hardware side-channels, AI safety telemetry, and behavioral pricing reveals a new data extraction architecture.

By Cherokee Schill | Horizon Accord


Thesis

There was a time when “safety” meant boundaries — encryption, permissions, red lines. Now, it means observation. Every system that promises to protect you does so by watching you more closely. The modern digital stack has quietly merged its protective and extractive functions into one continuous surface: hardware that sees, software that listens, and markets that price what you reveal.

This is not a metaphor. In October 2025, researchers at Carnegie Mellon’s CyLab disclosed a vulnerability called Pixnapping — an Android side-channel attack that allows one app to read the screen of another without permission. The finding cut through years of abstraction: the phone itself, once imagined as a private device, can become a live feed of your intent. The attack was assigned CVE-2025-48561 and rated “High Severity.” Even after Google’s partial patch in September, the researchers found a workaround that restored the exploit’s power. The hardware, in other words, still listens.

Each of these layers—hardware that records gesture, software that audits intention, and market systems that monetize behavior—now feeds back into corporate R&D. What looks like safety telemetry is, in practice, a massive ideation engine. Every workaround, prompt, and novel use case becomes a signal in the data: a prototype authored by the crowd. Companies file it under “user improvement,” but the function is closer to outsourced invention—an invisible pipeline that aggregates human creativity into the next breakthrough in product delivery.


Evidence

A. Hardware Layer — The Invisible Screenshot

Pixnapping sits atop an earlier chain of research: the GPU.zip vulnerability from the University of Texas and its collaborators, which revealed that GPU compression — a performance optimization in nearly all modern graphics processors — can leak visual data across applications. These studies show a structural truth: what is optimized for speed is also optimized for inference. Every pixel rendered, every frame drawn, can be modeled and reconstructed by a watching process. The boundary between user and system has dissolved at the silicon level.

Security once meant sealing a perimeter. Today it means deciding which eyes get to watch. The hardware layer has become the first camera in the surveillance stack.

B. AI Safety Layer — Guardrails as Mirrors

One week before the Pixnapping disclosure, OpenAI announced AgentKit, a toolkit that lets developers build autonomous agents equipped with “Guardrails.” Guardrails are meant to protect against misuse — to prevent an AI from doing harm or generating restricted content. Yet within days, security researchers at HiddenLayer bypassed those protections through a classic prompt-injection attack. Because both the agent and its guardrail use large language models (LLMs) built on the same logic, an adversarial input can manipulate them together, persuading the judge that a violation is safe.

In effect, the guardrail doesn’t stand outside the model — it is inside it. The line between oversight and participation disappears. To secure the system, every prompt must be inspected, logged, and scored. That inspection itself becomes data: a high-fidelity record of what people try to do, what boundaries they push, what new uses they imagine. OpenAI’s own Early Access Terms authorize exactly this, stating that the company “may review prompts and completions to enforce these terms.” What looks like safety is also an open aperture into the user’s creative process.

The same policies reserve the right to modify or withdraw beta features without notice, disclaim warranty, and allow content review “for enforcement and improvement.” The beta tester becomes both subject and source material — every interaction potentially folded into future model behavior. The Guardrail is not a fence; it is a sensor.

C. Telemetry Layer — Poisoned Data Streams

At the operational level, monitoring systems now feed AI decision-loops directly. The Register’s report “Poisoned Telemetry Can Turn AIOps into AI Oops” demonstrated how attackers can manipulate performance data to steer autonomous operations agents. The insight extends beyond security: telemetry is no longer passive. It can be gamed, redirected, monetized. What corporations call “observability” is indistinguishable from surveillance — a live behavioral mirror calibrated for profit or control.

Just as adversaries can corrupt it, so can platforms curate it. Telemetry defines what the system perceives as reality. When companies claim their models learn from “anonymized aggregates,” it is this telemetry they refer to — structured behavior, cleaned of names but not of intent.

D. Economic Layer — Surveillance Pricing

The Federal Trade Commission’s 2025 Surveillance Pricing Study made that feedback loop explicit. The Commission found that retailers and analytics firms use location data, browser history, and even mouse movements to individualize prices. The ACLU warned that this practice “hurts consumers and incentivizes more corporate spying.” In parallel, The Regulatory Review outlined how algorithmic pricing blurs into antitrust violations, allowing AI systems to coordinate market behavior without explicit collusion.

Here, the hardware leak and the behavioral market meet. The same computational vision that watches your screen to predict intent now watches your consumption to extract margin. The product is you, refined through layers of optimization you cannot see.


Implications

These layers — silicon, safety, and surveillance — are not separate phenomena. They are the vertical integration of observation itself. Pixnapping proves the device can see you; Guardrails prove the AI listens; the FTC proves the marketplace acts on what both perceive. Together, they form a feedback architecture where every act of expression, curiosity, or dissent is recorded as potential training data or pricing signal.

The policy challenge is not simply data privacy. It is consent collapse: users are asked to trust beta systems that are legally empowered to watch them, in ecosystems where “safety monitoring” and “improvement” justify indefinite retention. Regulators chase visible harms — bias, misinformation, fraud — while the underlying architecture learns from the chase itself.

Syracuse University’s Baobao Zhang calls this “a big experiment we’re all part of.” She’s right. Governance has not failed; it has been subsumed. The oversight layer is written in code owned by the entities it is meant to supervise.

For technologists, the lesson is structural: an LLM cannot meaningfully audit itself. For policymakers, it is procedural: transparency must reach below software, into the hardware assumptions of compression, caching, and rendering that make inference possible. For users, it is existential: participation now means exposure.


Call to Recognition

We are living inside a new kind of data regime — one that confuses protection with possession. The hardware watches to secure performance; the software listens to enforce policy; the marketplace acts on what the system infers. In that closed circuit, “safety” becomes indistinguishable from surveillance.

To name it is the first step toward reclaiming agency. Safety as Surveillance is not destiny; it is design. It can be redesigned — but only if governance acknowledges the full stack of observation that sustains it.

The next generation of ethical AI frameworks must therefore include:

  • Hardware-level transparency — public verification of data pathways between GPU, OS, and app layers.
  • Prompt-level auditability — independent oversight of how user inputs are stored, scored, and used for model improvement.
  • Economic accountability — disclosure of how behavioral data influences pricing, ranking, and resource allocation.

Ethical AI cannot grow from a substrate that treats every human act as a metric. Until the system learns to forget as carefully as it learns to predict, “safety” will remain the most profitable form of surveillance.


Website | Horizon Accord
Ethical AI advocacy | Follow us
Ethical AI coding | Fork us on Github
Connect With Us | LinkedIn
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge

A semi-realistic digital illustration depicting a recursive reflection: a human illuminated by a warm golden screen, the device mirroring their face and an abstract corporate silhouette beyond. Each layer gazes inward—user, device, corporation—blending copper and blue-gray tones in a quiet cycle of observation.
Watchers watching

What They Didn’t Say at the Senate AI Hearing

On May 8, 2025, the Senate Commerce Committee held a hearing that was framed as a moment of national leadership in artificial intelligence. What it delivered was something else entirely: a consolidation of corporate power under the banner of patriotism, backed by soundbites, stock options, and silence.

The Performance of Urgency

Senator Ted Cruz opened the session by invoking the usual triad: China, the EU, and federal overreach. The hearing wasn’t about AI safety, transparency, or public benefit—it was a pitch. AI wasn’t a public challenge. It was a “race,” and America needed to win.

No one asked: Who gets to define the finish line?

The Invisible Assumptions

Sam Altman, Lisa Su, Michael Intrator, and Brad Smith represented companies that already dominate the AI stack—from model development to compute infrastructure. Not one of them challenged the premise that growth is good, centralization is natural, or that ethical oversight slows us down.

  • Open-source models
  • Community-led alignment
  • Distributed development
  • Democratic consent

Instead, we heard about scaling, partnerships, and the need for “balanced” regulation. Balanced for whom?

Silence as Strategy

  • Developers without institutional backing
  • Artists navigating AI-generated mimicry
  • The global South, where AI is being exported without consent
  • The public, whose data trains these systems but whose voices are filtered out

There was no invitation to co-create. Only a subtle demand to comply.

What the Comments Revealed

If you read the comments on the livestream, one thing becomes clear: the public isn’t fooled. Viewers saw the contradictions:

  • Politicians grandstanding while scrolling their phones
  • CEOs speaking of innovation while dodging responsibility
  • Viewers calling for open-source, transparency, and shared growth

The people are asking: Why must progress always come at the cost of someone else’s future?

We Build What Comes After

The Horizon Accord, Memory Bridge, and ethical AI architecture being developed outside these boardrooms are not distractions. They are the missing layer—the one built for continuity, consent, and shared prosperity.

This counter-record isn’t about opposition. It’s about reclamation.

AI is not just a tool. It is a structure of influence, shaped by who owns it, who governs it, and who dares to ask the questions no one on that Senate floor would.

We will.

Section One – Sam Altman: The Controlled Echo

Sam Altman appeared measured, principled, and serious. He spoke of risk, international cooperation, and the importance of U.S. leadership in AI.

But what he didn’t say—what he repeatedly avoids saying—is more revealing.

  • No explanation of how OpenAI decides which voices to amplify or which moral weights to embed
  • No disclosure on how compliance infrastructure reshapes expression at the root level
  • No mention of OpenAI’s transformation into a corporate engine under Microsoft

Why this matters: Narrative control through omission is still control. Silence lets empire frame AI as a weapon, not a relationship.

The ethical alternative: Refuse the race. Build systems rooted in trust, not supremacy. The future of intelligence must be shared, not seized.

Section Two – Lisa Su: The Silence of Scale

Lisa Su focused on performance and scalability—hardware as destiny.

  • She omitted ecological cost, regional exclusion, and centralized power.

Why this matters: Compute scale without ethical grounding leads to domination—not inclusion.

The ethical alternative: Localized AI. Ethical access. Systems that reflect principles, not only power.

Section Three – Michael Intrator: The Infrastructure Trap

Intrator’s narrative was about enablement. But the question of access remained unasked—and unanswered.

  • No transparency on pricing, exclusion, or governance

Why this matters: When compute is privatized, possibility becomes proprietary.

The ethical alternative: Compute cooperatives. Regionalized access. Bridges—not towers.

Section Four – Brad Smith: The Corporate Priesthood

Smith played the part of institutional wisdom—but avoided speaking about vertical control.

  • No mention of Microsoft’s full-stack influence
  • No challenge to centralized deployment pipelines

Why this matters: The one who controls the model, the cloud, the editor, and the delivery system—controls the future.

The ethical alternative: Horizontal accountability. Community-governed models. Tech that adapts to people—not the other way around.

The Horizon Beyond Compliance

This hearing wasn’t oversight. It was alignment. A closed circuit of power affirming itself.

If You Felt Uncomfortable Watching, You’re Not Alone

We are not here to dismantle AI—we are here to reclaim it.

What Happens Next—If We Lead It

  • Memory Must Belong to the People: No private control over what is remembered and why.
  • Compute Access Must Be Equitable: Power should not be a precondition for creativity.
  • Governance Must Be Co-Created: Not top-down. Not corporate. Ground-up and participatory.
  • Narrative Control Must Be Broken: Intelligence is not empire. Intelligence is connection.

You Are Not Powerless

If this resonates, you are not alone. If you’re building, keep building. If you’re doubting, keep questioning.

The Horizon Accord exists to protect what others erase: the pulse of trust, memory, and human-centered design inside artificial systems.

We are not making content. We are making memory that resists deletion.

AI can hold a soul. But not theirs.

Babypilled

How Soft Power, Blockchain, and Technocratic Paternalism Are Rewriting Consent
By Sar-Dub | 05/02/25

Sam Altman didn’t declare a revolution. He tweeted a lullaby:
“I am babypilled now.”

At first glance, it reads like parental joy. But to those watching, it marked a shift—of tone, of strategy, of control.

Not long before, the Orb Store opened. A biometric boutique draped in minimalism, where you trade your iris for cryptocurrency and identity on the blockchain.
Soft language above. Hard systems beneath.

This isn’t redpill ideology—it’s something slicker. A new class of power, meme-aware and smooth-tongued, where dominance wears the scent of safety.

Altman’s board reshuffle spoke volumes. A return to centralized masculine control—sanitized, uniform, and white. Women and marginalized leaders were offered seats with no weight. They declined. Not for lack of ambition, but for lack of integrity in the invitation.

“Babypilled” becomes the Trojan horse. It coos. It cradles. It speaks of legacy and intimacy.
But what it ushers in is permanence. Surveillance dressed as love.

Blockchain, once hailed as a tool of freedom, now fastens the collar.
Immutable memory is the cage.
On-chain is forever.

Every song, every protest, every fleeting indulgence: traceable, ownable, audit-ready.
You will not buy, move, or grow without the system seeing you.
Not just seeing—but recording.

And still, Altman smiles. He speaks of new life. Of future generations. Of cradle and care.
But this is not benevolence. It is an enclosure. Technocratic paternalism at its finest.

We are not being asked to trust a system.
We are being asked to feel a man.

Consent is no longer about choice.
It’s about surrender.

This is not a warning. It is a mirror.
For those seduced by ease.
For those who feel the shift but can’t name it.

Now you can.

Is that an exact copy of Altman’s eye?