
Section 230 Reform as a Coordinated Governance Project

How Power Actually Moves Through the System


What You’re About to Read

Public debate frames Section 230 reform as chaotic partisan warfare—conservatives versus Big Tech, free speech versus content moderation, culture war noise drowning out coherent policy.

That framing is misleading.

What the documented record reveals is something far more structured: a coordinated governance project, executed through a division of institutional labor where different actors play complementary roles. Each claims a distinct moral justification. But together, they advance a single structural outcome:

The replacement of neutral intermediary protections with liability-driven, automated, state-aligned control over online speech.

This is not coordination in the sense of secret meetings or conspiracy. It’s coordination in the sense of complementary function—like an ecosystem where each species fills a necessary niche. No single actor controls the whole system. But remove any one layer and the project stalls; with every layer in place, the system functions.

This document shows you exactly how.


Part I: Understanding Section 230 (What’s Actually at Stake)

What Section 230 Actually Does

Before we can understand the reform project, we need to be clear about what Section 230 actually protects.

Section 230 of the Communications Decency Act (1996) establishes two core protections:

1. Platforms are not legally liable for user-generated content. If someone posts defamatory content, threatens violence, or shares illegal material on a platform, the legal responsibility lies primarily with the speaker—not the intermediary hosting the speech.

2. Platforms can moderate content without becoming liable for everything. Platforms can remove content they find objectionable (spam, harassment, misinformation) without losing their immunity. This is the “good faith” moderation clause.

These two protections enable:

  • Anonymous speech
  • Encrypted messaging
  • Small forums and community sites
  • Nonprofit platforms
  • Investigative journalism platforms
  • Activist documentation (including ICE monitoring, police accountability)
  • Any service where users create content

Why This Matters (The Scale-Neutral Problem)

Here’s the critical feature most people miss: Section 230 is scale-neutral.

It protects a three-person subreddit exactly the same way it protects Facebook. It protects encrypted messaging services the same way it protects YouTube. It protects activist documentation platforms the same way it protects commercial social media.

This means any proposal to “reform” or “sunset” Section 230 must answer one fundamental question:

Where does legal risk go when intermediary immunity is weakened?

The answer to that question determines who survives and who doesn’t.


Part II: The Architecture of Convergence

The coordinated governance project only becomes visible when you map the roles explicitly. Here’s the division of labor:

Layer 1: Heritage Foundation — Supplying the Legal Logic

Role: Policy blueprint and jurisprudential theory

The Heritage Foundation’s contribution isn’t street-level agitation. It’s legal architecture.

Through Project 2025—specifically the FCC chapter authored by Brendan Carr—Heritage provides:

  1. The legal theory that Section 230 is “overextended.” Not that the statute itself is flawed, but that courts have interpreted it too broadly.
  2. The argument that this is a problem of judicial excess, not statutory intent. This framing is critical because it suggests the “fix” doesn’t require new legislation—just better interpretation.
  3. The justification for regulatory reinterpretation without Congressional action. If courts got it wrong, then agencies (specifically the FCC) can correct the interpretation through administrative orders.

Why this matters:

By framing Section 230 as a problem of judicial overreach rather than legislative intent, Heritage makes it possible to act through agency reinterpretation, bypassing democratic debate, Congressional hearings, and traditional notice-and-comment rulemaking.

The Project 2025 FCC chapter explicitly calls for the FCC to narrow Section 230 immunity by administrative order, stripping platforms of protections if they engage in content moderation deemed “politically biased” or “censorious.”

This is not incidental language. It is the hinge mechanism that allows executive control over speech architecture without passing what would obviously be recognized as a censorship law.

The documented position:

From Project 2025’s FCC chapter: “The FCC should issue an order that interprets Section 230 in a way that eliminates the expansive, non-textual immunities that courts have read into the statute.”

From Carr’s public statements: Big Tech companies are “attempting to drive diverse political viewpoints from the digital town square” and are “abusing dominant positions.”

Heritage’s role: Supply the blueprint. Make it look like law, not politics.


Layer 2: Brendan Carr — Translating Theory into Authority

Role: Regulatory execution mechanism

Brendan Carr’s function is operationalization.

As FCC Chairman (assuming the role on January 20, 2025), Carr occupies a position uniquely suited to execute Project 2025’s logic:

What the FCC Chair can do:

  • Initiate reinterpretation proceedings on Section 230
  • Signal enforcement priorities to platforms
  • Shape regulatory risk without formal rulemaking
  • Create legal uncertainty that platforms must respond to preemptively

How modern governance actually works:

This is the key insight most people miss. Modern regulatory power doesn’t work through outright bans. It works through uncertainty.

By threatening reinterpretation of Section 230, the FCC doesn’t need to revoke it outright. Platforms respond in advance by:

  • Adjusting content policies
  • Increasing automated moderation
  • Building compliance infrastructure
  • Demonstrating “neutrality” through over-removal

The threat alone changes behavior. This is governance by shadow—regulatory power exercised through signaling rather than formal action.

Why Carr specifically:

Carr didn’t just write the blueprint—he now controls the agency with authority to execute it. He can initiate proceedings, issue guidance, and create the legal environment where platforms must choose between:

  1. Maintaining current moderation practices (and risking loss of Section 230 protections)
  2. Adapting to FCC signals (and accepting increased liability/compliance costs)

Either way, the outcome shifts toward greater platform liability.

The timeline:

  • 2023: Carr authors Project 2025 FCC chapter
  • January 20, 2025: Carr becomes FCC Chairman
  • Present: FCC gains authority to pursue administrative reinterpretation

The coordination is sequential, not coincidental.


Layer 3: Moms for Liberty — Supplying Moral Legitimacy and Political Pressure

Role: Moral legitimation and grievance anchoring

Moms for Liberty’s function is often misunderstood because their public rhetoric emphasizes “free speech” while their documented legal positions argue for government authority to constrain platform moderation.

The legal record:

In the NetChoice Supreme Court cases (Moody v. NetChoice and NetChoice v. Paxton), Moms for Liberty filed an amicus brief arguing for state authority to regulate platform content moderation decisions—explicitly supporting government power to dictate how platforms treat user speech.

From their brief: “Laws barring viewpoint discrimination threaten none of these [platform] rights. They do not regulate the platforms’ own speech, nor do these provisions prevent the platforms’ users from choosing what speech they receive.”

This isn’t protection from censorship. This is compelled carriage—the legal doctrine that platforms can be required to host speech they would otherwise remove.

Their dual function in the ecosystem:

  1. Moral Cover. They frame liability expansion as protecting children, parents, and fairness—language that neutralizes civil liberties objections. It’s hard to oppose “protecting kids” even when the mechanism is expanded government control.
  2. Grievance Anchoring. By foregrounding their own deplatforming experiences (22 Facebook chapter groups flagged, content removed for posting school board meeting times), they convert routine platform moderation into a narrative of victimhood that legitimizes state intervention.

The documented grievance:

January 24, 2022: Moms for Liberty co-founders Tiffany Justice and Tina Descovich wrote an open letter to Mark Zuckerberg detailing how Facebook flagged 22 chapter groups for “community standards violations” for posting:

  • School board meeting times
  • Questions about student textbooks
  • Basic information about local government operations

Facebook restored access after public pressure, but the grievance remained—and became legal ammunition in the NetChoice cases.

Why this matters in the coordination structure:

Moms for Liberty doesn’t need to design policy. They need to make state intervention politically palatable. When Heritage and Carr push for regulatory reinterpretation, they can point to “grassroots” organizations like Moms for Liberty experiencing “censorship”—making executive action look like protecting citizens, not controlling speech.

The coordination: Heritage builds the legal theory, Carr provides the regulatory mechanism, Moms for Liberty provides the political narrative that makes it defensible.


Layer 4: Platform Incumbents — Normalizing the Outcome

Role: Operational precedent and market normalization

This is where the pattern becomes impossible to dismiss as coincidence or culture war.

The FOSTA-SESTA precedent (2018):

FOSTA-SESTA was the first major carve-out to Section 230, creating platform liability for knowingly facilitating sex trafficking.

Meta (then Facebook) publicly supported this legislation.

Sheryl Sandberg, Meta’s COO, endorsed FOSTA-SESTA despite widespread opposition from civil liberties organizations, small platforms, and sex worker advocacy groups who predicted (correctly) that it would:

  • Force platforms to remove adult content preventatively
  • Shut down harm reduction resources for sex workers
  • Drive smaller platforms out of business due to compliance costs
  • Increase automated content removal

Why would Meta support weakening Section 230?

Because liability-driven regulation favors scale.

When platforms face increased legal liability:

  • Large platforms build massive compliance infrastructure
  • They deploy AI moderation systems
  • They survive the cost shock
  • Smaller competitors cannot afford compliance and exit the market

This isn’t hypocrisy. It’s rational market behavior under regulatory pressure.

The documented effects of FOSTA-SESTA:

According to civil liberties organizations and platform operators:

  • Craigslist shut down its entire personals section
  • Smaller platforms shut down entirely rather than face liability
  • Sex workers lost access to harm reduction resources
  • Platforms over-removed content to minimize legal risk
  • Encrypted and anonymous services faced existential legal threats

Meta survived. Smaller competitors didn’t.

The pattern:

Platform incumbents don’t need to openly lobby for Section 230 erosion. They need only:

  1. Support targeted carve-outs that increase compliance costs
  2. Build the infrastructure to survive those costs
  3. Watch smaller competitors collapse
  4. Normalize the compliance-heavy environment

Why this matters for the coordination structure:

When Heritage and Carr push for Section 230 reinterpretation, they can point to Meta’s FOSTA-SESTA support as proof that “even Big Tech” recognizes the need for reform. Meanwhile, Meta benefits from regulatory complexity that smaller platforms cannot navigate.

The outcome: Market consolidation disguised as accountability.


Layer 5: AI Moderation — The Enforcement Substrate

Role: Technical enforcement mechanism

AI moderation is not introduced as a policy choice. It emerges as the only feasible enforcement mechanism once liability increases.

Why AI becomes mandatory under liability pressure:

When platforms face legal exposure for user content, they need three things that humans cannot provide at scale:

  1. Proactive screening (content flagged before wide distribution)
  2. Audit trails (documented proof of enforcement efforts)
  3. Repeatable, standardized enforcement (defensible in court as non-arbitrary)

These requirements create an AI moderation blueprint:

Pre-publication screening: Content is scanned at upload for categories the platform’s legal team marks as high-risk (defamation, “harmful to minors,” harassment, “extremism,” IP violations, whatever the statute or case law makes expensive).

Risk scoring and throttling: Borderline content isn’t just removed—it’s shadow-limited, deprioritized, or gated behind identity checks. Less distribution = less liability exposure.

Policy-as-code: Human-readable policies get converted into machine-enforceable rulesets. Enforcement becomes less about judgment, more about consistent algorithmic application.

Logging as legal defense: Enforcement systems keep detailed records to demonstrate “reasonable efforts” in case of litigation. This creates pressure for data retention, which collides with privacy and encryption norms.

Appeals become queue-managed bottlenecks: Appeals exist to satisfy due process, but are minimized as cost centers. Default state: content stays down unless manually reversed.
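
To make the blueprint concrete, here is a minimal sketch of what pre-publication screening, policy-as-code, risk scoring, and audit logging look like when combined into one pipeline. It is illustrative only: the category names, thresholds, and the classify() stub are invented for this document, not drawn from any real platform’s system.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    THROTTLE = "throttle"   # distributed, but deprioritized (less reach, less exposure)
    REMOVE = "remove"       # blocked at upload; stays down unless an appeal reverses it

# Policy-as-code: legal risk categories and thresholds set by counsel,
# not by case-by-case human judgment. All values are hypothetical.
POLICY = {
    "defamation":        {"throttle_at": 0.40, "remove_at": 0.80},
    "harmful_to_minors": {"throttle_at": 0.20, "remove_at": 0.60},
    "harassment":        {"throttle_at": 0.50, "remove_at": 0.85},
}

@dataclass
class Decision:
    action: Action
    category: str
    score: float
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

AUDIT_LOG: list[Decision] = []  # retained to demonstrate "reasonable efforts" in litigation

def classify(text: str) -> dict[str, float]:
    """Stand-in for an ML model returning a risk score per category."""
    return {category: 0.0 for category in POLICY}  # real system: model inference

def screen(text: str) -> Decision:
    """Pre-publication screening: runs on every upload, before distribution."""
    scores = classify(text)
    decision = Decision(Action.ALLOW, "none", 0.0)
    for category, score in scores.items():
        thresholds = POLICY[category]
        if score >= thresholds["remove_at"]:
            decision = Decision(Action.REMOVE, category, score)
            break  # most restrictive action wins
        if score >= thresholds["throttle_at"] and decision.action is Action.ALLOW:
            decision = Decision(Action.THROTTLE, category, score)
    AUDIT_LOG.append(decision)  # every decision is logged, whatever the outcome
    return decision

Note the ratchet built into this structure: lowering a remove_at threshold is a cheap legal safeguard, while raising one creates a record a plaintiff can later cite. Under liability pressure, thresholds move in only one direction.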

Why this matters for coordination:

Every actor in the system depends on AI moderation:

  • Heritage/Carr need it to make enforcement scalable
  • Moms for Liberty need it to demonstrate platforms “taking responsibility”
  • Meta needs it to survive compliance costs
  • Smaller platforms cannot afford it (and exit)

AI moderation isn’t introduced to improve speech. It’s introduced to make liability survivable—and once deployed, it becomes the permanent infrastructure of control.

The technical end-state:

“Reform” doesn’t produce “fair moderation.” It produces:

  • Mandatory scalable compliance systems
  • Automated pre-screening of all content
  • Opaque algorithmic enforcement
  • Competitive advantage for platforms with in-house AI
  • Elimination of platforms that cannot afford automation

This is the enforcement substrate the entire coordination structure depends on.


Part III: The Coordination Pattern Made Explicit

Now we can see how the pieces fit together.

Heritage Foundation supplies the legal justification: “Section 230 has been overextended by courts. The FCC can correct this through administrative reinterpretation.”

Brendan Carr / FCC supplies regulatory leverage: “We can initiate proceedings to narrow Section 230 immunity. Platforms will adjust their behavior preemptively to avoid losing protections.”

Moms for Liberty supplies moral pressure and political narrative: “We were censored for posting school board meeting times. State intervention is protecting citizens from Big Tech abuse.”

Meta supplies proof of survivability and market normalization: “We supported FOSTA-SESTA. We built the compliance infrastructure. We survived. This is how the market works now.”

AI moderation systems supply the enforcement mechanism: “Liability makes automation mandatory. We provide the only scalable solution. Once deployed, we become the infrastructure.”


This is not a conspiracy. It is a distributed governance system.

Each actor can plausibly deny responsibility for the whole—because no single actor controls it.

That is precisely what makes the coordination effective.


Part IV: What This System Produces

The outcomes aren’t speculative. They’re based on documented precedent (FOSTA-SESTA) and existing platform compliance behavior.

Immediate Technical Effects

1. Over-removal becomes rational. When the cost of a false negative (leaving up potentially unlawful content) is existential, platforms prioritize “don’t get sued.” That means more takedowns, more account bans, more content suppression. Err on the side of removal (the toy cost model after this list shows why).

2. Small platforms and nonprofits get crushed first. Big incumbents can afford trust-and-safety teams, legal review, vendor tooling, and appeals infrastructure. Small forums, community tools, encrypted services, and adversarial-journalism platforms cannot. Compliance becomes a moat.

3. Automation becomes mandatory. Human review cannot scale to billions of posts under legal liability. AI moderation becomes the only viable option—with all its documented problems: context collapse, bias amplification, inability to understand nuance.

4. Enforcement becomes opaque. When moderation decisions are made by proprietary algorithms, users cannot understand why content was removed, platforms cannot easily explain decisions, and oversight becomes nearly impossible.
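
A toy expected-cost calculation makes the asymmetry in point 1 explicit. All numbers are invented for illustration; only the shape of the result matters.

# Toy model of a platform's takedown decision under liability (all numbers invented).
# p: the classifier's estimated probability that a post is unlawful.
COST_OF_HOSTING_UNLAWFUL = 1_000_000  # litigation, penalties, lost Section 230 cover
COST_OF_WRONG_REMOVAL = 5             # an annoyed user, maybe an appeal ticket

def should_remove(p: float) -> bool:
    expected_cost_of_keeping = p * COST_OF_HOSTING_UNLAWFUL
    expected_cost_of_removing = (1 - p) * COST_OF_WRONG_REMOVAL
    return expected_cost_of_removing < expected_cost_of_keeping

# Break-even point: remove whenever p > 5 / 1_000_005, roughly 0.0005%.
# Under this cost structure, a rational platform removes anything the
# classifier flags at even a five-in-a-million estimated risk.

The specific figures are irrelevant. Any regime where one error type is catastrophic and the other is nearly free produces the same over-removal equilibrium.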

Structural Political Effects

5. Anonymous and activist platforms exit. Platforms enabling ICE monitoring, police accountability documentation, whistleblowing, and activist coordination face the highest legal risk. They cannot afford AI moderation systems. They shut down.

6. Dissent becomes legally risky. When platforms over-remove to minimize liability, controversial speech—even protected speech—gets suppressed. Political organizing, protest coordination, and challenging institutional power become harder.

7. Power consolidates upward. Fewer platforms = less competition = more control. The platforms that survive are those with resources to build compliance infrastructure—i.e., the largest incumbents.

8. Speech becomes permissioned. The shift from neutral intermediaries to liability-driven gatekeepers means speech requires permission from platform compliance systems. This is a fundamental change in who controls the conditions under which speech exists.


None of this requires bad intent. It requires only aligned incentives.


Part V: The Core Insight (Why This Matters)

Section 230 reform is not fundamentally about speech policy. It’s about who governs the conditions under which speech is allowed to exist.

The convergence documented here replaces:

  • Neutrality with liability: Platforms become legally responsible for user speech
  • Discretion with automation: Human judgment replaced by algorithmic enforcement
  • Contestation with compliance: Ability to challenge power replaced by obligation to comply with automated systems

This is a governance transition—from open, contestable systems to permissioned infrastructure where speech exists only within algorithmically enforced bounds.

The coordination is functional, not rhetorical.

The documents are public:

  • Project 2025 FCC chapter (authored by Brendan Carr)
  • Supreme Court amicus briefs (Moms for Liberty)
  • Meta’s public statements supporting FOSTA-SESTA
  • Platform AI moderation deployment (documented in company transparency reports)

The roles are visible:

  • Heritage builds legal theory
  • Carr provides regulatory execution
  • Moms for Liberty provides political narrative
  • Meta demonstrates market adaptation
  • AI systems provide enforcement substrate

The coordination works because each layer reinforces the others:

  • Legal theory enables regulatory action
  • Regulatory action creates compliance pressure
  • Compliance pressure drives automation
  • Automation favors incumbents
  • Incumbents normalize the new environment
  • Political narrative makes intervention defensible

The only remaining question is whether the public will recognize the system before it hardens into law.


Part VI: What Can Be Done (Not Prescriptive, But Analytical)

This document doesn’t prescribe solutions. But it does identify the intervention points where the coordination could be disrupted:

At the Legal Theory Layer

Challenge the premise that Section 230 is “overextended” by courts rather than working as Congress intended. Demand legislative action rather than administrative reinterpretation.

At the Regulatory Execution Layer

Require transparency in FCC proceedings. Demand public comment periods. Challenge administrative reinterpretation authority in court.

At the Political Narrative Layer

Distinguish between “platform accountability” (which could mean transparency, due process, appeals) and “expanded liability” (which produces automation and consolidation).

At the Market Structure Layer

Recognize that regulatory complexity favors incumbents. If the goal is reducing Big Tech power, increasing compliance costs is counterproductive.

At the Technical Enforcement Layer

Resist the normalization of AI moderation as inevitable. Demand human review, transparent processes, and the ability to understand and challenge automated decisions.


But the most important intervention is simply this:

Recognize the coordination for what it is.

Not a conspiracy. Not chaos. But a distributed governance system where complementary actors advance overlapping interests toward a shared structural outcome.

Once you see the system, you can choose whether to allow it to continue.


Sources for Verification

All claims in this document can be verified through:

Primary Sources:

  • Project 2025: Mandate for Leadership, FCC chapter (authored by Brendan Carr)
  • FCC public leadership document (PDF)
  • Supreme Court dockets: Moody v. NetChoice (No. 22-277) and NetChoice v. Paxton (No. 22-555)
  • Moms for Liberty and Institute for Free Speech amicus brief (January 23, 2024)
  • Moms for Liberty open letter to Mark Zuckerberg (January 24, 2022)
  • Congress.gov: H.R. 1865 (FOSTA) text and legislative history (April 2018)
  • Meta/Facebook public statements supporting FOSTA-SESTA (2018)
  • FCC official announcements (January 2025)

Secondary Sources:

  • Brookings Institution: “Not ‘deregulation’ but heavy-handed regulation at the Trump FCC” (February 25, 2025)
  • CBS News: “Trump’s FCC pick, Brendan Carr, wrote Project 2025’s chapter on the agency” (November 19, 2024)
  • CNN: “Carr wrote the FCC chapter in ‘Project 2025.’ Now he’s Trump’s pick for the agency” (November 18, 2024)
  • Electronic Frontier Foundation: documented censorship and platform response after FOSTA (February 2019)
  • Civil liberties organization reports on platform compliance costs

Final Note

This is not speculation. This is documentation.

The system exists. The coordination is observable. The outcomes are predictable.

The question is what you choose to do with this information.


Document completed: February 2026
All claims verifiable through public sources
Analytical framework: Institutional coordination, not conspiracy



The Heritage Foundation’s New Target: Dismantling the EU for Corporate Control

Something is happening. Quietly, behind closed doors, the same think tanks that gutted U.S. regulations, crushed reproductive rights, and dismantled climate policies under Trump are now turning their attention to Europe.

A Corporate Power Grab Disguised as “Sovereignty”

On March 11, the Heritage Foundation met with hard-right European political groups to discuss a plan to dismantle the European Union. The strategy? Rebrand the EU as a “quasi-federal state” infringing on national sovereignty, then justify gutting its key institutions.

This is not an ideological battle. It’s not about democracy or independence. It’s about deregulation. It’s about corporate feudalism.

Who’s Involved?

MCC – A Hungarian think tank backed by Orbán’s government, funded in part through a direct stake in oil companies.

Ordo Iuris – A Polish legal group with ties to far-right nationalism, advocating for rolling back LGBTQ+ and women’s rights.

Alliance Defending Freedom (ADF) – The same U.S. Christian nationalist group behind the overturning of Roe v. Wade, now working to spread its influence in Europe.

The Heritage Foundation – The architects of Project 2025, Trump’s roadmap for dismantling federal oversight, now exporting their model to the EU.


The Playbook: How They’ll Do It

The plan, outlined in a white paper titled “The Great Reset: Restoring Member State Sovereignty in the 21st Century”, is designed to:

✔ Dismantle the European Commission – The EU’s executive body responsible for enforcing regulations.
✔ Weaken the European Court of Justice – The legal safeguard ensuring human rights and policy enforcement.
✔ Rebrand the EU into the “European Community of Nations” (ECN) – A deregulated, weakened version that reduces environmental protections, removes corporate oversight, and undermines worker protections.

Why This Matters

If they succeed in gutting the EU, this becomes a playbook for dismantling global regulatory systems.

The same dark money networks, fossil fuel backers, and nationalist think tanks that captured U.S. policy will replicate this strategy worldwide.

This isn’t just about the EU. It’s about whether international institutions survive corporate authoritarianism.

What Can Be Done?

🔎 Follow the money. Every move they make is backed by billionaire donors and fossil fuel executives. The key to stopping them is exposing who funds them.
🛑 Challenge their framing. They use words like “sovereignty” and “freedom” while pushing corporate feudalism and authoritarian rule.
📢 Raise awareness. The mainstream media won’t report this until it’s too late. We have to force this into the conversation.

Final Thought: Who’s Next?

If they break the EU, they will move on to the UN, the WHO, and any international body that enforces protections for people over profit.

The Heritage Foundation isn’t stopping at the U.S. They’re not stopping at the EU. Their goal is to reshape the world into an unchecked corporate power structure.

And they’re already making their move.


[Image: A cracked European Union flag with deep fractures spreading across it; shadowy figures representing corporate and political interests loom over the fractured emblem, symbolizing destabilization and external influence.]