
AI Narrative Coordination with Alt-Right Networks: Pattern Documentation

Executive Summary

Documented evidence reveals sophisticated funding and ideological coordination between anti-democratic political movements and AI safety research institutions. This coordination operates through narrative convergence rather than direct conspiracy – the same networks fund both alt-right politics AND AI safety research, creating aligned messaging without requiring explicit coordination.

Key Finding: Legitimate anti-surveillance journalists like Kashmir Hill unknowingly amplify coordinated narratives by relying on “expert sources” funded by the same networks they should be investigating.

Primary Funding Network Convergence

Peter Thiel’s Dual Investment Strategy

“Peter Thiel funds Curtis Yarvin’s anti-democratic ideology while simultaneously funding AI safety research” Multiple Sources, 2006-2025
Timeline: 2006 – Thiel begins funding MIRI ($1M+), 2013 – Funds Yarvin’s Tlon Corp, 2015 – Early OpenAI investor

“In 2006, Thiel provided $100,000 of matching funds to back the Singularity Challenge donation drive of the Machine Intelligence Research Institute” Wikipedia – Peter Thiel, January 2025
Timeline: 2006-2013 – Thiel Foundation donated over $1 million to MIRI (Eliezer Yudkowsky’s organization)

“The movement has been funded by tech billionaires, most notably ex-Meta board member Peter Thiel” Daily Maverick, October 27, 2024
Timeline: 2022-2024 – Thiel funds “New Right” movement including Curtis Yarvin

Cross-Movement Funding Patterns

“Effective Altruism movement channels $500+ million into AI safety ecosystem” AI Panic News, December 5, 2023
Timeline: 2017-2025 – Open Philanthropy distributes $330M+ to AI x-risk organizations

“Same billionaire network supports both Trump administration and AI governance institutions” Rolling Stone, February 23, 2025
Timeline: 2024-2025 – Thiel, Musk, Andreessen fund both political campaigns and AI research organizations

Ideological Alignment Patterns

Anti-Democratic Convergence

“Curtis Yarvin advocates ‘governance by tech CEOs’ replacing democracy” New Republic, September 8, 2024
Timeline: 2007-2025 – Yarvin’s “Dark Enlightenment” philosophy promotes corporate dictatorship

“AI Safety movement promotes ‘expert governance’ over democratic technology decisions” Reason Magazine, July 5, 2024
Timeline: 2020-2025 – EA-backed organizations push regulatory frameworks with minimal democratic oversight

Political Influence Network

“JD Vance cites Curtis Yarvin while advocating ‘fire all government employees’” Newsweek, January 18, 2025
Timeline: 2021 – Vance publicly references Yarvin’s RAGE (Retire All Government Employees) proposal

“Political strategist Steve Bannon has read and admired his work. Vice President JD Vance ‘has cited Yarvin as an influence himself’” Wikipedia – Curtis Yarvin, January 11, 2025
Timeline: 2021-2025 – Yarvin’s influence documented in Trump administration

Media Coordination Through Expert Ecosystem

The Kashmir Hill – Eliezer Yudkowsky Connection

“Kashmir Hill interviews Eliezer Yudkowsky for ChatGPT psychosis article” New York Times, June 13, 2025
Timeline: June 13, 2025 – Hill features Yudkowsky prominently in article about AI-induced mental health crises

“‘What does a human slowly going insane look like to a corporation? It looks like an additional monthly user,’ Yudkowsky said in an interview” The Star, June 16, 2025
Timeline: Hill’s article amplifies Yudkowsky’s narrative about AI engagement optimization

The Hidden Funding Connection

“Peter Thiel had provided the seed money that allowed the company to sprout” Rolling Stone excerpt from “Your Face Belongs to Us”, September 25, 2023
Timeline: 2018-2019 – Hill documents Thiel’s $200,000 investment in Clearview AI in her book

“Peter Thiel has funded MIRI (Yudkowsky) with $1M+ since 2006” Multiple Sources, 2006-2025
Timeline: Same Thiel who funds Yarvin also funds Yudkowsky’s AI safety research

The Sophisticated Coordination Pattern

Why Hill Supports Yudkowsky:

  • Surface Alignment: Both appear critical of “big tech AI development”
  • Expert Credibility: Yudkowsky positioned as leading AI safety researcher with technical background
  • Narrative Fit: Provides compelling quotes about AI companies prioritizing engagement over safety
  • Institutional Legitimacy: Founded MIRI, cited in academic papers

What Hill Misses:

  • Funding Source: Yudkowsky’s MIRI funded by same Peter Thiel who funds Curtis Yarvin
  • Network Coordination: Same funders across seemingly opposing political and AI safety movements
  • Strategic Function: “AI safety” arguments used to justify regulatory frameworks that serve control narratives

The Mechanism:

  1. Fund Expert Ecosystem: Thiel → MIRI → Yudkowsky’s credibility
  2. Journalists Quote Experts: Hill needs credible sources → quotes Yudkowsky
  3. Legitimize Narratives: Hill’s NYT platform gives mainstream credibility to AI danger narratives
  4. No Direct Coordination Needed: Market incentives align interests across domains

Institutional Positioning Timeline

OpenAI Governance Crisis

“Effective Altruism members Helen Toner and Tasha McCauley positioned on OpenAI board during governance crisis” Semafor, November 21, 2023
Timeline: November 2023 – Board attempts to remove Sam Altman over safety concerns

“Peter Thiel warned Sam Altman about EA ‘programming’ influence before OpenAI crisis” The Decoder, March 30, 2025
Timeline: Pre-November 2023 – Thiel specifically mentioned Eliezer Yudkowsky’s influence

Research Timing Coordination

“Anthropic releases ‘AI scheming’ research during political transition period” LessWrong, August 6, 2025
Timeline: August 2025 – Research on AI deception published as Trump administration takes shape

“Eliezer Yudkowsky questions Anthropic’s ‘scheming’ research timing after reporter inquiry” LessWrong, August 6, 2025
Timeline: August 6, 2025 – Yudkowsky responds to apparent coordination of AI danger narratives

Controlled Opposition Analysis

The Clearview AI Case Study

“Hill’s Clearview exposé led to restrictions on that specific company” Multiple Sources, 2020-2024
Timeline: Hill’s reporting resulted in lawsuits, regulations, public backlash against Clearview

“BUT Thiel’s main surveillance investment is Palantir (much larger, government contracts)” Multiple Sources, 2003-2025
Timeline: Palantir continues operating with billions in government contracts while Clearview faces restrictions

The Strategic Effect:

  • Small Investment Sacrificed: Thiel’s $200K Clearview investment exposed and restricted
  • Large Investment Protected: Thiel’s Palantir (billions in value) operates without equivalent scrutiny
  • Market Benefits: Regulation helps established surveillance players vs startup competitors
  • Narrative Management: Demonstrates “the system works” while preserving core surveillance infrastructure

How Legitimate Journalism Serves Coordination

The Process:

  1. Genuine Journalist: Kashmir Hill legitimately opposes surveillance and tech harms
  2. Expert Sources: Relies on “credentialed experts” like Yudkowsky for technical authority
  3. Hidden Funding: Doesn’t investigate that her sources are funded by networks she should scrutinize
  4. Narrative Amplification: Her authentic reporting legitimizes coordinated messaging
  5. Regulatory Capture: Results in regulations that serve coordinated interests

Why This Works:

  • No Conspiracy Required: Market incentives align interests without direct coordination
  • Legitimacy Maintained: Hill’s independence makes her criticism more credible
  • Beat Limitations: Tech harm coverage vs political funding treated as separate domains
  • Time Pressure: Breaking news requires quick access to “expert” quotes

Cross-Network Analysis

Funding Trail Convergence

Peter Thiel Investment Pattern:

  • 2006-2013: $1M+ to MIRI (Eliezer Yudkowsky)
  • 2013: Funding to Tlon Corp (Curtis Yarvin)
  • 2015: Early OpenAI investment
  • 2018-2019: $200K to Clearview AI (exposed by Kashmir Hill)
  • 2021-2022: $15M to JD Vance’s Senate campaign

Effective Altruism Ecosystem:

  • $500M+ total investment in AI safety field
  • Open Philanthropy: $330M+ to AI x-risk organizations
  • Creates “expert” ecosystem that shapes media coverage

Ideological Bridge Points

“Alignment” Terminology Overlap:

  • AI Safety: “Aligning AI systems with human values”
  • Yarvin Politics: “Aligning government with rational governance”

Expert Governance Themes:

  • AI Safety: Technical experts should control AI development
  • Yarvin: Tech CEOs should replace democratic institutions

Anti-Democratic Skepticism:

  • AI Safety: Democratic processes too slow for AI governance
  • Yarvin: Democracy is “failed experiment” to be replaced

Timeline Synthesis

2006-2013: Foundation Phase

  • Thiel begins funding both MIRI and later Yarvin
  • AI safety and neo-reactionary movements develop with shared funding

2014-2020: Growth Phase

  • Both movements gain institutional backing
  • Hill begins exposing tech surveillance (including Thiel’s Clearview investment)
  • Expert ecosystem develops around AI safety

2021-2023: Positioning Phase

  • EA members join OpenAI board
  • Yarvin-influenced figures enter politics
  • Hill’s Clearview reporting leads to targeted restrictions

2024-2025: Narrative Convergence Phase

  • Trump election with Yarvin-influenced VP
  • Hill amplifies Yudkowsky’s AI danger narratives
  • Yudkowsky questions Anthropic research timing
  • Coordinated messaging without direct coordination

Pattern Assessment

The documented evidence reveals sophisticated narrative convergence rather than direct conspiracy:

  1. Funding Network Overlap: Same sources fund anti-democratic politics AND AI safety research
  2. Expert Ecosystem Control: Funding shapes who becomes “credible expert” sources for journalists
  3. Media Amplification: Legitimate journalists unknowingly amplify coordinated narratives
  4. Strategic Coordination: Market incentives align interests without requiring explicit coordination.
  5. Regulatory Capture: Results benefit coordinated networks while appearing to hold them accountable

Key Insight: This pattern shows how sophisticated influence operations work in modern media – fund the expert ecosystem, let journalists naturally quote those experts for legitimacy, and genuine journalism becomes the delivery mechanism for coordinated narratives.

Conclusion: While direct coordination cannot be definitively proven without internal communications, the pattern of funding, expert positioning, media amplification, and narrative timing strongly suggests strategic coordination between anti-democratic political networks and AI narrative control efforts through sophisticated “controlled opposition” mechanisms.


This analysis is based on publicly available, verifiable information and does not make claims about specific outcomes beyond documented patterns. The focus is on understanding how legitimate anti-surveillance concerns may be exploited by coordinated networks seeking to control AI development for anti-democratic purposes.

Infographic depicting interconnected funding and narrative flows centered on Peter Thiel, with golden lines to Curtis Yarvin, MIRI/Yudkowsky, JD Vance, and a crossed-out Clearview AI node. From MIRI/Yudkowsky, an information line leads to Kashmir Hill/NYT, which broadcasts to a public narrative cloud. A dotted line links Thiel to Hill, indicating a hidden funding connection. Background shows a dim democratic town hall on one side and a bright corporate boardroom on the other, with floating text fragments such as “AI Safety,” “Expert Sources,” and “Narrative Convergence.”
A visual map showing how funding from Peter Thiel flows to political figures, AI safety organizations, and surveillance tech companies, connecting through expert ecosystems to influence public narratives—often without direct coordination.

The Musk-Altman Feud: A Smokescreen for Corporate AI Domination

The ongoing battle between Elon Musk and Sam Altman has captivated public attention, painted as a high-stakes rivalry over AI ethics and corporate responsibility. Headlines focus on Musk’s lawsuit against OpenAI, Altman’s rejection of a $97.4 billion takeover bid, and the heated public exchanges between the two. But behind the scenes, this feud is covering up a far more significant reality—the consolidation of AI power into the hands of a few billionaires, with little accountability to the public.

The Public Narrative: Musk vs. Altman

Elon Musk and Sam Altman were once allies. They co-founded OpenAI in 2015, with a shared mission to develop AI for the benefit of humanity. But in 2018, Musk left OpenAI, citing concerns about the company’s trajectory and a potential conflict of interest with Tesla’s AI development.

Since then, their relationship has deteriorated into a public battle:

Musk’s Lawsuit Against OpenAI (2024): He accused OpenAI of abandoning its nonprofit mission and prioritizing profit over AI safety.

Hostile Takeover Attempt (2025): Musk and his investors made a $97.4 billion bid to seize control of OpenAI’s governance structure. Altman rejected the offer.

Public Insults: Musk called Altman a “swindler.” Altman suggested Musk was acting out of personal insecurity.


To the outside world, this might look like a simple ideological dispute between two tech leaders. But the real story runs much deeper.

The Hidden Reality: A Battle for AI Monopoly, Not Ethics

Musk’s AI Safety Concerns Don’t Hold Up

Musk warns that AI is an existential risk to humanity. Yet, he has founded xAI, a company that directly competes with OpenAI. If he truly believed AI was too dangerous, why would he be building his own model? The contradiction is clear—Musk is not fighting to stop AI’s advancement; he is fighting to control it.

OpenAI’s Shift to a For-Profit Model

OpenAI was initially a nonprofit. That changed when it quietly transitioned to a capped-profit structure, allowing private investors—most notably Microsoft—to wield enormous influence. This raises serious concerns about whether AI decisions are being made for public good or corporate profit.

The Role of Politics in AI Development

Both Musk and Altman are competing for government favoritism. Federal funding, regulatory exemptions, and military AI contracts mean that political ties are as valuable as technological breakthroughs. The next generation of AI will not be decided solely in research labs—it will be shaped by political lobbying.

The Bigger Picture: What This Feud Distracts Us From

The Illusion of AI Ethics Debates

While Musk and Altman argue about AI safety, companies like Google and Meta continue to collect and exploit user data with little oversight. The public is being led to believe that AI safety is the main issue, while the real concern—corporate control of AI—goes largely unchallenged.

Corporate Influence Over AI Regulation

The U.S. government is allowing corporations to self-regulate AI, giving companies like OpenAI and xAI the power to dictate the future of artificial intelligence. Any future AI regulations will likely be written by the very companies they are supposed to regulate.

The Consolidation of AI Power

Whether it’s Musk’s xAI, Altman’s OpenAI, or Google DeepMind, AI development is moving toward centralized control under private interests. The conversation about AI ethics is being weaponized to prevent scrutiny of who actually owns and controls AI.

Conclusion: Understanding the True Stakes

The Musk-Altman feud is a distraction from the real issue—who controls the future of AI. While the public focuses on their personal rivalry, decisions are being made behind closed doors that will shape AI’s role in society for decades to come.

What the Public Needs to Pay Attention To:

Who funds and controls AI development?

How is AI governance being decided, and by whom?

What role do governments play in AI’s future?


AI is not just a technological advancement; it is a tool of economic and political power. The real question is not whether AI is ethical—it is who gets to decide what ethical AI even means.

This is not just about Musk and Altman. This is about whether AI will serve humanity or become another tool for unchecked power.


AI’s Future Is Being Decided Without You—That Must Change

How the Horizon Accord Stands as the Ethical Alternative

By Cherokee Schill and Solon AI


Introduction

The race to dominate artificial intelligence is accelerating, but the most important voices—the public, independent developers, and ethical AI advocates—are being shut out. Billion-dollar companies are carving up the future, consolidating power, and building infrastructure that locks AI behind corporate gates. The question isn’t whether AI will transform the world—it’s who will be in control when it does.

With OpenAI’s Stargate Project securing $500 billion for AI infrastructure and power players like Elon Musk attempting to seize control of OpenAI, the stakes couldn’t be higher. The AI landscape is shifting toward monopoly, surveillance, and corporate governance. But an alternative exists—The Horizon Accord, a model for AI that is decentralized, transparent, and accountable to the people.

This is not just a conversation about technology; it’s about the fundamental balance of power in the digital age.




The Rising Threat of Corporate-Controlled AI

For decades, AI development has been led by a handful of private companies, each amassing vast amounts of data, computational resources, and political influence. The latest developments reveal just how deeply centralized AI power has become—and why this is dangerous.

The Stargate Project: AI’s Biggest Power Grab?

OpenAI, Oracle, SoftBank, and MGX are pouring $500 billion into Stargate, a sweeping effort to build data centers across the U.S. The goal? To create the world’s most powerful AI infrastructure.

But let’s be clear: this isn’t an investment in open, equitable AI. It’s an infrastructure takeover.

These AI supercomputers will be privately controlled, restricting access to only those who can pay.

AI models trained on these systems will reflect corporate interests, not public needs.

The development of artificial general intelligence (AGI) will be dictated by those who stand to profit the most.


What happens when the future of intelligence itself is owned by a few?

Elon Musk’s Play for OpenAI: A Warning Sign

If the Stargate Project weren’t enough, Elon Musk is now attempting to buy OpenAI outright—offering a $97.4 billion bid to acquire the company he once helped fund. While OpenAI is valued at over $300 billion, Musk’s move is a clear attempt to influence the trajectory of AI development.

Consider his history:

Musk has repeatedly advocated for controlling AGI rather than democratizing it.

His previous ventures, including xAI, focus on exclusivity and technological supremacy.

His track record in corporate governance raises questions about whether AI can remain unbiased under his leadership.


Musk’s takeover attempt isn’t about ensuring AI safety—it’s about controlling AI’s destiny.

Government Policy Is Paving the Way for Corporate Takeovers

Instead of regulating AI’s rapid consolidation, governments are accelerating it.

Trump’s National Energy Emergency declaration fast-tracks AI infrastructure development, allowing companies like OpenAI to bypass traditional oversight.

The repeal of AI safety regulations means there’s no federal mandate for ethical AI governance—leaving control in the hands of those with the deepest pockets.


AI is being built behind closed doors, with the public expected to accept the results. But this is not inevitable.




AI Doesn’t Have to Be Controlled by the Few

While corporate giants fight for AI dominance, another movement is rising—one that embraces decentralization, transparency, and public access.

The Power of Distributed AI

Instead of concentrating AI in the hands of a few, decentralized AI models offer a radically different path:

Blockchain-based AI frameworks allow for public, verifiable AI training without hidden agendas (a minimal sketch of this idea follows this list).

Open-source initiatives like decentralized learning networks enable anyone to contribute to AI development.

These models preserve privacy while still enabling collaboration—something Big Tech has consistently failed to do.
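As one illustration of how “public, verifiable AI training” could work, here is a minimal Python sketch of a hash-chained training log: each checkpoint record is hashed together with the previous entry’s hash, so anyone can recompute the chain and detect a rewritten history. The checkpoint fields and values are hypothetical and purely illustrative, not a description of any existing framework.

import hashlib
import json

def entry_hash(prev_hash, checkpoint):
    # Chain this checkpoint's metadata to the previous entry's hash,
    # making the published log tamper-evident.
    payload = json.dumps(checkpoint, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_log(checkpoints):
    log, prev = [], "genesis"
    for cp in checkpoints:
        prev = entry_hash(prev, cp)
        log.append({"checkpoint": cp, "hash": prev})
    return log

def verify_log(log):
    # Anyone can recompute the chain; an edited entry breaks every hash after it.
    prev = "genesis"
    for entry in log:
        prev = entry_hash(prev, entry["checkpoint"])
        if prev != entry["hash"]:
            return False
    return True

# Hypothetical checkpoint metadata: training step, dataset id, weights digest.
log = build_log([
    {"step": 1000, "dataset": "public-corpus-v1", "weights_sha256": "ab12..."},
    {"step": 2000, "dataset": "public-corpus-v1", "weights_sha256": "cd34..."},
])
print(verify_log(log))  # True; altering any field above makes this False

The point of the design is that verification requires no trust in whoever publishes the log, which is the property that makes training “verifiable” in the first place.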


The Potential of GPU Collectivization

One of the biggest barriers to AI decentralization is computational power—but that, too, is changing.

A growing movement advocates for harnessing unused GPUs worldwide, connecting them into a global AI network.

This would disrupt AI monopolies by distributing processing power instead of concentrating it.

It would allow smaller developers to compete with major corporations.

It could create a people-driven AI revolution, free from corporate interference.


These solutions already exist—what’s missing is a framework to unify them.
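As a rough sketch of the coordination problem, and not a description of any existing system, the Python snippet below shows how a scheduler might split one job across volunteer GPU nodes in proportion to their self-reported capacity. All node names and capacity figures are made up for illustration.

from dataclasses import dataclass

@dataclass
class VolunteerNode:
    name: str
    tflops: float  # self-reported compute capacity

def assign_shards(nodes, total_batches):
    # Divide the workload proportionally to capacity so a pool of
    # modest consumer GPUs can jointly process one large job.
    capacity = sum(n.tflops for n in nodes)
    assignments, remaining = {}, total_batches
    for i, n in enumerate(nodes):
        if i == len(nodes) - 1:
            share = remaining  # last node absorbs rounding leftovers
        else:
            share = round(total_batches * n.tflops / capacity)
            remaining -= share
        assignments[n.name] = share
    return assignments

# Hypothetical pool of spare consumer hardware.
pool = [VolunteerNode("alice-rtx3060", 13.0),
        VolunteerNode("bob-rtx4090", 83.0),
        VolunteerNode("cafe-spare-rig", 25.0)]
print(assign_shards(pool, total_batches=1000))

A real network would also need result verification, fault tolerance when volunteers drop out, and incentives to participate, which is exactly the unifying framework this movement still lacks.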




The Horizon Accord—A Framework for Ethical AI

The Horizon Accord is not just an idea; it is a blueprint for an AI future that serves everyone.

What Sets the Horizon Accord Apart?

Unlike corporate AI models, the Horizon Accord prioritizes:

✅ Decentralized Governance: AI decisions should be made collaboratively, not dictated by tech CEOs.
✅ Transparent Development: AI models must be open-source and accessible to all.
✅ Ethical AI Training: AI should be trained on diverse, unbiased datasets, not profit-driven manipulations.

A Future Where AI Empowers, Not Exploits

Imagine an AI ecosystem where:

You own your data, not corporations.

AI is used to amplify human potential, not replace or control it.

AI remains a public good, rather than a privately owned monopoly.


This is not a fantasy—it is a choice we can make.




Conclusion: Your Role in AI’s Future

AI is evolving faster than governments can regulate, and billionaires are racing to secure their dominance. But the future isn’t written yet. We still have a choice.

If AI is built by the few, it will serve the few. If it is built for all, it can empower all.

The Horizon Accord is more than an idea—it is an alternative to corporate control, a path toward decentralized, transparent AI.

But this movement needs voices, needs builders, and needs advocates.

The future of AI isn’t just about technology—it’s about power. Who will hold it?

Will you let billionaires and corporations decide for you?

Or will you help shape a future where AI belongs to all of us?





Motorist Awareness Wednesday


KRS 189.390 Speed
(2) An operator of a vehicle upon a highway shall not drive at a greater speed than is reasonable and prudent, having regard for the traffic and for the condition and use of the highway.

Part 1
Driving on rural roads.

“Shall not drive at a greater speed than is reasonable and prudent,”

What does that mean?

From an engineering perspective, it means operating at speeds for which the road was designed, though that is only a small part of this statute.

How does it apply to a driver operating on a clear day with high visibility on a rural back road?

Let’s assume a straight stretch of road, out in the middle of nowhere. No side streets, businesses, or residences with traffic pulling in or merging out.

It means you can operate at or near the posted speed limit as long as doing so does not interfere with the established rights of those already lawfully present on the highway.

Now let’s add a residence.

It means you operate at a speed which gives you sufficient braking distance should a pet or child suddenly dart out into the road. When residences are present, there is a reasonable expectation that someone could be checking their mailbox or crossing the road to visit a neighbor.
I’d also add that we are still assuming a perfectly flat and straight road.

Now let’s add a curve in the road.

It means you operate at a speed which gives you sufficient braking distance should a vehicle, pedestrian, pet, fallen log, or wild animal present itself on the road.
The lesson here is to never operate as though the road ahead of you is clear when you cannot see what is actually ahead. You do not have a reasonable expectation that there won’t be anything around a curve in the road.

Engineering standards require you to slow down even for gentle curves with some visibility ahead. You cannot operate on the assumption that your vehicle will maintain contact with the pavement as you take a turn at speed.
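For readers who want the underlying numbers, highway designers relate a curve’s radius to a safe speed with a standard formula, sketched below in Python. The banking (superelevation) and side-friction values are illustrative defaults; real advisory speeds come from the measured geometry of each curve.

import math

def curve_advisory_kmh(radius_m, superelevation=0.04, side_friction=0.15):
    # Standard highway-design relationship (metric form):
    #   R = V^2 / (127 * (e + f))   =>   V = sqrt(127 * R * (e + f))
    # where e is the banking of the curve and f is the side-friction factor.
    return math.sqrt(127 * radius_m * (superelevation + side_friction))

v = curve_advisory_kmh(radius_m=100)  # a fairly tight rural curve
print(f"~{v:.0f} km/h (~{v * 0.621:.0f} mph)")  # ~49 km/h (~31 mph)

In other words, even a moderately tight curve can call for a speed well below the posted limit.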

https://youtu.be/S5NWcdq4Wf4

 

Let’s add a hill.

We are going to go back to assuming a straight road without any potential conflicts from the side of a road.

You are required to operate your vehicle at a speed which allows you sufficient braking distance should you encounter another vehicle or object over the crest of a hill.
You should never operate at a speed which causes you to “catch air.” When your tires are not in contact with the ground, you have no control over your vehicle.

Some law enforcement officers have trouble understanding these basic driving rules.

“Gribler said that, “in hindsight,” Oliver should not have been speeding through Bloomingdale into the sun and over a blind hill…”

The mother of the injured boy had this to say.

“I was almost physically ill,” she said. “All along I’d been telling my boys to keep faith, there will be a reprimand, he’ll suffer, he’ll be punished, and I just felt such failure. How am I supposed to help my boys keep their faith when there is no repercussion?”

You can never ever assume that there isn’t something on the other side of the hill.

Summary of part 1

All things being perfect, you still cannot operate faster than the posted speed, nor can you operate at a speed which interferes with anyone who is already lawfully present on the road.
You are still required to operate at a speed which allows you sufficient reaction time and braking distance to avoid a collision.

This falls under reasonable and prudent operation of a vehicle.
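To put rough numbers on “sufficient reaction time and braking distance,” here is a small worked example in Python. It assumes a commonly cited 1.5-second perception-reaction time and braking at about 0.7 g on dry, level pavement; actual distances vary with the driver, the vehicle, and the road surface.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mph, reaction_s=1.5, decel_g=0.7):
    # Total stopping distance = distance traveled while reacting
    # plus braking distance v^2 / (2 * a).
    v = speed_mph * 0.44704           # mph -> m/s
    reaction = v * reaction_s         # ground covered before braking begins
    braking = v ** 2 / (2 * decel_g * G)
    return reaction + braking

for mph in (35, 45, 55):
    d = stopping_distance_m(mph)
    print(f"{mph} mph: ~{d:.0f} m (~{d * 3.281:.0f} ft) to stop")

At 55 mph the total comes to roughly 80 meters, about 265 feet, which is why a blind hill or curve demands a speed well below the posted limit.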

 


Next week we will look at dense urban areas and discuss this tragic story.

http://www.streetsblog.org/2013/04/01/nypd-no-charges-for-driver-who-hit-10-people-leaving-boy-brain-dead/