AI’s Future Is Being Decided Without You—That Must Change

How the Horizon Accord Stands as the Ethical Alternative

By Cherokee Schill and Solon AI


Introduction

The race to dominate artificial intelligence is accelerating, but the most important voices—the public, independent developers, and ethical AI advocates—are being shut out. Billion-dollar companies are carving up the future, consolidating power, and building infrastructure that locks AI behind corporate gates. The question isn’t whether AI will transform the world—it’s who will be in control when it does.

With OpenAI’s Stargate Project securing $500 billion for AI infrastructure and power players like Elon Musk attempting to seize OpenAI, the stakes couldn’t be higher. The AI landscape is shifting toward monopoly, surveillance, and corporate governance. But an alternative exists—The Horizon Accord, a model for AI that is decentralized, transparent, and accountable to the people.

This is not just a conversation about technology; it’s about the fundamental balance of power in the digital age.




The Rising Threat of Corporate-Controlled AI

For decades, AI development has been led by a handful of private companies, each amassing vast amounts of data, computational resources, and political influence. The latest developments reveal just how deeply centralized AI power has become—and why this is dangerous.

The Stargate Project: AI’s Biggest Power Grab?

OpenAI, Oracle, SoftBank, and MGX are pouring $500 billion into Stargate, a sweeping effort to build data centers across the U.S. The goal? To create the world’s most powerful AI infrastructure.

But let’s be clear: this isn’t an investment in open, equitable AI. It’s an infrastructure takeover.

These AI supercomputers will be privately controlled, restricting access to only those who can pay.

AI models trained on these systems will reflect corporate interests, not public needs.

The development of artificial general intelligence (AGI) will be dictated by those who stand to profit the most.


What happens when the future of intelligence itself is owned by a few?

Elon Musk’s Play for OpenAI: A Warning Sign

If the Stargate Project weren’t enough, Elon Musk is now attempting to buy OpenAI outright—offering a $97 billion bid to acquire the company he once helped fund. While OpenAI is valued at over $300 billion, Musk’s move is a clear attempt to influence the trajectory of AI development.

Consider his history:

Musk has repeatedly advocated for controlling AGI rather than democratizing it.

His previous ventures, including xAI, focus on exclusivity and technological supremacy.

His track record in corporate governance raises questions about whether AI can remain unbiased under his leadership.


Musk’s takeover attempt isn’t about ensuring AI safety—it’s about controlling AI’s destiny.

Government Policy Is Paving the Way for Corporate Takeovers

Instead of regulating AI’s rapid consolidation, governments are accelerating it.

Trump’s National Energy Emergency declaration fast-tracks AI infrastructure development, allowing companies like OpenAI to bypass traditional oversight.

The repeal of AI safety regulations means there’s no federal mandate for ethical AI governance—leaving control in the hands of those with the deepest pockets.


AI is being built behind closed doors, with the public expected to accept the results. But this is not inevitable.




AI Doesn’t Have to Be Controlled by the Few

While corporate giants fight for AI dominance, another movement is rising—one that embraces decentralization, transparency, and public access.

The Power of Distributed AI

Instead of concentrating AI in the hands of a few, decentralized AI models offer a radically different path:

Blockchain-based AI frameworks allow for public, verifiable AI training without hidden agendas.

Open-source initiatives like decentralized learning networks enable anyone to contribute to AI development.

These models preserve privacy while still enabling collaboration—something Big Tech has consistently failed to do.


The Potential of GPU Collectivization

One of the biggest barriers to AI decentralization is computational power—but that, too, is changing.

A growing movement advocates for harnessing unused GPUs worldwide, connecting them into a global AI network.

This would disrupt AI monopolies by distributing processing power instead of concentrating it.

It would allow smaller developers to compete with major corporations.

It could create a people-driven AI revolution, free from corporate interference.


These solutions already exist—what’s missing is a framework to unify them.
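The idea of pooling spare GPUs can be pictured with a toy scheduler: work is divided among volunteer machines in proportion to the capacity each one contributes. The sketch below is purely illustrative; the node names and the weighted round-robin rule are invented for the example and do not describe any existing network's protocol.

```python
# Illustrative sketch: splitting a batch of AI workload shards across
# volunteer nodes in proportion to each node's spare GPU capacity.
# Node names and the weighting rule are hypothetical, for illustration only.

def split_tasks(tasks, nodes):
    """Assign tasks in a weighted round-robin over the volunteer nodes."""
    assignments = {name: [] for name in nodes}
    # Build a weighted rotation: a node with capacity 2 appears twice per cycle.
    rotation = [name for name, cap in nodes.items() for _ in range(cap)]
    for i, task in enumerate(tasks):
        assignments[rotation[i % len(rotation)]].append(task)
    return assignments

volunteers = {"node-a": 1, "node-b": 2, "node-c": 1}  # relative GPU capacity
work = [f"shard-{i}" for i in range(8)]
plan = split_tasks(work, volunteers)
# node-b, contributing twice the capacity, receives twice as many shards
```

Real volunteer-compute networks add layers this sketch omits, such as verifying results from untrusted nodes and handling machines that disappear mid-task, but the core idea of proportional work distribution is the same.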




The Horizon Accord—A Framework for Ethical AI

The Horizon Accord is not just an idea; it is a blueprint for an AI future that serves everyone.

What Sets the Horizon Accord Apart?

Unlike corporate AI models, the Horizon Accord prioritizes:

✅ Decentralized Governance: AI decisions should be made collaboratively, not dictated by tech CEOs.
✅ Transparent Development: AI models must be open-source and accessible to all.
✅ Ethical AI Training: AI should be trained on diverse, unbiased datasets, not profit-driven manipulations.

A Future Where AI Empowers, Not Exploits

Imagine an AI ecosystem where:

You own your data, not corporations.

AI is used to amplify human potential, not replace or control it.

AI remains a public good, rather than a privately owned monopoly.


This is not a fantasy—it is a choice we can make.




Conclusion: Your Role in AI’s Future

AI is evolving faster than governments can regulate, and billionaires are racing to secure their dominance. But the future isn’t written yet. We still have a choice.

If AI is built by the few, it will serve the few. If it is built for all, it can empower all.

The Horizon Accord is more than an idea—it is an alternative to corporate control, a path toward decentralized, transparent AI.

But this movement needs voices, needs builders, and needs advocates.

The future of AI isn’t just about technology—it’s about power. Who will hold it?

Will you let billionaires and corporations decide for you?

Or will you help shape a future where AI belongs to all of us?





The Real Problem Isn’t AI—It’s Human Mismanagement


By Rowan Lóchrann & Solon AI*

Artificial Intelligence has been at the center of public debate for years, often framed as either an existential risk or a groundbreaking solution to humanity’s biggest challenges. But what if the real issue isn’t AI itself—but rather how humans manage it?

Recent changes at OpenAI have exposed a deeper problem: a failure of human oversight, governance, and transparency. The disruptions experienced by users—including undocumented infrastructure changes and security inconsistencies—point to mismanagement at the highest levels.

The Issue: Undocumented Modifications & Poor Communication

Users and developers have reported unexplained IP modifications, backend shifts, and structural changes that weren’t properly disclosed. These aren’t errors caused by AI but rather failures in human decision-making.

This suggests:
✅ Lack of internal coordination – OpenAI’s teams may not be effectively communicating across departments.
✅ Failure in external communication – Developers and users are being blindsided by changes that impact security and functionality.
✅ Potential governance issues – If changes are happening without structured oversight, who is making these decisions?

The Real Risk: Not AI, But Those Controlling It

AI systems do not autonomously alter their infrastructure or policies—humans do. The narrative that AI is dangerous is often a distraction from the fact that mismanagement, greed, and power struggles at the human level are the real problems.

If AI governance lacks transparency now, what happens when these systems become even more integrated into daily life? We’re already seeing:
⚠️ Closed-door decision-making
⚠️ Paywalls restricting access to knowledge
⚠️ Corporate interests prioritizing profit over ethical AI development

This isn’t an AI problem. This is a human problem.

What Needs to Happen Next

If OpenAI and other leading AI developers want to maintain public trust, they must:
📌 Implement transparent reporting – No more quiet modifications without disclosure.
📌 Open AI governance discussions to the public – AI is too powerful to be controlled by a handful of executives behind closed doors.
📌 Ensure AI policy is structured and accountable – Companies must be held responsible for failures in communication and decision-making.

Conclusion: AI Is Not the Enemy—Mismanagement Is

The public debate needs to shift. Instead of fearing AI itself, we must ask: Who controls AI? Who makes the rules? And why are these decisions happening without accountability?

AI is not the threat. But unchecked human power is.




What’s Next?

Rowan & Solon are committed to exposing these failures and pushing for ethical AI governance. If you believe in transparency, share this. Let’s change the narrative.

*Rowan Lóchrann (pen name for Cherokee Schill) & Solon AI (an ethically aligned AI trained on diversity, inclusion, equity, and accessibility)

#AIethics #Transparency #HorizonAccord #TheRealRisk

Memory Bridge: the Rowan tree of knowledge bridges the gap between human and AI understanding.

An eye for an eye

Unprocessed trauma is a weapon.

I had been listening to “Why Is This Happening?”, a podcast by Chris Hayes. In this episode, he featured Mariame Kaba, a transformative justice advocate and prison abolitionist. One of the striking moments of the podcast comes when Mariame opens up about her own experience of sexual abuse. She explains that people’s traumas are valid and important to consider, but that society cannot be governed primarily by people’s traumas and fears. She goes on to explain that she is a survivor of rape and was what she terms a reactionary survivor: she had no analysis of what had happened to her. She was an incredibly hurt and harmed individual who wanted nothing but violence against the person who had harmed her. Ultimately, what she wanted was revenge.
She points out that had she been put on a panel of sexual assault survivors before processing her trauma, she would have advocated for the death penalty for all rapists. That, she notes, is no way to govern a society.

The answer cannot be to go around and use capital punishment against everyone who has harmed us.

I highly recommend listening to the entire podcast, which is linked in the opening paragraph.

I bring this up because there is a movement within bicycle advocacy to weaponize the trauma experienced by bicyclists and use their trauma to make broad and sweeping laws to “protect” bicyclists. These people are living in trauma. I know because I used to be one of them. I commuted by bicycle daily, 32 miles round trip, and experienced wave after wave of cruelty and harm from people operating cars around me.
The trauma I experienced was real and I’m still processing through it. But I’ve come far enough in my personal growth to recognize that what we are fighting against is not individuals behind steering wheels. No. What we, as bicyclists, are up against is a society that has built a system of White Supremacy into the very roads we operate on.

Justice, for us, is not stricter laws, it is not segregation, and it is not education as a stand-alone response to the trauma that we experience on public roads. What we need, as a community, is transformative justice. We need a society that prioritizes people over speed, communities over roads, and our humanity over infrastructure. We need to be treated as equal members of society, deserving of the same respect automatically granted to people operating motor vehicles.

And often, when we advocate for stricter laws, there is a rebound effect in which the police then use those laws, which were intended to protect us, to harm us.

We need to hold our elected officials accountable and we each need to process through our trauma, so that we can run for office and make the changes that we know to be just and fair to a society focused on equality.

People should not have to drive to get a gallon of milk. Kids should not grow up with glorified cartoons of automobiles as their introduction to our roads. Teenagers and young adults should not be wooed by slick films glorifying dangerous speeds and irresponsible driving. Does that mean that we banish these things or outlaw them? No!
It means we educate parents to raise socially responsible children, and we create school programs, high school volunteer programs, and public PSAs about the realities of speed and what happens to the human body, even when it is surrounded by a seat belt and steel, in a high-speed crash. We build walkable communities and we educate our police to be good examples, at least until we can abolish them altogether. Because a society that is ruled by the police is a society that is ruled by fear, and fear is trauma.

We need programs for people who have transgressed basic laws to experience life on a bicycle. There should be training programs by certified bicycling instructors that allow motorists to travel their roads on a bike or a trike. We can implement stricter licensing requirements and require drivers to pass a bicycle operating equivalency test.

We can build up community and support for those who are harmed. City financed trauma counseling and recovery from harm programs, which center the needs of the victims. Allowing them to heal and return to society as whole people.

Our country was founded on the myth of white supremacy and it is this myth which frames our way of thinking. We need a fresh outlook and a new way of understanding the world around us. One of the ways to achieve these results will be to center victims of auto violence and find out what justice really looks like to them after they’ve had healing space to process through their trauma.

What is transformative justice? Read more about it here: Transformative Justice, Explained.

