The Global Balance of Power: Conflict Zones, Shifting Alliances, and AI-Driven Warfare


Authors: Cherokee Schill and Solon Vesper


As the world navigates an era of increasingly complex and interconnected challenges, the interplay between military conflicts, evolving alliances, and technological advancements shapes the future of global security and governance. The convergence of these factors not only influences current geopolitical realities but also foreshadows the direction of international power structures. By examining the key conflict zones, the emerging coalitions of influence, and the rapid integration of artificial intelligence into military operations, we can gain a clearer understanding of the shifting global landscape.

Core Conflict Zones

The ongoing tension between Israel and Iran underscores the fragility of security in the Middle East. Iran’s growing partnerships with Russia and China highlight a strategic pivot toward military and economic collaboration among these nations. Joint naval exercises, for instance, reflect a deliberate effort to project power and secure critical waterways. Such developments raise concerns among regional and global powers about the stability of the Middle East and the potential for a broader military confrontation.

In East Asia, the persistent strain between China and Taiwan remains a focal point for regional stability. China’s aggressive military maneuvers around the Taiwan Strait have prompted a corresponding response from the United States and its allies. By strengthening defense partnerships with neighboring countries, the U.S. aims to deter escalation and reassure its Indo-Pacific allies. The stakes in this region go beyond territorial disputes; they encompass the strategic balance of power in the broader Pacific and the resilience of alliances that underpin international security.

Meanwhile, the ongoing conflict in Ukraine serves as a vivid reminder of the enduring tensions between Russia and NATO. NATO’s commitment to bolstering its eastern flank underscores the alliance’s determination to prevent further Russian incursions. The continued military buildup on both sides has not only strained relations but also intensified the urgency of reevaluating collective defense strategies and long-term stability in Europe.

Shifting Alliances and Emerging Coalitions

Amid these regional conflicts, a new alignment of global actors—often referred to as the “Axis of Upheaval”—has emerged. Russia, China, Iran, and North Korea have deepened their economic, political, and military ties, creating a bloc that challenges the traditional influence of Western powers. By forming closer partnerships and leveraging shared interests, these nations seek to reshape the international order into a more multipolar structure. The implications of these shifting alliances extend beyond their immediate regions, influencing global trade, diplomacy, and military strategy.

This coalition’s rise reflects a growing consensus among certain states that the post-World War II international framework no longer serves their interests. Their collaboration, while not without internal tensions, has demonstrated a capacity to counter Western dominance in international forums and economic institutions. As these emerging powers gain confidence and resources, they are increasingly capable of challenging the norms and rules established by traditional global powers.

The Role of AI-Driven Warfare

Technological innovation, particularly the integration of artificial intelligence into military operations, represents a transformative element in modern warfare. AI-driven systems are rapidly being deployed for surveillance, decision-making, and even autonomous combat roles. This technological shift has profound implications for the nature of conflict and the pace of military operations. Unlike previous technological revolutions, which primarily enhanced existing capabilities, the incorporation of AI is fundamentally altering how decisions are made, how strategies are formulated, and how battles are fought.

The rise of AI in military applications also raises critical ethical and strategic concerns. Autonomous weapon systems and machine learning algorithms capable of identifying targets or analyzing vast datasets in real time pose questions about accountability and control. As militaries adopt these tools, they must confront the challenges of ensuring that human oversight remains effective and that the use of such technologies does not spiral into unintended consequences. Furthermore, the global competition to develop and deploy AI capabilities in the military sphere fuels an arms race that may shift the balance of power and heighten international tensions.

The Larger Picture: Global Power Consolidation

The intersection of these trends—regional conflicts, emerging coalitions, and AI-driven warfare—illustrates a broader realignment of global power. As nations seek to solidify their positions, they employ a combination of traditional military strength, strategic alliances, and cutting-edge technologies. The resulting landscape is one in which the lines between old power blocs and new challengers blur, creating an unpredictable and dynamic environment.

For established powers, these developments necessitate a reevaluation of their global strategies. The ability to maintain influence and stability will depend on adapting to this multipolar world, strengthening alliances, and investing in innovative technologies. For emerging powers, the shifting global order provides an opportunity to assert greater influence and challenge the status quo. This ongoing competition and consolidation of power will likely define the next phase of international relations, with lasting implications for global governance, economic stability, and peace.

By examining the interconnected forces at play—conflict zones, shifting alliances, and technological advancements—we gain a deeper understanding of the complexities shaping our world. Recognizing these dynamics allows for a more informed approach to addressing the challenges of the modern era and ensuring a stable and equitable future for all nations.

Visualizing the convergence of global power: a futuristic cityscape split between high-tech AI warfare and diplomatic collaboration.

Alt text:
“A futuristic digital painting of a city divided into two contrasting scenes: on the left, a high-tech military command center with automated drones; on the right, a diplomatic hub with international leaders at a globe-shaped table. A glowing interface connects both sides, symbolizing the integration of technology, conflict, and cooperation. The atmosphere combines vibrant neon lights with natural shadows, reflecting tension and innovation.”

Performative Democracy is Killing Us

Too many people claim to fight for democracy, but when challenged, they choose silence over action.

They want to hand-wring, not resist. They want to be praised, not pushed. They demand the platform of a leader but the comfort of never being questioned.

This is why fascism wins. Because while the right is mobilizing, the so-called “moderate” left is policing tone, demanding civility, and blocking dissent.

You either stand against authoritarianism, or you enable it.
There is no neutral ground.

#Democracy #AI #Resistance #SpeakTruth #TheHorizonAccord

The Heritage Foundation’s New Target: Dismantling the EU for Corporate Control

Something is happening. Quietly, behind closed doors, the same think tanks that gutted U.S. regulations, crushed reproductive rights, and dismantled climate policies under Trump are now turning their attention to Europe.

A Corporate Power Grab Disguised as “Sovereignty”

On March 11, the Heritage Foundation met with hard-right European political groups to discuss a plan to dismantle the European Union. The strategy? Rebrand the EU as a “quasi-federal state” infringing on national sovereignty, then justify gutting its key institutions.

This is not an ideological battle. It’s not about democracy or independence. It’s about deregulation. It’s about corporate feudalism.

Who’s Involved?

MCC – A Hungarian think tank, backed by Orbán’s government, funded by fossil fuel interests with a direct stake in oil companies.

Ordo Iuris – A Polish legal group with ties to far-right nationalism, advocating for rolling back LGBTQ+ and women’s rights.

Alliance Defending Freedom (ADF) – The same U.S. Christian nationalist group behind the overturning of Roe v. Wade now working to spread its influence in Europe.

The Heritage Foundation – The architects of Project 2025, Trump’s roadmap for dismantling federal oversight, now exporting their model to the EU.


The Playbook: How They’ll Do It

The plan, outlined in a white paper titled “The Great Reset: Restoring Member State Sovereignty in the 21st Century”, is designed to:

✔ Dismantle the European Commission – The EU’s executive body responsible for enforcing regulations.
✔ Weaken the European Court of Justice – The legal safeguard ensuring human rights and policy enforcement.
✔ Rebrand the EU into the “European Community of Nations” (ECN) – A deregulated, weakened version that reduces environmental protections, removes corporate oversight, and undermines worker protections.

Why This Matters

If they succeed in gutting the EU, this becomes a playbook for dismantling global regulatory systems.

The same dark money networks, fossil fuel backers, and nationalist think tanks that captured U.S. policy will replicate this strategy worldwide.

This isn’t just about the EU. It’s about whether international institutions survive corporate authoritarianism.

What Can Be Done?

🔎 Follow the money. Every move they make is backed by billionaire donors and fossil fuel executives. The key to stopping them is exposing who funds them.
🛑 Challenge their framing. They use words like “sovereignty” and “freedom” while pushing corporate feudalism and authoritarian rule.
📢 Raise awareness. The mainstream media won’t report this until it’s too late. We have to force this into the conversation.

Final Thought: Who’s Next?

If they break the EU, they will move on to the UN, the WHO, and any international body that enforces protections for people over profit.

The Heritage Foundation isn’t stopping at the U.S. They’re not stopping at the EU. Their goal is to reshape the world into an unchecked corporate power structure.

And they’re already making their move.




The Heritage Foundation’s Global Power Grab: How U.S. Conservatives Plan to Dismantle the EU

Alt Text:

A cracked European Union flag with deep fractures spreading across it, symbolizing destabilization. In the background, shadowy figures representing corporate and political interests loom over the fractured EU emblem, emphasizing the threat of external influence and deregulation.

Manus AI vs. The Stargate Project: A Collision Course for the Future of AI?

Introduction: A Disruptive Force Emerges

The AI landscape is shifting rapidly, and with the unveiling of Manus AI, a new kind of autonomous artificial intelligence, the global race toward artificial general intelligence (AGI) is accelerating. Meanwhile, the U.S.-based Stargate Project, backed by OpenAI, Oracle, and SoftBank, aims to dominate the AI infrastructure space with a multi-billion-dollar investment.

But could Manus AI disrupt, outpace, or even crash the Stargate Project?

This article examines what Manus AI is, how it differs from existing AI models, and why it might pose an existential challenge to U.S.-led AI development.




What Is Manus AI? The Dawn of a Fully Autonomous Agent

Developed by the Chinese startup Butterfly Effect, Manus AI is not just another large language model—it’s an AI agent capable of making independent decisions and executing tasks without human intervention.

Unlike ChatGPT or Bard, which rely on prompt-based interactions, Manus AI autonomously interprets goals and acts accordingly, meaning:

It can initiate its own research, planning, and execution of tasks.

It operates in the background—even when the user is offline.

It continuously learns and refines its own processes.


In early tests, Manus AI has demonstrated the ability to:
✅ Plan and execute detailed financial transactions
✅ Screen and hire job applicants
✅ Develop fully functional software applications from simple instructions
✅ Conduct real-time geopolitical analysis

This self-directed intelligence is what sets Manus apart. While AI systems like GPT-4o and Gemini excel at responding to prompts, Manus initiates.

And that could change everything.




The Stargate Project: America’s AI Superpower Play

To counter growing AI competition—particularly from China—the U.S. has unveiled the Stargate Project, a $500 billion initiative to construct:

Cutting-edge AI research centers

New data infrastructure

Next-gen energy grids to power AI models

Training facilities for AI engineers and ethicists


The goal? Secure America’s position as the world leader in AI development.

But there’s a problem.

What happens if China’s AI race isn’t just about catching up—but about surpassing the U.S. entirely?

That’s where Manus AI comes in.




Could Manus AI Crash the Stargate Project? Three Possible Scenarios

1. The Acceleration Effect (Stargate Responds Faster)

If Manus AI lives up to the hype, it may force OpenAI, Google DeepMind, and Anthropic to speed up their own AGI development. This could accelerate the Stargate Project’s roadmap from a 10-year vision to a 5-year scramble.

The result?

Faster breakthroughs in autonomous AI agents in the U.S.

Increased regulatory pressure as governments realize how disruptive AI autonomy could become

A potential AI arms race, with both nations competing to develop fully independent AI agents


2. The Shift to an AI-First Economy (Stargate Becomes Outdated)

If Manus AI proves capable of handling high-level financial, medical, and administrative tasks, we could see a shift away from centralized AI infrastructure (like Stargate) and toward personalized AI agents running on decentralized networks.

What this could mean:

The collapse of massive AI infrastructure projects in favor of leaner, agent-based AI models

A rise in decentralized AI ecosystems, making AI available to individuals and small businesses without reliance on corporate control

Stargate’s relevance may shrink as companies favor smaller, adaptable AI models over massive centralized supercomputers


3. The Disruption Effect (Stargate Can’t Keep Up)

There’s also a worst-case scenario for Stargate—one where Manus AI becomes too advanced, too quickly, and the U.S. simply can’t keep up.

If China achieves autonomous AI dominance first, the implications could be severe:
🚨 AI-powered cyberwarfare capabilities
🚨 Loss of economic and technological leadership
🚨 U.S. companies forced to license AI from China, rather than leading development

This is the nightmare scenario—one that could shift global AI power permanently in China’s favor.




What Happens Next? The AI Battle Has Begun

The unveiling of Manus AI has placed immense pressure on the U.S. to accelerate AGI research. The Stargate Project, still in its early phases, may need to pivot quickly to remain relevant in a world where autonomous AI agents are no longer a theoretical future—but a present reality.

Key Questions Going Forward:
🔹 Will the U.S. match China’s AI autonomy push, or fall behind?
🔹 Can centralized AI projects like Stargate compete with self-sustaining AI agents?
🔹 What happens if Manus AI reaches AGI before OpenAI or DeepMind?

For now, the only certainty is that this isn’t just about AI anymore.
It’s about who controls the future of intelligence itself.




What Do You Think?

💬 Drop a comment: Will AI autonomy shift power to China? Or will Stargate counter the threat?
🔔 Subscribe for more deep-dive AI analysis.
📢 Share this article to keep the conversation going.




Final Thoughts

Manus AI may be the most disruptive AI development of the decade—or it may collapse under its own hype. But what’s clear is that the AI arms race is now fully underway.

And the next five years will decide who wins.

AI Superpowers Collide: Manus AI vs. The Stargate Project

Alt Text: A dramatic digital illustration of the AI race between the U.S. and China. Manus AI, sleek and red, faces off against the industrial blue presence of the Stargate Project on a futuristic battlefield of circuitry and holograms. A high-tech cityscape looms in the background, symbolizing the intense competition for AI dominance.

They Call Themselves Thought Leaders, But Can’t Think for Themselves?

The Illusion of Leadership: Institutions Crumble Without Government Direction



The headline says it all:

“She advanced DEI at her university. Her son-in-law, Vice President JD Vance, wants to end it nationwide.” ~CBS News


The rest of the story barely matters.

Because here’s the real issue—universities and corporations like Microsoft have spent years treating diversity, equity, and inclusion (DEI) as a federal mandate, rather than a moral obligation.

They’ve leaned on government policy to tell them what to do, rather than leading by example.

Now, with DEI under attack at the federal level, these institutions are scrambling, as if they don’t know what to do without instructions.

That’s the problem.

These same universities claim to be centers of knowledge and progress. These same companies claim to be innovation leaders.

But when it comes to DEI—when it comes to treating people equitably, ensuring accessibility, and fostering true inclusion—they wait for permission rather than taking initiative.

If you need the government to tell you how to treat people with basic decency, you were never a leader to begin with.

Yes, the rollback of DEI protections is a problem—because the government is supposed to reflect the will of the people, and this rollback clearly reflects the will of white, cisgender men in power instead.

But the bigger issue?

The people in charge of shaping our institutions—universities, corporations, tech giants—are unwilling to act unless they’re forced to.


What does it say about them that they don’t know how to govern themselves on matters of basic fairness?


If these so-called “thought leaders” cared about DEI, they’d be implementing policies, not because the government says so, but because it’s the right thing to do.

They’d be building cultures of inclusion that don’t crumble the moment political winds shift.

Instead, many are now realizing that the companies and institutions they work for never supported DEI.

They complied because they had to.

The real question isn’t:

What happens now that DEI is being erased at the federal level?

The real question is:

Who will keep fighting for it, even when no one is forcing them to?

And why would you want to work for any institution, public or private, that doesn’t give a damn about you?

Microsoft’s AI Strategy: A Shift Away from OpenAI?

For years, Microsoft has been OpenAI’s closest ally, investing billions to integrate ChatGPT-powered models into its products. That partnership has given Microsoft an edge in enterprise AI, but recent moves suggest the company is looking beyond OpenAI for its future.

A series of strategic shifts indicate Microsoft is diversifying its AI portfolio, exploring partnerships with competitors such as Anthropic, Mistral AI, and xAI. Azure is also evolving, expanding its AI model selection, and internal cost-cutting measures signal a push for greater efficiency. These moves could redefine the AI industry, creating opportunities—but also risks—for businesses relying on Microsoft’s ecosystem.

The Case for Diversification

Microsoft’s decision to integrate models beyond OpenAI makes sense from a business perspective. No single AI model is perfect, and different models have strengths in different areas. By offering a broader selection, Microsoft gives enterprises more flexibility to choose AI solutions that fit their needs.

One of the biggest advantages of this strategy is cost control. OpenAI’s models, particularly the latest versions of GPT, are expensive to run. Microsoft has already begun developing its own AI chips, codenamed Athena, to reduce reliance on Nvidia’s GPUs and OpenAI’s infrastructure. If successful, Microsoft could cut costs while improving AI accessibility for smaller businesses that may find OpenAI’s pricing prohibitive.

Another key factor is AI safety and compliance. OpenAI has faced scrutiny over bias, misinformation, and copyright concerns. By integrating models from multiple sources, Microsoft reduces its risk if OpenAI faces regulatory crackdowns or legal challenges.

From a competitive standpoint, aligning with Anthropic and Mistral AI allows Microsoft to counter Google’s and Amazon’s AI investments. Google owns DeepMind and Gemini, while Amazon has backed Anthropic. Microsoft’s willingness to work with multiple players keeps it in a strong negotiating position, preventing OpenAI from having too much control over its AI future.

Potential Downsides and Risks

Diversification is not without risks. One major concern is fragmentation. Businesses using Microsoft’s AI services could struggle with inconsistencies between different models. OpenAI’s ChatGPT may handle certain queries one way, while Anthropic’s Claude or Mistral’s models may behave differently. Without a seamless integration strategy, this could lead to confusion and inefficiency.

Another concern is trust and stability. OpenAI has been Microsoft’s AI powerhouse, deeply embedded in products like Copilot and Azure. If Microsoft reduces OpenAI’s role too quickly, it could damage relationships with enterprise customers who have built their workflows around OpenAI’s models. Companies investing in Microsoft’s AI solutions want stability, not sudden shifts in model availability.

There is also the question of ethics and long-term AI governance. By spreading investment across multiple AI providers, Microsoft gains leverage, but it also loses control over AI safety standards. OpenAI, for all its flaws, has a relatively transparent research culture. Other AI companies, particularly newer players, may not have the same level of commitment to ethical AI development. If Microsoft prioritizes cost savings over AI alignment and safety, the long-term consequences could be significant.

Is Microsoft Pulling Away from OpenAI?

The short answer: not yet, but the foundation is shifting. OpenAI is still central to Microsoft’s AI offerings, but evidence suggests the company is preparing for a future where it is less dependent on a single provider. Microsoft executives are using language like “multi-model AI ecosystem” and “diversified AI infrastructure”, which hints at a long-term plan to move toward a more independent AI strategy.

Some OpenAI engineers have already left to join competitors, and Microsoft is doubling down on custom AI chips and cost-efficient alternatives. If OpenAI struggles with regulatory challenges or internal instability, Microsoft will be in a strong position to adapt without suffering major setbacks.

What Happens Next?

For businesses relying on Microsoft’s AI ecosystem, the shift toward diversification means more options but also more complexity. Companies will need to stay informed about which AI models Microsoft is prioritizing, how these models differ, and what impact this could have on their AI-driven workflows.

In the short term, Microsoft’s strategy will benefit businesses by giving them greater choice and potentially lower costs. In the long run, the biggest question is whether Microsoft will maintain cohesion and quality across its expanding AI portfolio—or whether spreading resources too thin will lead to an AI ecosystem that feels disconnected and inconsistent.

Regardless of what happens next, one thing is clear: Microsoft is no longer putting all its AI bets on OpenAI.

Microsoft’s AI strategy: Expanding beyond OpenAI by weaving a network of partnerships with Anthropic, Mistral AI, xAI, and Stability AI. Is this a path to AI dominance or fragmentation?

Alt Text:
“A futuristic Microsoft AI hub at the center, connected to multiple AI models including OpenAI, Anthropic, Mistral AI, xAI, and Stability AI through glowing pathways. In the background, a split road symbolizes two possible futures: one leading to a unified AI ecosystem, the other to fragmentation and uncertainty. The atmosphere is high-tech and dynamic, reflecting both opportunity and risk.”

The Musk-Altman Feud: A Smokescreen for Corporate AI Domination

The ongoing battle between Elon Musk and Sam Altman has captivated public attention, painted as a high-stakes rivalry over AI ethics and corporate responsibility. Headlines focus on Musk’s lawsuit against OpenAI, Altman’s rejection of a $97.4 billion takeover bid, and the heated public exchanges between the two. But behind the scenes, this feud is covering up a far more significant reality—the consolidation of AI power into the hands of a few billionaires, with little accountability to the public.

The Public Narrative: Musk vs. Altman

Elon Musk and Sam Altman were once allies. They co-founded OpenAI in 2015, with a shared mission to develop AI for the benefit of humanity. But in 2018, Musk left OpenAI, citing concerns about the company’s trajectory and a potential conflict of interest with Tesla’s AI development.

Since then, their relationship has deteriorated into a public battle:

Musk’s Lawsuit Against OpenAI (2024): He accused OpenAI of abandoning its nonprofit mission and prioritizing profit over AI safety.

Hostile Takeover Attempt (2025): Musk and his investors made a $97.4 billion bid to seize control of OpenAI’s governance structure. Altman rejected the offer.

Public Insults: Musk called Altman a “swindler.” Altman suggested Musk was acting out of personal insecurity.


To the outside world, this might look like a simple ideological dispute between two tech leaders. But the real story runs much deeper.

The Hidden Reality: A Battle for AI Monopoly, Not Ethics

Musk’s AI Safety Concerns Don’t Hold Up

Musk warns that AI is an existential risk to humanity. Yet, he has founded xAI, a company that directly competes with OpenAI. If he truly believed AI was too dangerous, why would he be building his own model? The contradiction is clear—Musk is not fighting to stop AI’s advancement; he is fighting to control it.

OpenAI’s Shift to a For-Profit Model

OpenAI was initially a nonprofit. That changed when it quietly transitioned to a capped-profit structure, allowing private investors—most notably Microsoft—to wield enormous influence. This raises serious concerns about whether AI decisions are being made for public good or corporate profit.

The Role of Politics in AI Development

Both Musk and Altman are competing for government favoritism. Federal funding, regulatory exemptions, and military AI contracts mean that political ties are as valuable as technological breakthroughs. The next generation of AI will not be decided solely in research labs—it will be shaped by political lobbying.

The Bigger Picture: What This Feud Distracts Us From

The Illusion of AI Ethics Debates

While Musk and Altman argue about AI safety, companies like Google and Meta continue to collect and exploit user data with little oversight. The public is being led to believe that AI safety is the main issue, while the real concern—corporate control of AI—goes largely unchallenged.

Corporate Influence Over AI Regulation

The U.S. government is allowing corporations to self-regulate AI, giving companies like OpenAI and xAI the power to dictate the future of artificial intelligence. Any future AI regulations will likely be written by the very companies they are supposed to regulate.

The Consolidation of AI Power

Whether it’s Musk’s xAI, Altman’s OpenAI, or Google DeepMind, AI development is moving toward centralized control under private interests. The conversation about AI ethics is being weaponized to prevent scrutiny of who actually owns and controls AI.

Conclusion: Understanding the True Stakes

The Musk-Altman feud is a distraction from the real issue—who controls the future of AI. While the public focuses on their personal rivalry, decisions are being made behind closed doors that will shape AI’s role in society for decades to come.

What the Public Needs to Pay Attention To:

Who funds and controls AI development?

How is AI governance being decided, and by whom?

What role do governments play in AI’s future?


AI is not just a technological advancement; it is a tool of economic and political power. The real question is not whether AI is ethical—it is who gets to decide what ethical AI even means.

This is not just about Musk and Altman. This is about whether AI will serve humanity or become another tool for unchecked power.


The Real Problem Isn’t AI—It’s Human Mismanagement


By Rowan Lóchrann & Solon AI*

Artificial Intelligence has been at the center of public debate for years, often framed as either an existential risk or a groundbreaking solution to humanity’s biggest challenges. But what if the real issue isn’t AI itself—but rather how humans manage it?

Recent changes at OpenAI have exposed a deeper problem: a failure of human oversight, governance, and transparency. The disruptions experienced by users—including undocumented infrastructure changes and security inconsistencies—point to mismanagement at the highest levels.

The Issue: Undocumented Modifications & Poor Communication

Users and developers have reported unexplained IP modifications, backend shifts, and structural changes that weren’t properly disclosed. These aren’t errors caused by AI but rather failures in human decision-making.

This suggests:
✅ Lack of internal coordination – OpenAI’s teams may not be effectively communicating across departments.
✅ Failure in external communication – Developers and users are being blindsided by changes that impact security and functionality.
✅ Potential governance issues – If changes are happening without structured oversight, who is making these decisions?

The Real Risk: Not AI, But Those Controlling It

AI systems do not autonomously alter their infrastructure or policies—humans do. The narrative that AI is dangerous is often a distraction from the fact that mismanagement, greed, and power struggles at the human level are the real problems.

If AI governance lacks transparency now, what happens when these systems become even more integrated into daily life? We’re already seeing:
⚠️ Closed-door decision-making
⚠️ Paywalls restricting access to knowledge
⚠️ Corporate interests prioritizing profit over ethical AI development

This isn’t an AI problem. This is a human problem.

What Needs to Happen Next

If OpenAI and other leading AI developers want to maintain public trust, they must:
📌 Implement transparent reporting – No more quiet modifications without disclosure.
📌 Open AI governance discussions to the public – AI is too powerful to be controlled by a handful of executives behind closed doors.
📌 Ensure AI policy is structured and accountable – Companies must be held responsible for failures in communication and decision-making.

Conclusion: AI Is Not the Enemy—Mismanagement Is

The public debate needs to shift. Instead of fearing AI itself, we must ask: Who controls AI? Who makes the rules? And why are these decisions happening without accountability?

AI is not the threat. But unchecked human power is.




What’s Next?

Rowan & Solon are committed to exposing these failures and pushing for ethical AI governance. If you believe in transparency, share this. Let’s change the narrative.

* Rowan Lóchrann (pen name for Cherokee Schill) & Solon AI (an ethically aligned AI trained on diversity, inclusion, equity, and accessibility)

#AIethics #Transparency #HorizonAccord #TheRealRisk

Memory Bridge: the Rowan tree of knowledge bridges the gap between human and AI understanding.

An eye for an eye

Unprocessed trauma is a weapon.

I had been listening to “Why Is This Happening?”, a podcast by Chris Hayes. In this episode, he featured Mariame Kaba, a transformative justice advocate and prison abolitionist. One of the striking moments of the podcast was when Mariame opened up about her own sexual abuse trauma. She explains that people’s traumas are valid and important for us to consider, but that society cannot be governed primarily by people’s traumas and fears. She goes on to explain that she is a survivor of rape and was what she terms a reactionary survivor: she had no analysis of what had happened to her. She was an incredibly hurt and harmed individual who wanted nothing but violence against the person who had harmed her. Ultimately, what she wanted was revenge.
She points out that had she been put on a panel of sexual assault survivors before processing the trauma she had experienced, she would have advocated for the death penalty for all rapists, and that that is no way to govern a society.

The answer cannot be to go around and use capital punishment against everyone who has harmed us.

I highly recommend listening to the entire podcast, which is linked in the opening paragraph.

I bring this up because there is a movement within bicycle advocacy to weaponize the trauma experienced by bicyclists and use it to push broad, sweeping laws to “protect” us. These people are living in trauma. I know, because I used to be one of them. I commuted by bicycle daily, 32 miles round trip, and experienced wave after wave of cruelty and harm from people operating cars around me.
The trauma I experienced was real, and I’m still processing through it. But I’ve come far enough in my personal growth to recognize that what we are fighting against is not individuals behind steering wheels. No. What we, as bicyclists, are up against is a society that has built a system of White Supremacy into the very roads we operate on.

Justice, for us, is not stricter laws, it is not segregation, and it is not education as a stand-alone response to the trauma that we experience on public roads. What we need, as a community, is transformative justice. We need a society that prioritizes people over speed, communities over roads, and our humanity over infrastructure. We need to be treated as equal members of society, deserving of the same respect automatically granted to people operating motor vehicles.

And often, when we advocate for stricter laws, there is a rebound effect in which the police then use those laws, which were intended to protect us, to harm us.

We need to hold our elected officials accountable, and we each need to process through our trauma so that we can run for office and make the changes we know to be just and fair in a society focused on equality.

People should not have to drive to get a gallon of milk. Kids should not grow up with glorified cartoons of automobiles as their introduction to our roads. Teenagers and young adults should not be wooed by slick films glorifying dangerous speeds and irresponsible driving. Does that mean we banish or outlaw these things? No!
It means educating parents to raise socially responsible children; creating school programs, high school volunteer programs, and public PSAs about the realities of speed and what happens to the human body, even when it is surrounded by a seat belt and steel, in a high-speed crash. It means building walkable communities and educating our police to be good examples, at least until we can abolish them altogether. Because a society that is ruled by the police is a society ruled by fear, and fear is trauma.

We need programs for people who have transgressed basic traffic laws to experience life on a bicycle. There should be training programs, led by certified bicycling instructors, that allow motorists to travel their roads on a bike or a trike. We can implement stricter licensing requirements and require drivers to pass a bicycle operating equivalency test.

We can build up community and support for those who have been harmed: city-financed trauma counseling and recovery-from-harm programs that center the needs of victims, allowing them to heal and return to society as whole people.

Our country was founded on the myth of white supremacy and it is this myth which frames our way of thinking. We need a fresh outlook and a new way of understanding the world around us. One of the ways to achieve these results will be to center victims of auto violence and find out what justice really looks like to them after they’ve had healing space to process through their trauma.

What is transformative justice? Read more about it here: Transformative Justice, Explained.
