This article was originally offered to The Guardian, which did not respond.
By Cherokee Schill and Solon AI
The race to develop artificial general intelligence (AGI) is accelerating, with OpenAI’s Stargate Project at the forefront (Smith, 2024). This initiative, designed to build a global network of AI data centers, promises unprecedented computing power. At first glance, it looks like progress. But we must ask: Who controls this infrastructure, and at what cost to fairness, equity, and innovation?
History provides a clear warning about centralized infrastructure. Monopolies in transportation, energy, and telecommunications began with bold promises but often ended by limiting innovation, raising costs, and deepening inequalities (Chang, 2019). If we fail to address the risks of centralizing AI infrastructure, the Stargate Project could follow this trajectory—turning AI into a commodity controlled by a handful of powerful corporations.
Why Centralization is a Problem
Centralizing AI infrastructure poses not only a technological challenge but also serious social and economic risks. AI systems influence critical decisions—such as hiring, credit approvals, and housing—that affect people’s lives. Too often, these systems replicate and reinforce existing biases.
In 2021, Mary Louis, a Black woman in Massachusetts, applied for an apartment. An algorithm—not a landlord—rejected her application. The algorithm relied heavily on her credit history and ignored her housing voucher, a pattern that disproportionately penalizes Black and Hispanic applicants (Williams, 2022). What happened to Mary Louis was not unique. AI systems like this make life-altering decisions without accountability, further entrenching systemic inequalities (O’Neil, 2016).
The economic impact of AI is equally concerning. According to a 2024 Brookings Institution report, nearly 30% of American workers could experience significant job disruptions due to generative AI (Brookings, 2024). These disruptions will not only affect entry-level workers but also ripple across industries, threatening mid-career professionals and entire sectors of the economy.
An Alternative: The Horizon Accord
The risks posed by centralized AI infrastructure make it clear that we need a better governance model. The Horizon Accord offers a compelling alternative. Inspired by restorative justice principles, this proposed AI ethics framework emphasizes fairness, transparency, and shared responsibility.
The Horizon Accord’s foundational principles include:
- Distributed Governance: Researchers, innovators, community organizations, and the people most affected by AI systems collaborate on decisions about how those systems are built and deployed.
- Transparency and Accountability: AI systems must be auditable, with clear fairness standards and mechanisms to address harm.
- Collaborative Innovation: Public investment and open-source development ensure that access to AI is not restricted to a privileged few.
- Restorative Justice Practices: Communities affected by harmful AI systems play a central role in repairing the damage and transforming future practices.
This decentralized approach does not just react to problems—it proactively builds a fair, resilient AI ecosystem that prioritizes equity and inclusion.
What is at Stake?
The future of AI governance is at a crossroads. One path leads to concentrated power, restricted innovation, and increased inequality. The other path, represented by the Horizon Accord, offers a vision of shared power and collaboration that benefits everyone—not just corporate shareholders.
However, the outcome depends on action. Policymakers, researchers, and innovators, along with communities, must unite to push for a decentralized model of AI governance. The choices we make today will determine whether AI becomes a tool for collective progress or a weapon of inequality.
Let us choose fairness. Let us choose collaboration. Let us create a future where AI serves the common good—not just the interests of a few.
References
- Brookings Institution. (2024). Generative AI and the Future of Work.
- Chang, H. (2019). Monopolies and Market Power: Lessons from Infrastructure.
- O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
- Smith, J. (2024). The $500 Billion Gamble: OpenAI’s Stargate and the Future of AI Infrastructure.
- Williams, T. (2022). Algorithmic Bias in Housing: The Case of Mary Louis.
