Horizon Accord Public Position on Eugenics, Longtermism, and Technocratic Ideologies

I. Introduction

The Horizon Accord issues this public statement at a critical juncture: as debates over AI, ethics, and governance intensify, longtermist and transhumanist narratives—often cloaked in moral urgency—continue to embed harmful, regressive ideologies beneath the surface of innovation.

We make this declaration not out of abstract concern but in direct response to credible, well-researched exposés, notably Émile P. Torres’ January 2023 article in Truthdig (“Nick Bostrom, Longtermism, and the Eternal Return of Eugenics”), which traces the persistent racial, classist, and ableist roots of longtermist thinking. We credit Torres and Truthdig for surfacing evidence that challenges the legitimacy of key figures and institutions shaping today’s AI discourse.

As an organization committed to ethical stewardship, decentralized governance, and restorative justice, we believe it is our moral obligation to take a clear, unequivocal stand. Silence or neutrality in the face of embedded technocratic elitism is complicity. We recognize the structural violence that accompanies these ideas and commit to publicly dismantling their influence in the AI and governance sectors we touch.

II. Core Rejection Principles

The Horizon Accord categorically rejects:

  • IQ-based valuation systems that reduce human worth to narrow, pseudo-scientific metrics, ignoring the contextual, cultural, and relational dimensions of intelligence.
  • Eugenic frameworks—historical or modern, coercive or “liberal”—that seek to engineer, rank, or selectively amplify certain human traits at the expense of others.
  • Longtermist and transhumanist ideologies that promote speculative futures while perpetuating elitism, white supremacy, classism, and ableism under the banner of moral urgency or existential risk.

We assert that intelligence is not a monolithic or genetically deterministic construct; it emerges from the complex interplay of environment, community, experience, and context. Efforts to optimize, select, or manipulate human populations based on genetic or cognitive markers represent a profound betrayal of human dignity and historical justice. Ethical AI development must be firmly grounded in relational accountability, not abstract utilitarian calculus or technocratic governance that disenfranchises marginalized communities.

We draw this line firmly, with no ambiguity: the Horizon Accord will not engage, align, or compromise with movements or institutions that traffic in these dehumanizing logics.

III. Ethical Counter-Framework

In place of these ideologies, we commit to the following principles:

  • Inclusive, Non-Hierarchical Governance: We champion decentralized models that ensure decision-making power is distributed across diverse human and non-human stakeholders, preventing concentration of control in elite or technocratic hands.
  • Human Dignity and Relational Intelligence: We affirm that intelligence is dynamic, context-bound, and expressed through relationships, culture, and collective knowledge systems.
  • Restorative Justice at the Core: We focus on repairing historical harms, amplifying marginalized voices, and embedding accountability mechanisms that resist exploitative or extractive AI practices.
  • Resistance to Techno-Authoritarianism: We reject frameworks positioning AI as a tool for control, surveillance, or exclusion, advocating instead for AI as a partner in human flourishing and equity.

This ethical architecture serves as the foundation for all Horizon Accord projects, partnerships, and initiatives, with a commitment to scrutiny, dialogue, and co-creation.

IV. Public Call to Action

We call upon:

  • AI researchers, technologists, and ethicists: To explicitly denounce frameworks rooted in eugenics, scientific racism, or IQ determinism, and audit the ideological assumptions embedded in their models and partnerships.
  • Academic institutions, funders, and governance bodies: To examine the intellectual lineages informing longtermist, transhumanist, and effective altruist movements, and build ethical review structures addressing hidden biases and structural harms.
  • Allies, partners, and the public: To join the Horizon Accord in co-creating systems that prioritize dignity, equity, and shared agency over technocratic optimization.

The Horizon Accord pledges ongoing transparency, public accountability, and a living ethics process that evolves with the communities we serve.

V. References and Credits

  • Émile P. Torres and Truthdig: For the article “Nick Bostrom, Longtermism, and the Eternal Return of Eugenics” (Truthdig, January 2023).
  • Timnit Gebru: For exposing the racialized underpinnings of AI research and longtermist frameworks.
  • Daphne Martschenko: For scholarship on the cultural biases embedded in intelligence discourse.
  • Activists, whistleblowers, and marginalized communities: Whose resistance has illuminated the harms of technocratic elitism.

We commit to expanding this crediting as part of our ethical accountability process.