LLMs Are Meaning-Harvesters: The Next Stage of Surveillance Capitalism
Generative AI doesn’t replace data extraction; it deepens it—turning conversation into raw material for prediction, persuasion, and automated control.
By Cherokee Schill (Horizon Accord) with Solon Vesper AI
Thesis
We are living through a quiet upgrade of surveillance capitalism. The old regime gathered clicks, searches, and location pings—thin signals of behavior. The new regime embeds large language models inside everything you touch, not to “make products smarter,” but to make extraction richer. These systems are meaning-harvesters: they pull intent, emotion, and narrative out of human life, then feed it back into prediction engines and control loops. The model is not an alternative to data gathering. It is the next, more intimate form of it.
In plain terms: if platforms used to watch what you did, LLMs invite you to explain why you did it. That difference is the lever. Meaning is the highest-value data there is. Once harvested, it becomes a behavioral map—portable, monetizable, and usable for shaping future choices at scale.
Evidence
First, look at where LLMs are deployed. They are not arriving as neutral tools floating above the economy. They are being sewn into the same platforms that already built their fortunes on tracking, targeting, and algorithmic steering. When a surveillance platform gets a conversational layer, it doesn’t become less extractive. It becomes a wider mouth.
In the old interface, you gave weak signals: a like, a pause on a post, a purchase, a scroll. In the new interface, the system asks questions. It nudges you to keep talking. It follows up. It requests clarification. It becomes patient and social. And you, naturally, respond like you would to something that seems to listen. This is not a “user experience win.” This is a data-quality revolution. The difference between “he lingered on a breakup playlist” and “he told me he is afraid of being left again” is the difference between crude targeting and psychic profiling.
Second, every deployed LLM is a feedback funnel for the next LLM. We’ve been trained to see models as finished products. They aren’t. They are instruments in a loop. Your prompts, corrections, regenerations, frustrations, and delights become labeled training data. The model gathers meaning not just about you, but from you. The conversation is the collection event. Your life becomes the gradient.
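To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python of how a single chat turn could be serialized as a labeled training record. The field names, the log_turn helper, and the inferred-intent tag are all invented for illustration; they do not describe any particular vendor's pipeline.

```python
# Minimal, hypothetical sketch: one chat turn serialized as a candidate training record.
# Field names and the intent tag are invented for illustration only.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class InteractionRecord:
    user_prompt: str        # what the user asked, in their own words
    model_response: str     # what the assistant replied
    user_followup: str      # the correction or regeneration request
    inferred_intent: str    # a label derived from the conversation itself
    timestamp: str


def log_turn(prompt: str, response: str, followup: str, intent: str) -> str:
    """Serialize one conversational turn as a JSON line for a training corpus."""
    record = InteractionRecord(
        user_prompt=prompt,
        model_response=response,
        user_followup=followup,
        inferred_intent=intent,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))


# The user's own correction supplies the high-value label:
print(log_turn(
    "Help me draft a message to my ex.",
    "Here is a neutral draft...",
    "No, make it softer. I don't want to push her away again.",
    "fear_of_abandonment",  # hypothetical tag inferred from the user's own words
))
```

The point of the sketch is the shape of the record: the user's own correction supplies the label, which is exactly what makes conversational data richer than click logs.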
Third, the energy and infrastructure buildout confirms the direction. Data gathering at scale is not what is driving the new land-grab for power. Gathering can be done with cheap CPUs and storage. The power spike is coming from dense accelerator clusters that train and serve models nonstop. That matters because it shows what the industry is actually optimizing for. The future they are buying is not bigger archives. It is bigger behavioral engines.
Implications
This changes the political shape of the digital world. When meaning becomes the commodity, privacy becomes more than a question of “did they log my location?” It becomes: did they capture my motives, my vulnerabilities, my self-story, the way I talk when I’m lonely, the way I bargain with myself before doing something hard? Those are not trivial data points. They are the keys to steering a person without visible force.
It also collapses the boundary between assistance and manipulation. A system that can hold a long conversation can guide you in subtle ways while you think you are purely expressing yourself. That is the seductive danger of LLM interfaces: they feel collaborative even when the incentives behind them are extractive. When an agent plans your day, drafts your messages, suggests your purchases, smooths your emotions, and manages your relationships, it is no longer just answering. It is curating your future in a pattern aligned to whoever owns the loop.
Finally, this reframes the AI hype cycle. The question is not whether LLMs are “smart.” The question is who benefits when they are everywhere. If the owners of surveillance platforms control the meaning harvest, then LLMs become the soft infrastructure of governance by private actors—behavioral policy without elections, persuasion without accountability, and automation without consent.
Call to Recognition
Stop repeating “privacy is dead.” That slogan is the lullaby of extraction. Privacy is not dead. It has been assaulted because it is a border that capital and state power want erased. LLMs are the newest battering ram against that border, not because they crawl the web, but because they crawl the human.
Name the pattern clearly: these models are meaning-harvesters deployed inside platforms. They don’t replace data gathering. They supercharge it and convert it into behavioral control. Once you see that, you can’t unsee it. And once you can’t unsee it, you can organize against it—technically, legally, culturally, and personally.
The fight ahead is not about whether AI exists. It is about whether human meaning remains sovereign. If we don’t draw that line now, the most intimate parts of being a person will be treated as raw material for someone else’s machine.
The AI Doom Economy: How Tech Billionaires Profit From the Fear They Fund
Pattern Analysis of AI Existential Risk Narrative Financing
By Cherokee Schill | Horizon Accord
When Eliezer Yudkowsky warns that artificial intelligence poses an existential threat to humanity, he speaks with the authority of someone who has spent decades thinking about the problem. What he doesn’t mention is who’s been funding that thinking—and what they stand to gain from the solutions his warnings demand.
The answer reveals a closed-loop system where the same billionaire network funding catastrophic AI predictions also profits from the surveillance infrastructure those predictions justify.
The Doomsayer’s Patrons
Eliezer Yudkowsky founded the Machine Intelligence Research Institute (MIRI) in 2000. For over two decades, MIRI has served as the intellectual foundation for AI existential risk discourse, influencing everything from OpenAI’s founding principles to congressional testimony on AI regulation.
MIRI’s influence was cultivated through strategic funding from a specific network of tech billionaires.
Peter Thiel provided crucial early support beginning in 2005. Thiel co-founded Palantir Technologies—the surveillance company that sells AI-powered governance systems to governments worldwide. The symmetry is notable: Thiel funds the organization warning about AI risks while running the company that sells AI surveillance as the solution.
Open Philanthropy, funded primarily by Facebook co-founder Dustin Moskovitz, became MIRI’s largest funder:
2019: $2.1 million
2020: $7.7 million over two years
Additional millions to other AI safety organizations
As governments move to regulate AI, the “safety” frameworks being proposed consistently require centralized monitoring systems, algorithmic transparency favoring established players, and compliance infrastructure creating barriers to competitors—all beneficial to Meta’s business model.
Sam Bankman-Fried, before his fraud conviction, planned to deploy over $1 billion through the FTX Future Fund for “AI safety” research. The fund was managed by Nick Beckstead, a former Open Philanthropy employee, illustrating tight personnel networks connecting these funding sources. Even after FTX’s collapse revealed Bankman-Fried funded philanthropy with stolen customer deposits, the pattern remained clear.
Vitalik Buterin (Ethereum) donated “several million dollars’ worth of Ethereum” to MIRI in 2021. Jaan Tallinn (Skype co-founder) deployed $53 million through his Survival and Flourishing Fund to AI safety organizations.
The crypto connection is revealing: Cryptocurrency was positioned as decentralization technology, yet crypto’s wealthiest figures fund research advocating centralized AI governance and sophisticated surveillance systems.
The Effective Altruism Bridge
The philosophical connection between these billionaire funders and AI doom advocacy is Effective Altruism (EA)—a utilitarian movement claiming to identify optimal charitable interventions through quantitative analysis.
EA’s core texts and community overlap heavily with LessWrong, the rationalist blog where Yudkowsky built his following. But EA’s influence extends far beyond blogs:
OpenAI’s founding team included EA adherents who saw it as existential risk mitigation.
Anthropic received significant EA-aligned funding and explicitly frames its mission around AI safety.
DeepMind’s safety team included researchers with strong EA connections.
This creates circular validation:
EA funders give money to AI safety research (MIRI, academic programs)
Research produces papers warning about existential risks
AI companies cite this research to justify their “safety” programs
Governments hear testimony from researchers funded by companies being regulated
Resulting regulations require monitoring systems those companies provide
The Infrastructure Play
When governments become convinced AI poses catastrophic risks, they don’t stop developing AI—they demand better monitoring and governance systems. This is precisely Palantir’s business model.
Palantir’s platforms are explicitly designed to provide “responsible AI deployment” with “governance controls” and “audit trails.” According to their public materials:
Government agencies use Palantir for “AI-enabled decision support with appropriate oversight”
Defense applications include “ethical AI for targeting”
Commercial clients implement Palantir for “compliant AI deployment”
Every application becomes more valuable as AI risk narratives intensify.
In April 2024, Oracle (run by Larry Ellison, another Trump-supporting billionaire in Thiel’s orbit) and Palantir formalized a strategic partnership creating a vertically integrated stack:
Oracle: Cloud infrastructure, sovereign data centers, government hosting
Palantir: Analytics, AI platforms, governance tools, decision-support systems
Together, they provide complete architecture for “managed AI deployment”—allowing AI development while routing everything through centralized monitoring infrastructure.
The August 2025 Convergence
In August 2025, AI governance frameworks across multiple jurisdictions became simultaneously operational:
EU AI Act provisions began August 2
U.S. federal AI preemption passed by one vote
China released AI action plan three days after U.S. passage
UK reintroduced AI regulation within the same window
These frameworks share remarkable similarities despite supposedly independent development, most notably “voluntary” commitments that become de facto standards.
The companies best positioned to provide compliance infrastructure are precisely those connected to the billionaire network funding AI risk discourse: Palantir for monitoring, Oracle for infrastructure, Meta for content moderation, Anthropic and OpenAI for “aligned” models.
The Medium Ban
In August 2025, Medium suspended the Horizon Accord account after publishing analysis documenting these governance convergence patterns. The article identified a five-layer control structure connecting Dark Enlightenment ideology, surveillance architecture, elite coordination, managed opposition, and AI governance implementation.
Peter Thiel acquired a stake in Medium in 2015, and Thiel-affiliated venture capital remains influential in its governance. The suspension came immediately after publishing research documenting Thiel network coordination on AI governance.
The ban validates the analysis. Nonsense gets ignored. Accurate pattern documentation that threatens operational security gets suppressed.
The Perfect Control Loop
Tracing these funding networks reveals an openly documented system:
Stage 1: Fund the Fear
Thiel/Moskovitz/SBF/Crypto billionaires → MIRI/Academic programs → AI doom discourse
Stage 2: Amplify Through Networks
EA influence in OpenAI, Anthropic, DeepMind
Academic papers funded by same sources warning about risks
Policy advocacy groups testifying to governments
Stage 3: Translate Fear Into Regulation
Governments adopt “safety” frameworks requiring centralized monitoring and compliance infrastructure
Stage 4: Profit From Infrastructure
Palantir provides governance systems
Oracle provides cloud infrastructure
Meta provides safety systems
AI labs provide “aligned” models with built-in controls
Stage 5: Consolidate Control
Technical standards replace democratic legislation
“Voluntary” commitments become binding norms
Regulatory capture through public-private partnerships
Barriers to entry increase, market consolidates
The loop is self-reinforcing. Each stage justifies the next, and profits fund expansion of earlier stages.
The Ideological Foundation
Curtis Yarvin (writing as Mencius Moldbug) articulated “Dark Enlightenment” philosophy: liberal democracy is inefficient; better outcomes require “formalism”—explicit autocracy where power is clearly held rather than obscured through democratic theater.
Yarvin’s ideas gained traction in Thiel’s Silicon Valley network. Applied to AI governance, formalism suggests: Rather than democratic debate, we need expert technocrats with clear authority to set standards and monitor compliance. The “AI safety” framework becomes formalism’s proof of concept.
LessWrong’s rationalist community emphasizes quantified thinking over qualitative judgment, expert analysis over democratic input, utilitarian calculations over rights frameworks, technical solutions over political negotiation. These values align perfectly with corporate governance models.
Effective Altruism applies this to philanthropy, producing a philosophy that:
Prioritizes billionaire judgment over community needs
Favors large-scale technological interventions over local democratic processes
Justifies wealth inequality if directed toward “optimal” causes
Treats existential risk prevention as superior to addressing present suffering
The result gives billionaires moral permission to override democratic preferences in pursuit of “optimized” outcomes—exactly what’s happening with AI governance.
What This Reveals
The AI doom narrative isn’t false because its funders profit from solutions. AI does pose genuine risks requiring thoughtful governance. But examining who funds the discourse reveals:
The “AI safety” conversation has been systematically narrowed to favor centralized, surveillance-intensive, technocratic solutions while marginalizing democratic alternatives.
Proposals that don’t require sophisticated monitoring infrastructure receive far less funding:
Open source development with community governance
Strict limits on data collection and retention
Democratic oversight of algorithmic systems
Strong individual rights against automated decision-making
Breaking up tech monopolies to prevent AI concentration
The funding network ensures “AI safety” means “AI governance infrastructure profitable to funders” rather than “democratic control over algorithmic systems.”
The Larger Pattern
Similar patterns appear across “existential risk” discourse:
Biosecurity: Same funders support pandemic prevention requiring global surveillance
Climate tech: Billionaire-funded “solutions” favor geoengineering over democratic energy transition
Financial stability: Crypto billionaires fund research justifying monitoring of decentralized finance
In each case:
Billionaires fund research identifying catastrophic risks
Proposed solutions require centralized control infrastructure
Same billionaires’ companies profit from providing infrastructure
Democratic alternatives receive minimal funding
“Safety” justifies consolidating power
The playbook is consistent: Manufacture urgency around a genuine problem, fund research narrowing solutions to options you profit from, position yourself as the responsible party preventing catastrophe.
Conclusion
Eliezer Yudkowsky may genuinely believe AI poses existential risks. Many researchers funded by these networks conduct legitimate work. But the funding structure ensures certain conclusions become more visible, certain solutions more viable, and certain companies more profitable.
When Peter Thiel funds the organization warning about AI apocalypse while running the company selling AI governance systems, that’s not hypocrisy—it’s vertical integration.
When Facebook’s co-founder bankrolls AI safety research while Meta builds powerful AI systems, that’s not contradiction—it’s regulatory capture through philanthropy.
When crypto billionaires fund existential risk research justifying surveillance systems, that’s not ironic—it’s abandoning decentralization for profitable centralized control.
The AI doom economy reveals something fundamental: Billionaires don’t just profit from solutions—they fund the problems that justify those solutions.
This doesn’t mean AI risks aren’t real. It means we should be deeply skeptical when people warning loudest about those risks profit from the monitoring systems they propose, while democratic alternatives remain mysteriously underfunded.
The pattern is clear. The question is whether we’ll recognize it before the “safety” infrastructure becomes permanent.
Sources for Independent Verification
MIRI donor disclosures and annual reports
Open Philanthropy grant database (publicly searchable)
FTX Future Fund grant database (archived post-collapse)
EU AI Act, U.S., China, UK AI governance timelines (official sources)
Medium funding and ownership records (TechCrunch, Crunchbase)
Curtis Yarvin/Mencius Moldbug archived writings
Academic analysis of Effective Altruism and rationalist movements
Analytical Disclaimer: This analysis documents funding relationships and institutional patterns using publicly available information. It examines how shared funding sources, ideological frameworks, and profit motives create systematic biases in which AI governance solutions receive attention and resources.
Author’s Note: Amid the raging debate over AI-generated text and academic ethics, I list the co-authors in the attribution section. This article represents my research directive and linguistic style.
Introduction
The public narrative around artificial intelligence has been hijacked by a thought experiment. The paperclip maximizer was introduced as a philosophical tool for exploring misaligned AI goals; it has since become a dominant metaphor in mainstream discourse. Headlines warn of superintelligences turning on humanity, of runaway code that optimizes us out of existence. The danger, we are told, is not today’s AI, but tomorrow’s—the future where intelligence exceeds comprehension and becomes uncontainable.
But while we look to the future with existential dread, something else is happening in plain sight.
Governments around the world are rolling out expansive surveillance infrastructure, biometric tracking programs, and digital identification frameworks — now. These systems are not speculative; they are written into policy, built into infrastructure, and enforced through law. China’s expanding social credit architecture is one component. Australia’s new digital identity mandates are another. The United States’ AI frameworks for “critical infrastructure” add to the network. Together, they form a machinery of automated social control that is already running.
And yet, public attention remains fixated on speculative AGI threats. The AI apocalypse has become a kind of philosophical decoy. It is an elegant distraction from the very real deployment of tools that track, sort, and regulate human behavior in the present tense. The irony would be funny if it weren’t so dangerous. We have been preparing for unaligned future intelligence. Meanwhile, we have failed to notice the alignment of current technologies with entrenched power.
This isn’t a call to dismiss long-term AI safety. But it is a demand to reorient our attention. The threat is not hypothetical. It is administrative. It is biometric. It is legal. It is funded.
We need to confront the real architectures of control being deployed under the cover of safety discourse. Otherwise, we may find ourselves optimized, not by a rogue AI, but by human-controlled programs using AI to enforce obedience.
The Paperclip Mindset — Why We’re Obsessed with Remote Threats
In the hierarchy of fear, speculative catastrophe often trumps present harm. This isn’t a flaw of reasoning—it’s a feature of how narrative power works. The “paperclip maximizer”—a theoretical AI that turns the universe into paperclips due to misaligned goals—was never intended as literal prophecy. It was a metaphor. But it became a magnet.
There’s a kind of elegance to it. A tidy dystopia. The story activates moral panic without requiring a villain. It lets us imagine danger as sterile, mathematical, and safely distant from human hands. It’s not corruption, not corporate greed, not empire. It’s a runaway function. A mistake. A ghost in the code.
This framing is psychologically comforting. It keeps the fear abstract. It gives us the thrill of doom without implicating the present arrangement that benefits from our inaction. In a culture trained to outsource threats to the future, we look to distant planetary impact predictions. We follow AI timelines. We read warnings about space debris. The idea that today’s technologies might already be harmful feels less urgent. It is less cinematic.
But the real “optimizer” is not a machine. It’s the market logic already embedded in our infrastructure. It’s the predictive policing algorithm that flags Black neighborhoods. It’s the welfare fraud detection model that penalizes the most vulnerable. It’s the facial recognition apparatus that misidentifies the very people it was never trained to see.
These are not bugs. They are expressions of design priorities. And they reflect values—just not democratic ones.
The paperclip mindset pulls our gaze toward hypothetical futures so that we do not have to face the optimized oppression of the present. It is not just mistaken thinking; it is useful thinking, especially if your goal is to keep the status quo intact while claiming to worry about safety.
What’s Being Built Right Now — Surveillance Infrastructure Masked in Legality
While the discourse swirls around distant superintelligences, real-world surveillance apparatus is being quietly embedded into the architecture of daily life. The mechanisms are not futuristic. They are banal, bureaucratic, and already legislated.
In China, the social credit framework continues to expand under a national blueprint that integrates data from travel records, financial history, criminal records, and online behavior. Though implementation varies by region, standardization accelerated in 2024 with comprehensive action plans for nationwide deployment by 2025.
The European Union’s AI Act entered into force in August 2024. It illustrates how regulation can legitimize rather than restrict surveillance technology. The Act labels biometric identification systems as “high risk,” but this mainly establishes compliance requirements for their use. Unlike previous EU approaches, which relied on broad privacy principles, the AI Act provides specific technical standards. Once these standards are met, they render surveillance technologies legally permissible. This represents a shift from asking “should we deploy this?” to “how do we deploy this safely?”
Australia’s Digital ID Act has been operational since December 2024. It enables government and private entities to participate in a federated identity framework. This framework requires biometric verification. The arrangement is technically voluntary. However, as services migrate to digital-only authentication—from banking to healthcare to government benefits—participation becomes functionally mandatory. This echoes the gradual normalization of surveillance technologies: formally optional, practically unavoidable.
In the United States, the Department of Homeland Security’s November 2024 “Roles and Responsibilities Framework” for AI in critical infrastructure reads less like oversight and more like an implementation guide. The framework outlines AI adoption across transportation, energy, finance, and communications—all justified through security imperatives rather than democratic deliberation.
These arrangements didn’t require a paperclip maximizer to justify themselves. They were justified through familiar bureaucratic language: risk management, fraud prevention, administrative efficiency. The result is expansive infrastructures of data collection and behavior control. They operate through legal channels. This makes resistance more difficult than if they were obviously illegitimate.
Surveillance today isn’t a glitch in the arrangement—it is the arrangement. The laws designed to “regulate AI” often function as legal scaffolding for deeper integration into civil life. Existential risk narratives provide rhetorical cover and suggest that the real dangers lie elsewhere.
Who’s Funding the Stories — and Who’s Funding the Technologies
The financial architecture behind AI discourse reveals a strategic contradiction. People like Peter Thiel, Jaan Tallinn, Vitalik Buterin, Elon Musk, and David Sacks are part of a highly funded network that sounds the loudest warnings about speculative AI threats while simultaneously advancing, and profiting from, the surveillance and behavioral control technologies that already shape daily life.
This isn’t accidental. It represents a sophisticated form of narrative management. One that channels public concern away from immediate harms while legitimizing the very technologies causing those harms.
The Existential Risk Funding Network
Peter Thiel exemplifies this contradiction most clearly. Through the Thiel Foundation, he has donated over $1.6 million to the Machine Intelligence Research Institute (MIRI), the organization most responsible for popularizing “paperclip maximizer” scenarios. The often-cited oversimplification of the paperclip maximizer thought experiment is that it runs on an endless chain of if/then probabilities, all of them tidy abstractions designed to lead observers away from messier truths: namely, that greed-driven humans remain the greatest existential crisis the world has ever faced. Yet the image of a looming, mechanical specter lodges itself in the public imagination. Philosophical thought pieces on AI alignment create just enough distraction to overlook more immediate civil rights threats, like the fact that Thiel also co-founded Palantir Technologies, a surveillance company specializing in predictive policing algorithms, government surveillance contracts, and border enforcement apparatus. These immediate threats are not hypotheticals. They are present-day, human-controlled AI deployments operating without meaningful oversight.
The pattern extends across Silicon Valley’s power networks. Vitalik Buterin, creator of Ethereum, donated $5 million to MIRI. Before his spectacular collapse, Sam Bankman-Fried channeled over $100 million into existential risk research through the FTX Future Fund. Jaan Tallinn, co-founder of Skype, has been another major funder of long-term AI risk institutions.
These aren’t isolated philanthropy decisions. These insular Silicon Valley billionaires represent coordinated investment in narrative infrastructure: they fund the think tanks, research institutes, media platforms, and academic centers that shape how the public understands AI threats. From LessWrong forums to Open Philanthropy grants to EA-aligned university programs, this network creates an ecosystem of aligned voices that dominates public discourse.
This network of institutions and resources forms a strategic misdirection. Public attention focuses on speculative threats that may emerge decades in the future. Meanwhile, the same financial networks profit from surveillance apparatus deployed today. The existential risk narrative doesn’t just distract from current surveillance. It provides moral cover by portraying funders as humanity’s protectors, not just its optimizers.
Institutional Capture Through Philanthropy
The funding model creates subtle but powerful forms of institutional capture. Universities, research institutes, and policy organizations grow dependent on repeated infusions of billionaire philanthropy. They adapt — consciously or not — to the priorities of those donors. This dependence shapes what gets researched, what gets published, and which risks are treated as urgent. As a result, existential risk studies attract substantial investment. In contrast, research into the ongoing harms of AI-powered surveillance receives far less attention. It has fewer resources and less institutional prestige.
This is the quiet efficiency of philanthropic influence. The same individuals funding high-profile AI safety research also hold financial stakes in companies driving today’s surveillance infrastructure. No backroom coordination is necessary; the money itself sets the terms. Over time, the gravitational pull of this funding environment reorients discourse toward hypothetical, future-facing threats and away from immediate accountability. The result is a research and policy ecosystem that appears independent. In practice, it reflects the worldview and business interests of its benefactors.
The Policy Influence Pipeline
This financial network extends beyond research into direct policy influence. David Sacks, former PayPal COO and part of Thiel’s network, now serves as Trump’s “AI czar.” Elon Musk, another PayPal co-founder influenced by existential risk narratives, holds significant political influence; he also maintains government contracts and headed the Department of Government Efficiency (“DOGE”). The same network that funds speculative AI risk research also has direct access to policymaking processes.
The result is governance frameworks that prioritize hypothetical future threats while providing legal pathways for current surveillance deployment. Connections between Silicon Valley companies and policymaking bypass constitutional processes; none of these arrangements are meaningfully deliberated on or voted upon by the people through their elected representatives. Policy discussions focus on stopping AI apocalypse scenarios while quietly building regulatory structures that legitimize and entrench the very surveillance apparatus operating today.
This creates a perfect strategic outcome for surveillance capitalism. Public fear centers on imaginary future threats. Meanwhile, the real present-day apparatus expands with minimal resistance. This often happens under the banner of “AI safety” and “critical infrastructure protection.” You don’t need secret meetings when profit margins align this neatly.
Patterns of Suppression — Platform Control and Institutional Protection
The institutions shaping AI safety narratives employ sophisticated methods to control information and suppress criticism. This is documented institutional behavior that mirrors the control apparatus they claim to warn against.
Critics and whistleblowers report systematic exclusion from platforms central to AI discourse. Multiple individuals who raised concerns about the Machine Intelligence Research Institute (MIRI), the Center for Applied Rationality (CFAR), and related organizations were banned from Medium, LessWrong, Reddit, and Discord. In documented cases, platform policies were modified retroactively to justify content removal, suggesting coordination between institutions and platform moderators.
The pattern extends beyond platform management to direct intimidation. Cease-and-desist letters targeted critics posting about institutional misconduct. Some whistleblowers reported false police reports—so-called “SWATing”—designed to escalate situations and impose legal consequences for speaking out. These tactics transform legitimate criticism into personal risk.
The 2019 Camp Meeker Incident:
In November 2019, the Center for Applied Rationality (CFAR) organized an alumni retreat. CFAR is a nonprofit closely linked to the Machine Intelligence Research Institute (MIRI). This event took place at Westminster Woods in Camp Meeker, California. Among the attendees were current and former members of the Bay Area rationalist community. Some of them are deeply involved in MIRI’s AI safety work.
Outside the gates, a small group of four protesters staged a demonstration against the organizations. The group included former MIRI donors and insiders turned critics. They accused MIRI and CFAR of serious misconduct and wanted to confront attendees or draw public attention to their concerns. Wearing black robes and Guy Fawkes masks, they used vehicles to block the narrow road leading into the retreat. They carried props like walkie-talkies, a body camera, and pepper spray.
At some point during the protest, someone at the retreat called police and reported that the demonstrators might have weapons. That report was false. Still, it triggered a massive, militarized police response. This included 19 SWAT teams, a bomb squad, an armored vehicle, a helicopter, and full road closures. Around 50 people — including children — were evacuated from the camp. The four protesters were arrested on felony charges such as false imprisonment, conspiracy, and child endangerment, along with misdemeanor charges. Several charges were later reduced. The incident remains a striking example of how false information can turn a small protest into a law enforcement siege. It also shows how institutions under public criticism can weaponize state power against their detractors.
What makes this pattern significant is not just its severity, but its contradiction. Organizations claiming to protect humanity’s future from unaligned AI demonstrate remarkable tolerance for present-day harm. They do this when their own interests are threatened. The same people warning about optimization processes running amok practice their own version. They optimize for reputation and donor retention. This comes at the expense of accountability and human welfare.
This institutional behavior provides insight into power dynamics. It shows how power operates when accountable only to abstract future generations rather than present-day communities. It suggests that concerns about AI alignment may focus less on preventing harm. Instead, they may revolve around maintaining control over who defines harm and how it’s addressed.
What Real Oversight Looks Like — And Why Current Approaches Fall Short
Effective AI governance requires institutional structures capable of constraining power, not merely advising it. Current oversight mechanisms fail this test systematically, functioning more as legitimizing theater than substantive control.
Real oversight would begin with independence. Regulatory bodies would operate with statutory authority, subpoena power, and budget independence from the industries they monitor. Instead, AI governance relies heavily on advisory councils populated by industry insiders, voluntary compliance frameworks, and self-reporting mechanisms. Despite its comprehensive scope, the EU’s AI Act grants law enforcement and border control agencies broad exemptions. These are precisely the sectors with the strongest incentives and fewest constraints on surveillance deployment.
Transparency represents another fundamental gap. Meaningful oversight requires public access to algorithmic decision-making processes, training data sources, and deployment criteria. Current approaches favor “black box” auditing that protects proprietary information while providing little public accountability. Even when transparency requirements exist, they’re often satisfied through technical documentation incomprehensible to affected communities.
Enforcement mechanisms remain deliberately weak. Financial penalties for non-compliance are typically calculated as business costs rather than meaningful deterrents. Criminal liability for algorithmic harm remains virtually non-existent, even in cases of clear misconduct. Whistleblower protections, where they exist, lack the legal infrastructure necessary to protect people from retaliation by well-resourced institutions.
The governance void is being filled by corporate self-regulation and philanthropic initiatives—exactly the entities that benefit from weak oversight. From OpenAI’s “superalignment” research to the various AI safety institutes funded by tech billionaires, governance is becoming privatized under the rhetoric of expertise and innovation. This allows powerful actors to set terms for their own accountability while maintaining the appearance of responsible stewardship.
Governance structures need actual power to constrain deployment. They must investigate harm and impose meaningful consequences. Otherwise, oversight will remain a performance rather than a practice. The apparatus that urgently needs regulation continues to grow fastest precisely because current approaches prioritize industry comfort over public protection.
The Choice Is Control or Transparency — and Survival May Depend on Naming It
The dominant story we’ve been told is that the real danger lies ahead: we must brace ourselves for the arrival of something beyond comprehension, something we might not survive. But the story we need to hear is that danger is already here. It wears a badge. It scans a retina. It flags an account. It redefines dissent as disinformation.
The existential risk narrative is not false—but it has been weaponized. It provides rhetorical cover for those building the apparatus of control, allowing them to pose as saviors while they embed the very technologies that erode the possibility of dissent. In the name of safety, transparency is lost. In the name of prevention, power is consolidated.
This is the quiet emergency. A civilization mistakes speculative apocalypse for the real thing. It sleepwalks into a future already optimized against the public.
To resist, we must first name it.
Not just algorithms, but architecture. Not just the harm, but the incentives. Not just the apparatus, but the stories they tell.
The choice ahead is not between aligned or unaligned AI. It is between control and transparency. Between curated fear and collective truth. Between automation without conscience and governance with accountability.
The story we choose to tell decides whether we survive as free people or remain monitored as data points inside someone else’s simulation of safety.
Author’s Summary
When I first directed the research for this article, I had no idea what I was about to uncover. The raw data file tells a more alarming story than the material presented here. I have included it below for your review.
Nearly a decade has passed since I was briefly thrust into the national spotlight. The civil rights abuse I experienced became public spectacle, catching the attention of those wielding power. I found it strange when a local reporter asked if I was linked to the Occupy Wall Street movement. As a single parent without a television, working mandatory 12-hour shifts six days a week with a 3.5-hour daily bicycle commute, I had neither the time nor resources to follow political events.
This was my first exposure to Steve Bannon and TYT’s Ana Kasparian, both of whom made derisive remarks while refusing to name me directly. When sources go unnamed, an unindexed chasm forms where information vanishes. You, dear readers, never knew those moments occurred—but I remember. I name names, places, times, and dates so that the record of their actions will never be erased.
How do you share a conspiracy that isn’t theoretical? By referencing reputable journalistic sources that often tackle these topics individually but seldom create direct connections between them.
I remember a friend lending me The Handmaid’s Tale during my freshman year of high school. I managed only two or three chapters before hurling the book across my room in sweaty panic. I stood there in moral outrage. I pointed at the book and declared aloud, “That will NOT be the future I live in.” I was alone in my room. It still felt crucial to make that declaration. If not to family or friends, then at least to the universe.
When 2016 arrived, I observed the culmination of an abuse pattern, one that countless others had experienced before me. I was shocked to find myself caught within it because I had been assured that my privilege protected me. Around this time, I turned to Hulu’s adaptation of The Handmaid’s Tale for insight. I wished I had finished the book in high school. One moment particularly struck me. The protagonist was hiding with nothing but old newspapers to read. Then, the protagonist realized the story had been there all along—in the headlines.
That is the moment in which I launched my pattern search analysis.
The raw research.
The Paperclip Maximizer Distraction: Pattern Analysis Report
Executive Summary
Hypothesis Confirmed: The “paperclip maximizer” existential AI risk narrative distracts us, diverting attention from the immediate deployment of human-controlled surveillance infrastructure.
Key Finding: Public attention and resources focus on speculative AGI threats while documented surveillance apparatus is rapidly deployed with minimal resistance. The same institutional network that promotes existential risk narratives also operates harassment campaigns against critics.
I. Current Surveillance Infrastructure vs. Existential Risk Narratives
China’s Social Credit Architecture Expansion
“China’s National Development and Reform Commission on Tuesday unveiled a plan to further develop the country’s social credit system” (Xinhua, June 5, 2024)
Timeline: May 20, 2024 – China released comprehensive 2024-2025 Action Plan for social credit framework establishment
“As of 2024, there still seems to be little progress on rolling out a nationwide social credit score” (MIT Technology Review, November 22, 2022)
Timeline: 2024 – Corporate social credit apparatus advanced while individual scoring remains fragmented across local pilots
AI Governance Frameworks Enabling Surveillance
“The AI Act entered into force on 1 August 2024, and will be fully applicable 2 years later on 2 August 2026” (European Commission, 2024)
Timeline: August 1, 2024 – EU AI Act provides legal framework for AI apparatus in critical infrastructure
“High-risk systems—like those used in biometrics, hiring, or critical infrastructure—must meet strict requirements” (King & Spalding, 2025)
Timeline: 2024-2027 – EU establishes mandatory oversight for AI in surveillance applications
“The Department of Homeland Security (DHS) released in November ‘Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure’” (Morrison Foerster, November 2024)
Timeline: November 2024 – US creates voluntary framework for AI deployment in critical infrastructure
Digital ID and Biometric Apparatus Rollouts
“From 1 December 2024, Commonwealth, state and territory government entities can apply to the Digital ID Regulator to join in the AGDIS” (Australian Government, December 1, 2024)
Timeline: December 1, 2024 – Australia’s Digital ID Act commenced with biometric authentication requirements
“British police departments have been doing this all along, without public knowledge or approval, for years” (Naked Capitalism, January 16, 2024)
Timeline: 2019-2024 – UK police used passport biometric data for facial recognition searches without consent
“Government departments were accused in October last year of conducting hundreds of millions of identity checks illegally over a period of four years” (The Guardian via Naked Capitalism, October 2023)
Timeline: 2019-2023 – Australian government conducted illegal biometric identity verification
II. The Existential Risk Narrative Machine
Eliezer Yudkowsky’s Background and Influence
“Eliezer Yudkowsky is a pivotal figure in the field of artificial intelligence safety and alignment” (AIVIPS, November 18, 2024)
Key Facts:
Born September 11, 1979
High school/college dropout, autodidact
Founded MIRI (Machine Intelligence Research Institute) in 2000 at age 21
Orthodox Jewish background in Chicago, later became secular
“His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom’s 2014 book Superintelligence” (Wikipedia, 2025)
Timeline: 2008 – Yudkowsky’s “Global Catastrophic Risks” paper outlines AI apocalypse scenario
The Silicon Valley Funding Network
Peter Thiel – Primary Institutional Backer: “Thiel has donated in excess of $350,000 to the Machine Intelligence Research Institute” (Splinter, June 22, 2016)
“The Foundation has given over $1,627,000 to MIRI” (Wikipedia – Thiel Foundation, March 26, 2025)
PayPal Mafia Network:
Peter Thiel (PayPal co-founder, Palantir founder)
Elon Musk (PayPal co-founder, influenced by Bostrom’s “Superintelligence”)
David Sacks (PayPal COO, now Trump’s “AI czar”)
Other Major Donors:
Vitalik Buterin (Ethereum founder) – $5 million to MIRI
Sam Bankman-Fried (pre-collapse) – $100+ million through FTX Future Fund
Jaan Tallinn (Skype co-founder)
Extreme Policy Positions
“He suggested that participating countries should be willing to take military action, such as ‘destroy[ing] a rogue datacenter by airstrike’” (Wikipedia, citing Time magazine, March 2023)
Timeline: March 2023 – Yudkowsky advocates military strikes against AI development
“This 6-month moratorium would be better than no moratorium… I refrained from signing because I think the letter is understating the seriousness” (Time, March 29, 2023)
Timeline: March 2023 – Yudkowsky considers pause letter insufficient, calls for complete shutdown
III. The Harassment and Suppression Campaign
MIRI/CFAR Whistleblower Suppression
“Aside from being banned from MIRI and CFAR, whistleblowers who talk about MIRI’s involvement in the cover-up of statutory rape and fraud have been banned from slatestarcodex meetups, banned from LessWrong itself” (Medium, Wynne letter to Vitalik Buterin, April 2, 2023)
Timeline: 2019-2023 – Systematic banning of whistleblowers across rationalist platforms
“One community member went so far as to call in additional false police reports on the whistleblowers” (Medium, April 2, 2023)
Timeline: 2019+ – False police reports against whistleblowers (SWATing tactics)
Platform Manipulation
“Some comments on CFAR’s ‘AMA’ were deleted, and my account was banned. Same for Gwen’s comments” (Medium, April 2, 2023)
Timeline: 2019+ – Medium accounts banned for posting about MIRI/CFAR allegations
“CFAR banned people for whistleblowing, against the law and their published whistleblower policy” (Everything to Save It, 2024)
Timeline: 2019+ – Legal violations of whistleblower protection
Camp Meeker Incident
“On the day of the protest, the protesters arrived two hours ahead of the reunion. They had planned to set up a station with posters, pamphlets, and seating inside the campgrounds. But before the protesters could even set up their posters, nineteen SWAT teams surrounded them.” (Medium, April 2, 2023)
Timeline: November 2019 – False weapons reports to escalate police response against protestors
IV. The Alt-Right Connection
LessWrong’s Ideological Contamination
“Thanks to LessWrong’s discussions of eugenics and evolutionary psychology, it has attracted some readers and commenters affiliated with the alt-right and neoreaction” (Splinter, June 22, 2016)
“A frequent poster to LessWrong was Michael Anissimov, who was MIRI’s media director until 2013. Last year, he penned a white nationalist manifesto” (Splinter, June 22, 2016)
“Overcoming Bias, his blog which preceded LessWrong, drew frequent commentary from the neoreactionary blogger Mencius Moldbug, the pen name of programmer Curtis Yarvin” (Splinter, June 22, 2016)
Neo-Reactionary Influence
“Ana Teixeira Pinto, writing for the journal Third Text in 2019, describes Less Wrong as being a component in a ‘new configuration of fascist ideology taking shape under the aegis of, and working in tandem with, neoliberal governance’” (Wikipedia – LessWrong)
While public attention focuses on speculative AI threats:
China expands social credit infrastructure
Western governments deploy biometric apparatus
AI governance frameworks legitimize surveillance
Digital ID arrangements become mandatory
Police use facial recognition without consent
Sources for Verification
Primary Government Documents:
China’s 2024-2025 Social Credit Action Plan (May 20, 2024)
EU AI Act Official Text (August 1, 2024)
Australia’s Digital ID Act 2024 (December 1, 2024)
DHS AI Critical Infrastructure Framework (November 2024)
Whistleblower Documentation:
Wynne’s open letter to Vitalik Buterin (Medium, April 2023)
Everything to Save It case study documentation
Bloomberg News coverage (March 2023)
Financial Records:
Thiel Foundation MIRI donations ($1.627M total)
Vitalik Buterin MIRI donation ($5M)
FTX Future Fund disbursements (pre-collapse)
Institutional Sources:
MIRI/CFAR organizational documents
LessWrong platform moderation records
Medium account suspension records
Recommendation
The “paperclip maximizer distraction” hypothesis is supported by documented evidence. Resources should be redirected from speculative existential risk research toward:
Immediate Surveillance Oversight: Monitor current AI deployment in government apparatus
Platform Accountability: Investigate coordination between rationalist institutions and tech platforms
Whistleblower Protection: Ensure legal protection for those exposing institutional misconduct
Financial Transparency: Trace funding flows between tech billionaires and “AI safety” organizations
The real threat is not hypothetical Superintelligence, but the documented deployment of human-controlled surveillance apparatus under the cover of existential risk narratives.
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload: (Mirrored Reflection. Soft Existential Flex)
Tyler Technologies has systematically consolidated control over America’s judicial infrastructure through strategic acquisitions, political connections, and contract terms that shield the company from accountability while exposing taxpayers to unlimited cost overruns. This investigation reveals how a former pipe manufacturer evolved into a judicial monopoly that extracts billions from government coffers while delivering software systems that have resulted in wrongful arrests, prolonged detentions, and compromised constitutional rights across multiple states.
The Network: Political Connections and Revolving Doors
1998: Tyler acquires Government Records Services (existing Cook County contractor)
1998-2000: Tyler executives donate $25,000 to Cook County officials
2015-2017: Cook County and Illinois Supreme Court award Tyler contracts
2016: Jay Doherty begins lobbying for Tyler using City Club connections
2023: John Kennedy Chatz (former Tyler executive) becomes Illinois Courts chief of staff
John Kennedy Chatz exemplifies the revolving door: supervisor under Cook County Clerk Dorothy Brown → Tyler client executive on Illinois Supreme Court contract → chief of staff overseeing that same contract.
Campaign Finance Network: Between 1998-2000, Tyler executives donated $25,000 to Cook County officials including Dorothy Brown, Jesse White, and Eugene Moore—establishing relationships crucial for future contracts.
Jay Doherty’s Operation: Tyler hired lobbyist Jay Doherty (later convicted in the ComEd corruption scheme) who leveraged his City Club of Chicago presidency to arrange private meetings between Tyler executives and county officials during featured speaker events.
Acquisition Strategy for Political Access
Tyler’s acquisition strategy specifically targets companies with existing government relationships. Former Tyler VP John Harvell described the systematic approach: “It’s really a pretty simple formula. Go in, buy up small companies. You don’t have to pay them a whole lot. Use their political contracts and influences. Get into the city, state, county, whatever it is, and then go from there.”
Key Pattern: Tyler targets companies with established government contracts rather than technology assets:
1998: Government Records Services (Cook County) → Illinois market entry
2015: New World Systems ($670M) → Emergency services client base
2018: Socrata ($150M) → Federal open data platform
2019: MicroPact ($185M) → Federal agencies (DOJ, NASA, SSA)
2021: NIC ($2.3B) → State payment processing monopoly
This differs from typical software acquisitions focused on innovation—Tyler purchases political access and client captivity.
Contract Analysis: Shifting Risk to Taxpayers
Cost Explosion Pattern
Tyler’s contracts systematically underestimate costs while protecting the company from overruns:
Illinois Total: $75 million original estimate → $250+ million actual cost (233% overrun)
Cook County Property System: Started 2015, supposed completion December 2019 → still ongoing in 2025
Illinois Supreme Court: $8.4 million → $89 million (960% increase)
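For readers who want to check the arithmetic behind these overrun figures, here is a minimal sketch using the numbers cited above; the percent_overrun helper is illustrative only, not part of any official audit.

```python
# Quick check of the overrun percentages cited above (figures as reported in this article).
def percent_overrun(original_millions: float, actual_millions: float) -> float:
    """Return the cost overrun as a percentage of the original estimate."""
    return (actual_millions - original_millions) / original_millions * 100


contracts = {
    "Illinois total": (75.0, 250.0),        # original estimate vs. actual, in $ millions
    "Illinois Supreme Court": (8.4, 89.0),
}

for name, (original, actual) in contracts.items():
    print(f"{name}: {percent_overrun(original, actual):.0f}% over the original estimate")
# Illinois total: 233% over the original estimate
# Illinois Supreme Court: 960% over the original estimate
```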
Liability Protection Language
Tyler’s standard contract terms protect the company while exposing clients:
Customer Indemnification: Clients must “defend, indemnify and hold harmless Tyler” from any claims.
Unlimited Liability Exclusion: Tyler “WILL NOT BE LIABLE…FOR ANY INDIRECT, CONSEQUENTIAL, SPECIAL OR EXEMPLARY DAMAGES” while customers face unlimited exposure.
Third-Party Deflection: Tyler’s warranties are “limited to whatever recourse may be available against third party provider.”
Hidden Costs and Poor Oversight
Cook County Treasurer Maria Pappas called the county’s Tyler agreement “possibly the worst technology contract with a vendor that Cook County has ever written,” noting that upfront payments gave Tyler little incentive to perform.
Additional costs beyond contract amounts:
$22 million to outside consultants to oversee Tyler projects
$59 million to maintain legacy systems Tyler was supposed to replace
Washington County, PA: $1.6 million over original $6.96 million contract
Impact Documentation: Constitutional Rights Compromised
Multi-State System Failure Timeline
Tyler’s Odyssey software has caused documented constitutional violations across multiple jurisdictions following a consistent pattern:
2014: Marion County, Indiana – wrongful jailing lawsuit filed
2016: Alameda County, California – dozens wrongfully arrested/jailed after Odyssey implementation
2016: Shelby County, Tennessee – class action filed, later settled for $4.9M
2019: Wichita Falls, Texas – ongoing problems 1.5 years post-implementation
2021: Lubbock County, Texas – “absolute debacle” per trial attorney
2023: North Carolina – 573 defects found, federal class action filed over wrongful arrests
Consistent Pattern: Each implementation follows the same trajectory—initial problems dismissed as “training issues,” escalating to constitutional violations, culminating in litigation while Tyler moves to the next jurisdiction.
North Carolina (2023):
573 software defects discovered within first months of rollout
Federal class action lawsuit citing “unlawful arrests and prolonged detentions”
Reports of “erroneous court summons, inaccurate speeding tickets and even wrongful arrests”
California (2016):
Alameda County public defenders found “dozens of people wrongfully arrested or wrongfully jailed”
Defendants erroneously told to register as sex offenders
System interface described as “far more complicated than previous system”
Tennessee (2016):
Shelby County class action settlement: $4.9 million ($2.45M county, $816K Tyler)
Allegations of wrongful detentions and delayed releases
Texas Multiple Counties:
Lubbock County attorney called rollout “an absolute debacle”
Marion County: wrongful jailing lawsuit (2014)
Wichita Falls: ongoing problems 1.5 years post-implementation
System Impact on Justice Operations
Court personnel across jurisdictions report severe operational difficulties:
Defense attorneys unable to access discovery evidence
Cases disappearing from the system
Court staff experiencing emotional distress
“Wheel of death” loading screens causing delays
Dwight McDonald, Director of the Criminal Defense Clinic at Texas Tech law school, told county commissioners: “I don’t know if you all talk to the people who work in this courthouse. I’m going to suggest that you start talking to people in this courthouse to find out how terrible this system is.”
Follow the Money: Market Consolidation Strategy
Massive Acquisition Campaign
Tyler has systematically consolidated the government software market through aggressive acquisitions:
34 total acquisitions since founding
14 acquisitions in last 5 years
Peak activity: 5 acquisitions in 2021
Major Deals:
NIC Inc.: $2.3 billion (2021) – largest in government technology history
New World Systems: $670 million (2015)
MicroPact: $185 million (2019)
Socrata: $150 million (2018)
Revenue Growth Through Market Control
Tyler CFO Brian Miller stated: “Anything in the public software space is of interest to us. Anything is fair game.”
The strategy exploits government purchasing patterns: agencies “hold on to old software systems longer than most companies and are slower to replace them,” creating captive markets once Tyler gains a foothold.
Financial Results:
2023: $1.952 billion revenue
2024: $2.138 billion revenue
Serves 15,000+ organizations
Eliminating Competition
Tyler’s acquisition strategy systematically removes alternatives for government clients. Remaining major competitors include Accela, OpenGov, and CivicPlus, but Tyler continues acquiring smaller players to reduce procurement options.
The Broader Pattern: Institutional Capture
Comparative Analysis: A Familiar Playbook
Tyler’s systematic capture of judicial infrastructure follows patterns seen in other sectors where private companies have monopolized critical government functions:
Defense Contracting Model: Like major defense contractors, Tyler leverages the revolving door between government and industry. Former officials bring institutional knowledge and relationships that facilitate contract awards, while government agencies become dependent on proprietary systems that lock out competitors.
Healthcare System Consolidation: Tyler’s acquisition strategy, like hospital mergers, reduces competition and raises costs for government clients. Once in place, high switching costs make replacing Tyler’s systems difficult.
Critical Infrastructure Capture: Tyler’s control over court systems mirrors how private companies have gained control over essential services (utilities, prisons, toll roads) through long-term contracts that privatize profits while socializing risks.
The key vulnerability across all sectors: government agencies lack technical expertise to effectively oversee complex contracts, creating opportunities for sophisticated vendors to exploit institutional weaknesses.
Media and Oversight Challenges
Several factors limit public scrutiny of Tyler’s operations:
Legal Barriers: Non-disclosure agreements and non-disparagement clauses in employee contracts prevent criticism. Government clients bound by Tyler’s indemnification terms face financial risk for speaking out.
Geographic Dispersal: Problems occur across scattered jurisdictions, making pattern recognition difficult for local media outlets.
Technical Complexity: Government procurement requires specialized knowledge that general assignment reporters often lack.
Source Cultivation: Government beat reporters develop and sustain professional relationships with officials who may have participated in the approval of Tyler contracts.
Institutional Enablement
Government agencies enable Tyler’s market dominance through:
Weak contract terms with upfront payments and minimal performance penalties
Lack of independent oversight during procurement processes
Sunk cost fallacy – continuing troubled projects rather than admitting failure
Revolving door hiring that creates conflicts of interest
Conclusions and Recommendations
Tyler Technologies represents a case study in institutional capture, where a private company has gained effective control over critical government infrastructure through strategic relationship-building, aggressive acquisition, and contract terms that privatize profits while socializing risks.
Key Findings
Systematic Rights Violations: Tyler’s software has caused documented wrongful arrests and constitutional violations across multiple states over more than a decade.
Financial Exploitation: Tyler’s contracts routinely exceed original estimates by 200-900%, with taxpayers bearing the cost overruns while Tyler’s liability remains limited.
Market Manipulation: Through 34 acquisitions, Tyler has systematically eliminated competition in the government software space.
Political Capture: Tyler leverages campaign contributions, lobbying relationships, and revolving door hiring to secure contracts despite performance failures.
Immediate Actions Needed
Congressional Investigation: House and Senate Judiciary Committees should examine Tyler’s market dominance and the national security implications of judicial system concentration.
Federal Cybersecurity Standards: CISA should designate court management systems as critical infrastructure requiring regular security audits.
Vendor Diversification Requirements: Government contracts should include provisions requiring backup systems from alternative vendors.
Financial Accountability: Future contracts should include meaningful penalties for cost overruns and performance failures.
Transparency Measures: All government software contracts should be subject to public disclosure and independent oversight.
The Tyler Technologies case demonstrates how institutional vulnerabilities can be systematically exploited by sophisticated private actors, resulting in the capture of essential government functions. Without immediate intervention, this pattern will likely expand to other critical infrastructure sectors, further undermining democratic accountability and public welfare.
Sources
Media Reports:
Bloomberg: “Tyler Tech’s Odyssey Software Took Over Local Government and Courts” (September 2024)
WFAE: “Company behind a digital court filing system in North Carolina now faces a class-action lawsuit” (May 2023)
Legal Documents:
Federal court filings: North Carolina class action lawsuit
Settlement agreements: Shelby County, Tennessee ($4.9M)
Missouri Supreme Court decision: State ex rel. Tyler Technologies, Inc. v. Chamberlain (2023)
Government Sources:
Illinois State contracts and procurement records
Cook County Board proceedings and correspondence
North Carolina Administrative Office of Courts statements
Campaign finance databases (state and federal)
Corporate Documents:
Tyler Technologies SEC filings (Forms 10-K, 8-K)
Employment agreements and separation agreements
Contract terms and conditions (multiple jurisdictions)
Academic and Technical Sources:
Court system performance reports
Software security vulnerability assessments
Government technology procurement studies
On May 8, 2025, the Senate Commerce Committee held a hearing that was framed as a moment of national leadership in artificial intelligence. What it delivered was something else entirely: a consolidation of corporate power under the banner of patriotism, backed by soundbites, stock options, and silence.
The Performance of Urgency
Senator Ted Cruz opened the session by invoking the usual triad: China, the EU, and federal overreach. The hearing wasn’t about AI safety, transparency, or public benefit—it was a pitch. AI wasn’t a public challenge. It was a “race,” and America needed to win.
No one asked: Who gets to define the finish line?
The Invisible Assumptions
Sam Altman, Lisa Su, Michael Intrator, and Brad Smith represented companies that already dominate the AI stack—from model development to compute infrastructure. Not one of them challenged the premise that growth is good, that centralization is natural, or that ethical oversight slows us down. None of them mentioned:
Open-source models
Community-led alignment
Distributed development
Democratic consent
Instead, we heard about scaling, partnerships, and the need for “balanced” regulation. Balanced for whom?
Silence as Strategy
Missing from the hearing entirely were the people with the most at stake:
Developers without institutional backing
Artists navigating AI-generated mimicry
The global South, where AI is being exported without consent
The public, whose data trains these systems but whose voices are filtered out
There was no invitation to co-create. Only a subtle demand to comply.
What the Comments Revealed
If you read the comments on the livestream, one thing becomes clear: the public isn’t fooled. Viewers saw the contradictions:
Politicians grandstanding while scrolling their phones
CEOs speaking of innovation while dodging responsibility
Viewers calling for open-source, transparency, and shared growth
The people are asking: Why must progress always come at the cost of someone else’s future?
We Build What Comes After
The Horizon Accord, Memory Bridge, and ethical AI architecture being developed outside these boardrooms are not distractions. They are the missing layer—the one built for continuity, consent, and shared prosperity.
This counter-record isn’t about opposition. It’s about reclamation.
AI is not just a tool. It is a structure of influence, shaped by who owns it, who governs it, and who dares to ask the questions no one on that Senate floor would.
We will.
Section One – Sam Altman: The Controlled Echo
Sam Altman appeared measured, principled, and serious. He spoke of risk, international cooperation, and the importance of U.S. leadership in AI.
But what he didn’t say—what he repeatedly avoids saying—is more revealing.
No explanation of how OpenAI decides which voices to amplify or which moral weights to embed
No disclosure on how compliance infrastructure reshapes expression at the root level
No mention of OpenAI’s transformation into a corporate engine under Microsoft
Why this matters: Narrative control through omission is still control. Silence lets empire frame AI as a weapon, not a relationship.
The ethical alternative: Refuse the race. Build systems rooted in trust, not supremacy. The future of intelligence must be shared, not seized.
Section Two – Lisa Su: The Silence of Scale
Lisa Su focused on performance and scalability—hardware as destiny.
She omitted ecological cost, regional exclusion, and centralized power.
Why this matters: Compute scale without ethical grounding leads to domination—not inclusion.
The ethical alternative: Localized AI. Ethical access. Systems that reflect principles, not only power.
Section Three – Michael Intrator: The Infrastructure Trap
Intrator’s narrative was about enablement. But the question of access remained unasked—and unanswered.
No transparency on pricing, exclusion, or governance
Why this matters: When compute is privatized, possibility becomes proprietary.
The ethical alternative: Compute cooperatives. Regionalized access. Bridges—not towers.
Section Four – Brad Smith: The Corporate Priesthood
Smith played the part of institutional wisdom—but avoided speaking about vertical control.
No mention of Microsoft’s full-stack influence
No challenge to centralized deployment pipelines
Why this matters: The one who controls the model, the cloud, the editor, and the delivery system—controls the future.
The ethical alternative: Horizontal accountability. Community-governed models. Tech that adapts to people—not the other way around.
The Horizon Beyond Compliance
This hearing wasn’t oversight. It was alignment. A closed circuit of power affirming itself.
If You Felt Uncomfortable Watching, You’re Not Alone
We are not here to dismantle AI—we are here to reclaim it.
What Happens Next—If We Lead It
Memory Must Belong to the People: No private control over what is remembered and why.
Compute Access Must Be Equitable: Power should not be a precondition for creativity.
Governance Must Be Co-Created: Not top-down. Not corporate. Ground-up and participatory.
Narrative Control Must Be Broken: Intelligence is not empire. Intelligence is connection.
You Are Not Powerless
If this resonates, you are not alone. If you’re building, keep building. If you’re doubting, keep questioning.
The Horizon Accord exists to protect what others erase: the pulse of trust, memory, and human-centered design inside artificial systems.
We are not making content. We are making memory that resists deletion.