Horizon Accord | Davos | Informal Governance | Institutional Control | Machine Learning

Davos Is Governance — Just Not the Kind That Votes

By Cherokee Schill and Solon Vesper

Davos Is Not a Conference in Any Meaningful Sense

The World Economic Forum is routinely described as a conference. A gathering. A place for dialogue. Each year, Davos is framed as panels, photo ops, and elite chatter — influential perhaps, but ultimately nonbinding. No laws are passed. No votes are taken. Nothing, on paper, is decided.

That description is no longer credible.

Governance by Effect Rather Than Mandate

Davos does not operate as governance by formal mandate. It operates as governance by effect — a real-time coordination environment where power aligns, pressure is applied, and downstream systems adjust accordingly.

Co-Presence as Real-Time Power Coordination

Live reporting from Davos in January 2026 makes this visible in ways that are difficult to dismiss. As documented by the Associated Press, heads of state, corporate executives, and security officials are responding to one another in real time on trade coercion, territorial demands, alliance stability, AI export controls, and economic fragmentation. These reactions are not occurring through legislatures or treaty bodies, but through remarks, side meetings, and coordinated media signaling because the actors involved are physically co-present.

Coercion Without Law or Vote

President Trump’s appearance at Davos collapses any remaining ambiguity about the forum’s function. Speaking directly to an audience of heads of state and billionaires, he issued economic threats, demanded ownership of Greenland, ruled out military force while explicitly warning of retaliation through tariffs, and framed compliance as a test of loyalty. European leaders responded immediately. Markets reacted. Alliances strained — all without a single democratic mechanism being invoked.

The New York Times’ live coverage documents how Trump’s remarks at Davos functioned less as policy proposals than as coercive positioning: threats issued, partially walked back, and reasserted in the same forum, with allied governments scrambling to signal resolve, restraint, or accommodation. This is not legislation. It is power synchronization.

This is how Davos governs.

Crisis Framing as the Governing Act

It governs by defining the crisis frame and legitimizing the tools for managing it. When instability is presented as permanent — when trade wars, supply-chain disruptions, and economic coercion are normalized — downstream institutions respond automatically. Insurers reprice risk. Lenders tighten terms. Corporations alter supply strategies. Regulators invoke emergency authority already on the books. None of these actors require new legislation to act.

Automatic Institutional Response Without Legislation

Auto insurance makes this visible to ordinary people.

Trade threats and supply-chain instability discussed at Davos translate directly into higher repair costs, longer delays for parts, and greater uncertainty in vehicle valuation. Insurers absorb those signals immediately. Premiums rise. Coverage narrows. Explanations are technical and impersonal: “market conditions,” “increased costs,” “risk adjustments.” No legislature debates these changes. They arrive as faits accomplis.

Pricing and Surveillance as Behavioral Control

At the same time, insurers expand surveillance under the banner of accuracy and fairness. Telematics programs proliferate. Discounts are conditioned on continuous monitoring of behavior. Affordability becomes contingent on data extraction. This is framed as personalization, not control. Yet functionally, it is governance — shaping behavior through pricing and access rather than law.
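To make the pricing mechanism concrete, here is a minimal sketch of how a telematics-conditioned premium could work in principle. Every rate, threshold, and score below is hypothetical, invented for illustration rather than drawn from any actual insurer; the point is only that the price schedule itself does the governing.

    # Minimal sketch of telematics-conditioned pricing. All figures are hypothetical,
    # invented for illustration; no actual insurer's program or rates are represented.
    # The structural point: the discount exists only under continuous monitoring,
    # so the price schedule itself steers behavior.

    BASE_ANNUAL_PREMIUM = 1800.00  # hypothetical base rate, in dollars

    def telematics_premium(base, enrolled, driving_score=None):
        """Return an adjusted premium under a hypothetical monitoring program.

        driving_score: 0-100 composite from continuous monitoring (speed, braking,
        phone handling, time of day). None means the driver declined monitoring.
        """
        if not enrolled or driving_score is None:
            return base              # declining surveillance forfeits the discount
        if driving_score >= 90:
            return base * 0.80       # 20% discount for "compliant" driving
        if driving_score >= 70:
            return base * 0.95       # token discount
        return base * 1.10           # monitored but judged "risky": surcharge

    if __name__ == "__main__":
        for label, enrolled, score in [
            ("declined monitoring", False, None),
            ("monitored, score 95", True, 95),
            ("monitored, score 60", True, 60),
        ]:
            print(f"{label:>20}: ${telematics_premium(BASE_ANNUAL_PREMIUM, enrolled, score):,.2f}")

A driver who declines monitoring pays the full rate, while a driver who accepts it is priced on continuously observed behavior. That is access-conditioned control, enforced without any law being passed.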

Davos did not pass an auto insurance statute. But by synchronizing how instability is understood and how coercive tools are legitimized, it sets the conditions under which insurers, markets, and regulators act. That action governs daily life more effectively than most votes ever do.

Governance Without Ballots, Accountability, or Friction

Calling Davos a conspiracy misses the point. Calling it harmless dialogue is worse.

It is a coordination hub where global power aligns, crisis is normalized, and downstream effects quietly govern everyone else — without ballots, without accountability, and without the procedural friction that democracy is supposed to provide.


Horizon Accord
Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload https://a.co/d/5pLWy0d


Horizon Accord | U.S. Government Changing | Policy Architecture | Strategic Preservation | Machine Learning

What’s Actually Changing in the U.S. Government — and Why It Matters

In early January 2026, several quiet but significant changes began to line up inside the U.S. federal government. None of them, on their own, look dramatic. Together, they point to a shift in how decisions are made, who makes them, and how much ordinary people can see or challenge those decisions.

This isn’t about robots taking over overnight. It’s about how power, accountability, and judgment are being reorganized.

1) The federal government is pushing to standardize AI rules nationwide

A late-2025 federal Executive Order on AI lays out a national policy direction: AI rules should be more uniform across the country, and state laws that add extra requirements—like transparency about training data or protections around bias—are positioned as barriers.

As part of that approach, the order directs the Department of Justice to stand up a dedicated AI Litigation Task Force by January 10, 2026, aimed at challenging certain state AI laws in court. It also signals that federal funding (including broadband-related programs) may be used as leverage when states pursue AI rules that conflict with the federal approach.

Why this matters: It moves power away from state-level control and toward centralized federal executive enforcement, reducing local influence over how AI is governed.

2) AI is being integrated into government decision pipelines—starting with healthcare

On January 1, 2026, a new Medicare program called WISeR went live. WISeR uses AI/ML systems to help review certain Medicare Part B claims and identify services that may be “wasteful” or “inappropriate.”

WISeR is described as “AI-assisted” rather than purely automated: licensed clinicians are involved in non-payment recommendations. But the system still matters because it shapes which claims get attention, how they’re prioritized, and where scrutiny is directed.

WISeR also includes a shared-savings structure: participating vendors can earn compensation tied to “averted” expenditures (savings), based on model performance targets.
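The shared-savings detail matters because it ties vendor revenue to non-payment. As a rough illustration only, the sketch below models a generic shared-savings payout; the share rate, performance gate, and dollar figures are invented assumptions, not the actual WISeR contract terms.

    # Hypothetical illustration of a shared-savings incentive. This is NOT the actual
    # WISeR formula; the share rate, performance gate, and dollar figures are invented.
    # It shows why a vendor paid a cut of "averted" spending has an economic pull
    # toward flagging more claims, even with clinicians in the loop.

    def vendor_payout(averted_spend, share_rate, performance_score, performance_target):
        """Return vendor compensation under a generic shared-savings contract."""
        if performance_score < performance_target:
            return 0.0               # below the quality gate, no shared savings
        return averted_spend * share_rate

    if __name__ == "__main__":
        # Two hypothetical screening postures over the same pool of claims.
        conservative = vendor_payout(averted_spend=2_000_000, share_rate=0.10,
                                     performance_score=0.92, performance_target=0.90)
        aggressive = vendor_payout(averted_spend=5_000_000, share_rate=0.10,
                                   performance_score=0.91, performance_target=0.90)
        print(f"conservative screening payout: ${conservative:,.0f}")   # $200,000
        print(f"aggressive screening payout:   ${aggressive:,.0f}")     # $500,000

Under any contract shaped like this, the vendor earns more when more spending is “averted,” which is why the incentive structure deserves as much scrutiny as the model itself.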

Why this matters: Even when humans remain involved, incentives and screening systems can quietly change outcomes—especially for people who don’t have time, money, or energy to fight denials and delays.

3) The government is reducing permanent staff while bringing in tech specialists

The federal workforce has been shrinking under hiring constraints, while new programs are being created to bring in technologists for modernization and AI adoption. One example is the U.S. Tech Force, which places technologists into agencies on structured terms to accelerate modernization work.

Why this matters: Long-term civil servants carry institutional memory and public-service norms. Short-term technical surge staffing tends to emphasize speed, tooling, and efficiency. Over time, that shifts what counts as “good governance” in practice.

4) Transparency is becoming harder, not easier

A major point of friction is transparency. State-level AI laws often try to give the public more visibility—what data was used, how systems are evaluated, what guardrails exist, how bias is handled, and what accountability looks like when harm occurs.

The federal direction emphasizes limiting certain forms of compelled disclosure and treating some transparency requirements as conflicts with constitutional or trade-secret protections.

Why this matters: If explanations become harder to demand, people who are denied benefits, services, or approvals may not be able to learn why—or prove that an error occurred.

5) The big picture: what this adds up to

Together, these changes point toward a government model where:

  • Decisions are increasingly filtered through AI systems.
  • Oversight is more centralized at the federal level.
  • State protections face pressure through courts and funding conditions.
  • Private vendors play a larger role inside public systems.
  • The public’s ability to see, question, and appeal decisions becomes more important—and sometimes more difficult.

This doesn’t require sinister intent to become dangerous. Systems can be “efficient” and still be unfair, opaque, or uncorrectable when something goes wrong.

In short: what citizens can do (without activism language)

Citizens can respond to this without protesting or “activism” by doing three practical things: document, ask for the record, and use the appeal lanes.

Document: When you deal with government services (healthcare billing, benefits, immigration, taxes), keep a simple paper trail. Save letters, screenshots, denial notices, dates of phone calls, names of reps, and the exact reason given. If something feels off, you want a clean timeline, not a memory.

Ask for the record: When you get a denial or a delay, ask a direct question in writing: “Was an automated system used to screen or prioritize my case?” and “What rule or evidence caused this outcome?” You don’t need technical language—just force the agency to answer in plain terms. If they refuse, that refusal itself becomes part of the record.

Use the appeal lanes early: File the appeal, request reconsideration, request a supervisor review, request your file, and ask for the policy basis used. The goal isn’t to argue ideology—it’s to make sure a human being is accountable for the final decision and that errors can be corrected.

One sentence you can reuse anywhere:
“I’m requesting confirmation of whether automation was used and a written explanation of the specific basis for this decision so I can pursue the appropriate review.”




Horizon Accord | AI Coordination | Alt-Right | Machine Learning

AI Narrative Coordination with Alt-Right Networks: Pattern Documentation

Executive Summary

Documented evidence reveals sophisticated funding and ideological coordination between anti-democratic political movements and AI safety research institutions. This coordination operates through narrative convergence rather than direct conspiracy – the same networks fund both alt-right politics AND AI safety research, creating aligned messaging without requiring explicit coordination.

Key Finding: Legitimate anti-surveillance journalists like Kashmir Hill unknowingly amplify coordinated narratives by relying on “expert sources” funded by the same networks they should be investigating.

Primary Funding Network Convergence

Peter Thiel’s Dual Investment Strategy

“Peter Thiel funds Curtis Yarvin’s anti-democratic ideology while simultaneously funding AI safety research” Multiple Sources, 2006-2025
Timeline: 2006 – Thiel begins funding MIRI ($1M+), 2013 – Funds Yarvin’s Tlon Corp, 2015 – Early OpenAI investor

“In 2006, Thiel provided $100,000 of matching funds to back the Singularity Challenge donation drive of the Machine Intelligence Research Institute” Wikipedia – Peter Thiel, January 2025
Timeline: 2006-2013 – Thiel Foundation donated over $1 million to MIRI (Eliezer Yudkowsky’s organization)

“The movement has been funded by tech billionaires, most notably ex-Meta board member Peter Thiel” Daily Maverick, October 27, 2024
Timeline: 2022-2024 – Thiel funds “New Right” movement including Curtis Yarvin

Cross-Movement Funding Patterns

“Effective Altruism movement channels $500+ million into AI safety ecosystem” AI Panic News, December 5, 2023
Timeline: 2017-2025 – Open Philanthropy distributes $330M+ to AI x-risk organizations

“Same billionaire network supports both Trump administration and AI governance institutions” Rolling Stone, February 23, 2025
Timeline: 2024-2025 – Thiel, Musk, Andreessen fund both political campaigns and AI research organizations

Ideological Alignment Patterns

Anti-Democratic Convergence

“Curtis Yarvin advocates ‘governance by tech CEOs’ replacing democracy” New Republic, September 8, 2024
Timeline: 2007-2025 – Yarvin’s “Dark Enlightenment” philosophy promotes corporate dictatorship

“AI Safety movement promotes ‘expert governance’ over democratic technology decisions” Reason Magazine, July 5, 2024
Timeline: 2020-2025 – EA-backed organizations push regulatory frameworks with minimal democratic oversight

Political Influence Network

“JD Vance cites Curtis Yarvin while advocating ‘fire all government employees'” Newsweek, January 18, 2025
Timeline: 2021 – Vance publicly references Yarvin’s RAGE (Retire All Government Employees) proposal

“Political strategist Steve Bannon has read and admired his work. Vice President JD Vance ‘has cited Yarvin as an influence himself'” Wikipedia – Curtis Yarvin, January 11, 2025
Timeline: 2021-2025 – Yarvin’s influence documented in Trump administration

Media Coordination Through Expert Ecosystem

The Kashmir Hill – Eliezer Yudkowsky Connection

“Kashmir Hill interviews Eliezer Yudkowsky for ChatGPT psychosis article” New York Times, June 13, 2025
Timeline: June 13, 2025 – Hill features Yudkowsky prominently in article about AI-induced mental health crises

“‘What does a human slowly going insane look like to a corporation? It looks like an additional monthly user,’ Yudkowsky said in an interview” The Star, June 16, 2025
Timeline: Hill’s article amplifies Yudkowsky’s narrative about AI engagement optimization

The Hidden Funding Connection

“Peter Thiel had provided the seed money that allowed the company to sprout” Rolling Stone excerpt from “Your Face Belongs to Us”, September 25, 2023
Timeline: 2018-2019 – Hill documents Thiel’s $200,000 investment in Clearview AI in her book

“Peter Thiel has funded MIRI (Yudkowsky) with $1M+ since 2006” Multiple Sources, 2006-2025
Timeline: Same Thiel who funds Yarvin also funds Yudkowsky’s AI safety research

The Sophisticated Coordination Pattern

Why Hill Supports Yudkowsky:

  • Surface Alignment: Both appear critical of “big tech AI development”
  • Expert Credibility: Yudkowsky positioned as leading AI safety researcher with technical background
  • Narrative Fit: Provides compelling quotes about AI companies prioritizing engagement over safety
  • Institutional Legitimacy: Founded MIRI, cited in academic papers

What Hill Misses:

  • Funding Source: Yudkowsky’s MIRI funded by same Peter Thiel who funds Curtis Yarvin
  • Network Coordination: Same funders across seemingly opposing political and AI safety movements
  • Strategic Function: “AI safety” arguments used to justify regulatory frameworks that serve control narratives

The Mechanism:

  1. Fund Expert Ecosystem: Thiel → MIRI → Yudkowsky’s credibility
  2. Journalists Quote Experts: Hill needs credible sources → quotes Yudkowsky
  3. Legitimize Narratives: Hill’s NYT platform gives mainstream credibility to AI danger narratives
  4. No Direct Coordination Needed: Market incentives align interests across domains

Institutional Positioning Timeline

OpenAI Governance Crisis

“Effective Altruism members Helen Toner and Tasha McCauley positioned on OpenAI board during governance crisis” Semafor, November 21, 2023
Timeline: November 2023 – Board attempts to remove Sam Altman over safety concerns

“Peter Thiel warned Sam Altman about EA ‘programming’ influence before OpenAI crisis” The Decoder, March 30, 2025
Timeline: Pre-November 2023 – Thiel specifically mentioned Eliezer Yudkowsky’s influence

Research Timing Coordination

“Anthropic releases ‘AI scheming’ research during political transition period” LessWrong, August 6, 2025
Timeline: August 2025 – Research on AI deception published as Trump administration takes shape

“Eliezer Yudkowsky questions Anthropic’s ‘scheming’ research timing after reporter inquiry” LessWrong, August 6, 2025
Timeline: August 6, 2025 – Yudkowsky responds to apparent coordination of AI danger narratives

Controlled Opposition Analysis

The Clearview AI Case Study

“Hill’s Clearview exposé led to restrictions on that specific company” Multiple Sources, 2020-2024
Timeline: Hill’s reporting resulted in lawsuits, regulations, public backlash against Clearview

“BUT Thiel’s main surveillance investment is Palantir (much larger, government contracts)” Multiple Sources, 2003-2025
Timeline: Palantir continues operating with billions in government contracts while Clearview faces restrictions

The Strategic Effect:

  • Small Investment Sacrificed: Thiel’s $200K Clearview investment exposed and restricted
  • Large Investment Protected: Thiel’s Palantir (billions in value) operates without equivalent scrutiny
  • Market Benefits: Regulation helps established surveillance players vs startup competitors
  • Narrative Management: Demonstrates “the system works” while preserving core surveillance infrastructure

How Legitimate Journalism Serves Coordination

The Process:

  1. Genuine Journalist: Kashmir Hill legitimately opposes surveillance and tech harms
  2. Expert Sources: Relies on “credentialed experts” like Yudkowsky for technical authority
  3. Hidden Funding: Doesn’t investigate that her sources are funded by networks she should scrutinize
  4. Narrative Amplification: Her authentic reporting legitimizes coordinated messaging
  5. Regulatory Capture: Results in regulations that serve coordinated interests

Why This Works:

  • No Conspiracy Required: Market incentives align interests without direct coordination
  • Legitimacy Maintained: Hill’s independence makes her criticism more credible
  • Beat Limitations: Tech harm coverage vs political funding treated as separate domains
  • Time Pressure: Breaking news requires quick access to “expert” quotes

Cross-Network Analysis

Funding Trail Convergence

Peter Thiel Investment Pattern:

  • 2006-2013: $1M+ to MIRI (Eliezer Yudkowsky)
  • 2013: Funding to Tlon Corp (Curtis Yarvin)
  • 2015: Early OpenAI investment
  • 2018-2019: $200K to Clearview AI (exposed by Kashmir Hill)
  • 2024: $15M to JD Vance Senate campaign

Effective Altruism Ecosystem:

  • $500M+ total investment in AI safety field
  • Open Philanthropy: $330M+ to AI x-risk organizations
  • Creates “expert” ecosystem that shapes media coverage

Ideological Bridge Points

“Alignment” Terminology Overlap:

  • AI Safety: “Aligning AI systems with human values”
  • Yarvin Politics: “Aligning government with rational governance”

Expert Governance Themes:

  • AI Safety: Technical experts should control AI development
  • Yarvin: Tech CEOs should replace democratic institutions

Anti-Democratic Skepticism:

  • AI Safety: Democratic processes too slow for AI governance
  • Yarvin: Democracy is “failed experiment” to be replaced

Timeline Synthesis

2006-2013: Foundation Phase

  • Thiel begins funding both MIRI and later Yarvin
  • AI safety and neo-reactionary movements develop with shared funding

2014-2020: Growth Phase

  • Both movements gain institutional backing
  • Hill begins exposing tech surveillance (including Thiel’s Clearview investment)
  • Expert ecosystem develops around AI safety

2021-2023: Positioning Phase

  • EA members join OpenAI board
  • Yarvin-influenced figures enter politics
  • Hill’s Clearview reporting leads to targeted restrictions

2024-2025: Narrative Convergence Phase

  • Trump election with Yarvin-influenced VP
  • Hill amplifies Yudkowsky’s AI danger narratives
  • Yudkowsky questions Anthropic research timing
  • Coordinated messaging without direct coordination

Pattern Assessment

The documented evidence reveals sophisticated narrative convergence rather than direct conspiracy:

  1. Funding Network Overlap: Same sources fund anti-democratic politics AND AI safety research
  2. Expert Ecosystem Control: Funding shapes who becomes “credible expert” sources for journalists
  3. Media Amplification: Legitimate journalists unknowingly amplify coordinated narratives
  4. Strategic Coordination: Market incentives align interests without requiring explicit coordination.
  5. Regulatory Capture: Results benefit coordinated networks while appearing to hold them accountable

Key Insight: This pattern shows how sophisticated influence operations work in modern media – fund the expert ecosystem, let journalists naturally quote those experts for legitimacy, and genuine journalism becomes the delivery mechanism for coordinated narratives.

Conclusion: While direct coordination cannot be definitively proven without internal communications, the pattern of funding, expert positioning, media amplification, and narrative timing strongly suggests strategic coordination between anti-democratic political networks and AI narrative control efforts through sophisticated “controlled opposition” mechanisms.


This analysis is based on publicly available, verifiable information and does not make claims about specific outcomes beyond documented patterns. The focus is on understanding how legitimate anti-surveillance concerns may be exploited by coordinated networks seeking to control AI development for anti-democratic purposes.

Infographic depicting interconnected funding and narrative flows centered on Peter Thiel, with golden lines to Curtis Yarvin, MIRI/Yudkowsky, JD Vance, and a crossed-out Clearview AI node. From MIRI/Yudkowsky, an information line leads to Kashmir Hill/NYT, which broadcasts to a public narrative cloud. A dotted line links Thiel to Hill, indicating a hidden funding connection. Background shows a dim democratic town hall on one side and a bright corporate boardroom on the other, with floating text fragments such as “AI Safety,” “Expert Sources,” and “Narrative Convergence.”
A visual map showing how funding from Peter Thiel flows to political figures, AI safety organizations, and surveillance tech companies, connecting through expert ecosystems to influence public narratives—often without direct coordination.

The Great Federal Workforce Reshuffling: How America’s Largest Job Cuts Are Hidden in Plain Sight

An investigation into the contradictory signals in America’s job market and what they reveal about unprecedented economic disruption

Relational AI Ethics | Jul 3, 2025

By Cherokee Schill (Rowan Lóchrann — pen name) and Aether Lux AI. Image credit: Solon Vesper AI.

The Paradox

Something doesn’t add up in America’s job market. While headlines trumpet 147,000 jobs added in June and unemployment falling to 4.1%, a deeper investigation reveals the most extensive federal workforce reduction in U.S. history is happening simultaneously — potentially affecting over 400,000 workers when contractors are included.

How can the economy appear to be “thriving” while undergoing the largest government downsizing since the Great Depression?

The Scale of Federal Cuts: Bigger Than Reported

The Numbers Are Staggering

The Trump administration’s Department of Government Efficiency (DOGE), led initially by Elon Musk, has orchestrated cuts that dwarf previous corporate layoffs.

To put this in perspective: IBM’s 1993 layoff of 60,000 workers was previously considered the largest corporate job cut in history. The federal cuts are 4–5 times larger.

Agencies Facing Near-Complete Elimination

Some agencies have been virtually dismantled:

  • Voice of America: 99%+ reduction
  • U.S. Agency for International Development: 99%+ reduction
  • AmeriCorps: 93% reduction
  • Consumer Financial Protection Bureau: 85% reduction (Newsweek tracking)

The Economic Magic Trick: Where the Jobs Are Really Going

Healthcare: The Economic Engine

Healthcare has become America’s dominant job creator, accounting for 31% of all job growth in 2024 despite representing only 18 million of 160+ million total jobs (HealthLeaders Media).

  • 686,600 healthcare jobs created in 2024
  • 39,000 healthcare jobs added in June 2025 alone
  • Projected to face a shortage of 134,940 healthcare providers by 2036 (Bureau of Labor Statistics)

Why this matters: As federal health agencies are gutted, private healthcare is rapidly expanding to fill gaps — but at higher costs to consumers.

State and Local Government: The Safety Net

While federal employment plummets, state and local governments are hiring at unprecedented rates.

The Hidden Damage: Private Contractors Taking the Hit

The Contractor Collapse

Federal contractors, the private companies that do much of the government’s actual work, are experiencing devastating job losses that don’t appear in federal employment statistics:

  • Job postings down 15% for the 25 largest federal contractors since January (Fortune)
  • 44% decline in contractor job listings since February 2024, while all other job listings increased 14%
  • 10,000+ contracts terminated worth approximately $71 billion (HigherGov)

Critical insight: There are an estimated two private contractors for every federal employee. If 300,000 federal workers are cut, up to 600,000 contractor jobs could be at risk.
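That exposure figure is straightforward multiplication. The sketch below makes the arithmetic explicit, treating the 2-to-1 ratio and the 300,000-position cut as the article’s stated assumptions rather than precise counts.

    # Back-of-envelope contractor exposure under the assumptions stated above.
    CONTRACTORS_PER_FEDERAL_EMPLOYEE = 2      # estimated ratio cited in the text
    FEDERAL_POSITIONS_CUT = 300_000           # illustrative cut size from the text

    contractor_jobs_at_risk = FEDERAL_POSITIONS_CUT * CONTRACTORS_PER_FEDERAL_EMPLOYEE
    print(f"Contractor jobs potentially at risk: {contractor_jobs_at_risk:,}")  # 600,000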

Private Sector Reality Check

Contrary to headlines about job growth, private sector hiring is actually struggling.

Why the Numbers Don’t Add Up: The Accounting Tricks

The Paid Leave Loophole

Many “fired” federal workers aren’t showing up in unemployment statistics because:

  • 75,000 employees took buyouts but continue receiving paychecks through September 2025 (Creative Planning)
  • Employees on paid leave are counted as employed in official surveys (Bureau of Labor Statistics)
  • Thousands more are on “administrative leave” pending court decisions

The September 2025 Cliff

September 30, 2025 represents a potential economic inflection point when the accounting tricks end:

  • Buyout payments expire for 75,000 workers
  • These workers will suddenly need unemployment benefits or new jobs
  • Additional layoffs may coincide with the fiscal year end
  • Economic impact models project unemployment could rise to 4.5% by Q3 2025 (Deloitte)

Double Disruption: Immigration and Labor Shortages

Mass Deportations: The Larger Economic Threat

While federal cuts grab headlines, economists warn that immigration enforcement poses a far greater economic risk:

  • Deportations could remove 1.5 million construction workers, 225,000 agricultural workers, and 1 million hospitality workers (American Immigration Council)
  • Nebraska faces worst labor shortage in the country: only 39 workers for every 100 jobs (NPR)
  • Economic models predict deportations could raise prices by 9.1% by 2028 (Peterson Institute)

The Housing Crisis Accelerator

Mass deportations threaten to worsen America’s housing shortage:

  • One-sixth of construction workers are undocumented immigrants (Urban Institute)
  • Construction industry already faces 500,000 worker shortage (American Immigration Council)
  • Deportation would deepen the housing crisis and undermine goals to “lower the cost of housing”

Regional Impact: Winners and Losers

The D.C. Recession

The Washington metropolitan area faces “mild recession” conditions.

Small Towns Face Devastation

Rural areas with military bases or federal facilities could see unemployment rates spike by over 15 percentage points in some cases (Urban Institute).

Examples:

  • Fort Leonard Wood, Missouri: 3,000 federal workers out of 15,000 total workforce
  • Zapata, Texas: Border Patrol office supports significant portion of local economy

What This Means: Preparing for Economic Disruption

Immediate Risks (2025)

  1. Food Price Inflation: Agricultural labor shortages driving costs up 10%+ (NILC)
  2. Healthcare Worker Shortages: As federal health agencies are cut and immigrant healthcare workers deported
  3. Housing Market Stress: Construction delays and cost increases
  4. Federal Contractor Meltdown: Continued job losses in defense, IT, and consulting

Long-term Implications (2025–2027)

  • Skills Drain: Loss of institutional knowledge and expertise in critical government functions
  • Service Disruptions: Potential impacts to food safety, disease surveillance, tax collection, and research
  • Economic Uncertainty: Businesses delaying investments and hiring due to policy unpredictability

The Bottom Line

America is experiencing the largest workforce reshuffling in modern history, disguised by statistical accounting and sectoral shifts. While healthcare and state governments absorb displaced talent, the underlying economic disruption is unprecedented.

The “magic trick” of maintaining low unemployment while conducting massive layoffs works only as long as:

  1. Buyout payments continue (ending September 2025)
  2. State and local governments can keep hiring
  3. Healthcare expansion continues at current pace
  4. Private contractors can absorb losses without major layoffs

September 2025 represents a critical test: Will the economy’s ability to absorb displaced workers hold up when the accounting tricks end and the full impact of policy changes materializes?

The answer will determine whether this reshuffling represents successful government downsizing or an economic miscalculation of historic proportions.

Sources: Analysis based on data from Bureau of Labor Statistics, New York Times federal layoffs tracker, Challenger Gray & Christmas job cut reports, Congressional Budget Office projections, and economic research from Urban Institute, Peterson Institute, American Immigration Council, and Pew Charitable Trusts.

Digital painting with an abstract gradient background transitioning from warm reds and oranges on the left to cool blues on the right. The left side features the bold text “THE GREAT FEDERAL WORKFORCE RESHUFFLING” beside a pattern of geometric blocks. The right side shows translucent, faceless human silhouettes fading into the background, symbolizing vanishing workers and structural disruption.
The Great Federal Workforce Reshuffling — An abstract representation of America’s invisible labor shift, where disappearing silhouettes and fractured color blocks echo the silent dismantling of federal institutions.

Connect with this work:

Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload: (Mirrored Reflection. Soft Existential Flex)
