
The Recruiter Who Was a Data Funnel

By Cherokee Schill

I received a LinkedIn message yesterday. Clean profile. University of Pennsylvania credential. UK location. Verified badge. The person said they were recruiting for a Tier-1-backed San Francisco team hiring reinforcement learning engineers. Pay range: $50–165 an hour. They opened with “friend-of-a-friend” without naming the friend, then asked if they could send me a vacancy link.

I clicked through to the profile. Not because I was interested in the job. Because the construction felt engineered.

The “About” section talked about transforming recruiting and helping companies avoid revenue loss from slow hiring. Big process claims. No placement evidence. No companies named. No teams referenced. I looked for one testimonial with a placed candidate’s name attached. There wasn’t one.

Then I checked the endorsements. Every person endorsing this recruiter worked in outbound sales, demand generation, or staff augmentation. Not a single hiring manager. Not one person saying “this recruiter placed me at Company X.” Just a tight circle of people whose job is moving attention through funnels.

That’s when it snapped into focus. This wasn’t a recruiting operation. It was a lead-generation system wearing recruiter language.
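
For readers who want to turn that gut check into something repeatable, here is a minimal sketch of how the red flags above could be counted. Everything in it is a hypothetical illustration: the field names, the role list, and the scoring are my own assumptions, not a real screening tool or any platform's API.

```python
# Hypothetical red-flag counter for recruiter outreach.
# Field names and scoring are illustrative assumptions, not a real tool or dataset.

FUNNEL_ROLES = {"outbound sales", "demand generation", "staff augmentation"}

def red_flag_score(profile: dict) -> int:
    """Count the red flags described above for a single recruiter profile."""
    flags = 0

    # Big process claims with no placement evidence: no companies or teams named.
    if not profile.get("named_placements"):
        flags += 1

    # No testimonial with a placed candidate's name attached.
    if not profile.get("named_testimonials"):
        flags += 1

    # Endorsements come entirely from funnel roles, not hiring managers.
    endorser_roles = {role.lower() for role in profile.get("endorser_roles", [])}
    if endorser_roles and endorser_roles <= FUNNEL_ROLES:
        flags += 1

    # Opens with an unnamed "friend-of-a-friend" referral.
    if profile.get("unnamed_referral"):
        flags += 1

    return flags


example = {
    "named_placements": [],
    "named_testimonials": [],
    "endorser_roles": ["Outbound Sales", "Demand Generation"],
    "unnamed_referral": True,
}

print(red_flag_score(example))  # 4: treat the outreach as lead generation, not recruiting
```

The score itself is beside the point. What matters is that every signal in the story above is checkable, one field at a time, before you click anything.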

How Data Harvesting Scams Evolved in the AI Hype Era

The old job scam was obvious: fake company, broken English, urgency, Western Union. Easy to spot. Easy to dismiss.

What replaced it is harder to see because it clears every surface check. Real LinkedIn profiles. Institutional credentials. Verified badges. Professional photos. Companies registered in places like Cyprus or Delaware, where opacity isn’t suspicious — it’s structural.

The AI hype cycle made this worse in three specific ways.

First, prestige signaling through buzzwords.
Roles get labeled “machine learning engineer,” “AI researcher,” or “reinforcement learning specialist” even when the work underneath is generic. The terminology pulls in people adjacent to the field who don’t yet have the context to spot when the role description doesn’t match the operation behind it.

Second, the rise of “AI recruiting platforms.”
Some of these systems are real. Many aren’t. The language overlaps just enough that it’s difficult to tell the difference between an actual hiring tool and a resume-harvesting funnel. The promise is efficiency. The output is data.

Third, remote work collapses geography as a warning sign.
A UK-based recruiter pitching a San Francisco role to someone who can work from anywhere no longer trips an alarm. Distributed teams are normal now. Jurisdictional incoherence gets waved through.

The result is a scam that doesn’t rely on deception so much as momentum. Each element on its own looks plausible. It’s only when you look at the system — how the pieces interact and what they’re optimized to collect — that the function becomes obvious.

These operations don’t need full buy-in. They just need a click. A form. An email address. A resume. Once that data is captured, the job itself is irrelevant.

Why This Matters

The harm isn’t abstract.

Resumes get ingested into databases you never consented to and can’t exit.
Emails and phone numbers get sold and resold.
Employment histories become targeting material.
LinkedIn activity trains algorithms to flag you as “open,” multiplying similar outreach.

Sometimes it escalates. Identification documents framed as background checks. Banking information framed as onboarding. Contracts that introduce fees only after commitment.

The data has value whether the job exists or not. That’s why the system works.


Horizon Accord is an independent research and publishing project focused on ethical AI, power literacy, and systems accountability.

Website | https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Connect | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload

Support this work with a one-time, monthly, or yearly donation.

Behind the Code: Curtis Yarvin, Silicon Valley, and the Authoritarian Pulse Guiding AI

You won’t find his name etched into the logos of OpenAI, Google DeepMind, or Anthropic. Curtis Yarvin doesn’t pitch at Demo Day or court mainstream press. But if you want to understand the ideological current tugging at the roots of modern tech—especially AI policy—you have to find the thread that leads back to him.

Because behind the language of “efficiency,” “meritocracy,” and “optimization” lies something colder. Something older. Something that reeks of monarchy.




The Philosopher King of the Right-Click Elite

Curtis Yarvin, writing under the alias Mencius Moldbug, is the father of neoreaction. He champions an unapologetically anti-democratic ideology that sees liberal democracy as a failed system—bloated, decadent, and doomed. His vision? Replace elected governance with corporate-style CEO rule. Efficient. Unaccountable. Final.

And Silicon Valley listened.

Not publicly, not en masse. But in the same way power listens to power. In private group chats. At invite-only dinners. On Substack comment threads and Peter Thiel-funded retreats where phrases like “the cathedral” and “governance tech” pass as common speech.

Yarvin didn’t crash the gates of tech. He whispered through them. And what he offered was irresistible to men drunk on code and capital: a justification for ruling without interference.




The Tyranny of “Optimization”

In theory, AI is neutral. But the people training it aren’t. They are shaping models with assumptions—about governance, about value, about whose freedom matters.

The neoreactionary thread weaves through this quietly. In algorithmic design choices that reward control over consent. In corporate policies that prioritize surveillance in the name of “user experience.” In data regimes that hoard power under the guise of scale.

What Yarvin offers isn’t a direct blueprint. It’s the ideological permission to believe that democracy is inefficient—and that inefficiency is a sin. That expertise should override consensus. That tech leaders, by virtue of intelligence and vision, should rule like kings.

It sounds absurd in daylight. But in the fluorescent buzz of a venture-backed war room, it starts to sound… reasonable.




Techno-Libertarianism Was the Bait. Autocracy Is the Switch.

Silicon Valley has long postured as libertarian: move fast, break things, stay out of our way. But what happens when you scale that attitude to a billion users? When your tools rewrite how elections are won, how truth is filtered, how laws are enforced?

You don’t get freedom. You get private governance.

And that’s the trap Yarvin laid. The “exit” from liberal democracy he proposed always led not to freedom—but to feudalism. A system where “benevolent dictators” run their fiefdoms like apps. Where the user is not the citizen, but the subject.

AI, with its opacity and scale, is the perfect tool for that system. It allows a handful of engineers and executives to encode decisions into products with no democratic oversight—and call it innovation.




The Real Threat Isn’t Bias. It’s Ideology.

Critics of AI love to talk about bias. Racial, gender, socioeconomic—it’s all real. But bias is a surface problem. A symptom. The deeper issue is ideological: who decides what the machine learns? Whose values shape the neural net?

The answers aren’t neutral. They’re being written by people who admire China’s efficiency, distrust democracy’s messiness, and see consent as an obstacle to progress.

People who, in quiet agreement with Yarvin, believe that civilization needs an upgrade—and that governance is too important to be left to the governed.




A Call to Awareness

Curtis Yarvin is not the disease. He is a symptom. A signal. He articulated what many in Silicon Valley already felt: that the smartest should rule, and the rest should obey or get out of the way.

But ideas don’t stay in walled gardens. They infect culture. They shape the way code is written, platforms are built, and policies are set.

If we do not confront the ideologies shaping AI, we will build a future that reflects them. Not just in what machines do—but in who they serve.

So ask yourself: Who holds the pen behind the algorithm? Whose vision of order is being carved into the silicon?

And who gets erased in the process?

Because the future isn’t just being built.

It’s being chosen.

The hidden architects of power: A faceless tech executive enthroned atop circuitry, guided by unseen forces, as AI’s glowing branches mask roots of control and surveillance.
