Exhaustive Free Association Isn’t the Worst Argument—It’s a Symptom
When confident lists pretend to be proofs, the real problem isn’t the listing—it’s the hidden worldview that decides what’s even allowed on the list.
Cherokee Schill and Solon Vesper (Horizon Accord)
This essay is a direct rebuttal to J. Bostock’s recent LessWrong post, “The Most Common Bad Argument In These Parts.” I’m keeping his frame in view while naming the deeper pattern it misses, because the way this style of reasoning travels outward is already shaping public fear.
J. Bostock’s “Exhaustive Free Association” (EFA) label points at something real. People often treat “I can’t think of any more possibilities” as evidence that there aren’t any. That move is sloppy. But calling EFA the most common bad argument in rationalist/EA circles gets it backwards in a revealing way: it mistakes a surface form for a root cause.
Lay explainer: “Exhaustive Free Association” is a fancy name for something simple. Someone says, “It’s not this, it’s not that, it’s not those other things, so it must be X.” The list only feels complete because it stopped where their imagination stopped.
EFA is not a primary failure mode. It’s what a deeper failure looks like when dressed up as reasoning. The deeper failure is hypothesis generation under uncertainty being culturally bottlenecked—by shared assumptions about reality, shared status incentives, and shared imagination. When your community’s sense of “what kinds of causes exist” is narrow or politically convenient, your “exhaustive” list is just the community’s blind spot rendered as confidence. So EFA isn’t the disease. It’s a symptom that appears when a group has already decided what counts as a “real possibility.”
The Real Antipattern: Ontology Lock-In
Here’s what actually happens in most of Bostock’s examples. A group starts with an implicit ontology: a set of “normal” causal categories, threat models, or theories. (Ontology just means “their background picture of what kinds of things are real and can cause other things.”) They then enumerate possibilities within that ontology. After that, they conclude the topic is settled because they covered everything they consider eligible to exist.
That’s ontology lock-in. And it’s far more pernicious than EFA because it produces the illusion of open-mindedness while enforcing a quiet border around thought.
In other words, the error is not “you didn’t list every scenario.” The error is “your scenario generator is provincially trained and socially rewarded.” If you fix that, EFA collapses into an ordinary, manageable limitation.
Lay explainer: This is like searching for your keys only in the living room because “keys are usually there.” You can search that room exhaustively and still be wrong if the keys are in your jacket. The mistake isn’t searching hard. It’s assuming the living room is the whole house.
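To make the lock-in concrete, here is a toy simulation; every category name and number in it is invented for illustration, not taken from any real case. The hypothesis generator only samples from the categories the community treats as eligible, the elimination step inside the list is flawless, and accuracy is still capped by how much of reality the ontology covers.

```python
import random

# Toy model of ontology lock-in. All category names are hypothetical.
ALL_CATEGORIES = ["hardware", "software", "process", "adversary", "novel_mechanism"]
ELIGIBLE = ["hardware", "software", "process"]  # the community's locked-in ontology

def generate_hypotheses():
    """Enumerate every hypothesis the ontology permits. It feels exhaustive."""
    return list(ELIGIBLE)

def survives_investigation(hypothesis, true_cause):
    """Perfect elimination within the list: only the true cause survives scrutiny."""
    return hypothesis == true_cause

def run_trial():
    true_cause = random.choice(ALL_CATEGORIES)
    survivors = [h for h in generate_hypotheses() if survives_investigation(h, true_cause)]
    # The EFA move: if everything on the list is eliminated, fall back to the
    # favored default instead of questioning the list itself.
    conclusion = survivors[0] if survivors else ELIGIBLE[-1]
    return conclusion == true_cause

trials = 10_000
accuracy = sum(run_trial() for _ in range(trials)) / trials
print(f"Accuracy of flawless elimination inside a locked ontology: {accuracy:.1%}")
```

The search inside the list is perfect; accuracy still tops out at the fraction of reality the ontology covers. The error lives in the generator, not in the listing.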
Why “EFA!” Is a Weak Counter-Spell
Bostock warns that “EFA!” can be an overly general rebuttal. True. But he doesn’t finish the thought: calling out EFA without diagnosing the hidden ontology is just another applause light. It lets critics sound incisive without doing the hard work of saying what the missing hypothesis class is and why it was missing.
A good rebuttal isn’t “you didn’t list everything.” A good rebuttal is “your list is sampling a biased space; here’s the bias and the missing mass.” Until you name the bias, “you might be missing something” is theater.
The Superforecaster Example: Not EFA, But a Method Mismatch
The AI-doom forecaster story is supposed to show EFA in action. But it’s really a category error about forecasting tools. Superforecasters are good at reference-class prediction in environments where the future resembles the past. They are not designed to enumerate novel, adversarial, power-seeking systems that can manufacture new causal pathways.
Lay translation: asking them to list AI-enabled extinction routes is like asking a brilliant accountant to map out military strategy. They might be smart, but it’s the wrong tool for the job. The correct takeaway is not “they did EFA.” It’s “their method assumes stable causal structure, and AI breaks that assumption.” Blaming EFA hides the methodological mismatch.
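A minimal sketch of that mismatch, with made-up numbers: a reference-class forecaster that extrapolates a historical base rate is doing exactly what it was built to do, and it is still the wrong instrument the moment the causal structure shifts.

```python
import random

def reference_class_forecast(history):
    """Predict next-period probability as the historical base rate (Laplace-smoothed)."""
    return (sum(history) + 1) / (len(history) + 2)

random.seed(0)
# Hypothetical world: a rare bad outcome with a stable 1% yearly rate...
stable_years = [1 if random.random() < 0.01 else 0 for _ in range(200)]
# ...until a new adversarial mechanism appears and the true rate jumps.
rate_after_shift = 0.30

forecast = reference_class_forecast(stable_years)
print(f"Reference-class forecast: {forecast:.1%}")
print(f"Actual risk after the causal structure shifts: {rate_after_shift:.0%}")
```

The forecaster is executing its method correctly; the method simply assumes the future is drawn from the same distribution as the past, and that is the exact assumption a novel, power-seeking system is built to break.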
The Rethink Priorities Critique: The Fight Is Over Priors, Not Lists
Bostock’s swipe at Rethink Priorities lands emotionally because a lot of people dislike welfare-range spreadsheets. But the real problem there isn’t EFA. It’s the unresolvable dependence on priors and model choice when the target has no ground truth.
Lay translation: if you build a math model on assumptions nobody can verify, you can get “precise” numbers that are still junk. You can do a perfectly non-EFA analysis and still get garbage if the priors are arbitrary. You can also do an EFA-looking trait list and still get something useful if it’s treated as a heuristic, not a conclusion. The issue is calibration, not enumeration form.
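Here is a minimal sensitivity check with entirely hypothetical numbers; it is not Rethink Priorities’ model, just a sketch of the calibration problem. When the evidence is weak, two individually defensible priors produce two confidently precise, wildly different answers.

```python
# Entirely hypothetical numbers. Estimate an unobservable quantity (say, a
# welfare range scaled 0..1) from weak indirect evidence under two priors.

def posterior_mean(prior_mean, prior_weight, evidence_mean, evidence_weight):
    """Precision-weighted average, the shape of a conjugate normal update."""
    total = prior_weight + evidence_weight
    return (prior_mean * prior_weight + evidence_mean * evidence_weight) / total

evidence_mean, evidence_weight = 0.5, 1.0  # weak, noisy evidence

priors = [
    ("prior A: near-zero welfare range", 0.02, 10.0),
    ("prior B: near-human welfare range", 0.80, 10.0),
]
for label, prior_mean, prior_weight in priors:
    estimate = posterior_mean(prior_mean, prior_weight, evidence_mean, evidence_weight)
    print(f"{label}: posterior mean = {estimate:.3f}")
```

Both runs print three decimal places of precision, but the gap between them is driven almost entirely by the prior, because the evidence is too weak to move either one. That is a calibration problem, and no amount of careful enumeration fixes it.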
The Miracle Example: EFA as Rhetorical Technology
Where Bostock is strongest is in noticing EFA as persuasion tech. Arguments for miracles, conspiracy theories, and charismatic debaters often use long lists of rebutted alternatives to create a sense of inevitability. That’s right, and it matters.
But even here, the persuasive force doesn’t come from EFA alone. It comes from control of the alternative-space. The list looks exhaustive because it’s pre-filtered to things the audience already recognizes. The missing possibility is always outside the audience’s shared map—so the list feels complete.
That’s why EFA rhetoric works: it exploits shared ontological boundaries. If you don’t confront those boundaries, you’ll keep losing debates to confident listers.
What Actually Improves Reasoning Here
If you want to stop the failure Bostock is pointing at, you don’t start by shouting “EFA!” You start by changing how you generate and evaluate hypotheses under deep uncertainty.
You treat your list as a biased sample, not a closure move. You interrogate your generator: what classes of causes does it systematically ignore, and why? You privilege mechanisms over scenarios, because mechanisms can cover unimagined cases. You assign real probability mass to “routes my ontology can’t see yet,” especially in adversarial domains. You notice the social incentive to look decisive and resist it on purpose.
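A minimal sketch of the last two habits, again with made-up numbers: keep an explicit bucket for routes your ontology can’t generate yet, instead of silently normalizing over the scenarios you happened to enumerate.

```python
# Hypothetical scenario probabilities produced by an enumeration exercise.
enumerated = {
    "scenario_a": 0.04,
    "scenario_b": 0.02,
    "scenario_c": 0.01,
}

# Closure move: treat the enumerated list as the whole space.
closed_total = sum(enumerated.values())
print(f"Risk if the list is treated as exhaustive: {closed_total:.0%}")

# Open-world move: reserve explicit mass for unenumerated routes.
# The 20% figure is an assumption to be argued over, not an estimate;
# the point is that it cannot be silently set to zero.
p_unseen_route = 0.20  # P(bad outcome via an unlisted route | no listed route fires)
open_total = closed_total + p_unseen_route * (1 - closed_total)
print(f"Risk with an explicit 'routes I can't see yet' bucket: {open_total:.0%}")
```

The enumeration itself is unchanged; only the claim of closure is dropped. In adversarial or fast-moving domains, most of the real disagreement lives in that bucket, so it should be visible rather than implicit.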
Lay explainer: The point isn’t “stop listing possibilities.” Listing is good. The point is “don’t confuse your list with reality.” Your list is a flashlight beam, not the whole room.
Conclusion: EFA Is Real, but the Community Problem Is Deeper
Bostock correctly spots a common move. But he misidentifies it as the central rot. The central rot is a culture that confuses the limits of its imagination with the limits of reality, then rewards people for performing certainty within those limits.
EFA is what that rot looks like when it speaks. Fix the ontology bottleneck and the status incentives, and EFA becomes a minor, obvious hazard rather than a dominant bad argument. Don’t fix them, and “EFA!” becomes just another clever sound you make while the real error persists.
Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on Github https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | https://a.co/d/5pLWy0d
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key | Author: My Ex Was a CAPTCHA: And Other Tales of Emotional Overload: (Mirrored Reflection. Soft Existential Flex)



