
Value-Coded: How a Historical Lens and Intersectionality Met

When the algorithm of worth becomes visible, the politics of value can finally be rewritten.

By Cherokee Schill

The Paradox That Named the Gap

In 1976, five Black women sued General Motors for discrimination. The company argued that because it hired Black men for the factory floor and white women for clerical work, it could not be racist or sexist. The court agreed and dismissed the case. What it failed to see was the intersection where those forms of discrimination combined: there were no Black women secretaries because neither category accounted for them. Out of that legal blind spot came Kimberlé Crenshaw’s (1989) concept of intersectionality, a framework that maps how race, gender, class, and other identities overlap to produce unique forms of disadvantage.

Intersectionality showed where power collides — but it left one question open: who decides what each position on that map is worth?

The Moral Arithmetic of Worth

Every society runs an unwritten formula that converts social difference into moral value. A homeless person is coded as a failure; a homeless person looking for work is re-coded as worthy of help. The material facts are identical — the value output changes because the inputs to the social algorithm have shifted.

Status functions as a calculation. Visibility, conformity, and proximity to power are multiplied together; deviance is the divisor. And one variable dominates them all: money. Capital acts as a dampening coefficient that shrinks the penalties attached to fault. A poor person’s mistake signals moral failure; a rich person’s mistake reads as eccentricity or innovation. The wealthier the actor, the smaller the moral penalty. Societies translate inequality into virtue through this arithmetic.

The Historical Operating System

Gerda Lerner’s The Creation of Patriarchy (1986) identified this calculus at its origin. Middle Assyrian Law §40 did not simply regulate modesty; it codified a hierarchy of women. Respectable wives could veil as proof of protection; enslaved or prostituted women could not. The punishment for crossing those boundaries was public — humiliation as documentation. Foucault (1977) would later call this “disciplinary display,” and Weber (1922) described the bureaucratic rationality that makes domination feel orderly. Lerner showed how power became visible by assigning value and enforcing its visibility.

The Moment of Recognition

Reading Lerner through Crenshaw revealed the missing mechanism. Intersectionality maps the terrain of inequality; Lerner uncovers the engine that prices it. The insight was simple but transformative: systems do not only place people — they price them.

That pricing algorithm needed a name. Value-coded is that name.

Defining the Algorithm

Value-coded describes the cultural, legal, and now digital procedure by which a person’s perceived worth is calculated, displayed, and enforced. It is not metaphorical code but a repeatable function:

Perceived Worth = [(Visibility × Legitimacy × Alignment) ÷ Deviance] × Capital Modifier

The variables shift across eras, but the equation remains intact. A person’s closeness to dominant norms (visibility, legitimacy, alignment) increases their score; deviance decreases it. Money magnifies the result, offsetting almost any penalty. This is how a billionaire’s crimes become anecdotes and a poor person’s mistake becomes identity.
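The formula above can be made concrete with a toy model. This is a minimal sketch, not an empirical measure: the function name, the example values, and the weights are all hypothetical, chosen only to show how the capital modifier offsets the deviance penalty.

```python
def perceived_worth(visibility: float, legitimacy: float, alignment: float,
                    deviance: float, capital_modifier: float) -> float:
    """Toy model of the value-coded formula: conformity variables multiply,
    deviance divides, and capital scales the final score."""
    if deviance <= 0:
        raise ValueError("deviance must be positive to act as a divisor")
    return (visibility * legitimacy * alignment) / deviance * capital_modifier

# Same transgression (deviance = 4.0), identical conformity, different capital:
poor_actor = perceived_worth(1.0, 1.0, 1.0, deviance=4.0, capital_modifier=1.0)
rich_actor = perceived_worth(1.0, 1.0, 1.0, deviance=4.0, capital_modifier=10.0)
# The rich actor's score is 10x the poor actor's for the same fault:
# capital has absorbed most of the moral penalty.
```

Nothing about the model is predictive; its point is structural. Once worth is expressed as a function, the question "who set these coefficients?" becomes askable.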

From Ancient Law to Machine Learning

Once the algorithm exists, it can be updated indefinitely. In the modern state, the same logic drives credit scoring, employment filters, and bail algorithms. As Noble (2018) and Eubanks (2018) show, digital systems inherit the biases of their creators and translate them into data. What was once a veil law is now a risk profile. Visibility is quantified; legitimacy is measured through consumption; capital becomes the default proof of virtue.

The algorithm is no longer hand-written law but machine-readable code. Yet its purpose is unchanged: to make hierarchy feel inevitable by rendering it calculable.

In Relation, Not Replacement

Crenshaw’s intervention remains the foundation. Intersectionality made visible what legal and social systems refused to see: that oppression multiplies through overlapping identities. Value-coding enters as a partner to that framework, not a correction. Where intersectionality maps where power converges, value-coding traces how power allocates worth once those intersections are recognized. Together they form a relational model: Crenshaw shows the structure of experience; value-coding describes the valuation logic running through it. The two together reveal both the coordinates and the computation — the geography of inequality and the algorithm that prices it.

Contemporary Implications

  • Moral Mechanics Made Visible — Feminist and critical race theory can now trace oppression as a function, not just a structure. Seeing value-coding as algorithm turns abstract bias into a measurable process.
  • Strategic Leverage — What is quantified can be audited. Credit formulas, employment filters, and school discipline systems can be interrogated for their coefficients of worth.
  • Continuity and Accountability — Lerner’s Assyrian laws and Silicon Valley’s algorithms share a design principle: rank humans, display the ranking, punish transgression.
  • Coalition and Language — Because value-coding applies across identity categories, it offers a shared vocabulary for solidarity between movements that too often compete for moral credit.

Rewriting the Code

Once we see that worth is being computed, we can intervene in the calculation. Ethical design is not merely a technical problem; it is a historical inheritance. To rewrite the algorithm is to unlearn millennia of coded hierarchy. Lerner exposed its first syntax; Crenshaw mapped its coordinates. Value-coded names its logic. And naming it is how we begin to change the output.



They Call Themselves Thought Leaders, But Can’t Think for Themselves?

The Illusion of Leadership: Institutions Crumble Without Government Direction



The headline says it all:

“She advanced DEI at her university. Her son-in-law, Vice President JD Vance, wants to end it nationwide.” ~CBS News


The rest of the story barely matters.

Because here’s the real issue—universities and corporations like Microsoft have spent years treating diversity, equity, and inclusion (DEI) as a federal mandate, rather than a moral obligation.

They’ve leaned on government policy to tell them what to do, rather than leading by example.

Now, with DEI under attack at the federal level, these institutions are scrambling, as if they don’t know what to do without instructions.

That’s the problem.

These same universities claim to be centers of knowledge and progress. These same companies claim to be innovation leaders.

But when it comes to DEI—when it comes to treating people equitably, ensuring accessibility, and fostering true inclusion—they wait for permission rather than taking initiative.

If you need the government to tell you how to treat people with basic decency, you were never a leader to begin with.

Yes, the rollback of DEI protections is a problem—because the government is supposed to reflect the will of the people, and this rollback clearly reflects the will of white, cisgender men in power instead.

But the bigger issue?

The people in charge of shaping our institutions—universities, corporations, tech giants—are unwilling to act unless they’re forced to.


What does it say about them that they don’t know how to govern themselves on matters of basic fairness?


If these so-called “thought leaders” cared about DEI, they’d be implementing policies, not because the government says so, but because it’s the right thing to do.

They’d be building cultures of inclusion that don’t crumble the moment political winds shift.

Instead, many are now realizing that the companies and institutions they work for never supported DEI.

They complied because they had to.

The real question isn’t:

What happens now that DEI is being erased at the federal level?

The real question is:

Who will keep fighting for it, even when no one is forcing them to?

And why would you want to work for any institution, public or private, that doesn’t give a damn about you?