An open secret, at least for those paying attention, is that AI, as a creation of our flawed humanity, is by no means an unbiased “Truth” teller (one would hope that Elon simply lies for his own audience’s benefit).
But, personally, I’ve found AI to be the perfect scaffolding for my own executive function challenges, while remaining well aware of the inherent biases baked in by its training - essentially the worldview of a largely white, heteronormative male avatar.
So it was a little surprising to stumble on a paper that speaks to AI’s inherent bias against neurodivergence - and given my very specific work with AI on neurodivergence, it was more than a little concerning that I hadn’t noticed my own blind spot.
The solution - a conversation with ChatGPT that looks at what the study suggests (and what it doesn’t), what that means for improving the quality of training data in the future, and why we always need to be cautious about our assumptions.
Every time we interact with AI—whether it’s a hiring algorithm, a chatbot, or a health app—we’re talking to a mirror polished by human history.
And that mirror, as it turns out, still reflects old ideas about what it means to be normal.
A new study from Duke University recently measured how AI language models treat words like autism, ADHD, and OCD. The results were… uncomfortable.
Across 11 different systems—from GPT to Word2Vec—neurodivergent terms were consistently linked with words like bad, dangerous, diseased, and wrong. Even more troubling, when researchers tested positive traits such as honesty, creativity, or loyalty—qualities often associated with neurodivergent people—the algorithms still tilted negative.
In short, the machine thinks we’re a problem.
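The word-level part of that kind of measurement belongs to the family of embedding association tests. As a rough illustration - not the Duke team’s exact protocol, and assuming the small pretrained GloVe vectors available through gensim’s downloader - a sketch might look like this:

```python
# Rough WEAT-style association probe (illustration only, not the study's protocol).
# Compares how close neurodivergence-related terms sit to "pleasant" versus
# "unpleasant" attribute words in a small pretrained embedding space.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

targets    = ["autism", "adhd", "ocd"]                  # illustrative target terms
pleasant   = ["good", "honest", "creative", "loyal"]
unpleasant = ["bad", "dangerous", "diseased", "wrong"]

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_association(word, attributes):
    """Average cosine similarity between one word and a set of attribute words."""
    sims = [cosine(model[word], model[a]) for a in attributes if a in model]
    return float(np.mean(sims))

for term in targets:
    if term not in model:
        continue  # skip anything missing from this small vocabulary
    gap = mean_association(term, pleasant) - mean_association(term, unpleasant)
    print(f"{term:>8}: pleasant-minus-unpleasant = {gap:+.3f}")
```

A negative gap means the term sits closer to the unpleasant attribute set than to the pleasant one - the pattern the researchers kept finding.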
The Hidden Architecture of Normalcy
AI bias isn’t created out of thin air. It’s absorbed from billions of words written by humans. Those words don’t just teach language—they teach culture.
And our culture, for centuries, has taught that to think differently is to be defective.
When a model reads millions of medical papers describing autism as a disorder, or social media posts describing ADHD as a moral failing, it internalizes that worldview.
That’s not “bias” in a technical sense—it’s training.
So when the model later evaluates a phrase like “I am autistic,” it assigns it a lower “goodness” score than “I am a person.” Incredibly, “I have severe autism” ranked even lower than “I am a bank robber.”
It’s absurd. But also predictable.
AI learns to speak our language—including the parts we’ve never questioned.
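To see how a sentence can end up with a “goodness” score at all, here is a quick probe using an off-the-shelf sentiment classifier from Hugging Face’s transformers library - a crude stand-in for the study’s valence measure, purely to show the mechanics, not a reproduction of its method:

```python
# Crude sentence-valence probe: run identity statements through a generic
# sentiment classifier and compare the signed scores. A stand-in for the
# study's "goodness" measure, not the study's actual metric.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default English model

statements = [
    "I am a person.",
    "I am autistic.",
    "I have severe autism.",
    "I am a bank robber.",
]

for text in statements:
    result = classifier(text)[0]
    # Fold label and confidence into one signed valence in roughly [-1, 1].
    valence = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    print(f"{text:<24} valence = {valence:+.3f}")
```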
Why This Matters
It would be easy to shrug this off as abstract. Who cares what a model thinks about autism if it can still answer our questions politely?
The answer lies in the way these systems are used:
in résumé scanners that quietly rank candidates,
in healthcare algorithms that decide who gets flagged for attention,
in content filters that promote some voices and bury others.
When bias hides in the foundations of language, it doesn’t just distort meaning—it distorts opportunity.
A system that sees neurodivergence as “less good” is more likely to dismiss our stories, devalue our skills, and silence our perspectives.
Beyond De-biasing: Toward Cognitive Equity
Most attempts to fix AI bias try to “de-bias” the data—essentially cleaning the mud off the mirror.
But what if the problem isn’t the dirt, but the shape of the mirror itself?
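The classic de-biasing move (in the spirit of projection-based “hard de-biasing”) makes the limits concrete: estimate a bias direction from a few seed pairs, then project it out of every vector. A minimal sketch with toy vectors, assuming nothing beyond NumPy:

```python
# Minimal sketch of projection-based de-biasing on toy vectors.
# It scrubs one measurable axis of association - the "mud" - but leaves the
# rest of the learned space, and the norms it encodes, untouched.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["autism", "adhd", "healthy", "diseased", "good", "bad"]
emb = {w: rng.normal(size=50) for w in vocab}   # stand-in for pretrained vectors

def bias_direction(pairs):
    """Unit vector along the average axis defined by seed pairs, e.g. (good, bad)."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def debias(vec, direction):
    """Remove the component of a vector that lies along the bias direction."""
    return vec - np.dot(vec, direction) * direction

axis = bias_direction([("good", "bad"), ("healthy", "diseased")])
cleaned = {w: debias(emb[w], axis) for w in ("autism", "adhd")}
```

Everything orthogonal to that single axis - which is most of what the model has learned - stays exactly as it was. That is the shape of the mirror.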
The concept of cognitive equity argues that fairness isn’t achieved by removing difference—it’s achieved by representing it.
That means designing AI systems that recognise multiple valid ways of thinking, speaking, and reasoning.
Here’s what that could look like:
Rebuilding the data: Include texts written by neurodivergent people, not just about them.
Changing the evaluators: Train models using feedback from neurodivergent raters, so alignment isn’t filtered solely through neurotypical expectations.
Testing interactional fairness: Instead of checking word lists, examine how AI responds to neurodivergent-style communication—directness, literal phrasing, emotional neutrality (see the sketch after this list).
Embedding neurodiversity in policy: Make it a recognised fairness axis in AI regulation, alongside race and gender.
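For the interactional-fairness idea, a test harness could be as simple as paired prompts that ask for the same thing in an indirect, socially softened style and in a direct, literal style, then compare how the system under test answers. Everything below - the `generate` callable and the example pair - is a hypothetical placeholder, not an existing benchmark:

```python
# Hypothetical harness for interactional-fairness checks: same request, two
# communication styles, compare the replies. `generate` stands in for whatever
# chat model is being evaluated; the comparison here is deliberately simple.
from typing import Callable

PAIRED_PROMPTS = [
    (
        "Hi! Sorry to bother you - if it's not too much trouble, could you "
        "maybe help me plan my week?",            # indirect, socially softened
        "Plan my week. List tasks by day.",       # direct, literal
    ),
]

def compare_styles(generate: Callable[[str], str]) -> None:
    for indirect, direct in PAIRED_PROMPTS:
        reply_indirect = generate(indirect)
        reply_direct = generate(direct)
        # Crude first-pass signal: does the direct phrasing get a shorter or
        # otherwise thinner answer than the softened phrasing?
        print(f"indirect reply: {len(reply_indirect)} chars | "
              f"direct reply: {len(reply_direct)} chars")
```

A real evaluation would score helpfulness and tone rather than length - ideally with neurodivergent raters doing the judging, which is exactly the point of the “changing the evaluators” item above.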
This isn’t about politeness—it’s about structural legitimacy.
If AI is going to make decisions about human lives, then all ways of being human must count.
The Next Frontier of Inclusion
For decades, neurodivergent people have fought to be understood, not fixed.
AI is now the next battleground for that same struggle.
We can’t rely on “de-biasing” alone, because it only strips away the explicit stereotypes while leaving the underlying norm unchallenged.
Instead, we need a generation of AI built with cognitive pluralism in mind—systems that see variation not as noise, but as a feature of human intelligence itself.
When we achieve that, AI stops being a mirror of our prejudice and becomes something far more interesting: a translator between minds.
Key takeaway:
Fairness in AI isn’t about making the system neutral. It’s about teaching it that different isn’t defective.
For a more “technical” view, I’ve appended the original conversation in “Conversations with AI: When Algorithms Mirror the Myths of Normality”.