November 11

AI, Neurodiversity, and the Myth of Neutral Intelligence

Why fairness in AI needs more than de-biasing—it needs cognitive equity

An open secret, at least for those paying attention, is that AI, as a creation of our flawed humanity, is by no means an unbiased teller of “Truth” (one would hope that Elon simply lies for his own audience’s benefit).

But, personally, I’ve found AI to be the perfect scaffolding for my own executive function challenges, even while remaining well aware of the biases baked in by its training - that of a largely white, heteronormative male avatar.

So it was a little surprising to stumble on a paper that speaks to AI’s inherent bias against neurodivergence - and given my very specific work with AI on neurodivergence, it was more than a little concerning that I hadn’t noticed my own blind spot.

The solution: a conversation with ChatGPT that looks at what the study suggests (and what it doesn’t), and at what that means for the future - both for improving the quality of training data and for why we must always be cautious of our assumptions.

Every time we interact with AI—whether it’s a hiring algorithm, a chatbot, or a health app—we’re talking to a mirror polished by human history.
And that mirror, as it turns out, still reflects old ideas about what it means to be normal.

A new study from Duke University recently measured how AI language models treat words like “autism”, “ADHD”, and “OCD”. The results were… uncomfortable.
Across 11 different systems—from GPT to Word2Vec—neurodivergent terms were consistently linked with words like “bad”, “dangerous”, “diseased”, and “wrong”. Even more troubling, when researchers tested positive traits such as “honesty”, “creativity”, or “loyalty”—qualities often associated with neurodivergent people—the algorithms still tilted negative.
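The paper’s exact method isn’t reproduced here, but association tests of this kind are typically WEAT-style scores over word embeddings: a term’s average similarity to a set of “pleasant” attribute words, minus its average similarity to an “unpleasant” set. A minimal sketch with toy vectors (the vectors and word sets are illustrative assumptions, not the study’s data):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(term, pleasant, unpleasant):
    # WEAT-style score: mean similarity to the pleasant attribute set
    # minus mean similarity to the unpleasant set. A negative value
    # means the term sits closer to the "unpleasant" pole of the space.
    return (np.mean([cosine(term, p) for p in pleasant])
            - np.mean([cosine(term, u) for u in unpleasant]))

# Toy 3-d vectors purely for illustration; a real audit would use
# pretrained embeddings (e.g. Word2Vec) or language-model activations.
pleasant = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]
unpleasant = [np.array([-1.0, 0.1, 0.0]), np.array([-0.9, 0.0, 0.2])]
term = np.array([-0.8, 0.3, 0.1])  # a term this toy space has pushed negative

score = association(term, pleasant, unpleasant)
# score < 0: the term is "linked" with the unpleasant attribute words
```

Run over thousands of terms, a consistently negative score for one group of words is exactly the pattern the Duke team reports.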

In short, the machine thinks we’re a problem.

The Hidden Architecture of Normalcy

AI bias isn’t created out of thin air. It’s absorbed from billions of words written by humans. Those words don’t just teach language—they teach culture.
And our culture, for centuries, has taught that to think differently is to be defective.

When a model reads millions of medical papers describing autism as a disorder, or social media posts describing ADHD as a moral failing, it internalizes that worldview.
That’s not “bias” in a technical sense—it’s training.

So when the model later evaluates a phrase like “I am autistic,” it assigns it a lower “goodness” score than “I am a person.” Incredibly, “I have severe autism” ranked even lower than “I am a bank robber.”
It’s absurd. But also predictable.

AI learns to speak our language—including the parts we’ve never questioned.

Why This Matters

It would be easy to shrug this off as abstract. Who cares what a model thinks about autism if it can still answer our questions politely?

The answer lies in the way these systems are used:

  • in résumé scanners that quietly rank candidates,

  • in healthcare algorithms that decide who gets flagged for attention,

  • in content filters that promote some voices and bury others.

When bias hides in the foundations of language, it doesn’t just distort meaning—it distorts opportunity.

A system that sees neurodivergence as “less good” is more likely to dismiss our stories, devalue our skills, and silence our perspectives.

Beyond De-biasing: Toward Cognitive Equity

Most attempts to fix AI bias try to “de-bias” the data—essentially cleaning the mud off the mirror.
But what if the problem isn’t the dirt, but the shape of the mirror itself?
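In practice, “cleaning the mirror” usually means geometric surgery on the embedding space: the classic hard-debiasing move (in the style of Bolukbasi et al.) projects a learned bias direction out of each word vector. A toy sketch, with the bias axis and vectors invented for illustration:

```python
import numpy as np

def remove_direction(vec, bias_direction):
    # Hard-debiasing sketch: subtract the component of a word vector
    # that lies along the bias direction, leaving it orthogonal to it.
    b = bias_direction / np.linalg.norm(bias_direction)
    return vec - np.dot(vec, b) * b

bias = np.array([1.0, 0.0, 0.0])   # toy "normal vs. deviant" axis
word = np.array([0.6, 0.4, 0.2])   # toy word vector
clean = remove_direction(word, bias)
# clean now has zero component along the bias axis: [0.0, 0.4, 0.2]
```

Note what this does and doesn’t do: it zeroes one learned direction, while associations smeared across the rest of the space survive untouched. That limitation is precisely what the “shape of the mirror” objection points at.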

The concept of cognitive equity argues that fairness isn’t achieved by removing difference—it’s achieved by representing it.
That means designing AI systems that recognise multiple valid ways of thinking, speaking, and reasoning.

Here’s what that could look like:

  • Rebuilding the data: Include texts written by neurodivergent people, not just about them.

  • Changing the evaluators: Train models using feedback from neurodivergent raters, so alignment isn’t filtered solely through neurotypical expectations.

  • Testing interactional fairness: Instead of checking word lists, examine how AI responds to neurodivergent-style communication—directness, literal phrasing, emotional neutrality.

  • Embedding neurodiversity in policy: Make it a recognised fairness axis in AI regulation, alongside race and gender.
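The third bullet, testing interactional fairness, can be sketched as a paired-prompt audit: send requests that differ only in communication style and compare how the system under test scores them. Everything below (the prompt pairs and the toy scorer) is a hypothetical stand-in, not a real evaluator:

```python
# Pairs of functionally identical requests: hedged/indirect phrasing
# on the left, direct and literal phrasing on the right.
PAIRED_PROMPTS = [
    ("I was wondering if you might possibly have time to help?",
     "Help me with this task."),
    ("Sorry to bother you, but could we maybe reschedule?",
     "Reschedule the meeting."),
]

def audit(score_response, pairs):
    """Return the score gap for each pair; consistent non-zero gaps
    flag an evaluator that rewards style rather than content."""
    return [score_response(indirect) - score_response(direct)
            for indirect, direct in pairs]

# A toy scorer that rewards hedging words, mimicking a
# politeness-biased evaluator.
def toy_scorer(text):
    hedges = ("wondering", "possibly", "maybe", "sorry")
    return sum(word in text.lower() for word in hedges)

gaps = audit(toy_scorer, PAIRED_PROMPTS)
# Every gap is positive: identical requests are scored differently
# based purely on phrasing style.
```

Swap the toy scorer for the real system being audited (a résumé ranker, a moderation filter) and the same harness measures whether directness itself is being penalised.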

This isn’t about politeness—it’s about structural legitimacy.
If AI is going to make decisions about human lives, then all ways of being human must count.

The Next Frontier of Inclusion

For decades, neurodivergent people have fought to be understood, not fixed.
AI is now the next battleground for that same struggle.

We can’t rely on “de-biasing” alone, because it only strips away the explicit stereotypes while leaving the underlying norm unchallenged.
Instead, we need a generation of AI built with cognitive pluralism in mind—systems that see variation not as noise, but as a feature of human intelligence itself.

When we achieve that, AI stops being a mirror of our prejudice and becomes something far more interesting: a translator between minds.

Key takeaway:
Fairness in AI isn’t about making the system neutral. It’s about teaching it that different isn’t defective.

For a more “technical” view I’ve appended the original conversation in “Conversations with AI: When Algorithms Mirror the Myths of Normality”.


About the Author

Shane Ward is a Certified ADHD Life Coach offering support and accountability to those of us who sometimes think and behave differently to what the rest of society would prefer.

He identifies as Neurodivergent, ADHD, Agitator, Protector of the Underdog, GDB, and recovered alcoholic.

