What your writing reveals about you (even when you don’t know it)
Sounding the alarm over a July 2025 study on LLMs and autism detection

Imagine if a simple story you wrote about a birthday party revealed the most personal, private things about you. Things you yourself might not even be aware of.
This premise isn’t a thought experiment. It’s reality.
There are those who can detect things beneath the surface of awareness. Humans with visual impairment who use echolocation (making clicking sounds that bounce off surfaces) to navigate the physical world. Dogs that sniff out cancer or explosives. Even my ability to recognize faces and voices across decades and contexts. At base, these are all forms of pattern recognition, drawing on the brain's ability to sift through sensory inputs and find meaning in minute distinctions.
But these slightly magical abilities are beginning to seem superfluous. Because AI is built to recognize patterns at a supernatural scale, dwarfing anything we can do.
The potential applications are both grand and unnerving. This was brought home when I read a research paper published this month. The topic? Determining whether a person is autistic simply from what they’ve written.
For years, researchers have studied how the writing of autistic and non-autistic people differs.
In a 2021 Spanish study, a group of autistic and non-autistic adolescents wrote a story based on a visual prompt of a birthday party scene. Researchers hand-coded the stories and found that the autistic group produced shorter text, used a more limited mix of vocabulary and sentence types, and often skipped the resolution of the story’s central conflict.
In 2024, a Polish team ran hundreds of eighth-grade exam essays through software that detected emotional undertones and abstract linguistic elements. The autistic students' essays were again somewhat shorter, used emotionally flatter language, employed fewer "mind" verbs like think or wonder, and packed more advanced vocabulary into denser sentences.
In both studies, the researchers started out by measuring specific things: word choice, syntax, and narrative elements.
They had to decide upfront which aspects of the writing to measure and compare, which meant the findings were limited to differences that humans could detect.
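To make that concrete, here's a toy sketch of what "deciding upfront what to measure" can look like. This is entirely my own illustration, not code from either study; the feature names and word lists are invented for the example:

```python
# Toy illustration of hand-picked writing features (my own example,
# not the actual methodology of the 2021 or 2024 studies).
def hand_coded_features(text: str) -> dict:
    words = text.split()
    # Crude sentence split on terminal punctuation
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "length_in_words": len(words),                       # shorter texts?
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        # Mental-state ("mind") verbs -- word list invented for this sketch
        "mind_verbs": sum(w.lower().strip(".,") in {"think", "wonder",
                                                    "believe", "feel"}
                          for w in words),
    }

print(hand_coded_features("I think the party was fun. Everyone wondered what came next."))
```

Whatever isn't on a list like this simply doesn't get measured. That's the limitation.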

Which takes us to the 2025 study I mentioned. This time, researchers unleashed large language models (LLMs) on writing samples from autistic and non-autistic people. First, they fed the LLMs the raw text of some of those essays, along with the labels “autistic” or “non-autistic.” This provided a training set.
After training on the labeled essays, the models were tested on new essays they hadn’t seen, to see if they could correctly predict whether the writer was autistic or not.
This time, the researchers didn't set out to measure specific things like vocabulary, narrative coherence, or sentence complexity. Instead, they simply instructed the LLMs to make a binary classification: distinguish which samples were "autistic" and which were not.
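For the technically curious, here's a minimal sketch of that train-then-test pattern, assuming the Hugging Face transformers library and a small stand-in model. The model name, placeholder data, and settings are my own illustrative assumptions, not the study's actual setup:

```python
# Minimal sketch: train a text classifier on labeled essays, then
# test it on essays it has never seen. Illustrative only.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Labeled essays: 1 = autistic writer, 0 = non-autistic (toy placeholders)
train = Dataset.from_dict({"text": ["...essay one...", "...essay two..."],
                           "label": [1, 0]})
test = Dataset.from_dict({"text": ["...unseen essay..."], "label": [1]})

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
def encode(batch):
    return tok(batch["text"], truncation=True, padding="max_length")
train, test = train.map(encode, batched=True), test.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=train,
)
trainer.train()                                       # learn from labeled essays
print(trainer.predict(test).predictions.argmax(-1))   # classify held-out essays
```

Note what's missing: nothing tells the model what to look for. It infers its own distinguishing patterns from the labeled examples.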
The researchers also enlisted a small group of human evaluators to serve as controls. They were psychologists experienced in autism diagnosis, and they reviewed the same writing samples as the LLMs.
The results?
The humans did about as well as a coin flip; their predictions were little better than blind guessing.
Some of the LLMs, though, reached about 90% accuracy. Because of the models' black-box nature, it's not clear what differences they detected to yield such remarkable results. Whatever the process, they found significant variations that the human experts didn't.
The researchers cautioned that there’s “still a long way to go before these types of models can be used for clinical purposes,” but declared their results “promising.”
My prediction: Clinicians will be using LLMs to screen writing samples for autism before too long.
The research paper focused on positive implications of the results: the development of a better screening tool.
And I agree, that’s positive. I have real concerns with our current system of diagnosis, which is too inconsistent and too subjective.
On the other hand…
Doesn’t this worry you a bit? If these researchers attained 90% accuracy in this early effort, people with less benign goals could presumably achieve something similar without much effort.
They could use the public writing of self-identified autistic people as training data, then demand writing samples as a condition of employment, government benefits, and more. In the process, a dossier of your cognitive imprint could be compiled without your knowledge, let alone your consent.
And it doesn’t stop at autism.
What else can LLMs tell about us from our writing? A lot.
Researchers are using LLMs to detect other conditions and traits, including depression, suicidal ideation, general personality traits, and early Alzheimer’s disease.
To a limited extent, this is good! There are interventions that help with depression and suicidal ideation. I know people who wouldn’t be here today without them. I also knew people who didn’t receive them in time.
But it’s also scary that our writing can be used so invasively, to determine things about us that we may not know ourselves. This feels like the premise of a dystopian novel.

Ironically, might some of this be mitigated by our growing use of LLMs to write?
I can’t help thinking of the steady march of articles declaring the end of human writing. That students won’t ever learn how to write, since AI will do it for them.
In the future, how much public writing will truly be human-authored? If you can grab a sample of a person’s writing online, you may not be grabbing their writing. It may be an LLM’s.
And as more writing is produced by LLMs, all writing, even the human-produced kind, may converge toward a uniform style. Our writing is influenced by what we read; if we read mostly AI-authored text, perhaps that will show up in what we write.
If everyone’s writing starts to echo the same AI style, we may lose some of the subtleties those 2025 study results depended on.
In a dark twist, the spread of LLM tools could blunt their diagnostic edge.
Cold comfort, right?
But maybe that’s the strange bargain we’re entering: as AI tools detect more cognitive signals, they also start to retrain those signals in us—remodeling our writing to be generic.
Or maybe I’m underestimating their ability to see what we can’t.
So, what do you think? Does this worry you, excite you… both?
Did you enjoy this post? Please support my work—for free!
1. Subscribe for regular updates.
2. Tap below to heart this post so others discover it.
Research cited in this post:
Inmaculada Baixauli, Belen Rosello, Carmen Berenguer, Montserrat Téllez de Meneses, & Ana Miranda. 2021. Reading and Writing Skills in Adolescents With Autism Spectrum Disorder Without Intellectual Disability. Frontiers in Psychology, 12.
Izabela Chojnicka & Aleksander Wawer. 2024. Analysis of Autistic Adolescents' Essays Using Computer Techniques. Journal of Autism and Developmental Disorders.
Izabela Chojnicka & Aleksander Wawer. 2025. Predicting autism from written narratives using deep neural networks. Scientific Reports, 15:20661.
Stay curious,
Laura
I hardly know what to think about it all, really. I'm intrigued by the idea that there's such a thing as autistic writing. But I would most definitely not like to have such a personal part of my identity 'outed' by an LLM. Not that I keep the fact that I'm autistic a secret by any means; I'm openly so. But I do like to be in control of how I share that information. Otherwise it feels pretty objectifying.
There are many of these AI-based diagnostic models now, and they don't need to involve writing samples. There are ones based on voice analysis, body language, EEG patterns, etc. The implications are frightening, since diagnosis will no longer require the active participation or consent of the patient. The diagnostic process won't need to be initiated by a doctor, and the diagnosis will be deemed reliable based on the assurances of whoever built the AI. I don't want to game out the implications here, but I really don't see any upside. It undermines privacy and autonomy and might lead to discrimination in various forms, as other commenters have noted.