The Quiet Erosion: AI, Cognitive Decline, and What We Risk Losing as a Species
- Anna Kiaos


We have outsourced memory to our phones, navigation to GPS, and now reasoning itself to large language models. The question is no longer whether this changes us — it is whether we will notice before the change becomes irreversible.
Dr Anna Kiaos
Founder & Director, Mind Culture Life Australia | Researcher, UNSW Sydney
There is a version of the AI conversation happening in boardrooms and on op-ed pages right now that focuses almost entirely on productivity: what AI can automate, accelerate, or augment. That conversation is not wrong — it is simply incomplete. It says very little about what we give up in the exchange. And what we give up, I argue, is among the most distinctly human things we possess: the capacity for sustained, effortful, original thought.
I want to be clear that this is not a technophobic argument. I use AI tools in my own research and consulting practice. The concern I am raising is not about technology per se — it is about the cultural conditions under which that technology becomes normalised, and what those conditions do to the cognitive architecture of individuals, organisations, and ultimately the species.
Cognitive Offloading Is Not New. The Scale Is.
Cognitive offloading — the practice of delegating mental tasks to external systems — is as old as written language.1 We externalised memory when we began writing things down. We externalised arithmetic when we built calculators. The human brain has always been adaptive in this way: when a reliable external tool exists, we tend to stop maintaining the internal equivalent.
What is different now is the scope of the offloading. AI now stands ready to absorb the process of reasoning itself. It will synthesise, evaluate, draft, decide, and recommend — not just record. The phenomenon was first documented systematically by Sparrow, Liu, and Wegner, whose research found that when people expect information to remain accessible online, they show lower rates of recall of the information itself — offloading not just storage but the motivation to remember.2 AI takes this logic several steps further: it is not merely a repository but an active reasoner. When the external system can do the thinking, the internal machinery for thinking atrophies. This is consistent with decades of neuroscience research on experience-dependent neural plasticity: neural circuits strengthen with use and weaken with disuse.3
The empirical evidence on AI-specific cognitive effects is accumulating. A large mixed-methods study involving close to 700 participants found a significant negative correlation between frequent AI tool usage and critical thinking abilities, with younger participants showing the highest AI dependence and the lowest critical thinking scores.4 A neural-behavioural study at MIT Media Lab, measuring brain activity via EEG, found that participants who wrote essays using ChatGPT showed the lowest brain engagement of any group, consistently underperforming at neural, linguistic, and behavioural levels — with engagement declining further over time as participants increasingly resorted to copy-paste rather than original thought.5
— When we outsource the struggle, we also outsource the growth that the struggle produces.
The struggle to hold competing ideas in tension, to sit with uncertainty, to construct an argument from first principles — these are not inefficiencies to be optimised away. They are the mechanisms through which cognitive capability is built and maintained. Remove the productive friction, and you do not get a more efficient thinker; you get a less capable one who feels productive.
The Organisational Dimension
When Cultures Stop Thinking
The individual dimension of AI-assisted cognitive atrophy is concerning enough. But what interests me professionally — and alarms me more — is the collective dimension. Organisations are not simply aggregations of individuals; they are cultures. They have norms, shared assumptions, and unwritten rules about how thinking is done, whose thinking counts, and what good reasoning looks like. Those norms are now being reshaped, rapidly and largely without conscious design, by the introduction of AI into the workflow.
In my work mapping organisational culture through the Culture Pressure Map™ framework, I consistently find that the deepest indicators of a culture’s health are not visible at the surface level of policy or stated values. They live in subcultures and microcultures — the lived, daily experience of teams and individuals doing actual work. And at that microculture level, something significant is shifting. People are increasingly asking AI first: before sitting with the problem, before exercising their own judgment, before consulting a colleague. The internal threshold for tolerating not-knowing has dropped.
Controlled research comparing ChatGPT-assisted analytical work with standard web search has found that while AI users complete tasks more quickly, their arguments are lower in quality and shallower in reasoning depth — a pattern described as ‘cognitive ease at a cost’.6 This matters because organisational learning — real learning, not the compliance-training variety — is built on the threshold of productive discomfort: the moment before the answer arrives. Compress that moment to zero, and you compress the learning embedded in it. An organisation that no longer struggles with difficult problems is an organisation that is losing its capacity to solve them.
Consider: In your team’s last significant decision, how much of the analytical work was done by people, and how much was delegated to a generative AI tool? More importantly — would you know the difference? And if the AI reasoning was flawed, would anyone in the room have the depth to identify it?
The Homogenisation Problem
There is a second and less-discussed dimension to the cultural impact of AI on cognition: homogenisation. When millions of people route their thinking through the same large language models — trained on similar data, optimised for similar outputs, biased toward similar framings — the diversity of intellectual output narrows. Not by coercion, but by default. This has been empirically demonstrated: analysis of over 2,200 essays found that each additional human-written essay contributed more new ideas than each additional AI-generated essay, and this gap widened as more essays were added — the homogenising effect compounds at scale.7 A complementary study found that participants who used LLMs to answer open-ended survey questions produced responses that were measurably more homogeneous and less reflective of genuine variation in human attitudes.8
Human cognitive diversity has historically been one of our greatest evolutionary assets. The outlier thinker, the contrarian, the person who approaches a problem from an entirely unexpected angle — these are the sources of genuine innovation and, in moments of civilisational challenge, survival. Research by Page on the mechanics of collective intelligence demonstrates that groups whose members bring different mental models and interpretive frameworks systematically outperform groups of like-minded experts on complex problems.9 That diversity is not just a product of different information; it is a product of different cognitive processes, shaped by different educations, embodied experiences, emotional histories, and cultural inheritances. No language model, however sophisticated, can replicate that texture.
When organisations train their people to use AI as the first step in any analytical process, they are not just improving efficiency. They are quietly standardising the cognitive starting point. And cultures that begin from the same starting point tend, over time, to arrive at the same destinations.
The Species Question
A Co-evolutionary Crossroads
Zooming out to the species level, we are at a genuinely novel moment in human cognitive evolution. For the first time, we have created an external system capable of performing higher-order cognition — not just storing data, but reasoning, creating, and making judgments. This is a qualitatively different kind of tool, and it demands a qualitatively different kind of response than previous technological transitions.
Evolutionary biologists would note that organisms tend to lose capabilities they consistently delegate to their environment. This is the principle of use-dependent plasticity applied at species scale.3 If the selective pressure to develop and maintain deep cognitive capacity is reduced — because AI absorbs much of the demand — there is no biological guarantee that we will maintain it. We are not immune to our own adaptive logic.
The optimistic counterargument is that AI will free human cognition for higher-order work: creativity, wisdom, ethical reasoning, relational intelligence. There is merit in this view, but it is conditional. As scholars have noted, the concern is not AI assistance in targeted domains — it is AI as a general reasoner upon which people offload thinking about any topic whatsoever.10 That requires that we make active, deliberate choices about which cognitive capacities to protect, cultivate, and pass on — and that we build cultures, in organisations and in societies, that treat those capacities as non-negotiable. That is not happening by default. The default is convenience.
— The question is not whether AI will change how humans think. It already has. The question is whether we are shaping that change, or simply experiencing it.
What Intentional Culture Design Demands
For leaders and organisations, the practical implication of this analysis is not to resist AI adoption — that ship has sailed, and the competitive logic of its adoption is real. The implication is to treat AI integration as a cultural design challenge, not merely a technology deployment challenge.
That means being explicit about which cognitive capacities your organisation is choosing to protect. It means designing workflows in which AI augments human judgment rather than replacing it — which requires knowing the difference. It means investing in the conditions that make deep, effortful thinking possible: psychological safety, time for reflection, tolerance for dissent, and cultures where being wrong out loud is safer than being right by proxy.
Research on the sequencing of human reasoning and AI assistance is instructive here: participants who completed their own analysis before turning to AI showed significantly better retention and deeper engagement than those who used AI first.11 The question for organisations is not whether to use AI, but at which point in the reasoning process to introduce it.
At the subcultural and microculture level, it means managers and team leaders actively modelling the behaviours of independent reasoning: asking their teams “what do you think?” before “what does the tool say?”, creating space for genuine intellectual disagreement, and refusing to mistake fluency for competence.
None of this is anti-AI. All of it is pro-human. The two are not in conflict — but sustaining that balance requires intention. And intention, in organisations as in evolution, requires someone to decide what is worth preserving.
The species that learns to use powerful tools without surrendering the capacities that made those tools possible in the first place is the species that survives the tools it creates. That is the challenge in front of us — and it is, at its core, a cultural one.
About the Author
Dr Anna Kiaos is the Founder and Director of Mind Culture Life Australia, a Sydney-based organisational culture consultancy, and a Researcher at the Discipline of Psychiatry and Mental Health, UNSW Sydney. Her proprietary Culture Pressure Map™ framework maps organisational culture across three levels of analysis, from shared ethos to lived workplace experience. Her peer-reviewed research on customer-centric ideology and burnout was published in the Journal of Workplace Behavioral Health (Taylor & Francis, 2025).
Notes
1 Risko, E.F., & Gilbert, S.J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688.
2 Sparrow, B., Liu, J., & Wegner, D.M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778.
3 Kleim, J.A., & Jones, T.A. (2008). Principles of experience-dependent neural plasticity: Implications for rehabilitation after brain damage. Journal of Speech, Language, and Hearing Research, 51(1), S225–S239.
4 Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6.
5 Kosmyna, N., et al. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.
6 Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: How ChatGPT assistance reduces cognitive load but also reduces reasoning depth. British Journal of Educational Technology.
7 Doshi, A.R., & Hauser, O.P. (2024). Homogenizing effect of large language models on creative diversity: An empirical comparison of human and ChatGPT writing. Research and Practice in Technology Enhanced Learning.
8 Zhang, S., Xu, J., & Alvero, A.J. (2025). Generative AI meets open-ended survey responses: Research participant use of AI and homogenization. Sociological Methods & Research.
9 Page, S.E. (2007). The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton University Press.
10 Harvard Gazette. (2025, November). Is AI dulling our minds? Faculty perspectives on critical thinking in the age of AI. Harvard University.
11 Akgun, S., & Toker, S. (2024). Effects of pretesting on retention and cognitive engagement in AI-assisted learning contexts. Referenced in: Çela et al. (2025). The cognitive paradox of AI in education. Frontiers in Education.
References
Akgun, S., & Toker, S. (2024). Effects of pretesting on retention and cognitive engagement in AI-assisted learning contexts. Referenced in: Çela et al. (2025). The cognitive paradox of AI in education. Frontiers in Education.
Doshi, A.R., & Hauser, O.P. (2024). Homogenizing effect of large language models on creative diversity: An empirical comparison of human and ChatGPT writing. Research and Practice in Technology Enhanced Learning. https://doi.org/10.1016/j.rpte.2024.100091
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Harvard Gazette. (2025, November). Is AI dulling our minds? Faculty perspectives on critical thinking in the age of AI. Harvard University. https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/
Kiaos, A. (2025). Customer-centric ideology and symptoms of burnout: A case study in the NSW public sector. Journal of Workplace Behavioral Health. Taylor & Francis.
Kleim, J.A., & Jones, T.A. (2008). Principles of experience-dependent neural plasticity: Implications for rehabilitation after brain damage. Journal of Speech, Language, and Hearing Research, 51(1), S225–S239. https://doi.org/10.1044/1092-4388(2008/018)
Kosmyna, N., Hauptmann, E., Yuan, Y.T., Situ, J., Liao, X.H., Beresnitzky, A.V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.
Page, S.E. (2007). The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton University Press.
Risko, E.F., & Gilbert, S.J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002
Sparrow, B., Liu, J., & Wegner, D.M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745
Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: How ChatGPT assistance reduces cognitive load but also reduces reasoning depth. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13431
Zhang, S., Xu, J., & Alvero, A.J. (2025). Generative AI meets open-ended survey responses: Research participant use of AI and homogenization. Sociological Methods & Research. https://doi.org/10.1177/00491241251327130



