Why Academics Are Burning Out Over AI — And Why the Fix Isn't More Training

Dr Anna Kiaos | Mind Culture Life Australia
There's a conversation happening in every university in Australia right now. It's happening in hallways after faculty meetings, in private group chats between colleagues and in the quiet despair of academics staring at another AI policy update they didn't ask for and don't feel ready to implement.
The conversation isn't about whether AI is useful. Most people accept that it is. The conversation is about what AI is doing to the people inside these institutions — and why nobody in leadership seems to be asking.
I've spent the last several years researching organisational culture and workplace mental health, including how large institutions manage — and mismanage — the human side of transformation. What I'm seeing in universities right now is a pattern I've encountered before: an organisation transforming faster than its people can absorb the change, and a leadership response that addresses the technology while ignoring the culture.
I recently published a white paper, The Cultural Cost of AI Adoption in Universities, that examines this dynamic in detail. This article is a summary of its core argument — and a challenge to the sector to do better.
The Problem Isn't Resistance. It's Identity.
When universities roll out AI strategies, the most common complaint from leadership is that staff are "resistant to change." This framing is dangerously simplistic.
For an academic, expertise isn't just a skill set — it's an identity. A professor's sense of professional self has been built over decades of deep knowledge accumulation, disciplinary mastery and the authority that comes with being the expert in the room. When AI can generate plausible answers in any discipline instantly, the implicit message is: your knowledge is no longer unique.
This triggers what organisational psychologists call an identity threat — a perceived challenge to the core attributes that define one's professional self. The responses that follow are entirely predictable: withdrawal, cynicism, resistance, and in severe cases, burnout and departure.
These are not signs of an obstinate workforce. They are psychologically rational responses to poorly managed change. Telling people to "embrace AI" without first acknowledging what they're being asked to give up doesn't just fail — it accelerates the very resistance it's trying to overcome.
Five Hidden Costs Nobody's Measuring
In my white paper, I identify five cultural costs that emerge when AI adoption is managed as a technology project rather than a cultural transition. Briefly, they are:
Occupational identity threat, where staff feel their expertise — and therefore their professional worth — is being diminished or made redundant by technology.
Psychological safety erosion, where the rapid pace of policy changes, combined with the surveillance-like quality of AI detection tools, creates an environment in which staff feel exposed, uncertain and afraid to voice concerns.
Cultural misalignment between leadership and staff, where the gap between what leadership announces and what staff actually experience produces cynicism and performative compliance — people appear to adopt AI while privately resisting it.
Mental health impact, where the cumulative pressure of identity threat, safety erosion and cultural disconnection pushes staff toward a tipping point that most institutions are not equipped to recognise, let alone address.
Attrition of senior expertise, where the people with the deepest institutional knowledge and strongest research track records — the very people universities cannot afford to lose — are the most vulnerable to leaving when the cultural environment makes them feel obsolete.
None of these costs appear on an AI adoption dashboard. None of them are captured by technology readiness surveys. And none of them will be solved by another workshop on prompt engineering.
The Missing Lens
Most universities are approaching AI through three lenses: technology (what tools to deploy), policy (what rules to set) and pedagogy (how to redesign teaching). These are necessary, but they are not sufficient.
What's missing is the fourth lens: culture.
Culture is the human ecosystem that determines whether everything else succeeds or fails. You can have the most sophisticated AI tools in the sector, the most comprehensive policies, and the most innovative assessment redesign — and all of it will underperform if the people expected to implement it are disengaged, distrustful, or quietly burning out.
Without a cultural strategy, universities risk a pattern I've observed repeatedly in government and corporate restructures: the technical transformation succeeds on paper while the human organisation quietly deteriorates. Productivity metrics may initially improve, but engagement, innovation and institutional loyalty decline — and the true costs become visible only when it's too late to reverse them.
A Different Approach
In the white paper, I propose a five-stage framework for what I call culturally intelligent AI adoption. The stages move from diagnosis through to ongoing stewardship:
Cultural Diagnosis — before deploying tools or policies, conduct a rigorous audit that maps where fear, identity threat and cultural resistance actually sit within the institution.
Identity-Affirming Communication — explicitly and consistently reframe AI as a tool that amplifies human expertise rather than replacing it, and back that message with visible action.
Participatory Change Design — involve staff as co-designers of the transition, not passive recipients of it. Give people genuine decision-making power over how AI is integrated into their work.
Wellbeing-Integrated Implementation — embed mental health support directly into the AI adoption process, including training managers to recognise the signs of identity threat and role anxiety in their teams.
Ongoing Cultural Stewardship — treat cultural health as an ongoing metric, measured and reported alongside technology adoption, not as a one-off intervention.
This isn't about slowing down AI adoption. It's about making it work — by ensuring the people who must implement it are supported, heard and psychologically equipped to do so.
The Sector Can't Afford to Get This Wrong
Australian universities are under extraordinary pressure. International student caps are squeezing revenue. Job cuts are making national headlines. SafeWork NSW has intervened in at least one major institution over staff psychological safety concerns during restructuring. And into this already strained environment, AI is arriving as yet another demand on people who are already stretched to their limit.
The institutions that treat AI adoption as a purely technical exercise will find themselves with sophisticated tools and a depleted, disengaged workforce — the very strain the technology was supposed to relieve.
The institutions that get this right — that adopt AI while strengthening their culture, protecting their people and reinforcing the irreplaceable value of human expertise — will attract and retain the best academics, produce the most innovative research, and deliver the most meaningful education.
The choice is not whether to adopt AI. That's inevitable. The choice is whether to adopt it in a way that honours the culture and the people who give the institution its meaning.
The full white paper, The Cultural Cost of AI Adoption in Universities: What Leadership Teams Are Missing, is available for download at mindculturelife.com.au.
Dr Anna Kiaos is the founder of Mind Culture Life Australia, a research and consulting firm specialising in organisational culture and workplace mental health. Her peer-reviewed research on organisational subcultures, microcultures, and cultural blind spots in the workplace has been published in the Australian Journal of Public Administration and the Health Promotion Journal of Australia.
To discuss how Mind Culture Life can support your institution's AI transition, contact us at info@mindculturelife.com.au.