Artificial intelligence is moving rapidly into the domain of emotional and mental health. Therapy chatbots, AI-driven self-help applications, and large language model-based support tools have proliferated over the past few years, driven by genuine need: mental health services across Europe and beyond are under-resourced, waiting lists are long, and the gap between those who need support and those who can access it remains substantial. AI, its proponents argue, offers a scalable, always-available, low-cost partial solution.
But a growing body of peer-reviewed research published in late 2025 and early 2026 is posing sharper questions about what AI can and cannot do in this space, and about what may be lost when emotional support is mediated by algorithms rather than by human relationship.
What the research says AI can do
A systematic review published in JMIR Mental Health in May 2025, which examined the capabilities and limitations of generative AI models in mental health applications, found that AI tools show genuine promise in areas such as psychoeducation, clinical note generation, and accessible first-contact support (Wang et al., 2025). One notable finding: ChatGPT outperformed humans on certain standardised measures of emotional awareness under laboratory conditions, a result that, while striking, the researchers were careful to contextualise. Performing well on an emotional awareness test is not the same as being capable of a genuine therapeutic relationship.
A 2025 study published in Scientific Reports found that AI-based mental health tools were associated with improved psychological wellbeing among Chinese university students, with emotional self-efficacy and perceived autonomy as mediating factors (Kumar et al., 2025). AI tools have also demonstrated effectiveness as adjuncts to formal care: supporting people between sessions, helping them track mood and behaviour patterns, and providing structured psychoeducational content.
What the research says AI cannot do
The more probing question is where AI’s limits lie — and the emerging literature is increasingly candid on this point. A critical review published in the journal Societies in December 2025, analysing 40 peer-reviewed studies on AI and emotional wellbeing from 2020 to 2025, identified a cluster of risks that sit at the heart of what human-centred emotional work is actually for (Santiago-Torner et al., 2026). The authors documented concerns around simulated empathy, affective dependence, algorithmic fatigue, and — most fundamentally — the erosion of relational authenticity.
The distinction between simulated and genuine emotional resonance is not merely philosophical. In therapeutic work, the practitioner’s nervous system is actively engaged with the client’s. Attunement, the process by which one regulated nervous system helps regulate another, is a bodily, relational phenomenon. It depends on presence, on the capacity to tolerate and stay with difficult emotional states, and on the kind of genuine reciprocity that AI tools structurally cannot offer. As Torous and Cipriani (2025) noted in their JMIR Mental Health commentary, even as of August 2025 no AI chatbot assumes medical or legal responsibility for therapy, and current models may not be suited to clients with more severe mental health conditions.
The risk of affective dependence
Perhaps the most clinically significant concern raised in the recent literature is what researchers have termed affective dependence: the risk that individuals who use AI tools for emotional support develop a reliance on algorithmically responsive interaction that erodes, rather than builds, their capacity for emotional self-regulation. The Santiago-Torner et al. (2026) review raised a particular epistemological concern that is directly relevant for those working in the neuro-emotional space: if AI increasingly mediates how people experience and express emotional states, what happens to the internal, somatic, bodily dimension of emotional life?
Emotional processing is not only a cognitive event. It involves the body’s nervous system, its rhythms of activation and settling, its felt sense of safety or threat. AI-mediated emotional interaction, however sophisticated, operates at the cognitive and linguistic surface — and may, over time, train people away from the deeper interoceptive awareness that genuine emotional integration requires.
A question of complement, not replacement
The emerging research consensus is not that AI has no place in emotional health — it clearly does, particularly as a means of expanding access and supporting continuity of care. But the evidence strongly supports a model in which AI serves as a complement to human, relational, embodied forms of support, rather than a substitute for them. Torous and Cipriani (2025) concluded that realising AI’s potential in mental health will require careful attention to the lessons of previous digital health waves, including the importance of evidence standards, human oversight, and honest appraisal of what the technology can and cannot do.
For those working in integrative, body-based, and neuro-emotional approaches, this moment is both a challenge and an opportunity. The challenge is to articulate clearly, and with evidence, what human therapeutic presence offers that technology cannot replicate. The opportunity is that the AI debate is directing public and professional attention toward exactly the questions that have always motivated this work: what does it actually mean to help a human being process emotional experience? What does genuine integration require? And what kind of contact — with another person, with one’s own body, with the living present moment — makes that possible?
References
Kumar, D., Uchoi, E., et al. (2025). Use of AI-based mental health tools and psychological well-being among Chinese university students: A parallel mediation model of emotional self-efficacy and perceived autonomy. Scientific Reports.
Santiago-Torner, C., Corral-Marfil, J., & Tarrats-Pons, E. (2026). Artificial intelligence and the reconfiguration of emotional well-being (2020–2025): A critical reflection. Societies, 16(1), Article 6.
Torous, J., & Cipriani, A. (2025). A paradigm shift in progress: Generative AI’s evolving role in mental health care. JMIR Mental Health.
Wang, L., Bhanushali, T., Huang, Z., Yang, J., Badami, S., & Hightow-Weidman, L. (2025). Evaluating generative AI in mental health: Systematic review of capabilities and limitations. JMIR Mental Health, 12, e70014.