Artificial Intelligence and Its Impact on Emotional Wellbeing in Schools

The integration of artificial intelligence in education is changing the way we learn and teach. But beyond technological advances, some questions arise about its emotional and social impact. How does AI affect the wellbeing of students and teachers? Can a tool that promises personalisation end up generating pressure, anxiety, or disconnection? This article analyses the socio-emotional implications of educational AI with a critical perspective, recent data, and proposals for conscious implementation.

The emergence of artificial intelligence (AI) in the educational field has brought many benefits. But what about emotions? So far, the public debate has focused on its ability to personalise content, automate tasks, or improve academic performance, but the conversation is leaving out important issues. For example, how does this technological integration emotionally affect teachers and students? What social and wellbeing implications result from its growing use in classrooms? Can a technology that does not feel help us cultivate emotions, bonds, and wellbeing in the classroom?

The ODITE 2025 Report (Connected Intelligences: How AI is Redefining Personalised Learning) warns about the excessive focus on AI’s technical benefits (such as learning personalisation or operational advantages) compared to a shallow exploration of the risks and uncertainties—particularly the socio-emotional ones. This imbalance in analysis highlights a pressing need: to rethink the impact of AI beyond academic performance.

This text proposes looking beyond efficiency and data. It addresses, from a critical and human perspective, how AI impacts wellbeing, relationships, and the emotional experience of those who live education from the inside: students and teachers.

Dehumanising Risks

The entry of artificial intelligence into classrooms does not only change pedagogical dynamics. It also directly touches the most sensitive aspects of educational life: the emotions, bonds, and wellbeing of those who teach and learn. In contrast to the optimistic narrative that exalts personalisation and efficiency, voices are being raised to warn of the invisible side effects that accompany this transformation, questioning not the technology itself but how it is used and to what ends. The expert in education and technology Carlos Magro summarises and analyses these positions in This Time It Will Work, one of the chapters of the ODITE report.

Drawing on authors such as Neil Selwyn, Mariana Ferrarelli, and Gert Biesta, Magro questions several key trends in the educational use of AI. These include technological solutionism (Selwyn), the assumption that every educational problem can be solved with more technology, which reduces complex pedagogical problems to mere technical failures; false data-driven personalisation (Ferrarelli), which simulates adapting learning to each student but in fact segments them by behaviour patterns, ignoring their emotional and social dimensions; and the loss of educational purpose in the face of performance logic (Biesta), a critique of how education has shifted its focus from purpose, bonding, and holistic formation to almost exclusively measurable, efficient outputs.

From this perspective, AI is not neutral: it shapes relationships, defines priorities, and affects subjectivities. For this reason, Magro proposes, considering its emotional and social impact requires a return to a pedagogy of bonding, where students and teachers are not passive technology users but conscious protagonists of an education that does not sacrifice wellbeing for efficiency.

Impact on Students’ Emotional Wellbeing

The growing use of artificial intelligence in school settings is beginning to leave its mark not only on teaching methods but also on students’ emotional experience. While AI promises—and in some cases already provides—significant support in personalised learning, it also presents risks that directly affect students’ emotional and social wellbeing, especially when implemented without a critical and human pedagogical approach.

In this regard, the ODITE report identifies three main risks based on a meta-analysis of 28 articles: social isolation, because automated personalisation, rather than generating closeness, can reinforce solitary paths in which students interact more with platforms than with peers or teachers; cognitive dependency, because reliance on virtual assistants to solve problems can inhibit the development of metacognitive skills; and increased performance anxiety, because constant algorithmic measurement of performance exerts a silent but persistent pressure.

However, when used with pedagogical intent, AI can be a valuable tool for promoting wellbeing. In the same report, Rosa María de la Fuente presents examples where adaptive virtual tutors provide support to students with specific educational needs, allowing them to reinforce content at their own pace and style. These applications, in the hands of sensitive and trained teachers, become empathetic resources that accompany rather than replace, that open paths instead of imposing trajectories.

For artificial intelligence to genuinely contribute to students’ emotional wellbeing, it must be integrated within a pedagogy of care. That is, not only serving performance but also recognition, inclusion, and listening. Like all technology, its impact will depend on the hands (and heart) that implement it.

Emotional Risks for Teachers

The narrative surrounding the introduction of artificial intelligence in classrooms is often accompanied by a recurring promise: to free teachers from repetitive and administrative tasks so they can focus on what truly matters pedagogically. However, this promise does not match the experience of many of those who work in schools. Far from having their load lightened, many teachers report a growing digital overload that increases stress, bureaucracy, and the sense of professional burnout, especially in contexts of rapid implementation and insufficient support.

The ODITE 2025 Report collects numerous reflections on this matter. For instance, the aforementioned Carlos Magro states that digitalisation has contributed to a culture of efficiency that displaces pedagogical reflection and increases technical demands on teachers, generating “more connected schools, but less human ones”.

Another consequence highlighted by Magro is the loss of pedagogical agency. When decisions about what, how, and when to teach start being delegated to automated platforms, teachers may feel stripped of their role as critical guides in the educational process.

The challenge, therefore, is not merely technical but deeply professional. It is about preserving and strengthening the teaching profession. AI must not become a pre-packaged solution that weakens educators’ judgment. On the contrary, its integration should reinforce the role of teachers as key players in the educational system, ensuring they have the time, resources, and training needed to carry out their work with autonomy, purpose, and care.

How to Get It Right?

So far, we have explored the emotional and social risks of using artificial intelligence in education. But although these risks exist and must be acknowledged, it is equally true that this technology’s transformative capacity—when implemented from an ethical, human, and pedagogical perspective—is undeniable. Therefore, as we have always maintained at this Observatory, far from rejecting technology, what we must do is integrate it meaningfully to ensure it serves the holistic development of students and teachers.

What conditions ensure the conscious and healthy adoption of technology? The ODITE Report outlines several key factors, which we summarise below:

The first is teacher training, not only in the technical use of AI tools but in critical digital skills. It is urgent that teachers understand the functioning, biases, and limits of these technologies while also receiving support to care for their own emotional wellbeing. As authors like Liliana Arroyo and Miquel Àngel Prats point out, AI literacy must be accompanied by an ethical and emotional approach that restores the teacher’s role as a human mediator—not as an operator of automated systems.

Secondly, educational policies must be designed to protect pedagogical autonomy. Decision-making in the classroom must not be subordinated to the dictates of commercial platforms or metrics generated by opaque algorithms. Governments must define regulatory frameworks that prioritise data protection, equitable access, and teacher participation in the selection and implementation of AI systems.

Another essential condition is emotional and human support in digital transformation processes. The report stresses that the integration of AI-based technologies cannot be addressed merely as a technical transition but as a deep cultural change requiring pedagogical leadership, active listening, and management of teacher discomfort. Resistance to change, when ignored, can lead to rejection or burnout; when acknowledged, it becomes an opportunity for conscious innovation.

Likewise, the need to promote a progressive, guided, and critical use of AI by students is emphasised. Inspired by models such as Frey and Fisher's, the report proposes a sequence in which students move from assisted use to responsible autonomy, always with teacher mediation. This approach fosters critical thinking, avoids algorithmic dependency, and strengthens emotional self-regulation.

Finally, the report insists that the success of AI in education does not depend on the tool but on the pedagogical model guiding it. As Neus Lorenzo states, it is not enough to “put AI in the classroom”; it is necessary to design educational experiences with intention, with an ethical framework and appropriate technical support. The most inspiring experiences cited in the report share a common trait: they put people at the centre and understand technology as a means to strengthen—not replace—human connections.

A healthy and critical integration of artificial intelligence will only be possible if it is approached as a pedagogical project, not as a trend or an automatic solution. AI can contribute greatly, yes, but its educational value will always depend on who uses it, how, and for what purpose. And in that equation, professional judgement, care, and awareness will remain irreplaceable.

An AI That Teaches Us Humanity

The question that should guide us today is not whether to use artificial intelligence in education but how to use it without losing sight of those who teach and learn. AI is not just a tool—it is a technology with the power to reshape our relationships, our emotions, and our understanding of learning. Therefore, more than adapting schools to AI, we need to adapt AI to schools. And that implies a radical shift: placing care, listening, and wellbeing at the heart of innovation.

The challenge, then, is not technical but pedagogical and ethical. Can we teach with AI without reducing the educational experience to a sequence of data? Can we automate tasks without automating relationships? Are we willing to train teachers not only as system users but as “curators of humanity” in algorithmic times?

AI will be transformative to the extent that we know how to use it to amplify what is human—not replace it. Because a technology that does not enhance our capacity to feel, understand, and accompany can hardly be considered educational. In this dilemma, the role of teachers, school communities, and public policies will be decisive—not to decide whether AI enters the classroom but to ensure that, when it does, it does not sweep away what makes us deeply human.
