The expansion of generative artificial intelligence has radically transformed perspectives on contemporary education. Applications capable of writing essays, producing illustrations, or creating musical pieces have sparked mixed reactions among teachers, students, and the general public.
While some celebrate the possibility of automating routine tasks, such as marking exams or designing lesson plans, others express concern over the potential erosion of the teaching profession and the loss of intellectual autonomy.
Mary Burns, a renowned specialist in educational technology, addresses this issue in her text Eyes Wide Open: What We Lose from Generative Artificial Intelligence in Education. Her analysis urges us to reflect on what we are setting aside in our constant pursuit of technological solutions.
What AI Gives Us
For many teaching professionals, the promise of saving time when preparing exams or marking assessments can be highly appealing. A literature teacher, for example, might use an automated system to suggest questions about a novel. Similarly, a maths teacher could review students’ completed exercises without dedicating long hours to repetitive tasks.
According to AI advocates, this would allow teachers more time to focus on tasks that require greater human engagement, such as individual support, emotional guidance, or identifying specific needs. Would it not be reasonable to make the most of these advantages? Those who support this approach highlight the potential to strengthen teacher-student relationships, as the time previously spent on repetitive activities could now be redirected towards fostering motivation and confidence.
Furthermore, Burns argues that AI can serve as a creative source of inspiration for education professionals. Imagine a writing workshop where a computer programme presents different opening ideas for a story. Instead of being restricted to a single model, the teacher would have the opportunity to enrich these ideas and promote exercises that integrate reflection and written production. In this sense, technological tools could act as a catalyst for students and teachers to explore their own strategies and talents. Rather than limiting imagination, they could serve as a starting point for debates, analysis, and experimentation.
Nevertheless, the article emphasises that this contribution is only positive if the teacher maintains an active role, remaining aware that aspects such as empathy and genuine communication cannot be replicated by any algorithm.
What AI Takes Away
Teacher Authority and Agency
Burns warns that the indiscriminate use of generative systems can lead to a loss of teacher authority and agency. When lesson plans are generated almost automatically, there is a risk that teachers will be left only to refine superficial details. An example of this could be a school that adopts software to structure its academic programmes, resulting in educators merely overseeing the initial proposal.
This weakens the teacher’s ability to design activities based on the characteristics of their student community and the unique needs of each class. In this case, AI diminishes the value of pedagogical expertise and blurs the essential connection between the teacher and their subject, reducing the educator’s role to that of a mere intermediary between the machine and the student.
Decline in Critical Thinking and Creativity
Both students and teachers could become overly reliant on AI for content production, sidelining tasks that encourage deep analysis and independent thought. Imagine a student who, instead of reading an article and writing their own summary, relies on a text generator to do it in seconds. This behaviour would limit their exposure to the complexity of language, their ability to discern nuances in meaning, and their capacity to develop arguments. The same effect could occur among teachers who, by delegating the creation of study guides or exercises to software, gradually lose their ability to reflect and reason – and ultimately, their ability to teach effectively.
Technological Dependence and the Dehumanisation of Classrooms
The author also highlights the risks of technological dependence and the potential dehumanisation of classrooms. As computers take on an increasingly dominant role, there is a risk that both students and teachers will place excessive trust in these systems.
Faced with an academic challenge, rather than posing new questions or engaging in discussion, there may be a tendency to rely solely on an AI assistant as the primary source of information. This habit not only undermines intellectual autonomy but also weakens direct interaction and the empathy that emerges when people engage in discussion. Burns emphasises that education is much more than the mere transmission of data; it encompasses values, attitudes, and a sense of belonging that can only develop through daily human interaction.
Impact on the Student-Teacher Relationship
A fundamental concern raised by Mary Burns is the impact on the relationship between students and teachers. A rich educational experience involves dialogue that extends beyond the simple transfer of knowledge. A teacher perceives a student’s mood, responds to their concerns in a personalised way, and provides guidance based on experience and direct observation. If artificial intelligence were to take over this role, the warmth that arises from human interaction would be lost.
We might imagine a programme offering automated support to a student struggling with concentration. While it may provide tailored suggestions, it cannot empathise in the same way as a teacher who listens attentively and patiently. This emotional exchange is a cornerstone in fostering motivation and perseverance—essential traits for long-term learning. Its absence could have consequences for students’ emotional well-being and social development.
What Happens to the Quality of Knowledge?
Burns also raises concerns about the quality of knowledge generated by AI. Not all algorithms guarantee accurate information, as they may rely on incomplete or even biased data. As a result, teachers are obliged to review and verify every piece of content suggested by AI to ensure its accuracy. If this verification is not carried out thoroughly, we risk spreading errors or distorted interpretations.
The situation worsens when the educational community begins to humanise the software, believing that its responses come from an infallible source. Losing the habit of scrutinising the origins of information compromises the learning process. This leads to a fundamental question: to what extent are we willing to relinquish the critical judgement that takes so much effort to develop in schools and universities?
Erosion of Essential Skills
Added to this concern is the erosion of essential skills, such as deep reading and rigorous writing. According to Burns, when software generates reports or essays, students lose the opportunity to practise structuring their ideas, selecting precise vocabulary, and constructing coherent arguments. These skills are not confined to academia; they shape how individuals understand and interact with the world.
Similarly, teachers may also be affected if they stop exercising their ability to create teaching materials with their own distinct style. In the long run, the personal identity of each educator—reflected in their teaching approach—could be homogenised by the patterns deemed most appropriate by an AI system. Such a scenario would not only weaken the diversity of educational perspectives but also impoverish the learning experience for future generations.
Ethical Dilemmas
Finally, ethical dilemmas and concerns over data privacy emerge as pressing issues. It is true that certain AI programmes require large amounts of student data to tailor responses to their needs. But to what extent is it wise to share personal and academic data with tech corporations? The consequences of a data breach or misuse could be severe, ranging from the exposure of grades to the manipulation of sensitive information.
Burns also highlights the growing divide between those with access to these technologies and those without. Such disparities exacerbate existing inequalities, which are already stark in many regions. In the face of these uncertainties, responsibility does not rest solely with teachers but also with educational authorities and regulatory bodies tasked with safeguarding information and ensuring equitable access to technology.
Proposals and Reflections for the Future
In her article, Burns highlights the importance of maintaining a vigilant stance. The goal is not to reject artificial intelligence outright but to use it in a balanced and prudent manner. To this end, she offers a series of recommendations, summarised below:
Teacher Training in Critical Aspects of AI
A teacher who understands how these algorithms function will be better equipped to assess the value of AI-generated responses and resources. At the same time, they will be able to teach students how to identify unreliable information and critically evaluate its sources. Additionally, Burns advocates for the use of tools that detect machine-generated content, making plagiarism more difficult. This strategy safeguards the authenticity of students’ work while also fostering academic responsibility and integrity. Along the same lines, she encourages pedagogical practices that strengthen argumentation skills, creative problem-solving, and independent thinking.
Designing New Assessment Methods
Assessments must also be reconsidered within this new paradigm. If an AI system can effortlessly solve an exam or even generate answers for a student, rethinking the structure of evaluations becomes urgent. Burns suggests designing assessment methods that promote original thinking, the pursuit of unique solutions, and face-to-face interaction. For instance, projects could be developed where the teacher plays a decisive role, requiring students to gather data through hands-on experiments or to present and defend their conclusions before a panel. This approach does not eliminate AI’s contributions but rather relegates them to a secondary role, allowing students to exercise their capacity to reason, create, and debate.
Clear Regulations
Another key aspect highlighted by Burns is the need for clear regulations on the use of AI in educational settings. Institutional policies cannot remain on the sidelines, as the introduction of these technologies raises questions about copyright, commercial data use, and accountability for the accuracy of information provided to students. Burns argues that authorities must establish strict guidelines while allowing sufficient flexibility to adapt to different contexts. A rural school with limited connectivity does not face the same circumstances as a large city with constant access to advanced devices. Therefore, regulations should consider this diversity and ensure the protection of all stakeholders, particularly those most vulnerable to the digital divide.
What Do We Want to Protect and Nurture in Our Schools?
As we examine Mary Burns’ analysis, it becomes clear that generative artificial intelligence presents both exciting possibilities and significant challenges. The author takes a cautious stance, urging teachers, students, and society as a whole to reflect: what do we want to protect and nurture in our schools and universities? The development of autonomy, the acquisition of strong communication skills, and meaningful human interaction require a commitment that no technology can fully replace.
Perhaps the most sensible path forward is to make room for AI without abandoning close guidance and critical thinking. Certain applications can help reduce bureaucratic burdens or spark creativity, as long as we remain mindful of their impact on the learning process. For this reason, it is essential to continue the conversation on how to balance the benefits of automation with the need to keep teachers and students at the heart of education.