No Innovation Without Safeguards: AI in the Classroom

On a classroom desk there is an open laptop. On the screen, a text that has been corrected in seconds. On one side, a simplified rubric. On the other, a summary that did not exist five minutes ago. Everything seems to run smoothly. The class continues as usual. What is not visible is the path taken by that data, the conditions under which it was processed, and the rules that allow (or do not allow) that exchange to be legitimate. The innovation is visible. The safeguards, less so.

Data and education

A teacher opens her laptop before class begins and asks an artificial intelligence assistant to reformulate a rubric so that it is understandable for twelve-year-old students. The system responds in seconds. She reviews the text, makes a small adjustment and projects it on the screen. Each of those interactions produces data, activates models trained on massive amounts of information, and runs within technological infrastructures that the school does not fully control.

With artificial intelligence now firmly established and partially normalized in classrooms, we must begin to ask under what institutional conditions the use of this technology is legitimate. That question, among others, is one of the focal points of Entre aulas y algoritmos. Inteligencia artificial, equidad y derechos digitales en educación (Between Classrooms and Algorithms: Artificial Intelligence, Equity and Digital Rights in Education), by Manuel José Ruiz García, an analysis of how generative models work, what opportunities they open for teaching, and what their limits are.

In this article we focus on those limits. Because, as this author argues, educational innovation depends less on the power of the tools than on the safeguards that accompany their use.

Digital rights as a condition of possibility

When artificial intelligence in education is discussed, the conversation usually focuses on what the tools can do: generate exercises, summarize texts, provide automatic feedback or analyze large volumes of information about learning. But this technical perspective leaves aside a tremendously important question: what happens to the data that make these capabilities possible.

Educational digitalization has generated an enormous amount of information about school activity. Learning platforms, education management systems and digital applications record exercise results, connection times, patterns of interaction with content, or navigation paths within learning materials. The introduction of artificial intelligence systems multiplies the potential value of those data, but it also amplifies the risks associated with their use.

Learning analytics promise to detect patterns that are invisible to human observation, identify students at risk of dropping out, or personalize educational pathways. However, that technical capacity does not eliminate the need to justify what data are collected and for what purpose. Is everything that can be measured pedagogically relevant?

The indiscriminate accumulation of information can generate educational surveillance environments that transform the school experience into something different from what it is meant to be. The problem is not only individual privacy. It also affects the institutional climate. An education system that normalizes constant monitoring risks redefining the relationship between students, teachers and technology.

This is where Manuel Ruiz returns to one of the central principles of European data protection regulation: data minimization, which—put very simply—means collecting only the information necessary to fulfill a specific purpose.
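To make the principle concrete, here is a minimal sketch (an illustration, not taken from the book) of a collection layer that retains only the fields declared for an explicit, pre-registered purpose. All purposes and field names below are invented for the example.

```python
# Minimal sketch of data minimization: events are stripped down to the
# fields declared for a specific, pre-registered purpose before storage.
# Purposes and field names are illustrative assumptions.

ALLOWED_FIELDS = {
    # purpose -> the only fields that may be retained for it
    "formative_feedback": {"student_id", "exercise_id", "answer", "score"},
    "attendance": {"student_id", "session_id", "timestamp"},
}

def minimize(event: dict, purpose: str) -> dict:
    """Keep only the fields justified by the stated purpose; drop the rest."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No registered purpose: {purpose!r}")
    return {k: v for k, v in event.items() if k in allowed}

raw_event = {
    "student_id": "s-042",
    "exercise_id": "ex-7",
    "answer": "12",
    "score": 1.0,
    "ip_address": "10.0.3.14",    # easy to record, not needed for feedback
    "mouse_trajectory": [...],    # measurable, not pedagogically relevant
}

stored = minimize(raw_event, "formative_feedback")
# stored now contains only student_id, exercise_id, answer and score
```

Anything not on the whitelist, however easy to capture, is simply never stored.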

Another equally important principle must be added: meaningful human oversight. Decisions with significant impact on a student’s trajectory (academic assessments, educational recommendations or identification of learning difficulties) should not depend exclusively on automated systems.
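One hedged way to picture meaningful human oversight in software terms: outputs tied to high-impact decisions are treated as proposals that a teacher must confirm or reject, never as decisions applied automatically. The decision types and data structure below are assumptions made for the sketch.

```python
# Sketch of "meaningful human oversight": the model's output for any
# high-impact decision is queued as a proposal that a teacher must accept
# or reject; it is never applied automatically.

from dataclasses import dataclass

HIGH_IMPACT = {
    "academic_assessment",
    "educational_recommendation",
    "learning_difficulty_flag",
}

@dataclass
class Proposal:
    decision_type: str
    student_id: str
    model_output: str
    status: str = "pending_review"   # no effect until a human signs off
    teacher_note: str = ""

review_queue: list[Proposal] = []

def handle_model_output(decision_type: str, student_id: str, output: str):
    """Route high-impact outputs to human review; pass low-stakes ones through."""
    if decision_type in HIGH_IMPACT:
        review_queue.append(Proposal(decision_type, student_id, output))
        return None                  # nothing happens until a teacher decides
    return output

def teacher_review(proposal: Proposal, accept: bool, note: str = "") -> None:
    """The final decision is the teacher's, and it is recorded as such."""
    proposal.status = "accepted" if accept else "rejected"
    proposal.teacher_note = note
```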

This point does not respond only to an ethical concern. It also has an epistemological dimension. Artificial intelligence systems operate by detecting patterns in past data. Educational work, by contrast, involves interpreting individual trajectories, social contexts and development potential that are not always reflected in the available data.

For that reason, transparency becomes a third fundamental safeguard. Knowing what data are used, how they are processed and under what criteria recommendations are generated is not only a regulatory requirement. It is also a condition for teachers, students and families to understand the decisions that affect the educational process.
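As a sketch of what that transparency could look like operationally (again an illustration, not a prescription from the book), every automated recommendation could carry a record answering three questions: what data were consulted, which system produced the output, and what criterion was applied. All field names here are assumptions.

```python
# Sketch of a transparency record attached to every automated
# recommendation, so teachers, students and families can see what data
# were used and under what criteria.

import json
from datetime import datetime, timezone

def explain_recommendation(student_id, inputs_used, model_version, criteria):
    """Return a reviewable record: what data, which system, which criterion."""
    return {
        "student_id": student_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "inputs_used": inputs_used,       # the exact fields consulted
        "model_version": model_version,   # which system produced the output
        "decision_criteria": criteria,    # the rule or threshold applied
    }

record = explain_recommendation(
    student_id="s-042",
    inputs_used=["last_5_exercise_scores", "completed_units"],
    model_version="reading-support-model v1.3 (illustrative)",
    criteria="flag for extra reading support if mean score < 0.6",
)
print(json.dumps(record, indent=2))
```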

In this context, the publication argues, digital rights become a structural condition of educational innovation: without safeguards regarding data, human oversight and transparency, educational innovation loses institutional legitimacy. Technologies may improve the efficiency of certain processes, but they can hardly be sustained within public education systems if they do not generate social trust.

Infrastructure is not neutral

Let us now shift our focus: from applications—the visible face of artificial intelligence—to the technological infrastructure that makes them possible.

Current generative systems are not isolated tools. They are part of a much broader technological ecosystem that includes models trained on enormous volumes of data, distributed processing centers and platforms that operate at a global scale.

Most of the artificial intelligence models currently used in education have been developed by large technology companies or by institutions with exceptional computational capacity. Educational institutions, by contrast, operate in local contexts and under specific public responsibilities. When they adopt tools based on those models, they become dependent on systems whose functioning and evolution they do not control.

This dependency has several implications. The first is technical opacity: the training of language models or recommendation systems typically relies on massive datasets whose composition is rarely fully transparent. The second is institutional asymmetry: decisions about updates, access policies or conditions of use are made in settings that do not answer directly to educational priorities.

None of this implies that these tools should not be used. It means that their adoption requires reflection on technological governance: being able to answer questions such as who defines the criteria for adoption, what conditions technological providers must meet, and how the risks associated with rapidly evolving systems are evaluated.

At this point, Manuel Ruiz refers to a line of analysis that is beginning to gain relevance in international debate: the development of more controllable technological alternatives. Faced with exclusive dependence on large generative models hosted in the cloud, some education systems are beginning to experiment with what the author calls forms of proximity AI: smaller and more specialized models capable of operating within institutional infrastructures or even on local devices.

These systems do not have the power of large commercial models, but they offer important advantages in terms of data control, operational transparency and pedagogical adaptability. By reducing the distance between the tool and the institution that uses it, they also expand the decision-making margin of education systems regarding how data are processed and under what conditions they are used.
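A minimal sketch of what proximity AI could look like in practice, assuming a small model served from an endpoint inside the school's own network. The hostname, port and payload shape are invented for the example, and no real provider's API is implied.

```python
# Sketch of the "proximity AI" idea: instead of sending student text to an
# external cloud API, the school calls a small model hosted on its own
# infrastructure. Endpoint and payload format are assumptions.

import json
import urllib.request

LOCAL_ENDPOINT = "http://inference.school.local:8080/generate"  # on-premises

def rewrite_rubric(rubric_text: str, audience: str = "12-year-olds") -> str:
    """Ask a locally hosted model to simplify a rubric; the text never
    leaves the institution's network."""
    payload = json.dumps({
        "prompt": f"Rewrite this rubric so {audience} can understand it:\n"
                  f"{rubric_text}",
        "max_tokens": 300,
    }).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]
```

Because the endpoint sits inside the institution's own infrastructure, questions such as what the model logs or when it is updated become local governance decisions rather than clauses in a contract with an external platform.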

This perspective introduces a concept that the book calls pedagogical and data sovereignty, which essentially consists of avoiding absolute dependencies on large technological platforms that limit the capacity of education systems to define their own rules of operation.

Moreover, in contexts with limited connectivity or scarce technological resources, solutions based on local infrastructures may be more sustainable than permanent dependence on external services. Tools capable of operating offline, models trained for specific educational tasks, or systems that process information within the institution itself can offer viable alternatives where digital infrastructure is less robust.

From this perspective, the discussion about artificial intelligence in education is not only about which tools to use; it is about what technological architecture sustains the education system and who has the capacity to govern it.

Innovating under conditions

Technology does not replace educational institutions, but neither is it neutral with respect to them. Every system incorporates assumptions about how information is organized, how decisions are made and who has access to data. When these tools are integrated into the daily functioning of schools, those rules become part of the educational environment itself. But what conditions must be met for this integration to be compatible with the principles that sustain public education?

The history of educational innovation shows that the most durable technological transformations have not been the fastest ones, but those that have been integrated within solid institutional frameworks. Educational radio, school computers or digital platforms did not transform education by themselves. They did so when they became integrated into pedagogical practices and institutional regulations capable of guiding their use.

Artificial intelligence introduces an additional challenge: its ability to operate on large volumes of data and generate automated recommendations expands the reach of technology within the education system. This makes it even more necessary to clearly define the limits within which these tools can be used.

Far from restricting innovation, these limits are its condition of possibility. They protect students from improper uses of their data, preserve the professional responsibility of teachers and ensure that relevant educational decisions remain subject to understandable and reviewable criteria.

Ruiz García proposes understanding these limits on several levels. First, limits on the automation of educational decisions: artificial intelligence systems may assist processes of assessment or learning analysis, but they cannot replace pedagogical judgment. Second, limits on data collection. Not everything that can be recorded on an educational platform constitutes pedagogically relevant information. Finally, limits derived from transparency and technological governance: when a tool intervenes in significant educational processes, its functioning must be understandable and its decisions reviewable.

A window of opportunity

Educational institutions exist precisely to filter, organize and give social meaning to technological change. School is not only a place where tools are used. It is the space where it is decided which tools deserve public trust and under what conditions they may become part of learning.

From this perspective, the challenge currently posed by artificial intelligence is to define the institutional criteria that will make its integration possible without altering the principles that justify the existence of the education system: equity, public responsibility and the professional autonomy of teachers.

If the history of educational digitalization has taught us anything, it is that technology advances faster than the institutions that must govern it. Artificial intelligence amplifies that gap. But the speed with which generative systems have entered the public debate has had an unexpected effect: it has made visible a technological infrastructure that for years had remained in the background. That visibility creates a rare opportunity in the history of educational technology: to discuss the rules governing its use before those infrastructures become naturalized.
