
Educational digitalization has advanced in just a few years at a pace that would have been difficult to imagine only a decade ago. Learning platforms, school management systems, data analytics, and artificial intelligence–based tools are now part of the everyday functioning of many educational institutions. And, let’s be honest, this has caught us largely unprepared.
In recent years, we have tried to catch up by focusing the debate on the pedagogical potential of these tools or on their usefulness for improving the management of education systems. However, as they become established, other important issues need to be put on the table. The expansion of digital platforms raises questions that are not only pedagogical or technical but also normative: for example, who controls the data generated by students, what information is collected about their educational trajectories, or how the algorithms that intervene in learning processes actually work.
These issues connect the digitalization of education with a broader debate: that of so-called digital rights, in which the school becomes a particularly sensitive arena of discussion.
The Charter of Digital Rights: a framework of reference, not a law
In 2021 the Spanish government presented the Charter of Digital Rights, a document that attempts to respond to a question that is becoming increasingly pressing in deeply digitalized societies: how to translate fundamental rights into the contemporary technological environment.
The Charter is not a law and does not create direct legal obligations. Its function is primarily programmatic. It is a framework of principles intended to guide future regulations, public policies, and institutional practices in a context shaped by the expansion of the internet, digital platforms, and artificial intelligence.
Its starting point is that existing rights, such as privacy, freedom of expression, equality, and protection against discrimination, which are enshrined in most democratic constitutions, must continue to apply when social life takes place in digital environments.
However, contemporary technologies introduce new situations that require these rights to be reinterpreted. The document addresses issues such as the protection of personal data, digital identity, transparency of automated systems, and the protection of minors online.
Some of these principles have a direct connection with education. The Charter highlights, for example, the need to protect minors’ privacy in digital environments and to ensure that the use of technology does not generate new forms of discrimination. It also raises the importance of transparency in algorithmic systems that may influence decisions relevant to individuals — an issue that becomes particularly important as tools based on artificial intelligence begin to be incorporated into educational processes.
The document also includes a specific section on the right to education in the digital environment. It emphasizes that the incorporation of technology should contribute to expanding educational opportunities rather than reproducing existing inequalities. It also stresses the importance of developing critical digital competences that allow individuals to understand how technologies function and what their social implications are.
Its legal scope is limited and its frame of reference is the Spanish context. But its importance lies elsewhere: in introducing a language that allows digitalization to be understood from a rights-based perspective. And that language, beyond national borders, connects with an increasingly broad international conversation about how to govern technological development.
A debate that is already global
In recent years, different countries and international organizations have begun to develop frameworks aimed at adapting the protection of rights to the digital environment. The objective is similar in all cases: to ensure that the expansion of technologies such as digital platforms, large-scale data processing, and artificial intelligence takes place within limits compatible with fundamental rights.
Europe has been one of the spaces where this debate has taken shape most clearly. A significant example is the Artificial Intelligence Act, the first comprehensive regulation on artificial intelligence adopted by the European Union.
The regulation introduces a risk-based approach and establishes stricter obligations for uses considered sensitive. Among them is the educational sector. Artificial intelligence systems that may influence relevant decisions — such as admission processes, evaluation, or monitoring of academic performance — may be classified as high-risk applications and be subject to specific requirements regarding transparency, human oversight, and institutional accountability.
In parallel, multilateral organizations have produced recommendations intended to guide the ethical development of these technologies. The UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted by the organization’s Member States, emphasizes the need to ensure that the development of artificial intelligence respects human rights, promotes equity, and avoids new forms of exclusion.
These frameworks share the recognition that digital technologies have become central infrastructures of social life. The internet, digital platforms, and artificial intelligence are no longer marginal tools but systems that influence relevant decisions affecting millions of people. In that context, relying solely on technological innovation or on corporate self-regulation is insufficient. Institutional frameworks capable of accompanying this development are needed.
What these rights mean inside the school
The debate about digital rights may seem abstract until one looks at what is happening in classrooms. Digitalization has introduced something new into education: the massive production of data about learning. Every exercise completed on a platform, every activity finished, every access to content leaves a record. Learning, which for centuries was largely invisible, is beginning to become information.
This has obvious pedagogical advantages. Analyzing this data allows educators to detect difficulties before they turn into academic failure, adjust exercises to each student's pace, or monitor the evolution of a class with greater precision. Learning analytics relies precisely on this capacity to observe what was previously barely visible.
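The kind of early-warning analysis that learning analytics makes possible can be illustrated with a minimal sketch. Everything here is hypothetical: the data structure, the field names, and the 60% threshold are illustrative assumptions, not drawn from any real platform.

```python
# Minimal, illustrative sketch of a learning-analytics early-warning check.
# The per-student score records and the 0.6 threshold are hypothetical
# assumptions for the example, not taken from any real system.

def flag_struggling_students(records, threshold=0.6):
    """Given per-student lists of exercise scores (0.0-1.0),
    return the ids of students whose average falls below the threshold."""
    flagged = []
    for student_id, scores in records.items():
        if scores and sum(scores) / len(scores) < threshold:
            flagged.append(student_id)
    return flagged

# Example: two students, one below the illustrative threshold.
records = {"s01": [0.9, 0.8, 0.85], "s02": [0.4, 0.5, 0.45]}
print(flag_struggling_students(records))  # → ['s02']
```

Note that even a sketch this simple processes identifiable, per-student records. That is precisely why the questions raised below, about who can access such data and under what conditions, are not marginal.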
But this same transformation raises a new question for schools: who controls that information?
Educational data are not simple technical records. They describe learning trajectories, mistakes, progress, and study habits. In many education systems they are stored on platforms managed by technology companies operating at a global scale. Deciding who can access them, how they are used, or how long they are kept is not a secondary issue. It is a question of power.
The second transformation is automation. Many educational tools already incorporate algorithms that recommend content, classify responses, or detect learning patterns. In some cases they are beginning to intervene in relevant decisions within the educational process. When this happens, transparency ceases to be an ethical aspiration and becomes a basic condition for trust.
There is also an important difference with other digital environments: in schools the users are minors. A student cannot decide whether or not to use their school’s platform. They cannot negotiate terms or choose a provider. For that reason, when an educational institution adopts a technology, it is not only introducing a pedagogical tool. It is also defining how data are managed, how certain decisions are made, and what rights accompany those who learn within that digital environment.
Thinking about digital rights within schools therefore means expanding the focus of the debate on educational technology. It is not only about evaluating whether a tool improves learning or facilitates the management of the system. It also requires considering what guarantees exist to protect students’ rights and what responsibilities institutions assume when they incorporate these technologies into the everyday functioning of education.
Digital rights in vulnerable educational contexts
When the debate on digital rights is transferred to vulnerable educational contexts, the starting point changes. In many countries, the primary concern is not the regulation of algorithms or advanced data protection, but something more basic: access to digital infrastructure itself. Without stable connectivity, available devices, or accessible platforms, the discussion about rights in the digital environment risks remaining abstract.
However, precisely for that reason, the issue of digital rights takes on a particular dimension in these contexts. When education systems depend heavily on technological solutions provided by external actors such as companies, foundations, or international initiatives, decisions about platforms, data, or educational infrastructure are not always made within the education system itself. This can limit the ability of states and school institutions to define how these tools are used and under what conditions.
A second challenge is added to this: the institutional capacity to regulate and supervise the use of educational technology. The governance of educational data, the evaluation of automated systems, or the definition of transparency standards require technical resources, regulatory frameworks, and specialized teams that many education systems are still in the process of building.
In this context, the debate about digital rights cannot be limited to protection against potential technological abuses. It must also include the right to participate in digital environments under equitable conditions and with clear institutional guarantees. The expansion of educational technology can broaden learning opportunities, but only if it is accompanied by policies that ensure that this transformation strengthens — rather than weakens — the ability of education systems to protect students’ rights.
Educating also in digital rights
Educational digitalization will continue to advance in the coming years. Learning platforms, data analytics systems, and tools based on artificial intelligence will continue to expand their presence in classrooms and in the management of education systems.
In that context, the debate on digital rights introduces a necessary dimension into the conversation about educational technology. Documents such as the Charter of Digital Rights, together with emerging regulations such as the Artificial Intelligence Act and other international frameworks, reflect a growing concern to ensure that digital transformation develops within limits compatible with fundamental rights.
Education occupies a particular place in this discussion. Not only because it relies on technologies that manage sensitive information about millions of students, who have no real choice in the matter, but also because it has the responsibility to prepare new generations to function in environments increasingly mediated by digital systems.
Thinking about digital rights in education therefore implies a double task. On the one hand, establishing clear institutional safeguards for the use of technology in education systems: data protection, transparency in automated systems, and appropriate oversight mechanisms. On the other, incorporating these debates into the educational process itself, so that students understand how the digital environments in which they participate function and what rights they have within them.
If education aims to form citizens capable of participating in democratic societies, it must also prepare them to exercise their rights in the space where an increasing part of social life is now organized: the digital environment.