Before incorporating new technology, it is worth asking a fundamental question: why do we teach? Artificial intelligence has simply held up a mirror to that question. Four Spanish schools (Aula Escola Europea, Nazaret Oporto, Santa Gema Galgani, and Institució Igualada) decided to take some time to think about it. With the support of EduCaixa, they devoted an entire school year to talking, listening, and debating how to coexist with this new presence in the classroom.
The result was a jointly developed guide: Designing a protocol on AI in schools. A resource for those who believe that education requires time, listening, and reflection. Because even (and especially) with technology, teaching requires calm thinking.
An unusual invitation
During the 2023–2024 academic year, four Spanish schools accepted an unusual invitation: to stop and think. They did so with the help of EduCaixa, the educational program of the “la Caixa” Foundation, which proposed an experiment that is rare in these fast-paced times: before deciding how to use artificial intelligence in schools, ask why.
The goal was to build their own protocol based on internal reflection. To do this, each school participated in sessions with experts in ethics, law, science, and leadership, and worked with their own teaching teams to develop a shared vision.
Throughout the process, each school found its own way of looking at AI. Aula Escola Europea did so from the perspective of philosophy and the ethics of knowledge. Nazaret Oporto approached it from the perspective of teacher training and intergenerational dialogue. Santa Gema Galgani opted for shared pedagogical leadership. Institució Igualada chose to give prominence to the students. Four different paths towards the same question: how can we coexist with technology without losing our educational soul?
The result of this support was, in the words of the teachers themselves, an ongoing conversation. A guide to continue thinking together. An open and replicable model to help schools think more critically.
Why a protocol (and not a decalogue or user guide)?
When artificial intelligence began to be part of school conversations, many voices proposed creating a decalogue: a list of rules, warnings, and recommendations to keep the issue under control. But the schools supported by EduCaixa preferred something else: thinking together before deciding.
They wanted to build agreements through dialogue. And that’s where the idea of developing a protocol came from. A tool that doesn’t impose rules, but opens up a space for shared reflection. It is developed as a group, reviewed, and adapted. Its value lies not in the document itself, but in the process that makes it possible.
The development of a protocol requires listening: to teachers who fear losing their jobs, to students who experiment without fear, to families who wonder what place remains for human intuition. Only from that listening can agreement be reached.
This idea of the protocol as ethical and pedagogical governance also ties in with UNESCO's principles on the responsible use of artificial intelligence and anticipates the spirit of the European Union's AI Act, whose main obligations will apply from 2026 and which will require institutions to be flexible, dialogical, and humane.
Before writing a single line, the centers learned to ask themselves questions: What role do we want AI to play in our school? What values should guide its use? What risks do we want to avoid, and what opportunities do we want to promote?
Five steps to building your own protocol
The four schools supported by EduCaixa followed a gradual process guided, as noted above, by reflection and dialogue. The questions they had learned to ask themselves opened a dialogue between teachers, students, families, and experts. The process unfolded in five consecutive steps (explore, listen, define, prototype, and evaluate), which we explain below:
Step 1. Explore: understand what AI is (and what it is not)
The first step was to learn. Before deciding how to use it, the teaching teams wanted to understand how artificial intelligence works, its limits, and its ethical, legal, and environmental implications. In doing so, they discovered an essential idea: AI does not think or understand, but rather predicts patterns. This difference helped to reduce fears and unrealistic expectations.
Sessions with specialists offered complementary perspectives from philosophy, law, science, and education, and helped them understand that technology is not only analyzed from a technical standpoint, but also from a cultural one.
Step 2. Listen: give a voice to the educational community
After looking outward, the centers looked inward. They consulted, interviewed, and debated. In working groups, students, teachers, and families shared their concerns and expectations. What opportunities do we see? What fears do we have? What do we really know? The answers reflected different points of view. Some students were fascinated; others feared losing control of their learning. Teachers talked about time, ethics, and the pressure to be “up to date.”
From this mosaic emerged a more realistic view of what it means to live with AI. No one had all the answers, but everyone had something to say. And there, among different voices, a more conscious community began to form. The result of this second step was a more collective and transparent view of AI in schools.
Step 3. Define: the “why” before the “how”
The next step was to formulate a common intention. It was not enough to talk about AI: they had to decide what to use it for. Each center defined its purpose and guiding values. Some focused on creativity: using technology to imagine more, not to do less. Others opted for personalized learning or critical thinking.
The exercise forced them to avoid the trap of “how”: which platform, which application, which tool. EduCaixa insisted on a phrase that ended up becoming a mantra: “First the why, then the how.”
That order made all the difference. Putting meaning before the instrument meant that the protocol was not reduced to a list of uses, but became a pedagogical statement.
Step 4. Prototype: write the first draft
With the objectives defined, it was time to write. Each school drew up an initial draft of its protocol: a document setting out objectives, limits, responsibilities, and a training plan.
The process was as important as the result. At the Santa Gema Galgani school, for example, an interdisciplinary team was created to support teachers in integrating AI and evaluating pedagogical experiments. Other schools included sections on privacy, authorship, evaluation, and digital ethics, aware that teaching is also caring.
The draft was not a final product, but a first version open to change: a living text that would be refined through practice.
Step 5. Evaluate and adjust: keep it alive
The last step was learning never to consider the protocol finished. The schools understood that it had to be reviewed each school year, corrected, and adapted to new realities.
Some centers organized internal seminars to share progress; others proposed forums and workshops open to families and students. This ongoing review turned the protocol into a useful tool for continuing to learn and adapt to changes. More than a closed document, it became a shared practice.
At the end of the process, the teachers agreed on the essential point: what was valuable was not the document, but the conversation that made it possible. A conversation that remains open, because teaching, with or without algorithms, has always been just that: a way of thinking together.
What the pilot schools learned
When the course ended, the participating teams agreed on something that was not in the program: the most valuable learning was not about artificial intelligence, but about education.
The first discovery was clear and decisive: before using technology, you have to understand it. Training was key to losing fear, gaining confidence, and making informed decisions. AI does not eliminate the need for reflection; it makes it even more necessary.
The second lesson had to do with leadership. No protocol can be sustained if it depends on a single person or a slogan. The centers understood that AI governance had to be shared and thoughtful, a collaborative effort where management accompanies and listens, and where each teacher takes an active role in decision-making.
They also understood that AI does not jeopardize the school’s identity. On the contrary, it can help reaffirm its humanistic mission. When discussing algorithms, the schools rediscovered old questions: what does it mean to learn, how is judgment constructed, what is the value of creativity or error?
And, perhaps most importantly, there is no single model. Each school found its own rhythm, its own language, and its own way of looking at technology. What united them was conversation: that common space where they could think, disagree, and start again. Talking about AI was, in fact, a way to strengthen the pedagogical culture.
Change begins at school
The arrival of artificial intelligence has forced schools to rethink a question that is vital to their existence: what does it mean to teach and learn? Therefore, beyond being a risk or a fad, AI is an opportunity for reflection.
The protocols that emerged with the support of EduCaixa seek to promote pedagogical autonomy. They are tools for collective reflection, designed to enable educational communities to take an active role in building their own framework for action.
Every school that decides to open this debate is already exercising a form of educational leadership. And although no law or model will offer perfect solutions, the important thing is not to stand still and wait for others to define the future of education.