The Expansion of Educational AI

In recent months, the integration of generative artificial intelligence tools into schools and education systems has accelerated significantly, moving to the center of public policy agendas. Governments across continents are signing agreements with major technology companies to introduce chatbots, virtual tutors, and AI training programs at scale.
A recent New York Times article captures this acceleration well. Beyond specific cases, a common pattern emerges: rapid deployment, national or regional programs, public–private partnerships, and an optimistic narrative about the potential of these tools to modernize education. AI is associated with familiar goals—efficiency, personalized learning, and reduced teacher workload—as well as with a future-oriented ambition that is difficult to ignore: preparing students for an increasingly automated economy.
This narrative resonates because it connects with real challenges. Education systems face resource shortages, overcrowded classrooms, and overburdened teachers. In this context, AI is presented as a pragmatic solution capable of generating materials, providing immediate feedback, and adapting content to each student’s pace at relatively low cost and with high scalability.
However, the almost exclusive focus on technological deployment leaves a deeper issue in the background. In many contexts, public discussion no longer centers on whether artificial intelligence should enter schools, but rather on how to implement it. This shift sidelines a key question: what kind of learning AI encounters once it is integrated into the classroom, which pedagogical practices it reinforces, which it displaces, and which capacities it gradually renders expendable. AI does not arrive in a vacuum; it enters education systems shaped by existing rules, incentives, and vulnerabilities. And that is where the real debate begins.
The Learning Already in Retreat
Long before the arrival of generative artificial intelligence, school learning had already been increasingly oriented toward outcomes. The process itself—how students think, test ideas, make mistakes, and revise—gradually lost prominence compared to the production of correct and assessable results. Expanding curricula, evaluation systems centered on measurable indicators, and constant pressure to meet targets have narrowed the space for slow thinking and intellectual exploration.
As a result, what happens between the beginning of a task and its final outcome has become progressively invisible. Drafts that do not count, mistakes that are penalized rather than corrected, and ideas that require time to mature or be reformulated leave little trace in systems designed to certify products rather than processes. Yet it is precisely within this intermediate journey that deep understanding, independent judgment, and autonomous learning are developed.
In practice, schools had long rewarded correct answers more than the reasoning behind them. The expansion of digital platforms, rigid rubrics, and automated assessment systems reinforced this trend by prioritizing what can be quantified over the cognitive work that resists measurement. Over time, this logic consolidated as a way of managing complex education systems through efficiency and comparability.
As a consequence, artificial intelligence is being introduced into a context where learning was already strongly oriented toward final results. Within this framework, a technology capable of generating plausible, well-structured, and linguistically correct responses integrates easily. In many classroom practices, tasks are designed to reach solutions rather than explore problems, and assessments rarely allow teachers to follow students’ reasoning. AI did not introduce this logic, but it intensifies it by offering an effective shortcut within a system already inclined toward such outcomes.
Delegating Thought: What Changes When the Answer Comes First
Generative artificial intelligence introduces a significant novelty into the educational experience: immediate access to complete, coherent, and convincing answers. In certain contexts, this can be useful as support—for example, to clarify concepts or generate examples—but it also alters the traditional sequence of learning. For the first time, the answer may appear before the student has fully formulated the question.
This shift introduces a specific risk: early cognitive delegation. When access to a convincing answer is immediate, part of the intermediate mental effort—analyzing, comparing, doubting, experimenting—may no longer be activated. The issue is not simply copying, but relying on reasoning that one has not constructed oneself. The result may be correct, but the process becomes externalized.
Empirical evidence on these effects remains limited but is beginning to provide consistent signals that call for caution. Recent studies, including research conducted by teams at Microsoft and Carnegie Mellon University, suggest that frequent chatbot use may reduce the activation of critical thinking in certain tasks, particularly when used as a substitute rather than a complement to personal reasoning. These concerns are compounded by growing evidence of uncritical trust in AI-generated responses, which may contain errors or biases despite their confident tone.
At the same time, classroom studies and teacher surveys point to increasing instrumental uses of AI in schoolwork, including text generation, problem-solving, and assignment structuring without clear pedagogical mediation. For example, a recent report found that two out of three teenagers use ChatGPT for school-related tasks such as essays or academic problems, suggesting widespread use beyond occasional support. Research on academic practices also indicates that while AI may save time and help initiate tasks, it can encourage complacency and weaken students' own critical engagement with the material.
The key distinction, therefore, is not between using or not using AI, but between supporting learning and replacing intermediate cognitive effort. The problem is not that artificial intelligence makes mistakes, but that it can be correct too efficiently, concealing the intellectual journey that normally leads to an answer.
A Growing Sense of Caution
The New York Times report itself notes that alongside initial enthusiasm, voices of caution are beginning to emerge. Researchers, educators, and international organizations warn that accelerated and poorly regulated adoption of AI may have unintended consequences for learning. UNICEF, in its updated guidance on artificial intelligence and children’s rights, emphasizes that rapid AI adoption may pose risks if institutional and pedagogical frameworks are not developed to protect children’s rights and well-being, recommending policies that balance opportunities with safeguards.
These warnings do not arise in isolation. As the New York Times recalls, recent history offers instructive precedents in educational technology. Large-scale programs focused on device access, such as the One Laptop per Child initiative, failed to significantly improve learning outcomes despite substantial investment. That experience highlighted that technological access alone does not necessarily translate into better learning.
In response, some education systems are adopting more deliberate approaches. Rather than prioritizing immediate AI use by students, they emphasize AI literacy, teacher training, and pedagogical design. Artificial intelligence is thus framed as an object of learning—something to understand, question, and evaluate—rather than as a shortcut for producing results.
These experiences do not resolve the debate but make clearer the tensions AI introduces into learning processes. The question becomes which model of learning each approach reinforces and which capacities are sidelined when efficiency becomes the dominant criterion.
What We Should Not Delegate, Even If We Can
Ultimately, the expansion of artificial intelligence in education forces a normative question: what should not be delegated, even if technology allows it. Formulating good questions, sustaining doubt, building one’s own arguments, and learning from error are not secondary skills. They lie at the core of learning and intellectual development.
These capacities do not develop automatically or efficiently. They require time, guidance, and spaces where mistakes are not immediately penalized. Yet they are precisely the hardest to protect in education systems oriented toward rapid performance and the evaluation of final products. If AI is integrated without prior reflection, it risks reinforcing this logic: correct answers, coherent texts, completed tasks, but without visible traces of the process behind them.
Protecting learning as a process implies concrete decisions. In assessment, it means giving greater weight to learning trajectories—drafts, revisions, partial arguments—rather than final results alone. In teacher training, it requires preparing educators not only to use AI tools but to mediate their use critically in the classroom. And in curriculum design, it demands asking which forms of learning require friction, slowness, and effort, and which can benefit from technological support without losing depth.
Within this framework, AI literacy cannot be treated as an optional add-on. Understanding how these systems work, their limitations and biases, and the contexts in which their use is—or is not—appropriate has become a basic condition for teaching and learning. None of this amounts to a case against the technology itself. The challenge, rather, is to clearly define the role these tools should play within educational processes.
Artificial intelligence can support education in many ways. It can facilitate tasks, expand resources, and reduce workloads. But it cannot—and should not—define what it means to learn. That remains an educational, and therefore human, decision.


