The pedagogical value of difficulty

There are school tasks whose value lies not only in the final result, but in the time and effort they require: summarizing a text, organizing an argument, solving a problem, finding the precise words to explain an idea. What happens when these tasks can be completed in seconds? In a recent lecture, the educator Carlos Magro proposed looking at AI not only as a technological tool, but as a mirror that forces us to rethink what it means today to learn, teach, and think at school. This article enters into dialogue with his ideas to explore which intellectual processes remain essential for learning, and what schools might lose if some of them cease to be part of the educational experience.

There was a time when writing a text forced people to think while writing it. Sentences appeared before they were fully clear. Ideas took shape along the way. Summarizing required distinguishing what mattered from what did not. Searching for information meant wasting time, wandering off course, reading useless things, taking wrong turns. Studying meant confronting the difficulty of mentally reorganizing a problem until it became understandable. Even beginning was difficult. Especially beginning.

Part of the intellectual experience consisted precisely in going through that process. Not only to reach a result, but because much of understanding was built along the way.

The expansion of artificial intelligence is beginning to alter that relationship between thought and intellectual production. Not because machines “think” for us, but because they allow certain results to be obtained without necessarily going through all the cognitive operations that previously made them possible. An outline can appear before a subject has been fully understood, an essay may take shape before ideas have fully matured, and answers can arrive before doubt itself.

The relationship between intellectual effort and the construction of knowledge is beginning to change.

In a recent lecture titled “Reading the World Today: School in Times of Artificial Intelligence,” educator and educational innovation specialist Carlos Magro proposed looking at AI less as a technological innovation and more as a mirror reflecting contemporary education. His reflection revolved not only around tools or platforms, but around something much harder to measure: the place of thought in a culture obsessed with productivity, efficiency, and acceleration.

This matters because many traditional school tasks never derived their value solely from the final result. Writing, summarizing, solving a math problem, or interpreting a text were also ways of organizing thought. Processes of elaboration. Ways of learning to observe, relate, distinguish, prioritize, or argue. Intellectual work did not come after learning. It was learning.

Artificial intelligence introduces a new tension into that logic. For the first time, a growing portion of the visible products of thought—texts, answers, summaries, explanations—can be generated without fully traveling the path that once led to them. For a long time, writing, summarizing, solving a problem, or preparing for an exam were seen primarily as ways of demonstrating learning. Today, artificial intelligence reveals another possibility: that part of learning was happening precisely while those tasks were being carried out.

School was always also a technology of attention

School was never merely a place for transmitting information. Long before the internet, long before digital platforms and artificial intelligence, its function also consisted in creating the conditions necessary to sustain attention and organize thought.

Reading a long novel, solving a complex math problem, writing an essay, or interpreting a philosophical text required time. But not only chronological time. They required endurance. The ability to remain with an idea even when boredom, frustration, or fatigue appeared. To a large extent, learning consisted in staying inside a difficulty long enough.

That does not retroactively turn school into an ideal space. School has also been rigidity, mechanical repetition, empty memorization, and excessive discipline. It has often confused attention with obedience and silence with learning. Yet even in its most bureaucratic or routine forms, it preserved an intuition: certain intellectual processes require duration, continuity, and some resistance to the impulse to give up quickly.

Deep reading, for example, involves much more than decoding information. It requires sustaining connections, remembering what was read several pages earlier, anticipating meanings, coexisting with ambiguity. Something similar happens with writing. Writing is not simply about communicating an already-formed idea. Often, the idea itself emerges during the process of writing, correcting, deleting, reorganizing, and starting over again.

Part of the concerns raised by Carlos Magro in his lecture point precisely in that direction. Not only toward the impact of artificial intelligence on school tasks, but toward the cultural conditions of thought itself. The issue is not only what machines can do, but what happens to our relationship with attention, reading, waiting, or intellectual elaboration in an environment shaped by permanent acceleration.

Because AI does not emerge in a vacuum. It arrives in a culture already shaped by hyperstimulation, fragmented attention, and the growing difficulty of sustaining long processes of concentration. Even before chatbots began writing texts or solving exercises, much of contemporary experience was already organized around speed, interruption, and immediate response.

There is a long tradition of research in cognitive psychology that supports an apparently contradictory idea: learning does not always become more effective when tasks become easier. In the early 1990s, psychologists Robert and Elizabeth Bjork coined the concept of desirable difficulties to describe those cognitive efforts that, although they make learning slower or more demanding in the short term, strengthen long-term understanding and memory.

The idea challenges a widespread perception in contemporary culture: that reducing friction automatically improves learning. In education, some difficulties are not obstacles to thinking, but part of the process of understanding itself.

This helps explain why the debate about artificial intelligence in education cannot be reduced simply to a discussion about tools. Part of the concerns raised by Carlos Magro point precisely there: what happens when certain technologies allow intellectual results to be obtained without fully going through the cognitive operations that previously made them possible.

The issue is significant. Formulating a question, summarizing a text, reorganizing notes, writing an essay, or solving a math problem are not merely ways of showing what has been learned. They are also ways of elaborating thought. Writing, for example, is rarely limited to expressing an already-finished idea. More often, it forces people to discover what they think, detect contradictions, reorganize arguments, or find relationships that were not yet clear.

Something similar happens with metacognition: the ability to review, supervise, and reflect on one’s own learning process. Various recent studies are beginning to warn that excessive dependence on generative tools may reduce part of that cognitive work. Some studies show that students improve their performance while they have active access to AI systems, but encounter greater difficulties when solving similar tasks autonomously. Other research warns of risks of superficial reasoning, premature automation, or reduced sustained intellectual effort.

That does not automatically make artificial intelligence an educational threat. Some forms of automation can free up time, reduce repetitive tasks, and provide valuable support for certain students or contexts. Delegating certain mechanical operations does not necessarily impoverish learning. Whether it does depends largely on what is being delegated.

Because not all efficiency produces understanding.

The deeper question may not be whether artificial intelligence should enter classrooms, but which intellectual processes remain worth going through.

The new educational conflict: replacing or expanding thought

For decades, much of schooling operated under a relatively stable assumption: the visible work produced by a student offered reasonable clues about their learning process. An essay allowed teachers to infer reading comprehension, argumentative ability, or the handling of complex ideas. A math problem revealed procedures. A summary required selecting, organizing, and prioritizing information. The final product never perfectly reflected thought, but there was a relatively recognizable relationship between intellectual effort and the result submitted.

The expansion of artificial intelligence is beginning to make that relationship more unstable. Today it is possible to produce convincing texts, solve complex exercises, or generate elaborate explanations without necessarily going through the same cognitive process those tasks once required. And that introduces a new difficulty for schools: it is no longer always evident which part of the work has truly been thought through, understood, or elaborated by the person presenting it.

The problem does not affect only authorship. It also affects assessment. For a long time, schools mainly evaluated visible products: answers, exercises, essays, final projects. Artificial intelligence forces educators to look more carefully at what those products do not always reveal: the intellectual journey that led to them.

Part of the reflections proposed by Carlos Magro point precisely toward that tension. Not only because AI allows certain tasks to be automated, but because it makes more fragile an educational culture heavily oriented toward visible performance and final results.

This is already beginning to modify concrete practices in many classrooms. Some teachers are returning to oral assessments, in-class assignments, or exercises focused on explaining processes rather than merely providing answers. Others ask for intermediate drafts, conversations about how a text was constructed, or activities in which what matters is no longer simply delivering a correct product, but showing how one arrived at it.

The issue does not seem reducible to detecting cheating. In fact, many of these changes predate artificial intelligence and already responded to criticisms of assessment models overly focused on quick results and insufficiently attentive to deep understanding. AI accelerates that discussion because it makes more visible a distinction that for a long time remained partially hidden: producing an answer does not always equal constructing knowledge.

What deserves to remain difficult

School never organized learning solely around efficiency. In fact, many of its historical practices retained meaning even when faster ways of obtaining the same result already existed. We still teach handwriting even though keyboards dominate everyday life. Students continue solving mathematical operations even though calculators can perform them in seconds. Reading a long novel still occupies weeks of schoolwork in a culture accustomed to the speed of fragments and screens.

This is not always about nostalgia. Often the decision follows another logic: preserving certain intellectual experiences considered important for learning how to think.

Part of the questions raised by Carlos Magro point precisely there. Artificial intelligence does not only force us to decide which tools to incorporate into classrooms. It also forces us to ask which intellectual practices remain worth going through even when faster, easier, or more efficient ways of avoiding them exist.

The issue is especially complex because there is no clear boundary between support and substitution. Delegating certain mechanical tasks can free time for more meaningful activities. But externalizing some processes too early may also weaken capacities that require sustained exercise in order to develop: writing, arguing, interpreting, relating ideas, tolerating uncertainty, sustaining attention, or confronting difficulty without immediately abandoning it.

Perhaps for that reason, the educational discussion around artificial intelligence is slowly beginning to shift its focus. Less toward the question of what technology can do, and more toward another, much older and more difficult question: what kinds of intellectual experiences we still consider necessary to form individuals capable of thinking for themselves.