Learning in the Age of Cognitive Delegation

What happens to learning when obtaining an answer no longer requires effort? What happens when part of the intellectual work that once underpinned learning can be delegated? A New Direction for Students in an AI World, developed by the Center for Universal Education at Brookings after a year of global research, begins with these questions and arrives at a clear conclusion: given how artificial intelligence is currently being used in education, the risks outweigh the benefits. Yet the report is not fatalistic. It also outlines a framework for redirecting the trajectory.

The concern that gave rise to this study was not an abstract academic hypothesis. It was an empirical pattern that surfaced repeatedly in conversations with teachers, students, and families: generative AI is being integrated into learning at a pace that exceeds education systems’ ability to understand its implications. In many contexts, use precedes policy, and practice precedes deliberation.

Unlike previous waves of educational innovation—platforms, devices, digital content—generative AI does more than expand access or distribute materials. It intervenes in the production of intellectual work itself. It can draft text, structure arguments, generate hypotheses, solve equations, or synthesize complex information. In school settings, these are not merely steps toward an answer; they are the very processes through which capacities are developed.

The research team—Mary Burns, Rebecca Winthrop, Natasha Luther, Emma Venetis, and Rida Katim—therefore adopted a preventive approach. Rather than waiting for positive or negative effects to become entrenched, the report sought to identify plausible risks before certain practices became normalized. The methodology—more than 500 interviews and focus groups across 50 countries, a review of 400 studies, and an international Delphi panel—was not designed to measure immediate impact, but to map possible trajectories.

Their conclusion is direct: at this moment, the risks of generative AI in education outweigh its benefits. Not because the technology is inherently harmful, but because its integration is occurring without sufficiently developed pedagogical frameworks or safeguards—and because it affects dimensions that extend well beyond academic performance.

What AI Can Contribute to Learning

Discussing the benefits of AI in education requires balance: resisting both uncritical enthusiasm and reflexive skepticism. Across interviews, teachers did not describe AI as revolutionizing schooling. Rather, they described it as opening narrow but meaningful spaces of possibility within daily practice.

The first area where AI shows clear advantages is personalization. This is not the long-promised adaptive learning that rarely delivered on its promise. New platforms, powered by generative models and predictive systems, do more than adjust difficulty levels. They identify patterns, detect recurring errors, and reformulate explanations in real time. For many teachers, this level of responsiveness feels almost aspirational: a way to support a classroom of thirty diverse learners without dividing themselves thirty ways. In resource-constrained environments, where a single teacher may serve more than 50 students, the potential is even more significant: AI can create micro-tutoring spaces that were previously unattainable.

A second contribution lies in accessibility. The report documents uses that go beyond isolated anecdotes: synthetic voice systems that allow children with speech impairments to communicate in voices that resemble their own; intelligent annotation tools for students with dyslexia; assisted reading and differentiated materials for students learning in a second language. These tools do not replace specialists, but they extend their reach—particularly in systems where specialized support is limited or absent.

A third, less visible but systemically important contribution involves teacher time. When some educators describe AI as their “favorite colleague,” they are not indulging a technological fantasy. Planning lessons, grading work, designing materials, and writing reports consume hours that are not spent teaching. Controlled trials cited in the report show reductions of up to 31 percent in lesson preparation time with no decline in quality. If protected institutionally, this time dividend can be reinvested in conversation, feedback, and human presence.

AI may also play a critical role in contexts of vulnerability and exclusion. The study includes cases such as Afghan girls, denied access to formal schooling, who receive curricular content through WhatsApp and automated tutors. This is not an ideal model of schooling. Yet in certain contexts, it is the difference between learning something and learning nothing.

Finally, when integrated with clear pedagogical intent, AI can expand human capacity. It does not make students more intelligent—that remains the work of students and teachers. But it can reduce initial cognitive load, create mental space, and allow learners to focus on interpretation, connection, and questioning.

The evidence, however, is consistent on one point: these benefits emerge only when AI functions as an extension, not a substitute. When embedded in strong instructional practice rather than used as a shortcut, AI can enhance learning. Recognizing its potential is not technological evangelism. It is a necessary step in defining what should be preserved, what can evolve, and what should not be delegated.

When Technology Affects Development

If there was initial uncertainty about where the greatest concern might lie, interviews clarified it. The central risk is not that AI will fail, but that it will succeed too well—that it will perform tasks so effectively that it shifts from tool to cognitive prosthesis. This concern surfaced repeatedly among parents, teachers, and students.

Early delegation of thinking was the first observable pattern. Teachers across countries described similar scenes: students no longer attempting to solve problems independently, but issuing prompts to chatbots instead. The issue was not academic dishonesty—that is not new—but the speed with which “thinking” became externalized, and the ease with which students accepted this shift. Many acknowledged that AI use made them more passive, confident that the system would produce accurate results without sustained effort.

The immediate consequence is dependency. Students see improved grades, time savings, and reduced frustration. Over time, this creates a cycle incompatible with cognitive development: the more one delegates, the less one exercises capacity; the less one exercises capacity, the more one delegates. The report references concepts such as “cognitive debt” and “cognitive atrophy” to describe this gradual erosion: when tools assume mental load, the brain relinquishes it.

This erosion extends beyond abstract thinking to motivation. Teachers described declines in curiosity. When polished answers appear in seconds, exploration, trial, and productive struggle become less appealing. Learning, historically shaped by friction, risks becoming frictionless—and therefore shallow. Some educators referred to students as “passengers”: physically present, intellectually disengaged.

The distinction between improved results and strengthened capacities becomes critical here. AI can generate a well-structured essay. That does not mean the student has learned to argue. It can summarize a text without cultivating comprehension. It can solve mathematical problems without fostering reasoning. The report cautions against conflating product with process, performance with development.

Learning, the authors remind us, is cognitive, social, and emotional. It unfolds through dialogue, uncertainty, negotiation. When AI becomes a universal shortcut, these dimensions can thin. This is why the report maintains that, at present, risks outweigh benefits—not because AI performs poorly, but because it performs tasks that once compelled learning.

Two Possible Trajectories

The report outlines two plausible paths. The first is enriched learning: AI tools designed around learning science; expanded access; personalization without isolation; teacher time redirected toward relational work; assessment that moves beyond multiple-choice formats. In this scenario, AI strengthens the interaction among students, teachers, and content—the core of effective schooling.

The second path is diminished learning: indiscriminate adoption; effort replaced by automation; thinking and motivation externalized; trust weakened; privacy compromised by opaque systems. In this trajectory, apparent efficiency masks cumulative losses in cognitive, social, and emotional development.

Technology does not determine the outcome. Human decisions do—pedagogical decisions about when to use AI and when not to; institutional decisions about safeguarding teacher time dividends; regulatory decisions about privacy by design and age-appropriate protections. With these levers aligned, systems can move toward enriched experiences. Without them, they risk sliding toward convenience-driven substitution.

For this reason, the report emphasizes early action: clarify acceptable classroom uses; prepare teachers to integrate AI as augmentation rather than replacement; support families navigating unfamiliar terrain; and require industry standards that prioritize verified content and child protections. Governance, in other words, matters. The future remains contingent on policy, curriculum, and collective priorities.

Prosper, Prepare, and Protect

The report makes clear that it is not enough to simply “manage” AI in education. Education systems must be reoriented so that students do not merely survive in an algorithmic world, but are able to prosper within it. To prosper means more than learning to use new tools. It means restoring the center of learning—curiosity, deliberate thinking, and human relationships—and using AI only when it expands, rather than diminishes, those capacities. It requires a careful balance: knowing when technology adds value and when it interferes with processes that should remain fundamentally human.

The second pillar, prepare, speaks to building the cognitive and institutional infrastructure necessary to live and learn alongside AI. Preparing does not mean teaching students to “master” the models. It means equipping them with criteria: how to evaluate an AI-generated response, identify bias, understand limitations, and calibrate appropriate use. It also involves preparing teachers, who need tools, time, and professional learning to integrate AI from a pedagogical—not merely functional—perspective. And it requires education systems to plan with a long-term view: connectivity, equitable access, ethical frameworks, and governance. None of this happens automatically.

The third pillar, protect, reminds us of a simple but important truth: AI’s risks are neither abstract nor hypothetical. They appear in eroded privacy, emotional manipulation, compulsive use, exposure to inappropriate content, and—above all—in impacts on cognitive and socioemotional development. To protect means demanding child-safe design from industry, establishing clear boundaries, strengthening adult oversight, and keeping children’s well-being at the center, rather than technological fascination.

These three pillars are not separate compartments but a single framework of co-responsibility: governments, schools, families, and technology companies must carry the task together. The goal is not to halt AI nor to surrender to it, but to decide collectively how it should coexist with childhood and education.

We Still Have Time

AI will not disappear from classrooms or from daily life. The enduring question is which parts of the work of learning humans should continue to do for themselves. That reframing underpins the report. It does not deny AI’s potential—to personalize, to expand access, to reduce administrative burden. But based on available evidence and global voices, it concludes that, for now, risks predominate.

The trajectory is not fixed. It can be redirected if systems act before shortcuts solidify—using AI to extend capacities rather than replace them; embedding privacy and well-being in design; preparing teachers, families, and students to exercise judgment.

Artificial intelligence compels education to confront a longstanding question: which human capacities must remain at the center of learning? Answering that question—calmly, empirically, and deliberately—may be the most important step we can take.
