March 15, 2026

AI in education: Can AI Support – Not Replace – Teachers? Protecting Educator Agency in Smart Classrooms

When it comes to AI in education policy, discussions often overlook the most critical voices – the teachers. A new UNESCO position paper, Promoting and Protecting Teacher Agency in the Age of Artificial Intelligence, aims to shift that imbalance by putting teacher autonomy at the heart of the conversation on artificial intelligence in classrooms.

Published in 2025 by UNESCO and the International Task Force on Teachers for Education 2030 (Teacher Task Force), the paper asserts a clear position: while AI technologies may transform education systems, teachers must remain the central decision-makers in AI-powered learning environments. It is a stance I have also defended in various articles on artificial intelligence in classrooms.

This global policy paper was led by Carlos Vargas, Head of the Teacher Task Force Secretariat, and drafted by Professor Mutlu Cukurova (University College London), in collaboration with experts from the European Commission, Saudi Arabia, Ghana, Finland, and the Task Force’s Digital and AI Thematic Group.

What emerges from this document is more than another “AI in schools” report. It directly challenges techno-solutionism in education, maps the risks AI poses to the teaching profession, and offers actionable strategies to safeguard teacher agency in AI-driven classrooms.

You can read the 36-page document, or just follow my analysis below.

Why teacher agency in AI-powered classrooms is important

It’s easy to talk about AI in education as an outsider. Few seem to realize that artificial intelligence is entering education systems that are already under extreme pressure. According to the UNESCO Global Report on Teachers and the Teacher Task Force, the world will need 44 million more primary and secondary teachers by 2030, including 15 million in sub-Saharan Africa, where 90% of secondary schools already face severe teacher shortages.

At the same time, many educators work in resource-limited classrooms lacking basic necessities – from textbooks and internet access to digital devices and local-language content. In some schools, a single textbook is even shared among 12 or more students.

AI in education is therefore entering uneven, underfunded, linguistically diverse school systems, which raises a critical question: Who controls what counts as good teaching when AI shows up?

The problems by the numbers

To make the argument, the position paper pulls in a series of data points that show where AI in education may deepen existing fault lines rather than repair them.

1. Global teacher shortages and resource gaps: AI in education is not the real issue

The data below brings the global teacher crisis into sharp focus. It links headline shortages to what they mean in real classrooms: overcrowded lessons, shared textbooks and learning materials that rarely appear in learners’ own languages. It shows how structural gaps in staffing and resources frame any realistic discussion about AI in education.

Teacher shortages and learning resources

Issue | Data point | Source
Additional teachers needed by 2030 | 44 million primary and secondary teachers | TTF & UNESCO Global Report on Teachers
Needed in sub-Saharan Africa | 15 million teachers | TTF & UNESCO Global Report on Teachers
Secondary schools with severe shortages | 90% of secondary schools face severe shortages | TTF & UNESCO Global Report on Teachers
Textbook access | 1 textbook shared by 12+ pupils in some classrooms | Teacher Task Force position paper
Language of open education resources | 92% of OER are in English | GEM Report 2023 (cited in the paper)

2. Infrastructure and connectivity: AI on shaky foundations

The paper stresses that much of the world still lacks the infrastructure needed even to run modern AI tools. The data below shows how fragile basic infrastructure is across much of Africa. Limited access to electricity and patchy broadband make it impossible to roll out cloud-based AI tools at scale, forcing schools to rely on low-tech or offline solutions long before they can even think about “intelligent” systems.

Infrastructure constraints in Africa

Indicator | Figure | Source
Access to electricity, sub-Saharan Africa | 53% of population | World Bank SDG 7.1.1 (2023)
Access to electricity, rural sub-Saharan Africa | 33% of rural population | World Bank SDG 7.1.1 (2023)
Access to electricity, rural Gambia | 30% of rural population | World Bank SDG 7.1.1 (2023)
Broadband connectivity, Africa | 37% of the continent covered | UNESCO (2023), cited in the paper

3. Teachers’ digital skills: an uneven starting line for AI in education

The paper indicates that teacher digital skills cannot be taken for granted. Data from four Latin American countries shows that most educators still operate at a basic level, with only a small minority confident in more complex digital tasks. Any plan to integrate AI in classrooms will have to confront this skills gap head-on; in these countries, AI is far from a core competence.

In such contexts, dropping complex AI tools into classrooms without intensive professional development risks deepening inequalities between confident “power users” and everyone else.

Teacher digital skills in four Latin American countries

Skill level | Share of teachers
Only basic digital tasks | 39%
Simple internet use | 40%
More complex digital functions | 13%

Problem cluster 1: De-professionalisation and loss of agency

One of the paper’s strongest messages concerns the risk of de-professionalisation. As AI tools take over routine tasks – from grading and feedback to lesson planning – there is a growing fear that teachers will lose core skills and autonomy.

The paper echoes a long line of research on teacher autonomy. It warns that, if AI systems impose standardised lesson scripts, automated assessments and fixed sequences of “optimised” content, teachers may become mere supervisors of platforms rather than professionals making informed pedagogical choices.

Several mechanisms are identified:

  • Deskilling – Heavy reliance on AI for marking, feedback and resource creation can erode teachers’ capacity to diagnose learning, tailor instruction and design meaningful tasks over time.
  • Data-driven surveillance – The same analytics that promise performance insights can be redirected towards punitive monitoring of teachers’ behaviour and outcomes. Unions already warn that dashboards might become tools for high-stakes evaluation rather than support.
  • Standardisation over diversity – When AI curriculum tools embed a narrow view of “effective teaching”, they may crowd out pedagogical diversity, local knowledge and culturally grounded practices.

To be clear, the paper does not argue outright against automation or AI in education. But it does call for a shift from automation to augmentation: AI should extend what teachers can do, not hollow out their expertise.

Problem cluster 2: Cognitive and emotional costs of AI-mediated learning

The text also zooms in on the impact of AI on thinking and relationships, both for students and teachers, and it echoes what a previous article on WINSS already argued: only use AI in education when there is a proven educational benefit.

Cognitive shortcuts and “metacognitive laziness”

Drawing on emerging research, the paper notes that generative AI tools can reduce mental effort and weaken deep engagement when they are used as default writing or problem-solving aids. Studies show that learners who write essays without large language models engage more cognitively than those who rely on AI outputs.

For teachers, similar patterns emerge. Knowledge workers with high confidence in AI systems tend to lean more on automated outputs and spend less time on critical reflection. Those with stronger confidence in their own instructional expertise are more likely to adapt AI suggestions critically instead of copying them.

The paper’s concern is not about banning AI in education, but about a subtle shift: from teachers as designers and reflective practitioners towards teachers as purely “prompt operators” whose role is to ask, select and paste.

Trust, feedback and the irreplaceable human

The paper also cites a study of 457 higher education students whose perceptions of feedback quality dropped once they learned that the feedback they had received came from AI.

That reaction reveals something important about trust. Students do not only care about correctness. They care about whether feedback feels attuned to their effort, history and emotional state — something current AI systems struggle to provide.

The paper links this to a broader argument: genuine teaching rests on a “beautiful dynamic between human beings”, a phrase borrowed from UNESCO’s earlier work on the futures of education. Teachers provide empathy, conflict resolution, ethical guidance and care, all of which are hard to reduce to algorithms.

Social and mental health risks

At the same time, the paper acknowledges that AI chatbots can offer short-term relief from isolation and emotional distress. Yet it cites research showing that over-reliance on such tools may slowly erode social bonds and increase loneliness.

In other words, AI in education can approximate warmth but cannot substitute for the slow work of trust built between learners and teachers.

Problem cluster 3: Digital divides and “AI colonialism”

The paper leans heavily on lessons from previous waves of edtech. Massive open online courses (MOOCs) promised to democratize access, but in practice they mainly benefited learners in richer countries and from better-educated families. Completion rates skew towards students in high-income neighborhoods and large cities.

Edtech, short for education technology, is the umbrella term for all digital tools, platforms and systems used to support teaching and learning, from hardware like laptops, tablets and interactive whiteboards to software such as learning management systems, quiz apps, video lessons, MOOCs, adaptive learning platforms and AI tools for feedback, translation, lesson planning and assessment. It helps deliver online and blended lessons, personalise practice, automate repetitive tasks like marking and admin, and generate data to track learner progress. However, edtech is not a pedagogy in itself and cannot replace well-trained teachers or make up for underfunded education systems; its value depends on how educators use it within a clear, human-centred teaching strategy.

The authors argue that AI could repeat this pattern at a larger scale:

  • Edtech history – Learning management systems and computer-based learning have often widened gaps instead of closing them, particularly where infrastructure and support were weak.
  • AI colonialism – Because most AI systems are developed in high-income countries, trained on anglophone data and optimized for rich markets, they risk exporting one set of cultural and educational norms to the rest of the world – something we already wrote about when discussing the Apertus LLM. The paper cites the fact that 55% of all websites are in English and that 92% of open educational resources are in English, sidelining other languages and knowledge systems, including indigenous ones.

At the same time, only about 37% of Africa has broadband connectivity, which means that most learners and teachers on the continent struggle to access the very resources and platforms that global AI and edtech strategies often take for granted.

AI colonialism describes a power imbalance where a small group of companies and countries – mostly in the Global North – design, train and control AI systems using their own languages, norms and data, while people in the Global South mainly provide cheap labour (like data labelling) and then receive tools that don’t reflect their cultures, needs or interests. Local communities lose control over their data and digital infrastructures, while imported AI systems can overwrite indigenous knowledge, reinforce existing inequalities and lock countries into dependency on foreign platforms.

Where AI genuinely helps teachers

Make no mistake: the paper is not anti-technology. It documents several areas where AI in education, when carefully designed and supported, eases pressure on teachers.

AI in education can offer time savings and workload relief

One of the strongest data points comes from a randomized controlled trial run by the Education Endowment Foundation (EEF) and evaluated by the National Foundation for Educational Research (NFER). In this study, 259 secondary school teachers in England used ChatGPT to support lesson and resource planning.

The takeaway in the paper is careful: AI helped teachers reclaim more than 25 minutes a week, but it did not automatically improve resource quality. AI-generated materials still require review and adaptation.

AI in education can offer better support for diverse learners

The paper points to evidence that AI in education can help teachers adapt content to different learning levels, languages and needs, especially in crisis-affected or multilingual environments. Intelligent tutoring systems and generative tools can scaffold tasks, generate examples and offer multimodal explanations.

At the same time, the authors insist that these tools need strong guardrails: robust privacy protections, specific expertise on disability and inclusion, and teacher training to interpret AI-generated recommendations rather than follow them blindly.

Human–AI hybrid models

The paper also discusses human–AI hybrid tutoring systems such as Tutor CoPilot, evaluated at Brown University. These systems support novice tutors by suggesting high-quality strategies, increasing the use of inquiry-based questioning instead of direct instruction.

Tutor CoPilot is a human–AI tutoring assistant that runs alongside live online lessons and gives real-time suggestions to support tutors rather than replace them. Developed by a research team around Susanna Loeb and colleagues, it uses large language models trained on expert tutor “think-aloud” data to analyse the ongoing conversation and propose better questions, hints and explanations, nudging tutors away from simply giving answers and towards inquiry-based teaching. In a large randomized trial with hundreds of tutors and over a thousand K–12 students from under-served US schools, access to Tutor CoPilot made students a few percentage points more likely to master the topic, with the biggest gains for learners taught by lower-rated tutors, at an estimated cost of about 20 USD per tutor per year.

Yet even here, interviews revealed mismatches – for instance, AI suggesting the wrong class level – clearly showing the need for teachers to remain decision-makers rather than passive recipients of AI prompts.

AI in education needs a framework for protecting teacher agency

The last part of the paper sets out a structured response. The key recommendations are aimed at governments, education leaders, unions and technology developers.

1. Reaffirm teachers as irreplaceable

First, the Teacher Task Force calls on governments to state explicitly, in policy and AI governance frameworks, that teachers cannot be replaced by AI tools. AI systems should be designed and regulated as tools that support, not substitute for, core responsibilities such as ethical guidance, emotional support, cultural mediation and creative teaching.

This may sound symbolic, but it anchors downstream decisions about procurement, accountability and funding.

2. Invest in teacher AI competencies – beyond technical skills

The paper argues that professional development cannot stop at “how to use the tool”. Teachers need:

  • A foundational understanding of how AI systems in education work, including their limitations and biases.
  • Ethical literacy to judge when AI tools align with humanistic goals and when they conflict with student well-being or equity.
  • Practical pedagogical strategies for integrating AI tools into planning, assessment and feedback without giving up control.

Frameworks such as UNESCO’s AI competency framework for teachers and the European Commission’s DigCompEdu are presented as starting points, but the paper stresses that they must be adapted to local contexts and readiness levels.

It also stresses the importance of teacher-led communities of practice and networks where teachers experiment with AI in education together, share failures and shape use cases, instead of having tools imposed from the top down.

3. Design for teacher–AI complementarity

Moving away from “AI versus teachers”, the paper pushes for teacher–AI complementarity: AI tools handle narrow, well-defined cognitive tasks and data processing, while teachers focus on relational, ethical and higher-order work.

Examples include:

  • AI as an “external memory” or pattern detector that surfaces trends in student performance.
  • AI-supported feedback drafts that teachers then personalize and contextualize.
  • Adaptive practice systems that give teachers more time for small-group mentoring instead of routine drilling.

The authors warn that complementarity is not something that arrives automatically. It requires time, training and governance structures that keep teachers in the loop and give them authority to override AI recommendations.

4. Build robust, ethical AI governance for education

The paper calls for clear, enforceable governance frameworks around AI in schools, covering:

  • Transparency: teachers and students must understand what data AI systems use, how they make recommendations, and what limits they have.
  • Privacy and data protection: strict controls on data collection, storage and secondary use, especially for minors.
  • Bias and fairness: regular, independent evaluations of AI systems to detect and correct biases around gender, ethnicity, language, disability and socio-economic status.
  • Environmental costs: explicit reflection on when the energy and resource footprint of AI is justified for educational purposes, and when simpler tools would do the job.

This governance dimension is part of protecting teacher agency: without transparency, teachers cannot use AI in education critically.

5. Close digital divides with low-tech and offline AI

Given the infrastructure data, the paper pushes against a high-bandwidth, cloud-only AI model for education.

Instead it advocates:

  • Prioritising low-tech, high-impact solutions that work offline or on basic devices.
  • Supporting initiatives such as ProFuturo Mathematics, which adapts AI-powered learning platforms for offline use in low-connectivity contexts; the project so far has reached nearly 1,500 schools in 39 countries, engaging over 16,000 teachers and 417,000 students.
  • Promoting solar-powered devices and partnerships with telecom providers (World Mobile comes to mind) to subsidize data costs.

In short, pushing cloud-heavy generative AI into schools without solving basic infrastructure is poor policy and risks widening inequalities.

6. Support research that goes beyond Anglophone, small-scale pilots

A recurring critique in the paper concerns the evidence base itself. Most available studies on AI in education:

  • Come from anglophone contexts (UK, EU, US).
  • Focus on small-scale trials with bespoke academic tools under close researcher supervision.

The Teacher Task Force calls for:

  • Large-scale randomised trials of mainstream tools such as ChatGPT, Gemini, Claude and Midjourney “in the wild”.
  • More research in under-represented regions and languages.
  • Meta-analyses that synthesize findings over time.

Without that, policy debates risk being dominated by a narrow slice of contexts and tools.

7. Use international cooperation to spread capacity, not just tools

Finally, the paper situates AI in education within broader SDG4 governance. It points to platforms such as the Education 2030 High-Level Steering Committee, the Teacher Task Force itself, the Global Education Coalition and the Broadband Commission as spaces for sharing practice, building capacity and avoiding fragmented, duplicative efforts.

International cooperation, in this framing, should not be about one-way transfer of tools from North to South, but about co-developed standards, shared evaluation methods and cross-regional learning.

What this analysis adds to the ‘AI in education’ debate

Taken together, the position paper does three important things for the ‘AI in education’ debate.

  1. It recenters teachers: Instead of asking how AI can “fix” education, it asks what teachers need in order to keep exercising ethical judgment, creativity and care in AI-rich environments. That reframing pushes ministries and vendors to treat teachers as co-designers, not as users who must adapt.
  2. It links AI debates to older structural problems: The text repeatedly stresses that AI in education will not solve chronic underfunding, teacher shortages or weak support structures. At worst, it can mask these problems behind a thin layer of “innovation” and make them harder to confront.
  3. It insists on equity and power: By using concepts like AI colonialism, and by foregrounding language, data and infrastructure imbalances, the paper situates AI in education within broader struggles over whose knowledge counts and who benefits from technological change.

For policymakers, the uncomfortable conclusion is that responsible AI in education starts far away from the app store. It goes back to the basics of education: stable investment in teachers, clear rights over data and autonomy, and infrastructure that reaches the last rural school.

For teachers, the paper offers a different message: your role is not to keep up with every tool; your role is to hold the line on what education is for. AI in education can help with planning, differentiation and administration. But, and this is worth keeping firmly in mind, it cannot decide what counts as a good question, a fair assessment, or an act of care in your classroom.

The Teacher Task Force position paper does not resolve the tensions around AI in education and schooling. It does something more useful though: it maps the pressure points, names the trade-offs, and gives education systems a baseline test for every new AI proposal: Does this strengthen teacher agency, or does it erode it?

FAQ on AI in education – Promoting and protecting teacher agency in the age of artificial intelligence

What does “teacher agency in the age of artificial intelligence” actually mean?

Teacher agency in the age of artificial intelligence means that teachers keep real decision-making power over how, when and why AI tools enter their classroom. It covers their professional judgment on pedagogy, assessment, feedback and the ethics of AI in education. The document argues that AI must augment, not replace, these core responsibilities.

Why did UNESCO and the Teacher Task Force publish this position paper on AI and teacher agency?

UNESCO and the International Task Force on Teachers for Education 2030 published the paper because AI is entering school systems already under stress: global teacher shortages, resource gaps and weak infrastructure. The paper responds to growing pressure to deploy AI in education and puts a clear condition on that agenda: protect teacher agency as a non-negotiable principle.

Who wrote the paper on promoting and protecting teacher agency in AI-powered education?

The paper was prepared under the leadership of Carlos Vargas, Head of the Teacher Task Force Secretariat. Drafting was led by Professor Mutlu Cukurova (University College London), with input from the Teacher Task Force Digital & AI Thematic Group and experts from the European Commission, Saudi Arabia, Ghana and Finland.

How does AI impact teacher agency and professional autonomy in the classroom?

AI impacts teacher agency when it automates core tasks like planning, grading or feedback and then “locks in” one way of teaching. If platforms prescribe lesson sequences, assessments and pacing, teachers risk becoming supervisors of systems rather than professionals making informed choices. The paper calls this a risk of de-professionalisation and argues that AI design and policy must explicitly protect teacher autonomy.

What are the main risks of AI replacing teachers in low-income and under-resourced education systems?

In low-income systems, AI risks being used as a cheap substitute for missing teachers. The paper warns that this can widen inequalities: crowded classrooms, weak infrastructure and scarce textbooks do not disappear because a chatbot appears. When ministries present AI as a solution to structural underfunding, they shift attention away from hiring, training and retaining human teachers.

How do teacher shortages and textbook gaps affect AI in education strategies?

Teacher shortages and textbook gaps show what AI in education can realistically do. And that is fairly limited. In many countries, millions of additional teachers are needed and pupils still share a single textbook. In that context, AI cannot come in as a glossy add-on. Any AI in education strategy has to start from the reality of overcrowded classrooms, limited materials and overworked staff; otherwise, it reinforces existing deficits instead of easing them.

Why are infrastructure constraints in Africa a barrier to AI in education rollouts?

Large parts of Africa still lack reliable electricity and broadband. Many rural schools have unstable power and no affordable data. Cloud-based AI tools that assume constant high-speed connectivity simply do not work in these conditions. The paper highlights this as a core barrier and pushes for low-tech, offline-capable AI solutions before ambitious “AI transformation” plans.

How do teacher digital skills influence successful AI integration in classrooms?

Teacher digital skills set the ceiling for what AI can realistically achieve. In several Latin American systems, most teachers can only perform basic digital tasks. If AI tools arrive without strong, long-term professional development, a small minority of digitally confident teachers will benefit, while many others feel overwhelmed or sidelined. The document argues for intensive, continuous training that combines technical, pedagogical and ethical competence.

What is AI colonialism in education and why does it matter for teacher agency?

AI colonialism in education refers to a pattern where AI tools are designed and trained in a few high-income countries, using mainly English-language and Western data, and then exported to the rest of the world. Local teachers and students contribute data and labour but receive systems that don’t reflect their languages, cultures or needs. This undermines both teacher agency and local knowledge, as imported platforms start to dictate what “good learning” looks like.

How can AI tools like Tutor CoPilot support teacher agency instead of eroding it?

Tools like Tutor CoPilot show how AI can support educators. They give live suggestions for questions, explanations or feedback while a human tutor remains in control. The document says that these hybrid models are promising because they respect teacher judgment: the human decides what to use, what to adapt and when to ignore the AI prompt.

What governance and regulation does the paper recommend for AI in schools?

The paper recommends clear AI governance for education: transparency about how systems work and what data they process, strong privacy and child-protection rules, bias audits, environmental impact checks and explicit safeguards against surveillance uses of data. These governance measures help teachers understand, question and override AI recommendations, which is central to protecting teacher agency.

How should ministries design AI in education policies that protect teacher rights and voice?

Ministries should design AI in education policies with teachers at the table from the start. That means involving unions and teacher organisations in procurement decisions, training plans, data governance and classroom guidelines. Policies should state that teachers cannot be replaced by AI, guarantee time for professional development, and provide complaint and appeal mechanisms when AI-based monitoring threatens working conditions.

What practical steps can schools take to use AI while safeguarding teacher agency?

Schools can start by treating AI as a tool that drafts and suggests, not one that decides. They can create internal guidelines co-written with staff, give teachers protected time to experiment and reflect, avoid tying high-stakes evaluation directly to AI analytics, and invest in training on both digital skills and AI ethics. When school leaders use AI to support teacher-led pedagogy rather than as a performance-control instrument, they strengthen agency instead of weakening it.

How does this UNESCO Teacher Task Force paper help policy makers searching for ethical AI in education frameworks?

For policy makers searching “ethical AI in education framework with teacher focus”, this paper offers a ready-made reference. It connects human rights, SDG4 targets, teacher shortages, digital divides and AI design choices into one narrative. It also provides language that ministries can lift directly into strategies and regulations, especially around the non-replaceability of teachers and the need to anchor AI use in professional autonomy.

What are the key takeaways for teachers who want to explore AI tools responsibly?

For teachers looking up “how to use AI tools in class without losing control”, the key takeaways are clear: keep AI in a support role, never delegate grading or sensitive feedback entirely to machines, double-check outputs for bias and error, and use AI mainly to free time for deeper human interaction with students. The paper encourages teachers to see themselves as critical co-designers of AI use in education, not as passive end-users of ready-made platforms.

