AI in Schools, a Practical Guide for 2025 and 2026
Generative AI didn’t wait for school boards. Students and teachers started using it before policies even existed, which created urgency and confusion in equal measure. The guide this article is based on – “A Guide to AI in Schools: Perspectives for the Perplexed” (August 2025) – collects voices from over 100 teachers and students to map real-world practices, tensions, and first-draft solutions.
The source document focuses on generative AI – systems that create text, images, audio, or video. It uses “AI” as shorthand because schools won’t run separate policies for generative vs. other AI.
The primary contributors to the guide, which was published in Summer 2025, are Julie M. Smith, Jesse Dukes, Josh Sheldon, Manee Ngozi Nnamani, Natasha Esteves, and Justin Reich. The main target group is educators: the guide was designed to support teachers and school leaders as they develop AI policies and practices.
The authors of the guide are quite clear: AI is in the building, and schools must act, experiment, and iterate rather than wait. Here’s how you can do this in your school and classroom.
- Ethical guardrails are needed before AI tools touch classrooms
- Students and learning: mixed effects you must plan for
- Where AI in schools saves hours – and where it adds work
- Policy: bans on AI in schools won’t hold
- AI literacy for teachers and students
- When and how students may use AI in schools
- 10 clear steps for AI in schools for 2025–2026
- FAQ on AI in schools
- Q1. Why not ban AI in schools?
- Q2. What must be in an AI policy?
- Q3. How do we protect student data privacy?
- Q4. Do AI tools help learning?
- Q5. What should teachers know about AI in schools?
- Q6. Should students learn about AI in schools?
- Q7. Are AI detectors reliable?
- Q8. How do we start tool vetting?
- Q9. What class-ready policy modes work?
- Q10. How can AI in schools actually save teacher time without losing your voice?
Ethical guardrails are needed before AI tools touch classrooms
The guide provides a concise ethics checklist you can adapt for policy and practice. These are the principles that prevent harm and keep AI uses aligned with learning goals.
Teachers center ethics first. The guide surfaces eight recurring principles for ethical AI in K-12 education:
- Transparency. Disclose AI use and recognize that model self-explanations can be wrong.
- Justice & fairness. Watch for bias in outputs and images; ensure inclusive access.
- Non-maleficence. Consider harms, including the hidden labor behind AI content moderation.
- Responsibility. Clarify who is accountable when AI advice goes wrong.
- Privacy. Understand what data tools collect, store, and infer – even when students don’t type PII.
- Beneficence. Prefer uses that expand opportunity and equity.
- Freedom & autonomy. Avoid turning AI into a crutch that narrows critical thinking.
- Pedagogical fit, teacher wellbeing, and children’s rights. Use research-based approaches; don’t offload human connection; protect children’s specific vulnerabilities.
The guide also lists U.S. legal touchpoints that often apply to AI in schools (FERPA, COPPA, PPRA, CIPA, ESEA/ESSA), and explains how AI features can trigger consent, privacy, and evidence standards.
Students and learning: mixed effects you must plan for
Research to date is limited and mixed. What we do know is that teachers and students report both learning gains and risks:
- Potential gains: targeted scaffolds, quicker feedback loops, and help when stuck.
- Risks: shortcuts that erode skill-building, attention costs from more screen time, misinformation, and reduced independent thinking.
The guide also offers example patterns you can use:
- Higher-order work can expand if you offload low-level tasks and then push analysis, synthesis, critique.
- Scaffolded reading at different levels helps diverse learners engage with the same core lesson.
- Screen-time and attention require active management in class culture and task design.
Where AI in schools saves hours – and where it adds work
Teachers report time savings for lesson ideation, rubric drafting, comprehension quizzes, seating charts, and tone-safe parent emails. However, they also report time costs: staying current on fast-changing tools and investigating integrity concerns. The net effect varies by context.
Here are a few fast-win workflow ideas from the guide’s interviews:
- Draft a standards-aligned lesson outline, then localize tasks and exemplars.
- Generate a rubric structure, then edit descriptors in your voice.
- Produce formative multiple-choice questions from a news article for daily bell-work; verify items before use.
Policy: bans on AI in schools won’t hold
Most educators in the guide argue against blanket bans. Equity suffers when only students with personal devices can use AI at home while others can’t use it at school. Instead, schools are building nuanced, revisable policies with clear “where/when/how.”
A working process that schools used: cross-discipline conversations; shared vocabulary; student surveys; case studies; and an explicit “no-until-yes” baseline that allows teachers to open AI use for defined tasks.
Schoolwide vs. teacher-set rules. A single policy reduces confusion; teacher autonomy respects subject differences. Some districts publish a small menu of policy modes (e.g., “No AI,” “Assistive AI only,” “Open with citation”) for teachers to apply by assignment.
Policy checklist highlights: core values alignment; AI literacy for decision-makers; stakeholder input including skeptics; legal compliance; data privacy updates; integrity processes; opt-outs; communication to families; and scheduled revisions.
Beyond the policy itself, there are two realities to address in procurement:
- Hidden AI in familiar software. Tools may add generative features by default, changing classroom dynamics overnight. Plan for quick toggles and classroom norms.
- Vetting pipelines such as the Student Data Privacy Consortium’s can help, but you still need local due diligence.
AI literacy for teachers and students
As we have reported before, teachers are not always fully aware of or knowledgeable about AI. They need prompting skills, bias awareness, source-checking routines, and a candid sense of where AI undermines relationships and voice. Use iterative prompting, critique outputs, and never paste protected student data into public tools.
For students, teach when to use AI in schools (brainstorming, feedback on structure) and when not to (original analysis, personal reflection, closed-book skill practice). Tie every allowed use to disclosure and citation.
Teachers also report equity issues with AI detectors, including higher false positives for multilingual learners and students of color. Build integrity processes that start with dialogue, drafts, and process evidence, not accusations.
When and how students may use AI in schools
The guide advises adopting a simple assignment-level label:
- No AI use. Reason: mastery check or original voice.
- Assistive AI allowed. Examples: idea lists, clarifying questions, outline suggestions; student must disclose use.
- Open AI use with attribution. Research aid or translation; student must document prompts and verify claims.
This model appears across departments in the guide’s case studies and helps teachers teach the meta-skill of using AI responsibly.
AI brings unknown error rates and opacity; it also brings levers for personalization and workload relief. The best leaders embrace ambiguity, reserve time for faculty discourse, and iterate in public with their communities.
10 clear steps for AI in schools for 2025–2026
The guide offers ten steps that help keep AI in schools on the right path.
- Form a cross-functional AI group with at least one thoughtful skeptic.
- Map legal baselines (FERPA, COPPA, PPRA, CIPA, ESSA) to any candidate tool.
- Publish a shared glossary and a short AI literacy primer for staff.
- Survey students about current AI use and needs.
- Pilot policy modes (“No/Assistive/Open with citation”) in 2–3 subjects.
- Adopt disclosure rules for all AI assistance by teachers and students.
- Stand up a tool-vetting lane (privacy + efficacy) and track surprise AI add-ons.
- Publish integrity processes that avoid detector-only judgments; collect drafting evidence.
- Create an “AI Idea Bank” with prompts, examples, and caveats about errors and bias.
- Time-box the policy (“2025–2026 AI policy”) and schedule a mid-year review.
FAQ on AI in schools
Q1. Why not ban AI in schools?
Blanket bans break on equity and reality. Students with home devices will still use AI. Nuanced policies reduce confusion and let teachers open AI for defined tasks.
Q2. What must be in an AI policy?
Core values, legal fit, privacy updates, tool-approval process, when/where student AI use is allowed, teacher AI use rules, integrity process, opt-out paths, and revision cadence.
Q3. How do we protect student data privacy?
Assume tools infer identity even without explicit PII. Avoid uploading student work to public models. Use vetted tools and contract clauses; align with FERPA/COPPA/PPRA/CIPA.
Q4. Do AI tools help learning?
Sometimes. They can scaffold, personalize, and unstick thinking. They can also enable shortcuts and reduce attention. Design tasks that reward original thinking and require process evidence.
Q5. What should teachers know about AI in schools?
How models err and embed bias; how to iterate prompts; how to check facts and sources; where AI undermines relationships; and how to keep student data out of public tools.
Q6. Should students learn about AI in schools?
Yes. It’s part of their future work and media landscape. Teach smart use, limits, and how to spot deepfakes and shallow summaries.
Q7. Are AI detectors reliable?
Use caution. False positives hit multilingual learners and students of color harder. Prefer portfolio evidence and conversation before discipline.
Q8. How do we start tool vetting?
Track data flows, storage, training disclosures, opt-outs, and youth appropriateness. Expect vendors to add AI suddenly – require toggles and teacher control. Consider SDPC-vetted options.
Q9. What class-ready policy modes work?
Adopt three labels per assignment: No AI; Assistive AI allowed (disclose use); Open with citation (document prompts, verify claims). Post modes on syllabi and in LMS.
Q10. How can AI in schools actually save teacher time without losing your voice?
Use it to outline, not to finalize. Generate starting rubrics, activity lists, or parent-note drafts, then edit for tone and accuracy. Maintain human feedback for relationship-rich tasks.
