January 16, 2026

AI + Learning Differences, a blueprint for inclusive learning without boundaries

Stanford’s Accelerator for Learning convened a two-day working symposium and hackathon to answer a hard question: how do we design AI that expands access for every learner, not just the “average” one? The resulting 80-page white paper (which you can download here) maps nine action areas – co-design, UDL-aligned instruction, IEP modernization, early identification, social-emotional supports, assistive technology, teacher learning, workforce pathways, and interdependence – then compresses its guidance into one mnemonic, “PIVOT+C.”

The goal is practical. Build with the people most affected. Measure access, agency, and time saved. Protect privacy from day one. Fund what works at scale.

The document was authored by Nneka J. McGee, J.D., Ed.D. (project lead), Elizabeth Kozleski, Ed.D., Christopher J. Lemons, Ph.D., and Isabelle C. Hau.

The paper is quite explicit about its scope and provenance. It synthesizes what participants shared during the December 2024 convenings and pairs those insights with current policy signals – from EU accessibility guidance to UNICEF’s child-rights framing and U.S. state-level AI memos.

At its core, the paper treats learner variability as the default, and independence as only part of the outcome. It argues for interdependence – which means people, tools, and communities working together – as the benchmark for success. It also calls on developers, educators, researchers, and policymakers to move now on concrete steps that make AI in education useful, safe, and equitable for learners across the lifespan.

At the end of this article, I’ve compiled 30 questions that dig even deeper into the white paper.

What this document is and its purpose

The paper synthesizes the insights of the AI + Learning Differences Working Symposium (December 2024) and a companion hackathon hosted by Stanford Accelerator for Learning. It centers lived experience – “nothing about us without us” – and aims to steer design, research, classroom practice, and policy toward AI systems that expand access, autonomy, and well-being for learners across the full spectrum of variability.

Having read the document thoroughly, I can safely say the tone is pragmatic: build with, not for; measure what matters; protect privacy; and fund what works.

The document is aimed not only at developers, researchers, and policymakers, but also at educators. Below, I first walk through what each of the nine chapters says.

Chapter by chapter

1) “Ensuring Co-Design and Collaboration,” the foundation

The paper starts by stating the obvious and then proving it: when people with lived experience drive design, products work better for everyone. It points to accessibility wins like the Xbox Adaptive Controller to show how co-design delivers utility beyond the original target group. But it also names the friction: siloed teams, rushed product cycles, and investor pressure can push accessibility into a late-stage add-on.

The proposed fix: compensate lived-experience experts, build ethical and privacy guardrails up front, and slow down where trust-building requires it, especially with youth.

The chapter offers a clean lens for design teams by distinguishing permanent, temporary, and situational disabilities – useful for personas and for stress-testing features. It argues for open-ended surveys, ethnographic methods, and incentives that avoid “cultural taxation,” so community members aren’t asked to educate builders for free.

Shifting Disability Boundaries (from the white paper)

Category | Definition | Impact Contexts | Example
Permanent | Long-term physical or mental impairment | Work, home, social | Deafness
Temporary | Short-term impairment | Work, home, social | Recovery after ear surgery
Situational | Context-specific constraints | Work, home, social | Trouble hearing in a loud crowd
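
To make that permanent/temporary/situational lens usable in a product backlog, here is a minimal sketch of persona-based stress-testing; the personas, fields, and the single captions check are my own illustration, not something prescribed by the white paper.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A design persona tagged with a disability context from the paper's lens."""
    name: str
    category: str  # "permanent" | "temporary" | "situational"
    barrier: str   # e.g. "cannot rely on audio"

# Hypothetical personas spanning the three categories in the table above.
PERSONAS = [
    Persona("Deaf student", "permanent", "cannot rely on audio"),
    Persona("Student recovering from ear surgery", "temporary", "cannot rely on audio"),
    Persona("Student in a loud classroom", "situational", "cannot rely on audio"),
]

def stress_test(feature_supports: dict[str, bool]) -> list[str]:
    """Return personas a feature fails; the only check here is a non-audio fallback."""
    failures = []
    for p in PERSONAS:
        if p.barrier == "cannot rely on audio" and not feature_supports.get("captions", False):
            failures.append(f"{p.name} ({p.category}): needs a non-audio alternative")
    return failures

# Example: a video explainer shipped without captions fails all three personas.
print(stress_test({"captions": False}))
```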

Spotlight. A Santa Clara research-practice partnership illustrates what embedded co-design looks like in a district: inclusive lesson design, paraeducator training, teacher residencies, and an open-access literacy tool (ROAR) all folded into one learning ecosystem.

2) “Designing Learning for the Edges,” or: erase the edges

“Edges” isn’t a place; it’s where one-size-fits-most breaks down. The chapter argues that learner variability is the rule, not an exception, and pushes schools toward Universal Design for Learning (UDL) – multiple ways to engage, represent, and express – updated in 2024 with a stronger emphasis on identity, agency, and barrier removal. The method here is change management: assemble advisory boards, run focus groups, synthesize research, publish a draft, then update with feedback. CAST’s new UDL Guidelines 3.0 are used as a procedural model for systemic shifts.

In classroom terms: AI can serve as a design assistant – scaffold tasks, personalize pacing, flag patterns, and surface options. But the warning lights flash on bias and privacy; teams must set ethical baselines, not bolt them on.

3) “Special Education and IEPs,” the place where law meets workload

Half a century after IDEA, IEPs remain the legal backbone for supporting students with disabilities. The friction isn’t purpose; it’s capacity. The paper describes a familiar split: special educators swim in compliance and caseloads; general educators juggle content coverage with minimal training on accommodations. It’s a coordination problem, multiplied by time.

What AI can do here. Draft the routine parts, keep documents current, aggregate data across tools, monitor progress, and propose evidence-aligned goals – while keeping humans in the loop, because IEPs must be individualized, not copied from a goal bank. The paper cites tools like Polaris for step-by-step IEP support, and it spotlights “Kai,” an AI-assisted reading-comprehension intervention that teaches students to craft gist statements and get immediate corrective feedback.
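
To make the human-in-the-loop constraint concrete, here is a minimal sketch of how an IEP-drafting assistant could be structured so that a generated goal stays a proposal until an educator approves or edits it. The data model and function names are hypothetical, not drawn from Polaris, Kai, or the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ProposedGoal:
    text: str
    evidence: list[str]    # data points the suggestion is based on
    status: str = "draft"  # "draft" -> "approved" | "revised" | "rejected"

@dataclass
class IEPSection:
    student_id: str
    goals: list[ProposedGoal] = field(default_factory=list)

def propose_goal(section: IEPSection, progress_notes: list[str]) -> ProposedGoal:
    """Draft a goal from progress data; the draft never becomes final on its own."""
    goal = ProposedGoal(
        text=f"Draft goal based on {len(progress_notes)} recent progress notes (placeholder).",
        evidence=progress_notes,
    )
    section.goals.append(goal)
    return goal

def educator_review(goal: ProposedGoal, approve: bool, edited_text: str | None = None) -> None:
    """Only a human reviewer moves a goal out of draft; edits always win over the draft."""
    if edited_text:
        goal.text = edited_text
        goal.status = "revised"
    else:
        goal.status = "approved" if approve else "rejected"

# Example: the educator rewrites the draft so the goal stays individualized.
section = IEPSection(student_id="demo-001")
goal = propose_goal(section, ["note A", "note B"])
educator_review(goal, approve=False, edited_text="Educator-written, individualized goal text.")
print(goal.status)  # -> "revised"
```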

Bottom line. Special education isn’t going away; specialization matters. The goal is tighter collaboration – backed by adaptive tooling – so general and special educators can share data, tactics, and timing, then adjust instruction with less administrative drag.

4) “AI in Needs Identification and Mediation Design,” catch earlier, label smarter

The chapter argues for early and accurate identification, noting wide variation in eligibility criteria and screeners across states and countries. Mislabeling – both under- and over-identification – has real costs. AI can help by using ambient signals, adaptive screeners, and continuous classroom data to flag risks. But those flags must be reviewed by trained humans, and the models must be trained on diverse data to avoid simply codifying old inequities.
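
A minimal sketch of that flag-then-review pattern follows; the weights, threshold, and signals are invented for illustration, and the key point is that the output is a referral for human review rather than a label.

```python
def screen_student(screener_score: float, classroom_signals: list[float],
                   risk_threshold: float = 0.7) -> dict:
    """Combine a screener score with classroom signals into a risk estimate.

    The output is deliberately a referral for human review, never a label:
    a trained educator or specialist decides what, if anything, happens next.
    """
    avg_signal = sum(classroom_signals) / max(len(classroom_signals), 1)
    combined = 0.6 * screener_score + 0.4 * avg_signal
    return {
        "risk_estimate": round(combined, 2),
        "action": "refer_for_human_review" if combined >= risk_threshold else "continue_monitoring",
        "decided_by_model": False,  # eligibility and placement are never decided here
    }

# Example: a high screener score plus weaker classroom signals -> referral, not a label.
print(screen_student(0.85, [0.5, 0.6]))
```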

Spotlight. ROAR (Rapid Online Assessment of Reading) is open-access, gamified, validated on 20,000+ students, and approved as a dyslexia screener in California. The team is adding CAT (computerized adaptive testing) and generative item creation with an explicit bias-reduction lens – exactly the kind of measured, evidence-first build the paper promotes.
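
For readers unfamiliar with CAT, the core idea is to present the next item where it will be most informative given the current ability estimate, then update that estimate. The toy sketch below uses a 1-parameter (Rasch) model and is a generic illustration, not ROAR's implementation.

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Probability of a correct response under a 1-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta: float, item_bank: dict[str, float], asked: set[str]) -> str:
    """Pick the unasked item with maximum Fisher information at the current ability estimate."""
    def info(b: float) -> float:
        p = rasch_p(theta, b)
        return p * (1 - p)  # information peaks where the item difficulty matches ability
    candidates = {name: b for name, b in item_bank.items() if name not in asked}
    return max(candidates, key=lambda name: info(candidates[name]))

# Toy run: with an ability estimate of 0.0, the closest-difficulty item is chosen.
bank = {"easy": -1.5, "medium": 0.1, "hard": 1.8}
print(next_item(theta=0.0, item_bank=bank, asked=set()))  # -> "medium"
```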

5) “Social and Emotional Well-Being with AI,” beyond grades

Learning is social, emotional, and relational. The paper stresses the risk of over-automation – tools that “do for” students can rob them of agency. Used well, AI can coach reflection, model conversations, and personalize supports that build connection and resilience. The through-line: keep the human relationship in the center, and design AI to amplify it, not replace it.

6) “AI as Assistive Technology,” expand agency, not dependency

This chapter tracks a familiar pattern in accessibility history: innovations built for a specific barrier (closed captions, speech recognition) spill over to mainstream use. Today’s AI can upgrade existing AT – smarter readers, flexible AAC, context-aware prompts – and should be evaluated by the same yardsticks: Does it increase user control? Does it reduce friction in real tasks? Does it protect data that, if leaked, could harm the user? The paper points back to UDL and co-design as the non-negotiables.

7) “AI in Career-Long Teacher Education,” build time, not just tools

Teachers repeatedly raised the same constraint: time. The paper doesn’t hand-wave; it treats time as a design variable. It recommends job-embedded learning, hands-on tinkering, and AI that actually gives time back by removing low-value admin tasks so educators can reinvest minutes in planning, feedback, and collaboration. The stance is pragmatic: teachers don’t need another dashboard; they need fewer tabs and faster paths to adaptive materials aligned to IEPs.

8) “AI and the Workforce,” pathways, accommodations, and entrepreneurship

The workforce chapter looks past school walls. Two points stand out.

  1. Accommodations are about function, not products. People aren’t “entitled to a tool”; they’re entitled to the function required to do the job. That framing gives employers flexibility and protects employee agency.
  2. Entrepreneurship matters. Backing founders with disabilities and AT ventures creates a positive employment cycle and spreads opportunity across families and generations.

Participant makeup at the symposium

Stakeholder group | Share of participants
Innovators | 19.8%
Stanford (internal) | 19.8%
Researchers | 17.2%
Educators | 16.4%
Funders | 9.5%
Ecosystem leaders | 9.5%
Students/Parents | 7.8%

The mix matters: design improves when the people affected by decisions sit next to those who fund, research, and ship products.

9) “AI, Interdependence, and Life Satisfaction,” re-framing the goal

The final chapter asks a sharp question: is independence the goal – or interdependence? For many, dignity and satisfaction come from connection – people, tools, and communities woven together. The paper urges researchers to define and measure life satisfaction with affected communities, and to evaluate AI tools against those broader outcomes, not just short-term test score deltas.

The hackathon: five prototypes worth watching

The companion hackathon compressed twelve hours of co-design into 22 prototypes and five winners:

  • Empower IEP (Grand Prize). Summarizes and personalizes complex IEP docs; records/transcribes meetings across languages and reading levels; guides families to advocate effectively.
  • Maestra. A “conductor” that coordinates multiple AI tools in a classroom – triaging tasks from facilities requests to real-time engagement tips – via voice or text.
  • FeelLink. Scenario-based social skills and EI training for young adults, personalized to age, personality, and interests.
  • BCBAwesome. A real-time AI coach to support behavior technicians inside ABA therapy, with a clear line: augment human BCBAs, don’t replace them.
  • Roleability. A role-playing system that builds empathy, confidence, and adaptability across real-world social scenarios.

The hackathon isn’t presented as a source of end-state products; it’s a model for how to convene, prototype, and learn fast – complete with a Toolkit in the addendum to help others replicate the format.

Policy context: fast-moving, uneven, and incomplete

The paper situates the work amid global policy churn. It notes:

  • The EU leans on accessibility in ethics guidance;
  • Australia frames responsible, ethical design;
  • The African Union calls for equitable implementation;
  • RIINEE (eight Latin American nations + Spain) collaborates on inclusive innovation;
  • China mandates AI instruction in compulsory education;
  • UNICEF stresses children’s rights, privacy, and safety;
  • In the U.S., 26 states had issued education AI guidance by end-2024, with a 2025 federal executive order underlining national readiness.

The takeaway isn’t “policy will save us.” It’s: governance is part of the build. The paper asks policymakers to set durable rules for privacy, transparency, and investment – and to convene the right people early, not after a backlash.

The conclusions

  1. Center lived experience. Co-design is not a nice-to-have; it’s a quality standard. Pay people for their expertise. Protect their data. Start early. Stay.
  2. Treat variability as default. Use UDL to remove barriers and give learners multiple ways in, through, and out of a task. Measure access and agency, not just completion.
  3. Fix the collaboration gap around IEPs. Use AI to reduce paperwork and surface evidence, but keep educators and families in the loop. Individualization remains a legal and ethical requirement.
  4. Identify earlier, with better data and human judgment. Pair ambient/classroom data with adaptive screeners and professional review. Build to strengths; avoid deficit-only framing; guard against tracking.
  5. Protect social-emotional development. Design for agency, reflection, and relationships. Don’t automate the very work that makes humans resilient.
  6. Shift teacher learning from “extra” to embedded. If a tool doesn’t give time back, it won’t last. Tie AI to immediate planning and feedback wins.
  7. Think beyond graduation. Align K-12, postsecondary, and employers around function-based accommodations and inclusive entrepreneurship.
  8. Aim for interdependence. Evaluate AI on its contribution to autonomy, community, and life satisfaction across a lifespan, not just near-term test scores.

The paper closes with a clear statement: the potential of AI appears when we design it with those it impacts, align it with universal design, and anchor it in durable governance and evidence. The pledge is not utopian; it’s procedural.

PIVOT+C

The recommendations compress into PIVOT+C:

  • Privacy. Embed ethical data practice from the first prototype; comply with education and health privacy regimes; explain what you collect and why.
  • Investment. Back co-designed, evidence-aligned products; use targeted grants and public-private funding to build durable capacity.
  • Variability. Design for the full range of ways people sense, process, and express.
  • Opportunity. Use AI to open doors: tutoring, access, on-ramps to work, entrepreneurship.
  • Time. Remove administrative drag so teachers can teach and students can learn.
  • Co-design. Pay lived-experience experts; audit datasets; test in real contexts.

PIVOT+C at a glance

Letter | Focus | Example first step (from paper)
P | Privacy | Join an AI policy network; apply child-rights and education privacy guidance in product data flows
I | Investment | Use innovation grants and public-private funds to scale co-designed tools
V | Variability | Align features to UDL 3.0 and a learner-variability model
O | Opportunity | Build multimodal, accessible interfaces that lower entry barriers
T | Time | Automate low-value tasks in IEPs, planning, and progress monitoring
+C | Co-design | Compensate youth and families; audit training data for representation gaps

Where the paper draws the line

  • Human in the loop stays. AI can draft, suggest, and monitor, but humans decide – especially in identification, placement, goals, and mediation.
  • IEPs are not templates. AI can speed admin work, but individualized means individualized.
  • Agency over automation. If a tool replaces student effort or teacher judgment wholesale, it drains the very muscles we’re trying to build.
  • Policy is part of design. Governance should not lag builds by years; design with policy partners now.

Readability notes for busy teams

  • Use the stakeholder maps. The symposium wasn’t a lecture; it was a cross-functional sprint. The participant distribution table above is a quick way to show “we built this with the right people in the room.”
  • Lift the PIVOT+C table into planning decks. It keeps discussions grounded and moves meetings from “what if” to “how next.”
  • Borrow the CAST playbook for UDL. Advisory boards + focus groups + literature synthesis + public draft + revision. It’s a replicable method for any district or vendor planning an AI rollout.

Next steps if you build, teach, study, or regulate

  • Developers. Start with paid co-design groups. Audit training data for representation gaps. Measure not only accuracy but also access, agency, and time saved. Align UI to UDL 3.0. (A minimal metrics sketch follows this list.)
  • Researchers. Pair long-horizon studies with mixed methods; publish instruments and bias checks; define “life satisfaction” with the communities being measured.
  • Educators. Pilot AI where it gives minutes back right now – IEP updates, differentiated reading sets, feedback assistants – then re-spend that time on relationships and planning.
  • Policymakers. Convene cross-sector coalitions early; set privacy/transparency baselines; create targeted funds that privilege co-designed, evidence-aligned tools; don’t isolate K-12 from workforce and disability policy.
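
To ground the developer guidance above, here is a minimal sketch of reporting access, agency, and time saved alongside accuracy; the metric names and fields are my own, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class PilotMetrics:
    """Per-classroom pilot metrics beyond model accuracy (illustrative fields)."""
    accuracy: float                   # conventional model quality, 0-1
    minutes_returned_per_week: float  # admin time given back to the teacher
    access_gain: float                # share of learners who can now complete the task, 0-1
    agency_indicators: float          # e.g. rate of learner-initiated choices/reflections, 0-1

def pilot_summary(m: PilotMetrics) -> str:
    """Report all four dimensions side by side instead of accuracy alone."""
    return (f"accuracy={m.accuracy:.2f}, "
            f"time_returned={m.minutes_returned_per_week:.0f} min/week, "
            f"access_gain={m.access_gain:.2f}, agency={m.agency_indicators:.2f}")

print(pilot_summary(PilotMetrics(accuracy=0.91, minutes_returned_per_week=45,
                                 access_gain=0.18, agency_indicators=0.60)))
```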

Final takeaway

“Design a future with no boundaries” isn’t a slogan here; it’s a set of choices. Pay people for what they know about their own lives. Engineer for variability from the first wireframe. Keep humans in the loop where judgment and dignity are at stake. Measure time saved as rigorously as test scores gained. And write policy as you build, not after the fact. Do those things, and AI stops being another layer of friction – and starts feeling like possibility.

FAQ AI + Learning Differences, designing inclusive AI for real classrooms

Below you’ll find clear, SEO-friendly questions that use long-tail keywords and straight answers rooted in the white paper.

1) What is the purpose of the “AI + Learning Differences” Stanford white paper for inclusive education stakeholders?

The white paper captures insights from a Stanford working symposium and hackathon to guide developers, educators, researchers, and policymakers in building AI that expands access, agency, and time for learners with diverse needs across K-12 and beyond. It summarizes nine discussion areas and condenses recommendations into the PIVOT+C framework.

2) Who wrote the “AI + Learning Differences” white paper and why does author expertise matter for special education AI?

Principal authors are Nneka J. McGee (J.D., Ed.D.), Elizabeth Kozleski (Ed.D.), Christopher J. Lemons (Ph.D.), and Isabelle C. Hau. Their combined backgrounds span classroom leadership, special education research, systems change, and cross-sector learning innovation, which grounds the report in practice, evidence, and policy.

3) What does the PIVOT+C framework mean in practical terms for AI in inclusive classrooms?

PIVOT+C stands for Privacy, Investment, Variability, Opportunity, Time + Co-design. It calls for protecting student data, funding evidence-aligned tools, designing for learner variability (UDL), opening opportunities through accessible AI, giving time back to teachers, and compensating lived-experience experts who co-design solutions.

4) How does the white paper define learner variability for AI-powered personalized learning in schools?

Learner variability means every student brings a unique mix of strengths and challenges; design should reflect this from the start. The paper highlights research framing variability as default and points to Digital Promise’s Learner Variability Project as a concrete model for practice.

5) How should schools apply Universal Design for Learning (UDL 3.0) when adopting AI for diverse learners?

Use UDL 3.0 to provide multiple means of engagement, representation, and expression, with a new emphasis on identity, barrier removal, and learner-centered language. CAST’s 2024 update shows how to manage change (advisory boards, 40 focus groups, systematic reviews, draft + feedback) – a roadmap districts can copy when they roll out AI.

6) What are the white paper’s recommendations for AI in IEP modernization without losing individualization?

Use AI to draft routine sections, organize data, speed progress monitoring, and pre-fill paperwork, while keeping educators and families in the loop. Avoid stock goal banks; IEPs must remain individualized and legally sound. The paper cites Polaris as a guided collaboration example.

7) How can AI support families navigating the IEP process in special education?

AI chatbots and assistive tools can guide families through referrals, evaluations, placement, and meetings, delivering just-in-time explanations while preserving privacy and human decision-making.

8) What does the report say about early identification of learning differences with AI in K-12 classrooms?

It promotes earlier, more accurate identification by pairing ambient data and routine classroom interactions with professional review to prevent over-labeling and tracking. Models must be trained on diverse data. The ROAR platform is featured as a validated, open-access reading screener used across U.S. schools.

9) What is ROAR (Rapid Online Assessment of Reading) and how does it integrate AI for equitable literacy screening?

ROAR delivers gamified, proctor-free reading assessments validated with 20,000+ students in 20+ states and is approved by California as a dyslexia screener. Next steps include adaptive testing (CAT) and generative item creation with a bias-reduction lens.

10) How does the white paper balance AI efficiency with the need to keep “humans in the loop” in special education?

It states that humans should remain central in identification and support decisions. AI can generate personalized materials and analysis, but educators and families decide interventions and placement.

11) What guidance does the white paper give on privacy, data protection, and ethics in AI for schools?

Treat privacy as foundational. Safeguard sensitive educational and health data, assess whether AI is necessary in each context, and evaluate tools on student outcomes, teacher efficiency, and effectiveness.

12) How can AI help teachers save time while improving differentiation for learners with IEPs and 504 plans?

Automate low-value admin tasks, pre-structure progress notes, surface evidence-based interventions, and dynamically adapt materials to IEP goals. Then reinvest saved time in planning, feedback, and collaboration.

13) What is the role of social-emotional learning (SEL) and well-being in AI tools for neurodiverse students?

Design AI to amplify reflection, relationship-building, and agency – not replace human connection. Avoid over-automation that reduces student effort or voice. The paper threads SEL into identification, instruction, and tool evaluation.

14) What examples does the white paper give for AI-enhanced reading comprehension support for students with IDD?

It spotlights Kai, which adapts an evidence-based comprehension intervention: pre-teach key knowledge, have students write 12-word “gist” statements paragraph by paragraph, and deliver immediate corrective feedback.
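
As a rough illustration of that gist-statement loop (the paper does not describe Kai's feedback logic at this level of detail), an assistant could check gist length and coverage of pre-taught key terms and return immediate corrective prompts; everything below is hypothetical.

```python
def gist_feedback(gist: str, key_terms: list[str], target_words: int = 12) -> list[str]:
    """Return immediate, corrective prompts for a student's gist statement.

    Illustrative checks only: roughly `target_words` words, plus coverage of
    the key terms pre-taught for the paragraph.
    """
    feedback = []
    if len(gist.split()) > target_words + 3:
        feedback.append("Try to say it in about 12 words - what is the most important idea?")
    missing = [t for t in key_terms if t.lower() not in gist.lower()]
    if missing:
        feedback.append(f"Your gist doesn't mention: {', '.join(missing)}. Is one of these the main idea?")
    return feedback or ["Nice gist - short, and it captures the key idea."]

# Example: the gist misses the pre-taught term, so the student gets a targeted prompt.
print(gist_feedback("Plants make food from sunlight", ["photosynthesis"]))
```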

15) How should developers apply co-design with lived experience when building accessible AI for classrooms?

Compensate students, families, and educators with lived experience; audit datasets for representation; test in authentic contexts; and measure access, agency, and time saved – not only accuracy. Co-design is a quality standard, not a late add-on.

16) What does the white paper advise for researchers measuring AI’s impact on inclusive education and life satisfaction?

Collect data from broad, diverse user groups. Define life satisfaction with the community, and evaluate tools jointly with developers using mixed methods that capture classroom utility and learner outcomes.

17) How does the report connect K-12 inclusive AI with workforce accommodations and employment outcomes?

It frames accommodations by function, not brand-name tools, and points to entrepreneurship and inclusive hiring as pathways that multiply benefits for individuals and families. (This sits within the paper’s broader recommendations and stakeholder guidance.)

18) What long-tail steps can policymakers take to implement responsible AI for special education and disability rights in schools?

Convene cross-sector teams early, codify privacy and transparency baselines, fund co-designed pilots that align with UDL, and ensure legal protections keep pace with AI-enabled identification and services.

19) How should districts structure professional learning so teachers can adopt AI for inclusive lesson design without extra workload?

Build job-embedded learning: hands-on tinkering with tools that reduce paperwork, accelerate differentiation to IEP goals, and collapse tabs and dashboards into simpler flows teachers actually use daily.

20) What concrete change-management model does the paper recommend for rolling out UDL-aligned AI initiatives?

Borrow CAST’s UDL 3.0 process: set up advisory boards (including youth), run focus groups, synthesize research, release a public draft, collect feedback, and revise before scale-up.

21) What classroom-ready use cases does the paper endorse for responsible AI in inclusive learning environments?

  • Draft and maintain IEP artifacts while preserving individualization.
  • Screen reading risks with validated tools like ROAR, followed by human review.
  • Generate differentiated materials tied to IEP goals and UDL checkpoints.
  • Provide SEL prompts and coaching without replacing human relationships.

22) What risks does the white paper flag about AI-driven screening and placement for students with disabilities?

There is a risk of tracking, over-labeling, and stigmatization if systems focus on deficits, use narrow data, or skip human review. The paper urges strength-based assessment, varied data sources, and diverse training data.

23) How can school systems measure whether inclusive AI improves access, agency, and instructional time?

Evaluate tools on: minutes saved on admin work; quality of differentiated outputs aligned to UDL and IEPs; learner agency indicators (choice, voice, reflection); and outcome gains tied to validated assessments.

24) What did the Stanford hackathon contribute to practical AI solutions for learning differences?

It produced 22 prototypes, including winners such as Empower IEP (family-facing IEP summarization and multilingual supports), Maestra (AI conductor for classroom workflows), and FeelLink (scenario-based social skills). These illustrate co-designed, time-saving directions rather than final products.

25) How should edtech teams write data-handling policies for AI tools serving students with learning differences?

Be explicit about what data you collect and why, restrict access to sensitive education and health records, align with child-rights and education privacy rules, and explain consent and retention in plain language. Test policies with families and educators.
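
One hedged way to enforce that explicitness is to keep the policy as structured data that can be rendered into plain language for families; the declaration below is an invented example, not wording from the paper.

```python
# Hypothetical, minimal data-handling declaration for an inclusive-learning tool.
DATA_POLICY = {
    "collected": {
        "reading_responses": "to adapt practice items to the student's level",
        "iep_goals": "to align generated materials with documented goals",
    },
    "not_collected": ["health records outside the IEP", "location data"],
    "retention": "deleted 12 months after the student leaves the program",
    "consent": "guardian opt-in before any data is stored",
    "access": ["the student's educators", "the student's guardians"],
}

def plain_language_summary(policy: dict) -> str:
    """Render the declaration as a short statement families can actually read."""
    lines = [f"We collect {k.replace('_', ' ')} {v}." for k, v in policy["collected"].items()]
    lines.append(f"We never collect: {', '.join(policy['not_collected'])}.")
    lines.append(f"Data is {policy['retention']}.")
    return " ".join(lines)

print(plain_language_summary(DATA_POLICY))
```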

26) What long-tail keyword actions can school leaders take to implement UDL-aligned AI for dyslexia screening and literacy support?

Pilot ROAR for efficient screening with human oversight; pair results with targeted interventions and tools like Kai for comprehension; track time saved, progress-monitoring fidelity, and student engagement gains.

27) How does the white paper connect inclusive AI to interdependence and life satisfaction for neurodiverse learners?

It reframes success around interdependence – people, tools, and communities working together over a lifespan – and asks researchers to co-define and measure life satisfaction with affected learners.

28) What long-tail guidance does the report give developers on multimodal design for accessibility and learner variability?

Build multimodal interfaces, integrate UDL, test with diverse users, and report metrics beyond accuracy (access increases, time returned to teachers, agency indicators). Start with compensated co-design and dataset audits.

29) How can districts avoid “AI first, pedagogy later” when deploying classroom AI for special education and inclusion?

Lead with pedagogy (UDL and IEP needs), choose AI that serves those designs, and phase rollout through CAST-style advisory processes. Keep human review in identification and placement.

30) What is the single best next step (PIVOT+C step) for each stakeholder to kick off responsible inclusive AI?

  • Developers: Set up paid co-design sessions and a data-representation audit.
  • Researchers: Co-define outcome measures with users, including life-satisfaction constructs.
  • Educators: Pilot one AI workflow that saves time on IEP-linked planning.
  • Policymakers: Convene a privacy + transparency working group with families, educators, and vendors.

Become a Sponsor

Our website is the heart of the mission of WINSS – it’s where we share updates, publish research, highlight community impact, and connect with supporters around the world. To keep this essential platform running, updated, and accessible, we rely on the generosity of supporters like you who believe in our work.

You can sponsor monthly or make a one-time contribution of any amount. If you run a company, please contact us at info@winssolutions.org.
