Finding #1
The Effort Crisis — Only 5 states earn a strong rating on student cognitive development
5 of 37 states
Only 5 of 37 scored jurisdictions (Massachusetts, Vermont, New Mexico, Utah, and West Virginia) earn a strong rating (8 or higher out of 10) on the Student Effort / Cognitive Development criterion, which measures whether a state's guidance protects what students should still do themselves before turning to AI. Vermont's January 2026 guidance is the only document in the study to name cognitive offloading directly. Most other states touch on student effort in passing, but do not treat it as a content standard. 62% of scored jurisdictions land at 6 or below on this criterion, making it the second-lowest-scoring of all 7 rubric categories. The guidance landscape treats AI as a tool to be allowed, restricted, or taught, without distinguishing which cognitive tasks AI should not replace.
Criterion: Student Effort / Cognitive Development (15% weight). Strong rating defined as a score of 8.0 or higher. Adapted from UNESCO Human Agency competency and TeachAI Principle #7.
Finding #2
The Policy Gap — 33 of 36 states issued voluntary guidance. Districts can ignore it.
Only 3 states
Among the 36 scored states, only three have enacted laws requiring schools to act: Ohio (HB 96, districts must adopt AI policies by July 1, 2026), Tennessee (SB 1711, mandatory district policies since March 2024), and Utah (HB 218 + HB 273, mandating AI literacy in a required digital skills course and integrating AI into K-12 CS standards). All three rank in the middle or bottom half of our study. Tennessee is the clearest illustration of the mandate paradox: the law compels every district to adopt a policy, but it does not specify what the policy must contain. The result is a compliance floor, not a content standard. Districts can meet the mandate by pasting boilerplate into a handbook without teaching AI literacy, protecting student thinking, or training a single teacher. Tennessee ranks near the bottom of the study precisely because the mandate drove adoption, not quality. Every other state's guidance is advisory. There is no enforcement mechanism: a district in Colorado, Georgia, or Michigan can read the state's AI guidance document and do nothing. The U.S. has no federal AI education standard, no minimum-hours requirement, and no national accountability structure.
† Idaho enacted SB 1227 (2026) mandating a statewide AI framework but had not published guidance as of our scoring date, keeping it among the 14 unscored states. New Jersey's AI literacy bill (A4352/S2862) had passed both chambers as of April 2026 but had not been signed into law.
Source: MultiState Insider legislative tracker, through April 9, 2026.
Finding #3
The Teacher Training Gap — 86% of early adopters offer real PD. Almost no one else does.
86% vs. <1.5%
Among the 183 publicly documented AI early-adopter districts in the CRPE Early Adopter Database, 86% offer sustained, multi-session professional development tied to implementation benchmarks. That sounds encouraging until you do the math: 183 districts is less than 1.5% of the approximately 13,000 U.S. school districts. The teacher training happening in the vanguard districts is not reaching the other 98.5%.
Source: CRPE 2025-26 Early Adopter Database. "Sustained, multi-session PD" defined per CRPE methodology (not one-time webinars).
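The "do the math" step above can be checked directly. A minimal sketch, using only the approximate figures cited in the finding (183 CRPE early-adopter districts, roughly 13,000 U.S. school districts):

```python
# Back-of-the-envelope check of the coverage gap described in Finding #3.
# Both figures are the approximations cited in the finding, not exact counts.
early_adopters = 183        # CRPE publicly documented early-adopter districts
total_districts = 13_000    # approximate number of U.S. school districts

share = early_adopters / total_districts
print(f"Early adopters: {share:.1%} of districts")  # ~1.4%, i.e. less than 1.5%
print(f"Not reached: {1 - share:.1%}")              # ~98.6%, "the other 98.5%"
```

The slight mismatch between 98.6% and the finding's "98.5%" comes from rounding the share up to 1.5% before subtracting; the substance of the gap is unchanged.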
Finding #4
The Student Gap — Districts track AI for staff. Rarely for students.
8% of districts
Within the CRPE deep-analysis subsample of 79 early adopters with full course and standards data, only 27% publicly share information about AI courses for students. Only 8% have adjusted learning standards to account for AI. Districts are building infrastructure and training teachers — but the evidence of that investment reaching students in structured, documented ways is thin.
Source: CRPE 2025-26 Early Adopter Database, subsample of 79 districts with full course/standards data.
Finding #5
The Surveillance Paradox — Schools use more AI to watch students than to teach them.
6M+ students monitored
AI-powered student surveillance is the dominant K-12 AI deployment in the United States — not instruction. Gaggle alone monitors roughly 6 million students across approximately 1,500 districts (Gaggle corporate data). GoGuardian was deployed by 7,000+ schools and districts as of 2021 (U.S. Senate Committee on the Judiciary investigation). Combined with Bark, Securly, and similar monitoring tools, AI-assisted student surveillance reaches tens of millions of students — a scale that dwarfs documented AI-for-instruction adoption. Schools are more comfortable deploying AI to watch students than to teach them.
Sources: Gaggle corporate disclosure; U.S. Senate Committee on the Judiciary, investigation into student surveillance tools, 2021.
Finding #6
The Bright Spots — Districts leading without waiting for their state.
183 documented
Iowa City Community School District built a required AI curriculum with no state guidance. Students in San Ramon Valley Unified (CA) built their own AI-powered study app. Students in Bullitt County (KY) built an anti-bullying AI tool. These 183 publicly documented early adopters show what's possible — but they also reveal how much depends on individual district leadership, budget, and initiative rather than systemic state policy.
Source: CRPE 2025-26 Early Adopter Database; district AI documentation verified through public sources.
Finding #7
The Stale Guidance Problem — Nearly half of states have not updated their primary AI guidance since 2024.
18 of 37 jurisdictions
The document dates in this study reflect when each state published its primary guidance document, not when we scored it. Our April 2026 verification sweep found that some states had issued meaningful updates (new versions, revised frameworks, supplementary toolkits) since original publication. After accounting for those updates, 4 jurisdictions have guidance from 2026 (Vermont, California, Kentucky, North Carolina), 15 have guidance from 2025, and 17 still have primary documents unchanged since 2024. One state, Oregon, has not issued new guidance since 2023. That means 18 of 37 scored jurisdictions (49%) are operating on documents that predate GPT-4o, Claude 3.5, DeepSeek, and a full year of rapid AI adoption in classrooms. The technology kept moving. Nearly half the states did not.
Source: individual state DOE documents as catalogued by aiforeducation.io; dates verified through direct DOE site review, April 2026.
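The jurisdiction tally behind the "18 of 37" headline can be verified with simple arithmetic. A minimal sketch, using the per-year counts reported in the finding:

```python
# Sanity check of the staleness tally in Finding #7.
# Keys are the year of each state's primary guidance document;
# values are the jurisdiction counts reported above.
by_year = {2026: 4, 2025: 15, 2024: 17, 2023: 1}

total = sum(by_year.values())
stale = by_year[2024] + by_year[2023]  # unchanged since 2024 or earlier

print(total)                   # 37 scored jurisdictions
print(stale)                   # 18
print(f"{stale / total:.0%}")  # 49%
```

The counts sum to the 37 scored jurisdictions, and the 2024-or-earlier group is 18, or 49% once rounded, matching the "nearly half" framing.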