The first comprehensive analysis of AI education policy in every U.S. state. All 50 states graded. 183 districts documented. One uncomfortable conclusion.
Scored on a 7-criterion rubric adapted from the TeachAI Toolkit (Code.org, ISTE, Khan Academy, WEF), the UNESCO AI Competency Framework (2024), and the AI4K12 Five Big Ideas (AAAI + CSTA). Rated by Kevin J. Roberts, M.A. and Henry Dan, B.S. (Data Science). Current as of April 2026. Full methodology ↓
"Schools are using AI to watch students. Most haven't figured out how to teach them with it. That's the wrong order." — Kevin J. Roberts, America's AI Report Card, April 2026
Your child used AI on their homework last night. Their teacher has no framework for it. Their state has almost certainly never asked the most important question: what should students still do themselves before they turn to AI?
This is not a future problem. It is happening in every classroom in America right now, without guidance, without standards, and without anyone responsible for the answer.
Seven criteria were scored. These two ranked dead last. They are also the two that matter most to what AI is doing to students right now — and neither one is on most states' radar.
Average score across all 37 scored jurisdictions. How to grade students in an AI world — what counts, what doesn't, how to know the difference — is the single most neglected topic in all of American AI education policy. 29 of 37 jurisdictions (78%) score 6 or below on this dimension.
If a student submits an AI-written essay, what are they being graded on? No state has a real answer. That's not a gap in technology policy. That's a gap in what we believe education is for.
Does state guidance address what students should still do themselves before turning to AI? Does it ask what happens to a student's brain when they outsource thinking too early? 28 of 37 jurisdictions (76%) score 6 or below on this dimension.
Vermont is the only state whose published guidance (January 23, 2026, 50-page document) explicitly names "cognitive offloading" as a risk and provides developmental guardrails for it. The other 36 scored jurisdictions never asked the question. The Effort Crisis was written because no one was.
This criterion is adapted from UNESCO's "Human Agency" competency and TeachAI Principle #7 (Prioritize Human Connection).
Every other criterion averaged 6.27 or higher. These two didn't.
The dimensions states find easiest to address — whether to ban or teach AI, whether to mention equity — score significantly better than the dimensions that require the hardest conversations about what learning is actually for.
36 states published formal AI education guidance and were scored on a 7-criterion weighted rubric (1–10). 14 states have not published guidance and are categorized by tier. 36 + 14 = 50 U.S. states. Puerto Rico is also included as a scored U.S. territory. A score of 7.0+ means a state is actively teaching students to think with AI. Below 6.0 means guidance is primarily focused on restriction and compliance.
36 states scored + 14 states without guidance = 50 U.S. states
Puerto Rico is also included above as a scored U.S. territory. The 14 U.S. states below have not published formal guidance.
⏳ Legislation Passed — Guidance Pending
Idaho — SB 1227 signed, guidance due July 2026 · Iowa — bill moving, deadline 2028 · Texas — Responsible AI Governance Act passed
🔄 Emerging Activity — No Published DOE Guidance
Arkansas, Florida, Illinois, Kansas, Maryland, Nebraska, New Hampshire, New York, South Carolina, South Dakota — task forces, city-level guidance, university frameworks, or coalition documents. No formal state DOE document.
⚠️ Safety Legislation Only — No Educational Framework
Pennsylvania — passed laws protecting students from AI deepfakes and harmful chatbots. No guidance on how to teach students to use AI.
The National Picture — All 50 States + Puerto Rico
All 50 States + Puerto Rico — Alphabetical
Find your state. Scored states show their rank and score. Unscored states show their status.
All 50 states + Puerto Rico. Scored states show their rank and score. ⏳ = legislation passed, guidance pending. 🔄 = emerging activity, no published DOE document. ⚠️ = safety legislation only, no educational framework.
Vermont ranks #1 because it is the only state to explicitly address cognitive offloading — the risk that students outsource thinking to AI before building the skills they need.
address what students should still do themselves
76% of scored jurisdictions score 6 or below on this dimension — the 2nd lowest of all 7 criteria scored.
No federal framework governs what state AI guidance must include. Ohio (HB 96, July 2025) and Tennessee (SB 1711) are the only two states to enact legislation requiring school districts to adopt AI policies. Ohio's mandate takes effect July 1, 2026. New Jersey's Office of Innovation has not yet released formal, scorable guidance.
issued guidance voluntarily — districts can ignore it
86% of CRPE early-adopter districts offer teacher AI professional development (defined as sustained, multi-session PD tied to implementation benchmarks — not one-time webinars). The 183 documented adopters represent less than 1.5% of ~13,000 U.S. districts.
of U.S. districts tracking AI strategies systematically
Within the CRPE deep-analysis subsample (79 of 183 adopters with full course/standards data), only 27% share information about AI courses for students. Only 8% have adjusted learning standards to account for AI.
of districts have adjusted learning standards for AI
AI-powered student surveillance is the dominant K-12 AI deployment — not instruction. Gaggle alone monitors roughly 6 million students across ~1,500 districts (Gaggle corporate data). GoGuardian was used by 7,000+ schools and districts as of 2021 (U.S. Senate investigation). Combined with Bark, Securly, and similar tools, AI-assisted monitoring reaches tens of millions of students — a scale that dwarfs AI-for-instruction adoption. Schools are more comfortable using AI to watch students than to teach them.
districts use Gaggle alone to monitor students, dwarfing AI-for-instruction adoption
Iowa City built required AI curriculum with no state guidance. Students at San Ramon Valley built their own AI study app. Bullitt County students built an anti-bullying AI tool.
leading while the other 99% figure it out
Home to Seckinger High School, the nation's first AI-themed high school. Built a K–12 AI Learning Continuum and is expanding districtwide. Two years ahead of most peers.
Iowa has no state AI guidance. Iowa City didn't wait. The school board required AI curriculum districtwide and added quarter-long AI electives for 7th and 8th graders.
All students in grades 7–12 are required to take an AI ethics course, the most extensive mandatory student AI education program identified in this study.
Built a secure platform where students access ChatGPT, Gemini, and Claude, configured to follow Washington State guidelines and with student identifiers removed. 2025 AWS Champions Award.
Students built an anti-bullying AI counseling tool and a school-wide homework assistant. The most student-driven AI district in the CRPE database.
AI assistant Sofia answers parent questions in 100+ languages. Board officially declared SAUSD an AI Forward District in April 2024.
Launched the "Ed" AI chatbot in spring 2024. Its vendor, AllHere, collapsed in June 2024, and a whistleblower reported that student data was misused. The largest district AI failure in the country.
NC ranks #4 nationally (7.7). Charlotte-Mecklenburg Schools (CMS) still bans ChatGPT on teacher devices and plans to lock it on take-home devices. The district also deploys Evolv AI metal detectors and Gaggle surveillance.
Additional Districts Referenced in The Effort Crisis
The following districts were identified through independent research for The Effort Crisis and are not part of the CRPE Early Adopter Database used for this study's primary district analysis.
Mayor Michelle Wu announced BPS would become the first major city school district to require AI literacy for graduation. Backed by a million-dollar seed grant, the program trains an AI Ambassador at each high school and integrates AI literacy across subjects with a focus on ethics and critical thinking. A bold vision, though proposed alongside cuts to 265 classroom teachers.
One of the most practical AI frameworks in the country. Teachers label every assignment with a color: red means no AI, yellow means AI for specific purposes only, green means AI is encouraged. The first public high school to partner with OpenAI. Teachers know exactly what to expect. So do students.
Created a detailed AI framework covering professional development, student guidelines, and responsible use standards, with meaningful parent and teacher engagement built into the process from the start.
All 50 states were analyzed. 36 states and Puerto Rico had published formal AI education guidance and were scored on a 7-criterion weighted rubric (1–10). 14 states had published nothing and were categorized by tier rather than scored. All data sourced from publicly available documents compiled by aiforeducation.io, TeachAI, and individual state DOE websites. District data from CRPE's 2025–26 Early Adopter Database. Q2 2026 edition published April 2026. Next update: Q3 2026 (July).
Does guidance encourage students to use and learn with AI, or focus on restriction?
Does guidance develop student judgment about when and how to use AI?
Does guidance address what students should still do themselves?
Does guidance treat AI literacy as a curriculum goal with standards?
Does guidance include specific, mandatory professional development plans?
Does guidance address how assessment changes in the presence of AI?
Does guidance address digital divides and equitable access to AI tools?
As of April 2026, 14 states had not published any formal AI education guidance document from their state department of education. They were not scored because there was nothing to score. The absence itself is a finding — but not all absence looks the same.
Legislation Passed, Guidance Pending — Idaho, Iowa, Texas
These states passed laws requiring AI education policy. Idaho's SB 1227 mandates guidance by July 2026. Iowa's bill sets a 2028 district deadline. Texas passed the Responsible AI Governance Act with an advisory council. All three will be scored when formal guidance is published.
Emerging Activity — Arkansas, Florida, Illinois, Kansas, Maryland, Nebraska, New Hampshire, New York, South Carolina, South Dakota
These states have real activity underway — task forces, city-level guidance, university frameworks, or coalition documents — but none have published a formal state DOE document. New York's NYC guidance covers the country's largest district but not the state. Maryland's MSDE is actively developing guidance. Florida's University of Florida-led task force has published resources, but official DOE adoption is still pending.
Safety Legislation Only — Pennsylvania
Pennsylvania passed Act 125 criminalizing AI-generated deepfakes and chatbot safety legislation protecting minors. But there is no guidance for how students should use AI as a learning tool. The state has answered the question of how to protect students from AI. It has not answered the question of how to prepare them for it.
This study is updated quarterly. As states publish formal guidance documents, they will be scored and added to the rankings. The Q3 2026 update (July) will reflect guidance published through June 30, 2026.
Full dataset, rubric, and state-by-state breakdowns available upon request. Request the data →
Methodology
Transparency is not a footnote. Everything below is public on purpose — including the limitations.
Framework Citation — What We Adapted
The 7-criterion scoring rubric is adapted from three established frameworks:
Each of our 7 criteria maps to specific TeachAI principles and UNESCO competency domains. The full crosswalk is published in the Methodology sheet of our downloadable dataset.
Rubric Ownership & Limitations — What We Own
Scoring was conducted by a two-person team: Kevin J. Roberts, M.A. (KJR Academy / The AI Edge) and Henry Dan, B.S. in Data Science (The AI Edge). As a small-team study, we flag the small number of raters as a known limitation.
Mitigation:
Data Vintage — Current as of April 2026
All 37 jurisdictions were scored using a two-pass process. The first pass reviewed formal AI education guidance documents published through October 2025 (sourced from aiforeducation.io and state DOE sites). The second pass, a full landscape sweep conducted in April 2026, searched every jurisdiction for DOE policy updates, school board association policies, AI literacy legislation, and grant programs. Scores were adjusted where new evidence warranted it. 17 of the 37 jurisdictions received score changes based on the April 2026 sweep.
States with significant score changes after the April 2026 sweep:
Next update: Q3 2026 (July 2026). If your state or district has published new AI guidance, send it to kevin@kevinjroberts.net and it will be incorporated.
Global Context
The United States leads the world in AI innovation. It does not lead the world in AI education. That gap is growing.
200M+
Chinese students now receive mandatory AI education
As of September 2025, China requires AI instruction in all primary and secondary schools. Starting at age 6. Minimum 8 class hours per year. The U.S. has 37 jurisdictions with guidance and 2 states with mandates. China made it mandatory for 200 million students overnight.
27
EU countries now require AI literacy by law
Article 4 of the EU AI Act, effective February 2025, requires any organization deploying AI systems to ensure sufficient AI literacy of staff. This is a binding legal requirement across all 27 EU member states. Schools must demonstrate compliance. The U.S. has no federal equivalent.
Age 9
Singapore starts AI curriculum in primary school
Singapore integrates AI into its national curriculum starting at Primary 4 (ages 9–10), with an ethics-first approach: Agency, Inclusivity, Fairness, Safety. Part of its EdTech Masterplan 2030. The U.S. has no national AI curriculum at any grade level.
All citizens
South Korea is making AI literacy universal
South Korea launched a nationwide AI education program in 2026 covering all citizens. Mandatory AI courses at all universities. AI as a required high school subject. The country is treating AI literacy as existential infrastructure, not an optional add-on.
The countries leading in AI education are not the ones with the most AI companies. They are the ones with the most deliberate, coordinated education strategy.
The OECD's Digital Education Outlook 2026 found that 36% of lower-secondary teachers across OECD countries used AI in their work, ranging from under 20% in France and Japan to 75% in Singapore and the UAE. The U.S. does not even have a reliable number because there is no national system tracking it.
Read the Full Global Comparison →

Kevin J. Roberts, M.A., is the author of The Effort Crisis: What Happens When We Hand Students a Shortcut Before They Build the Skill (April 2026), the book that identified the cognitive offloading problem this study measures. He is an academic coach, AI literacy educator, and co-founder of The AI Edge, a summer intensive that teaches students in grades 8–12 to use AI as a thinking tool rather than a thinking replacement.
Roberts has spent two decades studying the intersection of technology, cognition, and student behavior. His graduate research focused on the neuroscience of screen-based behavior, including the cerebral impact of excessive screen use on developing brains. He has presented Grand Rounds at Children's Hospital of Michigan (twice), spoken at 50+ conferences across the UK and Europe for organizations including CHADD, ADDA, ADHD Europe, and ADDISS, and appeared on ABC's 20/20, BBC Radio, and NPR affiliates. He is the author of six books including Cyber Junkie, Get Off That Game Now, Movers, Dreamers, and Risk-Takers, and Schindler's Gift.
Henry Dan, B.S. (Data Science), is co-founder of The AI Edge and co-rater for this study. A former Cambodian Math Olympiad team member and math coach, Dan brings a machine learning background to the scoring methodology. He is fluent in Khmer, English, and Mandarin.
What Makes This Study Different
This is the first study to score state AI education guidance on a weighted rubric, not just catalog it. The 7-criterion rubric is adapted from established frameworks (see Methodology below) and applied to all 37 U.S. jurisdictions with published guidance.
The cognitive offloading criterion — whether states address what students should still do themselves before turning to AI — is the element most closely tied to Roberts' research for The Effort Crisis. Only Vermont's guidance (January 2026) names it explicitly.
The surveillance paradox finding — districts deploying AI to monitor students while restricting AI in classrooms — emerges from cross-referencing state-level policy with verifiable aggregate deployment data (Gaggle, GoGuardian, Securly).
Press inquiries, interview requests, dataset access: kevin@kevinjroberts.net · 248-867-3591
If this research was useful, please share it.
The more people who see these rankings, the harder it becomes for states to ignore the question.