NYC Public Schools' AI Guidelines: Glows and Grows
The nation's largest school district offers a fair start to answering the AI question for 75,000 teachers; what matters most is what the district does next.
There’s a line early in the new New York City Public Schools (NYCPS) Guidance on Artificial Intelligence that stopped me cold: “We build relationships no algorithm can replicate. We see growth before it appears in data. We see promise where numbers fall short.”
That’s a sentence written by someone who has been in a classroom. And it matters, because this document, released in March 2026 to govern how AI is used across the largest public school system in the country, will shape the daily reality of nearly a million students and 75,000 teachers. Whether it shapes that reality well depends on whether it lives up to moments like that one.
I’ve spent the past several months exploring the global landscape of AI education frameworks: from UNESCO’s competency frameworks to the OECD’s AILit initiative, from China’s national mandate to Finland’s trust-based model, from Digital Promise’s equity-centered literacy framework to AI4K12’s Five Big Ideas. I read the NYCPS document with all of that context in mind.
Before I share my impressions, let me underscore that the AI-in-K-12 space is both crowded and at times a bit frenetic. Keeping pace with every new framework or model or white paper is exceptionally challenging. What I offer is the best of what I have been able to see, understand, and wonder. When it comes to NYCPS, here is what I have found.
The governance is strong. Genuinely strong.
The Traffic Light framework (Red for prohibited uses, Yellow for professional judgment, Green for approved) is one of the clearest, most actionable risk frameworks I’ve seen at any level, anywhere. It names what AI will never be allowed to do before it names what AI can do: no automated decisions about student placement, discipline, or graduation; no AI-generated IEPs; no surveillance; no AI as substitute for counseling. Each prohibition is mapped to specific Chancellor’s Regulations and federal law, giving it legal teeth that many frameworks lack.
The document is also unusually honest. “We will not pretend to have answers we do not have,” it says. “We will not wait for certainty that may never come. And we will not let uncertainty become an excuse for inaction.” In a policy landscape full of documents that sometimes project more confidence than warranted, this epistemic humility is both refreshing and strategically wise. It positions the system as a learning organization, not an authority handing down tablets.
And the equity framing is structural, not decorative. The document begins with three specific students: the fourth grader whose reading lags behind her curiosity, the multilingual learner navigating two languages, the student with a disability whose classroom lacks the right tools. That’s not an abstraction. That’s a purposeful design constraint. The text consistently argues that the students who depend most on public schools are the ones most affected when the system’s AI governance fails, and the most empowered when it succeeds.
But governance without pedagogy falls short.
Here’s where I start to wonder. The NYCPS guidance is primarily a regulatory document. It tells educators what they’re allowed to do and what they’re not. What it doesn’t do, at least not yet, is tell them how to teach well in an AI-mediated environment, or help students become critical, thoughtful users of AI rather than supervised consumers of it.
Student use of AI sits in the Yellow category with strikingly thin guidance: “Students may use AI for research, exploration, and creative projects. Educator guidance, critical evaluation of outputs, and age-appropriate context are required.” For a system of nearly one million students who are already using AI extensively outside of school, not to mention the tens of thousands of teachers who are already using AI in their practice, that’s not enough. There’s no framework for what responsible student AI use looks like in practice, no scaffolding by developmental stage, and no guidance for how students should think critically about AI’s outputs, biases, and embedded assumptions.
There is also little or no grade-band specificity. A kindergarten teacher and a twelfth-grade AP teacher receive the same document. Compare Vermont’s framework, which specifies no chatbots in PreK–2, curriculum-embedded AI in 3–5, structured educational chatbots in 6–8, and broader fluency in 9–12. Or China’s tiered developmental approach. The NYCPS document acknowledges this gap and defers it to the June 2026 Playbook, along with the AI literacy curriculum, the bias review criteria, the academic integrity guidance, and the outcomes framework. That’s a lot of weight for one Playbook to carry in three months.
Most strikingly, the document makes no reference I observed to existing national or international AI education frameworks. Not UNESCO’s teacher competency framework. Not the OECD/EC AILit Framework that will shape the widely regarded PISA 2029. Not Digital Promise’s AI Literacy Framework, with its insistence that understanding and evaluating AI must precede using it. Not AI4K12’s Five Big Ideas, which offer exactly the kind of grade-band progressions the document says it needs. Not aiEDU’s AI Readiness Framework, which has already synthesized all of these into a single architecture, complete with student competencies by grade band, educator competencies, and a school leader readiness rubric that addresses exactly the principal-as-instructional-leader role that determines whether policy becomes practice. NYCPS appears to be building from scratch when well-developed resources already exist.
What you see and what you don’t.
There’s a deeper problem, too. The entire framework assumes, like many other frameworks, that AI is something users choose to use. It’s a tool you evaluate, approve, and consciously bring into instruction. But increasingly, AI doesn’t arrive that way. It arrives sneakily, inside platforms teachers and students are already using: adaptive features in textbook software, AI-generated recommendations in learning management systems, algorithmically synthesized answers in search engines, smart suggestions in Google Docs. None of these announce themselves as AI tools. They’re just the platform doing what it does.

The governance process NYCPS describes, called ERMA, is built for discrete procurement decisions: a school identifies a need, submits a request, a vendor gets reviewed. It isn’t built for the quiet creep of AI capabilities into already-approved products through a software update no one flags. And the document’s commitment to transparency and explainability, the promise that families and educators should understand what AI tools do and why, depends on knowing that AI is operating in the first place. A parent can’t ask questions about an algorithm they don’t know exists. A teacher can’t exercise the professional judgment the document rightly demands if the AI is invisible to them.
No framework I have seen, anywhere, has adequately solved this problem, but NYCPS might be well positioned to be the first to name it: AI is not just a tool you adopt; it’s a functionality you’re probably already working with. And it is completely changing what it means to work with digital, well, anything. Notably, aiEDU’s framework comes closest to addressing this pedagogically: its 9–12 student competencies include evaluating AI tool use within larger technology ecosystems, identifying how AI tools connect with and impact other technologies students already use, and acting as a decision-maker around AI tool use rather than just a consumer. That’s the kind of critical capacity the NYCPS document gestures toward but doesn’t commit to.
So, what is the city to do?
NYC’s tension is familiar to anyone who has had the honor of working in this system. (I had that honor as a teacher from 2003 to 2009, at the city offices from 2009 to 2011, and as a consultant for many years after.) Here’s how the tension tends to go. The central office wants coherence; schools want autonomy; pedagogy has historically been a local decision. So how does one balance central and local control? And visible uses of AI with invisible ones?
Maybe the answer is in the room (and beyond).
One answer is already sitting in every school: the Danielson Group’s Framework for Teaching (FFT). Every NYC teacher knows it. It’s how their teaching is officially described, observed, and evaluated. An AI-informed interpretation of FFT could define what good teaching looks like when AI is part of the environment, not by adding new mandates, but by clarifying existing standards. What does Domain 1 planning look like when AI assists material development? What does a Domain 2 culture of learning look like when cognitive offloading is a risk? What does Domain 3 intellectual engagement look like when students have AI at their fingertips? These are pedagogical questions, not compliance questions. And they honor teacher judgment.
The other answer is to draw on the international community. This one might strike you as odd, so let me explain. The number of students and teachers in NYC schools is about the same as in Ireland (yes, the country). When NYC school officials are looking for insight and inspiration, it is fitting to turn both within (to other states and US districts) and without (to education ministries in other countries). Not only is NYC massive in the number of people its schools serve, but mayoral control gives the chancellor authority similar to what an education minister might have abroad. That means, for instance, that UNESCO’s three progression levels (Acquire, Deepen, Create) could structure professional development in NYC. Officials should look at other countries, including those that might be described as economic competitors.
The Playbook drafters at NYCPS should look closely at what aiEDU has built, and I have to imagine they already are. The AI Readiness Framework, now in its second version, informed by Burning Glass Institute labor market data and grounded in cognitive science research on how AI affects learning, offers something NYCPS needs urgently: an integrative architecture that connects student competencies, educator competencies, and school leader readiness into a coherent system. Its three student domains (Know Your Basics, Be a Critical Thinker, Lead with the Human Advantage) provide the developmental scaffolding the NYCPS document lacks, organized by K–5, 6–8, and 9–12 grade bands. Its educator domains run in parallel, positioning teachers not as compliance followers but as professionals who model the critical thinking and human judgment their students need. And its school leader rubric, with progression levels from Demonstrate Commitment through Invest and Implement to Deepen and Iterate, gives principals a roadmap for the instructional leadership that makes or breaks implementation in a system where 1,800 schools each operate with significant autonomy.
aiEDU’s framework isn’t perfect. Its “Human Advantage” framing leans toward workforce readiness, and it doesn’t fully engage the critical literacy questions about whose knowledge and power AI systems encode. But it has done the synthesis work that NYCPS hasn’t yet attempted, drawing on UNESCO, Digital Promise, AI4K12, and the OECD/EC framework to produce something genuinely usable. The June Playbook doesn’t need to start from zero. Much of what it needs already exists.
In sum: governance is necessary, but what kind of pedagogy will it serve?
That’s an inspiration question. And it’s the one the NYCPS document—smart, honest, and necessary as it is—hasn’t fully asked.
Yet.
Perhaps June’s Playbook will start there.