What's your opinion on AI in schools?
I've thought about it for twelve months, and now I've written about it.
Resolved: Teachers need to lead the way in AI instruction.
Across the past twelve months, I’ve often felt like Artificial Intelligence is all anyone – friends, colleagues, parents of students, people I meet at parties, bartenders, eyebrow technicians, medical assistants at the dentist’s office – wants to ask me about. As a result, it’s a conversation I often punt: I don’t enjoy discussing ideas about which I’m ignorant, and I’m generally not excited to hear the opinions of ignorant people. The fact is that most of us are ignorant about what constitutes Artificial Intelligence, how AI tools function, and how they are and will be used.
Unfortunately, from where I sit, that ignorance has terrifying consequences: it makes us either vulnerable to fearmongering or thoughtlessly bullish on technologies that have serious consequences for our creativity and humanity. I’ve spent some time learning about new technologies (especially generative AI), not only from the educators and technologists with whom I’ve worked on my school’s policy draft but also from friends who develop AI at various levels or use it in the workplace.
What’s your opinion on AI? In the field of education, I typically try to separate my personal opinion from my professional one. Here, I’ll focus on my professional opinion, which is that we have served and will best serve our students by taking a position of technological optimism.
We know that large swaths of students use Artificial Intelligence tools to produce work, and as a result, their ideas of what “originality” means look different from those of previous generations. But ideas of technology, work, and originality have been in flux across human history. It’s ahistorical to see the boom in generative AI as the first appearance of AI technology; it’s also ahistorical to see debates over the nature of originality as recent or merely postmodern.
Artificial Intelligence isn’t just a scary chatbot that your preteen thinks is her first boyfriend. It’s not even just the technology that remembers what you bought from Whole Foods last time and suggests it for your next online order. Microsoft Word’s spellcheck – in 1998 – was AI. Your Yahoo! search in 2005 used Artificial Intelligence to sort results by relevance. These applications of artificial intelligence were organic developments of technology’s increasingly efficient grind forward.

When I was in high school, librarians and teachers never allowed students to use only online primary and reference sources to complete a History essay, and they certainly did not endorse automatic citation generators; now, students learn to collect notes and citations in Noodletools, which generates their citations for them from cradle to grave, and you’d be hard-pressed to find a college student who’s even heard of microfilm. My parents ogled my sister’s first TI-89; now, my students use the online program Desmos to graph equations. Education’s relationship to technology is long and ongoing. Thousands of years ago, the written word changed the way we educate, transforming aural systems into visual ones, but we don’t think about that. When we consider new technology, our inability to recognize history is embarrassing and, at times, dangerous.
Suspending recency bias can also help us to think more creatively about the role of AI in generating written work. What, actually, is original work? In 1977, Roland Barthes declared that “the text is a tissue of quotations drawn from the innumerable centres of culture.” English-language literature and philosophy alone contain hundreds of years of shifting opinions on the issue of originality. What should not surprise us is that, with the exception of the Romantics and Walt Whitman, most thinkers across time agree that their knowledge and creations did not emerge from a single mind or muse but rather through their individual processing of a wide range of influences and experiences. We can imagine a chatbot like ChatGPT as a technological supermachine built to make the process Barthes describes smoother and more efficient.
As a humanist thinker, I’d have to insist that this very smoothness and efficiency are the biggest problem AI brings to education. Vygotsky’s Zone of Proximal Development, essential reading for all beginning educators, describes how learning requires students to build on what they already know even as they face challenges that destabilize them. What I see when I watch students use AI – which they do constantly, in every area of their lives – is an appearance of ease that is not only dangerous (from the adult perspective) but also just incredibly boring. At the same time, we have to appreciate that what a chatbot creates is not an attempt to reject human consciousness and creativity but to replicate and expand it. AI companies and stakeholders definitely do have nefarious intentions and prioritize profit; government stakes here are absolutely terrifying. But this technology is not inhuman; it exists in a long-standing universe of human-machine interactions, one with historical precedent from which we can learn.
If we want students to inhabit their technological reality and also to learn optimally, then we need to guide their relationships to technology actively and openly.
This brings me back to the central stance: we have served and will best serve our students by taking a position of technological optimism. This position requires educators and schools to face three significant challenges and adapt to them very quickly:
1. We need developers to create secure and private options that educate while protecting the rights of individuals and minors. Private companies have already jumped on this profitable opportunity by partnering with colleges and universities – Duke’s account with OpenAI is a great example – but a single-school stream is also dangerous, because it could foster the kind of closed-loop thinking that we certainly don’t need more of in our current political climate. A bot that only learns from a specific subset of users – namely, young people and solidly left-leaning professors – could foster an AI echo chamber that bills itself as representative of a broad subset of human thought. A private and secure subsystem that protects intellectual property yet still represents the “innumerable centres of culture” Barthes describes is going to be hard to build.
2. We need all teachers to have a strong understanding of what AI is and where it exists in the world. In education, most evidence-based or forward-thinking policies tend to be hamstrung by the diversity of private opinions that stakeholders (not just teachers, but also administrators, parents, and politicians) bring to the table. That diversity can be useful in many cases. In this case, though, I would argue that private opinions need to take a backseat: whether educators want to or not, they need to know and be curious about the tools they allow and don’t allow. That education will not come from the opinion articles or online MOOCs where we usually read about AI; rather, evidence-based and historically accurate resources – audiobooks and pamphlets from trusted research institutions – will remain the most efficient tools to get teachers the grounding they need and deserve as quickly as possible.
3. Teachers need to teach students where, when, and how AI tools are useful – and allow them where appropriate. We need to live in the world of AI with our kids because that world is reality – and the stuff kids don’t want to discuss is usually what they most need guidance on. Just like alcohol or sex, AI is something teenagers are intensely interested in and, if at all possible, will avoid discussing with the adults in their lives. Because those adults are so fixated on gating access to AI tools, teenagers right now experience them as illicit and even titillating: what’s disallowed becomes even more tantalizing. Talking to students about AI tools – which I do at the beginning of and throughout every assignment – usually feels like engaging them in a conversation about a sexually charged scene in a novel. Neither should feel taboo for a young person, honestly, but I would argue that we do students a particular disservice if we fail to initiate them into the adult world in which AI use, if they are to be healthy and flourishing members of human society, will likely play a role in their lives.
You can’t regulate your way to effective AI learning, at least not in 2025. We’ve spent a lot of time, in the education sphere and beyond, debating whether or not we can identify student work that used artificial intelligence tools. I won’t go into detail, because you can read about this everywhere; suffice it to say that detection remains unreliable. I don’t yet have a comprehensive guide to #3: teaching students where, when, and how AI tools are useful. What I do have is a collection of ideas from my current teaching practice and a set of principles I hope to use to guide my work next year. I hope they offer some inspiration and stir up some good conversation.
Introduce a clear, distinct, and robust AI policy for all student assignments.
Most schools have AI policies, but those large-scale policies are not all-encompassing; in fact, no school-wide policy can be specific enough to give students clarity on the rules around each assessment. This leaves schools vulnerable to litigious parents hunting for loopholes, and it leaves students increasingly anxious about what does or does not constitute a breach of integrity.
If students are to study for a test, describe when and where they could or even should use a chatbot to ask helpful questions. Guide them through auto-generated study guides. Provide clear instructions about whether or not they may upload teacher slides to a bot to generate a student guide (pro tip: usually unethical; this is someone’s intellectual property).
If students are writing, articulate the role AI might play in brainstorming, outlining, composition, and revision. Spellcheck and Grammarly are AI tools; many AI tools cost money, and access to or allowance of those tools can perpetuate outcome differences across socioeconomic levels. Ebook technologies, which allow students to search for the page on which they found a quotation, have been common for the past ten years, whether teachers like them or not; those tools have made life easier for a long time. Banning AI, at this point, would require banning all computer-based writing, and acknowledging the breadth of AI tools demonstrates both your command of the landscape and your openness to introducing students to our technological reality.
Then solicit student opinions about that policy before they begin their work.
A teacher should have a reason for every direction they give around the use of AI, and they should both expect and invite resistance to those restrictions. Healthy teenagers will resist and defy expectations, and accepting that healthy approach with a firm but welcoming stance will build trust between kids and the adults in their lives.
With older students, I encourage teachers to invite students into the creation and definition of the policy itself. As educators, we already know that co-creation of classroom policies and assignments empowers student voice and fuels students’ motivation to engage thoughtfully in their work. It’s just good teaching. I also found, this spring, that a conversation about AI use with a student who’d participated in such co-creation was radically different – more productive, thoughtful, and self-reflective – than a conversation with a student who’d navigated a vague policy and was drawn into that conversation for the first time when she was accused of cheating.
Allow and instruct students in the use of AI tools that are age-appropriate.
Elementary school students learn how to add and subtract on paper before we let them use a calculator; the same principle should apply to AI instruction. With 12th graders, I project and use chatbots to generate paraphrases of entries in the Stanford Encyclopedia of Philosophy, which we read side by side with the real thing to clarify the history of terms like “race” or “reality” – paraphrases pitched at a grade-appropriate level that simplifies only enough to illustrate the complexity of the idea.
With eighth graders, I project a Grammarly reading of poorly punctuated sentences to give our grammar lessons a real-life application. I then allow them to use the tool before they submit a final draft, but I ask them to acknowledge it as a source – along with any human helpers they used along the way – at the end of the work.
With 11th graders, I ask students to map out long-term assignments using a chatbot while obscuring identifiable personal details, which introduces them to the dos and don’ts of protecting personal information when using generative AI.
I have to admit it: I don’t use much generative AI myself. I don’t find it interesting – just as I’ve never found Science Fiction engaging, don’t dim my lightbulbs from my smartphone, and find Siri really creepy. As a teacher, though, I find those personal opinions irrelevant to my daily work, which is to prepare my students to enter the world in which they live: engaged with its nuances, ready to thrive in it, and equipped to shape it for the better.
What do you think?