Artificial intelligence, commonly referred to as AI, is already shaping how we learn, think and solve problems. This technology, paired with other scientific breakthroughs and innovation in other fields, has allowed us to tackle some of society’s biggest challenges. Yet you may wonder: why are schools hesitating to let it into the classroom?
The opposition to integrating AI into classrooms often stems from legitimate concerns raised by educators with experience in academic integrity, equity and accessibility. AI and large language models (LLMs) such as ChatGPT have the potential to revolutionize student learning in incredible ways. However, these same models can also disrupt the learning process for students. It’s about finding the right balance.
These concerns are worth noting: AI tools can facilitate plagiarism and academic dishonesty by letting students complete assignments or solve problems without thinking critically. The point of an assignment is to challenge students and develop their ability to face adversity, like a difficult math problem set. Educators worry this could erode foundational skills such as writing, problem solving and independent thought. That claim is fair, but the real question is how we move forward. The focus should be on facilitating safe learning, using technology to empower students and make their work more impactful, so that technology drives learning and becomes an asset.
This can be done by giving students more options on certain assignments. Students would get to experiment and “play around” with the forefront of innovation. If we shun a tool and never integrate it into our workflow, we never get to experience what it can do. The pros and cons of anything are best learned by using it. Schools have the opportunity to show students how to implement AI in their work effectively, safely and ethically. At the end of the day, everyone should want responsible AI use in schools, not bans. In a safe environment, we can afford students the chance to make real use of the technology. Instead of trivial inputs, students can explore how to prompt a model for the response they’re looking for.
Sinead Bovell is known for her work in tech education. As an “AI educator” for non-nerds, she demystifies artificial intelligence and makes it accessible to everyone.
“The best and most effective way to fully prevent cheating is to assume students are using AI,” Bovell said.
She goes on to explain that testing for knowledge in new and more challenging ways will become increasingly important.
This may be daunting for educators. How do we take an already developed curriculum and simply add AI options to classroom assignments? There is no easy answer. But when developing new in-class materials or planning lessons, educators can use generative AI to its fullest extent to make lessons more engaging. It may be hard to grasp how to fully maximize the capabilities of AI, but the options are endless. We could even allow students to use AI to develop approaches customized to their learning preferences.
It is time for a more comprehensive AI policy to guide Manheim Township. Pennsylvania has recently become a leader in implementing responsible and ethical generative AI practices under Gov. Shapiro’s administration. Following an executive order signed in 2023, the state established a governance framework for using AI tools within its agencies, including training for Commonwealth employees to integrate AI into their tasks effectively while addressing potential risks. The order is a step toward AI accountability for our state agencies, but it is time we set a policy for schools. Our district should make the same effort, building a comprehensive approach that modernizes student learning and the delivery of content while empowering students with the capabilities of AI. Through initiatives that balance AI’s potential with ethical considerations and privacy concerns, we can position the district as a model for AI governance.
The truth is simple: AI detection software is nowhere near ready to serve as the main source of detection, and these tools may never be. As the Manheim Township High School Handbook states, “AI Detection software tools may be utilized to help identify the use of AI, but are not yet accurate enough to be relied upon as the ‘sole source of truth’ regarding originality.” AI detection software is still evolving. Current tools try to spot patterns, such as unusual phrasing, unnatural word choice or inconsistent tone, produced by the neural networks that power the models. But these detectors are always a step behind, as AI continually improves and produces more natural, human-like content.
These models are powered by underlying algorithms, particularly neural networks, and they are constantly improving their output (text, images and more) to make it more realistic and closer to human expression. That is why even OpenAI, the creator of ChatGPT, shut down its own AI detection service: the product was not reliable at detecting content generated by other AI models or by its own GPT-4 model. The truth is right in front of us. Education will look much different in the future. Assignments will shift. The way students and teachers approach learning and teaching will evolve and embrace the positives of artificial intelligence. This is why we must begin integrating AI into classrooms, thoughtfully and intentionally, ensuring it becomes a powerful tool for growth.
Jamie Flanery • Jan 23, 2025 at 6:28 am
I am still not convinced, especially since generative AI has been known to “hallucinate” (make up incorrect answers). We need to learn how to adapt with AI, sure, but we also need to learn how to detect it ourselves; things like politics are easily influenced by fake news, even without AI-generated pictures floating around. We also need to take into account how much energy generative AI uses, and how that impacts our environment. AI is a powerful tool, but school should be more about critical thinking, and AI integration will not help with that at this point.
Miguel Evert • Jan 23, 2025 at 7:52 am
As a parent, a member of the education community (both higher education and standard K-12), and an engineer in the AI industry, I actually support this author’s approach, as it offers something new: a fresh picture and insight.
Of course, I think the reasoning and explanation are the more important and better part, and the student journalist/writer nails it.
The concerns about AI hallucination, misinformation, and environmental impact are not reasons to ban AI from classrooms—they are exactly why it must be integrated. Yes, AI can “hallucinate” or produce incorrect answers, but so can humans. History is full of human errors in judgment, research, and decision-making, from flawed scientific studies to mistaken calculations and biases. The difference is that AI’s mistakes can be analyzed, understood, and corrected in real time, providing students with opportunities to develop critical skills like fact-checking, media literacy, and skepticism. By engaging with AI in classrooms, students learn to identify and critique its limitations while also recognizing and addressing their own. Shielding students from AI does not protect them from misinformation or error—it leaves them vulnerable to both.
Similarly, while AI’s environmental costs are valid concerns, avoidance solves nothing. Integrating AI into education empowers students to confront these challenges head-on, inspiring the development of more sustainable technologies. Far from being a threat to critical thinking, AI expands intellectual horizons, enabling students to simulate complex scenarios, analyze vast datasets, and explore creative solutions to real-world problems. AI is not a replacement for thought—it is a catalyst for deeper, more impactful learning. Finally, refusing to embrace AI in education risks making schools irrelevant in an AI-driven world. If the future will be shaped by AI, students need to understand it—not just how to use it, but how to question it, improve it, and use it ethically. The very challenges posed by AI are the ones that demand its presence in classrooms, ensuring students are prepared not just to adapt to the future but to lead it.