For my upcoming essay, I’ll explore how AI could transform K-12 classrooms by personalising learning and enhancing student engagement. AI is already making its way into the classroom, with teachers using it to simplify routine tasks, while students are learning to navigate it for their own use at home. We need to be proactive rather than reactive in adopting this new teaching tool. As educators, we should take the lead in determining how AI is utilised, rather than waiting for directives from companies, parents, or government bodies. AI is here to stay, so let’s discover how to leverage it to improve student learning and support teachers in their roles.
I will draw on Selwyn’s (2024) On the Limits of Artificial Intelligence (AI) in Education, which cautions against over-reliance on technology that could reduce complex educational interactions to algorithms, and which highlights the importance of educators actively shaping AI’s role in classrooms. In contrast, Bozkurt et al.’s (2023) Speculative Futures on ChatGPT and Generative AI suggests that tools like ChatGPT could foster critical thinking and creativity among students. I aim to explore how AI can create personalised learning experiences, what training teachers will need to integrate AI while maintaining their agency, and the potential risks of overreliance, along with ethical concerns about student data. I have much more research to complete before I can answer these questions. I hope to gain insight into how AI can enhance students’ learning experiences without undermining the core values of teaching and the essential human connections involved, striking a balance between optimism and caution as I explore this near future.
References
Bozkurt, A., et al. (2023). Speculative Futures on ChatGPT and Generative AI: A Collective Reflection from the Educational Landscape. Asian Journal of Distance Education, 18(1).
Selwyn, N. (2024). On the Limits of Artificial Intelligence (AI) in Education. Nordisk tidsskrift for pedagogikk og kritikk, 10. https://doi.org/10.23865/ntpk.v10.6062
Hi, Heidi. This will be good and is timely. Teaching responsible AI use is a challenge; I see overreliance and ethical issues continually.
I’ve had the mindset that my students deserve to have me personally and thoroughly assess their work. It has been an ethical intention for me. But do my students even want that? Would they prefer that I have ChatGPT assess their work? Which do they find a more reliable source: a grizzled 25-year industry veteran, or ChatGPT? This will be a good question to ask them.
For student-submitted work, we have traditional rules around plagiarism and contract cheating. In the modern tech era, it strikes me as inconsistent to treat that same plagiarism and contract cheating as simply “using modern tools” when it is delivered by ChatGPT instead of the guy down the hall.
How can we determine the extent to which AI was used for an assignment or project? What extent is acceptable, and where is the tipping point between tech assisting the work and tech automating it?
These are questions I have, and they apply to both students and teachers.
I look forward to seeing what you come up with in your work.