Bias, assumptions, and stereotypes with AI and personalized learning

Our team was asked to select a technology and learning event to focus on for our critical inquiry analysis. We landed on the use of artificial intelligence in developing personalized learning strategies. The learning event we chose was a Coursera course called “Innovative Learning with ChatGPT”. The course description states that enrolled learners will learn how to brainstorm lesson plans that integrate learner interests and needs, and how to personalize and customize educational materials for individual students.

In relation to this broader topic, for my own area of critical inquiry I will be looking into the issue of bias, assumptions, and stereotypes when using AI to develop personalized learning. Despite the encouraging overtones of courses such as the one our team found, a simple Google Scholar search shows that personalized learning and AI continue to be the subject of critical reflection and inquiry – and for good reason. The limitations and shortcomings of data generated by language models such as ChatGPT are numerous, including erroneous information, toxicity and bias, and manipulation of ideas (Hua et al., 2023).

If we take these issues into account when thinking about developing personalized learning strategies, some alarming gaps become apparent. Take this text sample from the Coursera course, where the online instructor says of ChatGPT: “It’s not really thinking like humans do. It’s just really good at remembering all that information that it’s seen before” (White, n.d.). Who is the arbiter of the accuracy or impartiality of the data ChatGPT has seen? How can we be sure that the recommendations it provides an instructor are in fact the best ones for a class full of nine-year-old children? ChatGPT knows nothing about these children except their age. What if the children are EAL (English as an Additional Language) learners? What if they are neurodiverse? While we have learned much about what ChatGPT can do, it is nevertheless important to shine a light on what generative AI omits when developing personalized learning strategies. It is with this lens that I intend to focus my research and reflection during this course.

References:

Hua, S., Jin, S., & Jiang, S. (2023). The limitations and ethical considerations of ChatGPT. Data Intelligence, 1–38.

White, J. (n.d.). Innovative Learning with ChatGPT. Coursera. https://www.coursera.org/learn/chatgpt-innovative-teaching#reviews