I remember attending the Institute for Performance and Learning’s 2018 conference and sitting in on a session with Dr. Stephen Murgatroyd from Contact North, which is where I first heard about AI-enabled tutoring bots. Dr. Murgatroyd described how IBM’s technology was being used to enhance student support in undergraduate programs: schools like Queen’s University and the University of California San Diego would have “Jill Watson” (the IBM-powered tutor bot) support students in chat alongside actual Teaching Assistants. The AI-enabled tutoring technology would learn from the TAs’ answers over time and come to mimic their performance, giving university students access to answers around the clock without increasing the number of TAs available. Murgatroyd reported that course evaluation responses for Jill Watson were nearly identical to those for the TAs and that, as of 2018, students had not “caught on” to the fact they were being supported by AI (Murgatroyd, 2018). Dr. Ashok Goel also gave a TEDx talk on this topic in 2016 (Goel, 2016) if you’d like to learn more.
IBM’s Watson AI technology competing in (and winning) a game of Jeopardy! in 2011 against the show’s now-host (Image attribution: CBC, 2016)
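To make the mechanism Dr. Murgatroyd described a little more concrete, here is a minimal sketch of the general idea behind a retrieval-style tutor bot: store the question-and-answer pairs TAs have already produced, and reply to a new question with the stored answer whose question looks most similar. This is my own toy illustration with invented course questions, not IBM’s or Georgia Tech’s actual implementation.

```python
# Toy retrieval-style tutor bot: my own illustration of the general idea,
# NOT the actual Jill Watson implementation. All Q&A pairs are invented.
from collections import Counter
import math

# Question-and-answer pairs "learned" from TAs over time.
TA_MEMORY = [
    ("When is assignment 2 due?", "Assignment 2 is due Friday at 11:59 pm."),
    ("Where do I submit my project?", "Submit it through the course portal."),
    ("Can I get an extension on the essay?", "Email the instructor to ask."),
]

def vectorize(text):
    """Crude bag-of-words vector: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norms = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return dot / norms if norms else 0.0

def answer(question, threshold=0.3):
    """Reply with the TA answer whose stored question is most similar.

    Below the threshold, defer to a human rather than guess -- the bot
    can only ever mimic answers the TAs have already given.
    """
    q = vectorize(question)
    score, reply = max((cosine(q, vectorize(past_q)), past_a)
                       for past_q, past_a in TA_MEMORY)
    return reply if score >= threshold else "Let me flag this for a human TA."

print(answer("when is assignment 2 due"))      # -> the stored TA answer
print(answer("what is the meaning of life?"))  # -> deferred to a human
```

The real system was far more sophisticated, but the design point is the same: the bot can only be as good, and as representative, as the TA answers it has observed.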
Dr. Murgatroyd further explained that data diversity is key to AI decisions being accurate. If underrepresented populations are not included in the data set, their experiences and views will not be equally represented in the AI’s output. For IBM’s model, if the TA population it observed did not have a diversity of experiences, perspectives, or opinions, those would not be represented in the AI-generated answers (Murgatroyd, 2018). More recently I’ve seen several Instagram posts, TikTok videos, and news articles like this one asking generative AI tools to produce images of an “average” Canadian, and almost all of them display white, heteronormative, typically abled, non-Indigenous people (except the Territories, which appear to feature Indigenous people in this article’s example) (Bradley, 2023). If the source material AI technologies draw from lacks diversity, their results will lack diversity as well.
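To see how a skewed sample flows straight through to skewed output, here is a deliberately crude sketch with invented numbers. The “model” below simply returns the most common value of each attribute in its training data, a toy stand-in for the way generative models gravitate toward whatever dominates their training distribution; real image generators are far more complex, but the erasure effect is the same.

```python
# Toy demonstration of sampling bias: invented data, not a real model.
from collections import Counter

# Hypothetical, skewed "training set": 90 of 100 records share one profile.
training_data = (
    [{"ethnicity": "white", "ability": "typically abled"}] * 90
    + [{"ethnicity": "Indigenous", "ability": "typically abled"}] * 6
    + [{"ethnicity": "white", "ability": "disabled"}] * 4
)

def generate_average_person(data):
    """Return the modal value of each attribute -- minority traits vanish."""
    return {attr: Counter(record[attr] for record in data).most_common(1)[0][0]
            for attr in data[0]}

print(generate_average_person(training_data))
# -> {'ethnicity': 'white', 'ability': 'typically abled'}
# The 10% of the population that differs is simply absent from the "average".
```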
In a similar vein, if the source material referenced by AI technologies lacks diversity, it likely also lacks accessibility for people with disabilities. In practice, this could mean some learners or educators miss out on the potential benefits of AI tutors or chatbots because the technology is inaccessible to a screen reader, generated videos are uncaptioned, or the language used is inaccessible to a person with autism who cannot decode it. Dr. Anne Gagné from Brock University writes on her blog, in the form of a poem, about the frustration she experiences at academic conferences:
When so many of us are out here raising awareness
With our invisible labour
Because it is important and it matters
Because we are losing students to lack of accessibility
Because we are losing opportunities for new colleagues
Because of lack of accessibility
Because we are losing incredible scholars for lack of accessibility
(Gagné, 2021)
If Dr. Gagné feels this frustration at the lack of accessibility accommodations within academic circles, how would students feel if they were promised a technology like AI tutors or chatbots, meant to enhance their learning experience, only to encounter similar barriers?
Diversity and accessibility are common topics in DEI (diversity, equity, and inclusion) spaces, and as I’ve demonstrated above, there are still unresolved conflicts and opportunities for improvement in how AI tutors and chatbots are applied so that they do not unintentionally (I hope?) exclude learners from being represented. I look forward to investigating this topic more in my individual project for this course and welcome your feedback on my proposal above.
References
Bradley, J. (2023, July 9). AI determines what the typical person looks like in each Canadian province. Western Standard. https://www.westernstandard.news/news/ai-determines-what-the-typical-person-looks-like-in-each-canadian-province/article_f58a48e0-1b46-11ee-8add-2f5ad65410aa.html
Gagné, A. (2021, May 30). Instructional technology and active learning: Possibilities for inclusive English classrooms. Instructional Technology and Active Learning. https://allthingspedagogical.blogspot.com/2021/05/instructional-technology-and-active.html
Goel, A. (2016, November 1). A teaching assistant named Jill Watson [Video]. TEDxSanFrancisco. https://www.youtube.com/watch?v=WbCguICyfTA
Murgatroyd, S. (2018, October 18). Artificial intelligence (AI) and online learning – challenge and opportunity [Conference session]. Institute for Performance and Learning Annual Conference, Toronto, ON, Canada.
Hi, Andrea. Such interesting thinking here and I am glad to see you putting this lens on an AI tutor. I have a question for you about Murgatroyd’s note that students didn’t know they were being tutored by AI. How do you feel about that? I am a big believer in transparency and disclosure in AI use, and I wonder about the ethics of not letting students know they were having other-than-human interactions. I would be annoyed if I were that learner. But maybe I am an outlier?
The federal guidance on AI recommends that no one should interact with a federal body without knowing they are interacting with an AI: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html
Hi Brenna,
Transparency and disclosure also weigh heavily on my mind when it comes to AI, especially in remote online learning, where students and educators rarely or never meet, so trust must be built through actions and transparency over time. My perception of AI-generated content is influenced by the uncanny valley effect: the sense of unease people may feel when encountering AI-generated content that seems slightly “off,” making it difficult to determine whether it was created by a human (Mori, 1970/2005, as cited in Wang et al., 2015, p. 394). When this occurs in a setting where the presence of AI-generated content isn’t disclosed to me, I find myself distracted by trying to determine what seems “off” about the content rather than focusing on the content itself. When I know ahead of time, I find myself slightly more at ease and able to focus. I’m sure I’m not the only learner who feels this way, and don’t we owe our learners transparency?
References
Wang, S., Lilienfeld, S. O., & Rochat, P. (2015). The Uncanny Valley: Existence and Explanations. Review of General Psychology, 19(4), 393–407. https://doi.org/10.1037/gpr0000056
Hi Andrea – you’re asking some really key questions here about the extent to which AI tools can continue to perpetuate the biases that exist in society at large, and where they might be able to open up access. In simple terms, I think about how easy it now is to have captions for video or ALT tags auto-generated, and whilst not 100% perfect, the barriers to supporting accessibility as standard practice are certainly being lowered. But a lot of this progress is reliant on good training data, as you identify. The Jill Watson example is a really good one, as it’s an AI being carefully trained on a very bounded and controlled data set, with a really clear and identifiable group of students in mind, and the outcomes were impressive. This suggests the potential is there in technical terms, but not without some cost and effort. I’m looking forward to seeing you dig further into the potential promise and pitfalls around representation, inclusion, and the accessibility of education where AI is concerned!