
Above image generated via Leonardo AI free public image generator using the prompt “artificial intelligence agent trying to use self awareness to make a decision in a learning environment”
This dive into AI agents being used in education has been eye-opening, to say the least. My company makes little use of any artificial intelligence (AI) technology; in fact, I can’t think of any AI agents being used outside of the avatar in Zoom – and that is mainly used by folks in Zoom classes who want to look as though they are engaged without actually showing their faces. With so little exposure to AI agents actually being used in education, my initial assumptions about the topic were all framed by what I read or saw in the media. As a result, I was under the incorrect impression that all AI in use resembles the advanced technology from movies, such as the de-aging of Harrison Ford in the newest Indiana Jones, or household AI such as Siri and Alexa, which, while lacking a physical presence, have a far-reaching knowledge base.
Imagine my surprise when we started to dig into AI agents and education! First, it was a serious challenge to locate a course that actually included AI agents rather than simply discussing the technology. I had anticipated a plethora of options on MOOC and free online platforms, and this was certainly not the case. It seems that AI agents are not as widely used or accepted as I had assumed. Naturally, I dug in to get a sense of why… when for so many years I’ve been hearing about this state-of-the-art technology, why isn’t its use as widespread as promised?
Belik and Neufeld (2022) describe three significant challenges in machine learning (ML): generalization, the machine’s ability to connect one experience to another and use that prior knowledge to understand new concepts; creativity, which they call innovativeness; and lastly consciousness (or the lack thereof!), meaning that a machine lacks the self-awareness of a human when weighing choices or deciding between options.
Another hurdle for those looking to employ AI agents in education is the ‘uncanny valley’ theory. I will be covering this in detail in an upcoming group presentation, but the high-level idea from Mori (1970) is that humans start with ambivalent feelings towards AI agents, and those feelings become more positive as the AI becomes more lifelike… up to a point. Once that point is reached, a general feeling of creepiness and unease produces a reverse effect: humans drop beyond the initial ambivalence into active dislike. If that threshold is crossed and the uncanny valley is felt by the learner, it creates a separate and significant barrier to learning inherent to the technology itself.
I’m excited to continue researching AI agent topics and to learn from my cohort as we discuss AI in education, and I look forward to hearing the varying opinions and thoughts that surround the topic!
References
Belik, I., & Neufeld, D. (2022). Why isn’t AI delivering? LSE Business Review.
Mori, M. (1970). The uncanny valley: The original essay by Masahiro Mori. IEEE Spectrum, 6, 1–6.
May 28, 2024 at 12:14 pm
So in Understanding Comics, Scott McCloud posits that the more generic a face, the more we relate to it — that each step toward photorealism is also an opportunity to feel more alienated or disconnected from a character. I have been thinking about this a lot since your presentation. The line between more lifelike and something we can’t relate to is so fine. I admit to having a strong uncanny valley reflex, and I also wonder about the personal/relational dimension of why some people are more bothered by this than others.