
Get Ready, Get Set, Go!

A Royal Roads University 2024 Virtual Symposium Retrospective – Allie Munro

I am now one week into the first credited course of the Master of Arts in Learning and Technology (MALAT) program at Royal Roads University (RRU). The first week brings together students starting their journey and those nearing program completion through a five-day virtual symposium. Faculty and students discuss current industry news and trends, research projects, and the MALAT program in general.

I focused on projects and discussions about artificial intelligence, as I am interested in recent technological advances; however, after a beneficial and engaging conversation at the MALAT Cross Cohort Social (Wilson et al., 2024), I followed their advice to open my mind to other topics. I will share the discussions that emerged from the AI presentations, but first, let me share how I, as a millennial (born 1981-1996), strengthened my understanding of generational differences, most notably the challenges and barriers of being raised in a world with social media.

Safe Environment 

Before this week, a safe environment, to me, meant inclusivity, equity, active listening, critical thinking, and willingness to share. I did not fully appreciate why this was so important, which may seem obvious, but let me explain. Younger generations now grow up in a world connected to social media. In Messier's (2024) discussion on facilitating in hybrid learning environments, Elizabeth Childs highlights an analogy from Dave Cormier, who said, "It's like my teenager is at a sleepover 24/7" (as cited in Messier, 2024, 1:01:40). This analogy is powerful! I recall sleepovers as a child; although I have many great memories, I was always happy to go home and have time to myself again. Imagine a world where you could not. Childs states that, as a result, those growing up in a world of social media became sensitive to what they say and do not say, essentially crafting a new version of themselves. I had never considered this experience, but I am empathetic to the next generations. This understanding has taught me that without a closed and safe environment in which to conduct training, participants will likely limit their engagement and subsequently reduce the quality of the learning environment.

Artificial Intelligence

There were multiple discussions and Applied Research Projects (ARPs) on artificial intelligence. After participating in the presentation Using Generative AI in Centres for Teaching and Learning, I decided to watch a pre-recorded 2023 video on AI and learning design to compare it with this year's insights on AI. Although a year apart, both presentations share the common ground that AI should be embraced, not combated. I took many issues away from both videos, but I would like to speak to the question: how can we embrace AI while maintaining academic integrity? Before I do, I would like to echo Dr. Jenni Haymen's response to a question about the biggest challenges in AI. Haymen (2024) states that she is concerned that years of work with ChatGPT could be wiped away if a ruling declares that its output is not fair use. This was a fantastic perspective, although obvious in hindsight, as I had not considered all the historical use cases that would need to be revisited if copyright rules changed, which is a real possibility.

How can we embrace AI while maintaining Academic integrity?

Combating technology is like playing Whac-A-Mole (Wikipedia contributors, n.d.), as Clint LaLonde states in a panel discussion on AI and learning design for education. He explains that trying to circumvent new technologies is an endless cycle; once you make one obsolete, another will take its place (LaLonde, 2023). Instead, LaLonde poses the question: how can we create assessments that avoid workarounds as new technologies emerge? This way of thinking is echoed by Keith Webster when discussing AI-checking tools and their validity: "what are you going to do other than say some other black box has accused you of using this other black box" (LaLonde, 2023, 20:20). I agree with Webster's continued thoughts, where he states that although the tool may be correct, this approach will not build a positive student-instructor relationship.

I posed this question to Michael Whytes on the 2024 ARP Padlet project Generative AI in Higher Education for Student Success. Michael shared the same belief and quoted Thompson Rivers University, which makes the same proposal as LaLonde: find a way to tailor the assessment to embrace AI rather than combat it (Whytes, 2024).

The 2024 Virtual Symposium was an eye-opening experience for me. Many others shared my point of view while offering slightly different perspectives, such as the rationale for safe learning environments versus the meaning of them. I hold close to the recommendation from the RRU 2023 cohort: keep an open mind and expect change (Wilson et al., 2024). I am already embracing topics and issues I had yet to consider, equally exciting as AI, if not more so.

References

Blinkist. (2023, August 9). Taking life in stride: Top 10 inspiring one-step-at-a-time quotes to motivate your journey. https://www.blinkist.com/magazine/posts/taking-life-stride-top-10-inspiring-one-step-time-quotes-motivate-journey 

Haymen, J. (2024, April 11). Using generative AI in centres for teaching and learning – approaches, challenges, and opportunities [Webinar]. https://mediaspace.royalroads.ca/media/Instructional+Designers+using+Generative+Ai+April+11+2024/0_ks3alslr

LaLonde, C. (2023, March 7). AI and learning design [Webinar]. https://www.youtube.com/watch?v=IFrAs59sDHI

Messier, S. (2024, April 10). S Messier April 10 2024 [Webinar]. https://mediaspace.royalroads.ca/media/S+Messier+April+10+2024/0_ixlzukec

OpenAI. (2024). University student reflecting on their first week at university [Digital image]. https://chat.openai.com/

Whytes, M. (2024, April). Generative AI in higher education for student success [Padlet]. Retrieved April 14, 2024, from https://rru.padlet.org/set_admin/malat-applied-research-projects-2024-flvp9bl6q768m2f2

Wikipedia contributors. (n.d.). Whac-A-Mole. Retrieved April 14, 2024, from https://en.wikipedia.org/wiki/Whac-A-Mole

Wilson, D., Wong, T., Coyle, R., Logan, E., Whytes, M., Kent, T., Hardi, L., Meghan, & Michal. (2024, April 10). MALAT Cross Cohort Social [Webinar]. https://mediaspace.royalroads.ca/media/Cross+Cohort+Conversation+April+11+2024/0_bsn8vgw3

Published in LRNT 521

8 Comments

  1. Vince

    Really nice, professional-level work! I have to agree with this statement on AI-checking tools and their validity as well: "what are you going to do other than say some other black box has accused you of using this other black box". With the inclusion of AI within educational frameworks, there really needs to be a paradigm shift in how students are assessed in their learning. "Did you do this work or did someone help you?" Answer: Everyone helped me! – V

    • Thank you very much, Vince. Before hearing from presenters like Keith, I thought academia would be all for AI checkers. I am glad that they see through the inherent problems with these tools; not only accuracy but hampering relationships as well. ChatGPT and other AI tools are here to stay. Just like any other technology disruption, we should embrace them and react accordingly as opposed to fighting them.

  2. Chris

    Excellent reflection Allie! I share many of your perspectives, particularly about the “24/7 sleepover” that our children live in and the issues of the black box nature of AI. One detail I think would be a good addition to your proposed question is “How can we embrace AI while maintaining ethical standards and Academic integrity?” I recently read a disturbing article based on a TIME investigation into how training data for OpenAI was sanitized during 2021-2023. It involved exploiting Kenyan workers to comb through some of the worst content on the internet to help train the LLMs to identify negative content better. It’s a difficult and uncomfortable article to read: https://time.com/6247678/openai-chatgpt-kenya-workers/. I think it’s essential that as AI evolves we understand what harms are being created and work to minimize them.

    • Oh wow, Chris, that article was very disturbing! I just finished reading it and need time to digest; perhaps it's a topic we can discuss on Zoom, as I would love to hear your thoughts. I understand the need, but there must be a better way for OpenAI to "sanitize" its pretraining data than brute force.

      You posed a great question! Until I read that article, I would have thought of the ethics of using AI in terms of fair use and copyright, not the avoidance of psychological harm to the vulnerable.

      This program is opening my eyes!

      Thank you Chris!

  3. Cindy

    Great reflections on AI and safe space. At work, we so often isolate or limit "psychological safe space" to mean that we need to create environments and a culture of better listeners and inclusivity, but your learning and reflection in this post extends the conversation to learning environments and the "presence" of the learner: which version of them will show up? What do we need to consider as designers in the methods, media, and environments that we create to ensure "safety"? Looking forward to extending this conversation with you and the team back at work!

    • Thank you very much, Cindy, for joining my blog. I am excited that my writing can transition to the workplace for thoughtful discussions. This perspective on safe spaces will be critical in ensuring the success of our Kaizen events while practicing Lean Six Sigma.

      I welcome any and all comments; please set up an RSS feed or visit from time to time. I am sure I will reach out directly on any hot topics or thoughts.

  4. Russ Wilde

    Nice work, Allie.

    One of my interests – both professional and personal – is how ideas and beliefs change over time. I am especially interested in how technologies shape our perceptions of reality and cause us to reconsider what we find “acceptable.”

    In the case of AI and education, I am interested to look back in ten years to see not only how we adapted our practices (such as assessments) to deal with ubiquitous generative AI, but also how the notion of academic integrity itself might have shifted during that time. Will we use the same definition of academic integrity 10, 20, or 100 years in the future?

    • Allie

      Great mindset, Russ. This thought is one of the many topics I wanted to consider in this program. Regarding AI as a disruptive technology, how does it relate to other technologies in the past, such as computers, the Internet, mobile devices, social media, Web 2.0, and so on? I venture to think there are many patterns and stages that we can observe and help calm the fears of those who are resistant.
