Category Archives: LRNT 523

Building Ethical Futures: A Vision for AI in K12 Education

Published by Joan Oladunjoye on the 26th of October 2024

At the edge of a bustling city, amidst the noise of honking cars and the ever-present hum of digital activity, Layla Park finished her cup of tea and set it down next to her laptop. It wasn’t just another workday; it was the day her team would reveal the culmination of years of effort: an AI-driven learning platform designed specifically for K12 computer science education. Layla wasn’t just proud of the project; she was determined to ensure that it set a new standard for ethical technology integration in schools.

The year was 2030, and AI had become a transformative force in education: personalizing lessons for individual students, predicting learning outcomes, and suggesting pathways tailored to each student’s learning style. Yet Layla knew all too well the risks of this rapid evolution. As her fingers hovered over the keyboard, she reflected on the path that had brought her to this pivotal moment. She had always been fascinated by AI’s power to transform learning, but she also knew that with great power came great responsibility. AI was now ubiquitous in classrooms, and not all its effects were positive. Poorly designed AI systems had alienated students, compromised their privacy, and even contributed to environmental damage through the sheer amount of energy required to power the necessary data centers.

Her goal was different. She wanted to create an AI system that did more than just spit out personalized lesson plans. Layla envisioned a system that empowered students to think critically, question the systems around them, and build real-world problem-solving skills. As she fine-tuned the final details of the platform, Layla remembered the words of Díaz and Nussbaum (2024), who had inspired much of her team’s work. The Pedagogical Centered AI (PCAI) framework they developed emphasized that AI should always serve human teachers and learners, not the other way around. Layla’s platform would put control in the hands of educators, allowing them to guide AI-driven lessons without letting the technology overshadow their role in the classroom.

The challenge, however, was enormous. One of the biggest hurdles Layla faced was ensuring that AI didn’t just perpetuate existing educational inequalities. Bozkurt et al. (2023) had warned of this, showing how AI could further entrench the gap between well-funded schools and those in underprivileged areas. AI required not just devices but high-quality internet access, and both were luxuries many students lacked. Layla knew that her platform had to be accessible to all students, regardless of where they lived or what resources they had. To meet this challenge, her team worked tirelessly to design AI tools that could run smoothly on older devices and adapt to lower bandwidth settings. It was a logistical nightmare at times, but it was a critical step in ensuring that students in rural areas or underserved urban communities could have the same quality of learning experience as their peers in better-connected schools. Equality of access was a non-negotiable goal for Layla and her team.

The inequalities in AI’s reach were not the only concern on Layla’s mind. Environmental sustainability was another pressing issue. The proliferation of AI-driven educational platforms had led to a skyrocketing demand for data storage and processing, which consumed vast amounts of energy. As Selwyn (2021) had pointed out in a landmark study, the energy demands of AI technologies, especially those reliant on constant data collection and processing, were threatening to overwhelm global energy supplies and contribute to climate change. Layla was acutely aware of these risks. Early in the design process, she had made it clear to her team that they needed to focus on reducing the platform’s carbon footprint. Through rigorous testing and refinement, they managed to develop algorithms that were more energy-efficient than conventional models, minimizing the platform’s environmental impact. It wasn’t a perfect solution, but Layla believed it was a significant step in the right direction.

But there was another ethical dilemma Layla had wrestled with from the beginning: privacy. As schools increasingly relied on AI to monitor student behaviour, track performance, and predict future outcomes, Layla was deeply concerned about the potential for misuse. While these tools offered valuable insights to teachers, they also posed significant risks if used irresponsibly. Selwyn et al. (2020) had raised alarms about schools turning into surveillance spaces, where students felt as though they were constantly being watched. Layla was determined that her platform wouldn’t contribute to that dystopian vision. Instead, her team focused on creating an AI system that respected students’ privacy while still providing actionable insights to educators. Data collection would be minimized, and any information gathered would be anonymized wherever possible, ensuring that students felt safe and trusted in their learning environment.

Teachers, Layla knew, would play a pivotal role in ensuring that AI didn’t undermine student autonomy. The professional development of educators was essential to making sure that AI was used not just as a crutch, but as a tool that enriched the learning experience. Sun et al. (2022) emphasized the importance of equipping teachers with the necessary skills to navigate AI-driven tools, fostering a culture where educators could confidently guide their students through AI-enhanced lessons without feeling displaced by the technology. Layla’s platform integrated training modules that would allow teachers to become active facilitators of the technology, rather than passive users. These modules were designed not only to familiarize teachers with the platform but also to inspire them to use AI in ways that fostered critical thinking, creativity, and collaboration.

As the platform neared its launch, Layla couldn’t help but feel a mix of excitement and apprehension. The future of AI in education was exhilarating, but it also needed to be tempered with caution, thoughtfulness, and humanity. In her vision for 2030 and beyond, AI wouldn’t replace teachers or automate education. Instead, it would enhance learning experiences, foster creativity, and help students become the thinkers and innovators of tomorrow, all while being mindful of ethical, societal, and environmental impacts.

Looking even further into the future, Layla imagined a world where AI didn’t just personalize learning but actively promoted inclusivity and collaboration across cultural and economic divides. As Macgilchrist et al. (2020) proposed, AI had the potential to support collective problem-solving by integrating diverse knowledge systems, creating more inclusive and collaborative learning environments. The concept of a “decolonized AI” that Roberts (2023) had advocated for was especially close to Layla’s heart. She wanted her platform to be a tool not just for learning but for promoting social justice, ensuring that technology didn’t reinforce existing biases but instead helped to dismantle them.

With a deep breath, Layla hit the final key to confirm the platform’s upload. Her team’s work wasn’t about the next flashy tech innovation; it was about building a sustainable, equitable, and ethically sound future for education. And that, she thought as she closed her laptop, was the kind of progress worth fighting for.

References

Bozkurt, A., et al. (2023). Speculative futures on ChatGPT and generative AI. Asian Journal of Distance Education, 18(1).

Díaz, B., & Nussbaum, M. (2024). Artificial intelligence for teaching and learning in schools: The need for pedagogical intelligence. Computers & Education, 105071.

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s: Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(1), 76-89.

Roberts, J. S. (2023). Decolonizing AI ethics: Indigenous AI reflections. Accel.AI.

Selwyn, N. (2021). Ed-tech within limits: Anticipating educational technology in times of environmental crisis. E-Learning and Digital Media.

Selwyn, N., Pangrazio, L., Nemorin, S., & Perrotta, C. (2020). What might the school of 2030 be like? An exercise in social science fiction. Learning, Media and Technology, 45(1), 90–106.

Sun, T., Strobel, J., Kim, C., Gao, Y., & Luo, W. (2022). Enhancing K-12 teachers’ AI teaching competency: A TPACK-based professional development program. Journal of Educational Computing Research, 60(7), 1824-1845.

Ethical Integration of AI in K12 Computer Science: Opportunities and Challenges

Published by Joan Oladunjoye on the 13th of October 2024

The integration of AI into the K12 computer science curriculum in Canada offers promising opportunities, yet presents significant challenges. AI can personalize learning, allowing students to engage with computer science, particularly coding, at their own pace. This can help democratize education by broadening access, especially in underfunded schools, while enabling educators to focus on fostering higher-level problem-solving and creativity (Bozkurt et al., 2023). However, a cautious approach is needed to avoid potential pitfalls.

The readings stress the importance of ethical considerations in AI implementation. Selwyn (2024) highlights the risk of over-reliance on AI’s statistical models, which can oversimplify learning and reinforce biases, particularly affecting marginalized students. This raises concerns about perpetuating educational inequities through unchecked AI use.

In contrast, Roberts (2023) advocates for a decolonized AI ethics approach, promoting inclusivity and the integration of Indigenous knowledge systems. Such an approach would ensure that AI fosters equitable educational outcomes, rather than reinforcing existing colonial structures.

While the future of AI in K12 education appears promising, it must be guided by ethical principles that prioritize equity and inclusivity. This careful approach could lead to more balanced and just educational practice.

References

Bozkurt, A., et al. (2023). Speculative futures on ChatGPT and generative AI. Asian Journal of Distance Education, 18(1).

Roberts, J. S. (2023). Decolonizing AI ethics: Indigenous AI reflections. Accel.AI.

Selwyn, N. (2024). On the limits of AI in education. Nordisk tidsskrift for pedagogikk og kritikk, 10, 3–14.

The Media Debate in the Age of AI: Clark vs. Kozma Revisited

This post was co-authored with Lauren Chum and published on the 21st of September 2024.

In recent years, the rapid advancement of AI technologies has reignited debates about the role of media in education. From generative AI models to AI-powered assistants, educational technology is often considered game-changing. The real question is, are these tools genuinely revolutionary, or are they merely delivering instructional content more efficiently? To explore this, we revisit the 1994 debate between Richard E. Clark and Robert B. Kozma, applying their perspectives to two contemporary examples of techno-deterministic thinking. We aim to understand how Clark and Kozma might respond to these claims and what their views could mean for today’s educational landscape.

In the original debate, Clark asserted that media are mere vehicles for delivering instruction, with no direct influence on learning outcomes. He argued that what matters is the instructional method, not the medium itself. For instance, switching from textbooks to video lectures or AI tools would not necessarily change learning outcomes; the instructional design and pedagogy make the difference, not the medium (Clark, 1994, p. 26). Conversely, Kozma (1994, p. 18) contended that different media offer unique affordances that can enhance learning experiences. For example, a video provides visual cues that text cannot, and AI tools can offer real-time feedback, potentially reshaping how students learn.

Considering these perspectives, we examined two current examples reflecting the ongoing hype around educational technology.

GPT-4

The first example is from the article “Using GPT-4 to Improve Learning in Brazil”. In this article, OpenAI claims that GPT-4, a generative AI model, revolutionizes learning in Brazil by providing personalized tutoring at scale (OpenAI, n.d.). The article highlights the AI’s ability to adapt to individual learners’ needs, offering explanations, feedback, and tailored learning materials. According to OpenAI, this innovation is poised to improve learning outcomes dramatically. 

Having reviewed Clark and Kozma’s arguments, we can speculate on their responses. Clark would likely assert, “It’s the method, not the medium.” He would argue that despite the enthusiasm surrounding GPT-4, the underlying instructional design will ultimately determine whether students benefit. For Clark, GPT-4 is merely another delivery vehicle, comparable to a textbook or a video (Clark, 1994, pp. 21-22). If the pedagogical approach remains unchanged, swapping in GPT-4 would not significantly impact outcomes. Clark would caution against assuming that AI leads to better learning outcomes, emphasizing instead the importance of evaluating teaching strategies (Clark, 1994, p. 29).

Kozma, by contrast, would likely see GPT-4 as a transformative tool that showcases how media can influence learning. He might argue that GPT-4’s real-time adaptability, offering personalized feedback and tailored content, introduces new affordances that traditional methods cannot match (Kozma, 1994, p. 11). In his view, AI has the potential to transform learning by making it more interactive, engaging, and responsive to students’ needs (Kozma, 1994, p. 12).

Microsoft Copilot

Our second example is from Microsoft’s article “Delivering Copilot for everyone.” Microsoft’s Copilot, an AI-powered assistant, is integrated into various software programs to assist learners by generating documents, summarizing information, and providing real-time guidance during tasks. The article portrays Copilot as a “game-changer” for student productivity and creativity, asserting that the technology will significantly enhance learning outcomes (Microsoft, 2024).


Clark would likely respond by suggesting that “the hype is overblown.” He would express skepticism about Copilot being a “game-changer,” arguing that its mere presence does not necessarily lead to better learning. Like any other technology, Copilot is simply a medium for delivering content (Clark, 1994, p. 26). If educators continue using traditional teaching methods such as lectures, worksheets, and tests, Copilot will be just another way to support those methods. For Clark, the tool is secondary to instructional design and pedagogical approach.

Kozma’s argument supports the idea that different media, including AI tools, can enhance learning by providing unique affordances. He would likely argue that tools like Copilot promote creativity rather than limit it. According to Kozma, AI assistants offer new opportunities for learning by automating mechanical tasks (such as document formatting or information summarization), which allows students to focus on higher-order cognitive functions like creativity, analysis, and critical thinking. Kozma would see AI tools as enabling, not constraining, creativity by freeing students from routine tasks and providing interactive, real-time feedback that enhances the learning experience (Kozma, 1994, p. 13). 

Both examples highlight a recurring theme in educational technology: techno-determinism, the belief that technology alone can drive educational transformation. Kozma might cautiously endorse this view, while Clark would vigorously critique it. As educators and technologists, it is crucial to remain mindful of overstating the impact of media on learning. Whether using AI tools, digital platforms, or traditional textbooks, these tools are only as effective as the instructional design behind them.

In conclusion, the Clark-Kozma debate remains highly relevant as educational technology continues to evolve with innovations like AI. Understanding the limitations and affordances of media helps educators make informed decisions about integrating technology into their teaching. While AI tools like GPT-4 and Microsoft Copilot offer exciting possibilities, they will not improve learning independently. Effective learning requires thoughtful instructional design, not just cutting-edge technology. By revisiting the Clark-Kozma debate, we hope to encourage critical thinking about the role of media in education and to prompt a more nuanced consideration of the claims made by educational technology advocates. As the debate continues, educators must balance embracing new technologies and staying grounded in sound pedagogical practices.

References

Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.

Kozma, R. B. (1994). Will media influence learning: Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.

Microsoft. (2024, February 7). Delivering Copilot for everyone. Microsoft. https://blogs.microsoft.com/blog/2024/02/07/delivering-copilot-for-everyone/

OpenAI. (n.d.). Using GPT-4 to improve learning in Brazil. OpenAI. https://openai.com/index/arco-education/

George Siemens: Shaping Education in the Digital Age

Published by Joan Oladunjoye on the 14th of September 2024

George Siemens is a prominent figure in educational technology, best known for developing the theory of connectivism, a fundamental framework for understanding learning in the digital age (Rudolph et al., 2020, p. 109). He is also a pioneer of Massive Open Online Courses (MOOCs) and has significantly impacted digital technology’s role in education (Rudolph et al., 2020, p. 115). Siemens believes that traditional educational models must be reevaluated to meet the evolving needs of today’s students (Siemens, 2005).

In addition to his theoretical contributions, Siemens has served as a college instructor and author and has been widely interviewed about his educational ideologies. His work continues to influence modern educational practices, particularly in online learning and digital pedagogy, where he advocates for using technology to foster more dynamic, learner-centered environments (Rudolph et al., 2020, p. 113).

I am particularly drawn to Siemens’ work due to his insights into how learning occurs in the digital age, especially through his theory of connectivism. This theory, which suggests that learning takes place across networks of people, tools, and shared information, is increasingly relevant with the rise of online education and social learning platforms. His focus on networked learning resonates with my interest in creating adaptive, flexible spaces where students engage meaningfully with content, peers, and digital resources, both in traditional classrooms and online.

Siemens’ work on MOOCs is also significant to my professional practice, as these platforms democratize education and make it accessible on a global scale. I aim to leverage these ideas to create collaborative, technology-enhanced environments that foster inclusive learning opportunities. In addition to his written work, Siemens has also shared his ideas in various multimedia formats. In a YouTube video titled Overview of Connectivism, he highlights how platforms like Twitter facilitate knowledge sharing, integrating technology and social systems into human knowledge, thus enhancing our overall capacity to understand (Siemens, 2014). This resonates with my belief in the power of digital tools and social platforms to foster collaboration, provide access to diverse perspectives, and promote continuous knowledge building.

Overall, Siemens’ contributions to educational technology, particularly through connectivism and MOOCs, make him a pivotal figure in shaping the future of learning in the digital age.

References

Rudolph, J., Siemens, G., & Tan, S. (2020). “As human beings, we cannot not learn”: An interview with Professor George Siemens on connectivism, MOOCs and learning analytics. Journal of Applied Learning & Teaching, 3(1). https://doi.org/10.37074/jalt.2020.3.1.15

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1). https://www.itdl.org/Journal/Jan_05/article01.htm

Siemens, G. (2014, January 22). Overview of connectivism [Video]. YouTube. https://youtu.be/yx5VHpaW8sQ

Balancing Educational Technology in Special Needs and Mainstream Classrooms: Reflections on Blogs and E-Portfolios

Published by Joan Oladunjoye on the 6th of September 2024

I have experience teaching computer science in both public high schools and private schools, including special needs schools that cater to students on the autism spectrum. One key lesson I took from Weller’s (2020) discussion of the use of blogs in education (his entry for 2003) is the value of blogs as an educational tool.

Blogs provide students with a platform to express their thoughts, reflect on their learning, and take ownership of their academic journey. This is especially beneficial in special needs environments, where students have diverse learning styles and needs. Blogs enable differentiated instruction, allowing students to engage with content at their own pace and focus on topics that interest them.

In my experience teaching students with autism, I’ve seen how critical it is to accommodate different learning styles. Blogs offer students who may be less comfortable with traditional classroom discussions a space to explore and analyse subjects on their own terms. For students on the autism spectrum, blogs provide a structured yet flexible medium for communication and self-expression, helping build digital literacy and social skills in a supportive environment. Blogs effectively bridge formal and informal learning, enabling students to explore computer science concepts creatively while improving their writing and critical thinking skills.

On the other hand, a conflicting lesson arises from Weller’s (2020) discussion of the adoption of e-portfolios (his entry for 2008). While e-portfolios offer a comprehensive way to assess student skills, they can be challenging to implement and maintain. In a special needs school, for example, the technical complexity and time required to manage e-portfolios may be overwhelming for both students and teachers, particularly when students already struggle with organisation and time management.

Moreover, the focus on digital portfolios may conflict with the need for more traditional, manageable forms of assessment that are easier to use in a busy classroom. This tension highlights the challenge of balancing innovative educational technologies with the practical realities of teaching, especially when working with diverse student populations who may not benefit equally from such tools.

References

Weller, M. (2020). 25 years of ed tech. Athabasca University Press.

Reflection on the ‘Historical Amnesia of Ed-Tech’

Published by Joan Oladunjoye on the 30th of August 2024

Weller’s concepts of “historical amnesia” and the “year-zero mentality” resonate with the shift from analog to digital technologies, such as Bulletin Board Systems (BBS) in the early 1990s. Despite their revolutionary potential, BBS faced technical barriers and user resistance, illustrating Weller’s critique that new technologies often ignore past lessons (Weller, 2020, Chapter 1). The struggles with BBS, including student frustration (Mason & Kaye, 1989, as cited in Weller, 2020), highlight the importance of building on past successes and failures rather than dismissing them.

One argument I find compelling yet challenging is Weller’s support for the slow adoption of educational technologies. His emphasis on a measured, evidence-based approach is vital for stability, but it may overlook contexts where rapid adaptation is crucial, such as during global crises like the COVID-19 pandemic (Weller, 2020, Chapter 6). This tension between innovation and stability underscores the complexity of integrating technology in education. Weller’s caution against rushing toward disruption is necessary but could benefit from acknowledging situations where speed is essential.

If I were to write a similar book, I would start the history of educational technology earlier than 1994, recognising the foundational impact of earlier media like radio and television. These technologies laid the groundwork for the digital tools we use today, and acknowledging this broader historical context enriches our understanding of the field. Weller (2020, Chapter 1) rightly warns that focusing solely on recent developments risks losing valuable lessons from the past.

In conclusion, Weller’s work offers a critical lens on educational technology, urging professionals to balance innovation with tradition. While his arguments are compelling, they challenge us to think critically about when to prioritise stability over rapid innovation.

References

Alrazni, A. (2024, April 8). The journey of educational technology: A brief history. Medium. https://medium.com/@ahmad_alrazni/the-journey-of-educational-technology-a-brief-history-9d9a2d5e5714

Weller, M. (2020). 25 years of ed tech. Athabasca University Press.