The theoretical framework (TF) I initially planned to use for my research was the Technology Acceptance Model (TAM). TAM seemed to be an appropriate choice because it afforded a focus on the learner and their behaviour with regard to their learning experience (Davis, 1989). However, as I dug in further, TAM did not align with what I am trying to ascertain with my research, which is: how might the opportunity to practice high-risk technical tasks in simulated virtual environments impact learner competency?
After perusing the different models, I have decided to apply Activity Theory as my learning framework. My research will focus on technical task training in simulated learning environments, which is activity-based learning. I hope to investigate how design in a virtual environment can lead learners to increased levels of competency. As this framework outlines, in order to design for activities, one must examine not just the activity itself but the intentionality behind engaging in the activity, the kinds of individuals who interact in the activity, the context in which the activity transpires, and the goals for the learner (Jonassen & Rohrer-Murphy, 1999).
References
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–339. Retrieved from https://www.jstor.org/stable/249008
Jonassen, D., & Rohrer-Murphy, L. (1999). Activity theory as a framework for designing constructivist learning environments. Educational Technology Research and Development, 47(1), 61–79. https://doi.org/10.1007/BF02299477
RaceRocks makes technology-enabled learning environments primarily for military applications. My teams are responsible for designing and developing curriculum as well as digital learning products and environments for our clients. We create this curriculum and these learning products, but we do not deliver the learning experiences. As such, our designers face a unique challenge: designing for facilitation that they will not be present to conduct.
Most of the curriculum we design is geared towards military personnel, most recently towards junior-rank naval trades. Many of the students in these programs are new to the armed forces and have mixed educational backgrounds. They join with a developed sense of self, an understanding of their abilities, and outlooks through which they see the world. Before engaging in blended content developed by RaceRocks, students will have completed basic training and some introductory courseware regarding their role in the services.
The Community of Inquiry (CoI) model is a worthwhile way to provide an educational experience, through the integration of three essential elements: cognitive presence, social presence, and teaching presence (Garrison, Anderson, & Archer, 2000, p. 88).
Cognitive presence gives learners the opportunity to incorporate the material. This is accomplished through understanding and validating new information, a function Bull (2013) calls “valve control”. Cognitive presence gives instructors the ability to focus learners’ attention on specific learning outcomes.
The social presence element is based on building a strong environment in which learners feel welcomed, reached, heard, and a part of something. Bull refers to these roles as being “a party host” or a “social butterfly” (Bull, 2013). These roles ring true for military instructors as well: in order to provide instruction and facilitate an online course, they need to be able to reach and engage learners.
Teaching presence is the ability “to design and integrate the cognitive and social elements for educational purposes” (Garrison, Anderson, & Archer, 2000, p. 92). This element considers the design, facilitation, and support provided to learners. When considering Bull’s eight roles of an effective teacher, the roles of “Learning Coach”, “Tour Guide” and “Mirror” stand out for our application, as the facilitator is usually a military instructor, letting the students relate to the presence and capability of the instructor (Bull, 2013).
The components of CoI help ensure the blended learning environments we create are effective in fostering community for our clients and their learners.
References:
Bull, B. (2013). Eight Roles of an Effective Online Teacher. Faculty Focus.
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical Inquiry in a Text-Based Environment: Computer Conferencing in Higher Education. The Internet and Higher Education, 2(2), 87–105. https://doi.org/10.1016/S1096-7516(00)00016-6
Vaughan, N., Cleveland-Innes, M., & Garrison, D. R. (2013). Teaching in Blended Learning Environments: Creating and Sustaining Communities of Inquiry. AU Press, Athabasca University. http://www.irrodl.org/index.php/irrodl/article/view/751
Weller, M. (2020). 25 Years of Ed Tech. Athabasca University Press. https://doi.org/10.15215/aupress/9781771993050.01
3 Initial thoughts about digital facilitation are:
1) It is FUN. In 2021 we have seen how much of life can be orchestrated through a conferencing app. We have seen educators get better and better at using these tools, incorporating more multimedia, and creating impactful interactions for students. There are more and more tools hitting the market, like Miro or gather.io which allow educators to really make learning in a digital environment engaging and fun.
2) It is more chill. In many cases digital facilitation places the learner and instructor in environments they have set up to support their learning; comfortable, coffee in hand, ready. Students can take part anywhere, in any location, in any time zone. With the formalities of a traditional school environment removed, students can be in their sweatpants or have a baby on their knee, ready to learn.
3) Accessibility. If a student has individual accommodations for accessibility, they can access those without classmates having to be aware. Whether they need a screen reader, a mobility device, or an interpreter, digital facilitation can allow some students with disabilities to access learning in their personalized environment.
2 Main questions that come to mind are:
1) How can we tell if our digital facilitation is meeting the student’s needs?
2) How often should we as facilitators in digital environments switch technology?
1 Metaphor
Facilitating in a digital environment is like surfing: you can plan your wave and practice your technique, but you can’t control the ocean.
Learning technologies are ever-changing. Organizations must continually invest labour and finances to take advantage of technologies’ affordances to meet the needs of their students or clients. Unfortunately, many projects created to implement these changes do not succeed. Deliverables are often late, over budget, missing key features, or are never used (Watt, 2014, Figure 2.1). We designed the Learning Innovation Toolkit (LIT) to help implement new learning technologies within an organization more effectively and efficiently.
Follow the link below to visit the toolkit website
Leading change in digital learning environments depends on the strength of the leadership involved in the process. To gain insight from a more diverse set of perspectives, I interviewed two leaders from two separate areas of my work: Dan Bourdage, COO of RaceRocks 3D, and an NTDC(P) Commander (Cmdr.) in the Royal Canadian Navy who asked not to be named. The culmination of these interviews and the course readings to date led me to create seven steps to leading change in digital learning environments.
Step 1 – Is change needed?
At the beginning of the change process, assessing the need for change is essential. If the proposed change could be organizationally disruptive, a logical explanation for its adoption will be required to facilitate engagement. That being said, not all change has to fix a problem; change for the sake of innovation or momentum, like that of the insurrection model, is also valid (Al-Haddad & Kotnour, 2015). However, being able to articulate why the change is important is key to gaining buy-in.
Step 2 – What are the expected outcomes?
Both interview subjects stated that clearly understanding the expectations of the change when creating the plan was imperative to its success.
Step 3 – Are we ready?
Operational readiness is a term the Navy employs frequently. It translates well to organizational readiness, as “implementation is often a ‘team sport’” (Weiner, 2009).
Step 4 – What’s the Plan?
Ensure any documentation and communications are clear. “Always perform a sanity check” (D. Bourdage, personal communication, Feb 17, 2020) with an expert if one is available, or an objective colleague if one is not. However, planning to plan can become a vicious cycle. Both interview subjects agreed with Weiner’s findings that there comes a point where, regardless of the remaining questions, confidence needs to be shown and communicated effectively (Weiner, 2009).
Step 5 – How are the staff?
The Cmdr. shared insights into people management in the armed forces. While transactional leadership is the expected methodology (orders given, orders followed), other approaches fare more successfully in times of change. With the Navy moving into the Future Naval Training Strategy, the adoption of digital environments is required, which is challenging for a military service that has been a late adopter of many technologies. Approaching those who struggle with the change from an adaptive or reflective leadership stance is more likely to improve engagement (Khan, 2017).
Step 6 – Go Time!
The plan has been created; it’s time to go. During the interview, Dan said that an “implementation plan will live or die by its communication strategy” (D. Bourdage, personal communication, Feb 17, 2020), a viewpoint that aligns closely with that stated by Biech (2007).
Step 7 – What did we learn?
Reflection after the implementation on what went well and what could have been improved upon was a point stressed by Dan. His belief is that even though each change is different, we as leaders can only learn from them if we look back in a timely manner to self-reflect and analyze where we could have improved.
Conclusion
As a result of the course readings and the interviews conducted, I believe that numerous theories and models can and should be applied strategically to manage change for digital learning environments. Leading change is not merely about the execution of a plan but about the shared vision of what the plan can bring.
Al-Haddad, S., & Kotnour, T. (2015). Integrating the organizational change literature: A model for successful change. Journal of Organizational Change Management, 28(2), 234–262. https://doi.org/10.1108/JOCM-11-2013-0215
Biech, E. (2007). Models for Change. In Thriving Through Change: A Leader’s Practical Guide to Change Mastery. Alexandria, VA: ASTD [Retrieved from Skillsoft e-book database]
Canada. Department of National Defence. A-PD-050-000/AG-003, Royal Canadian Navy Future Naval Training Strategy. Ottawa: DND Canada, 2015.
Castelli, P. (2016). Reflective leadership review: a framework for improving organisational performance. Journal of Management Development, 35(2), 217-236
Khan, N. (2017). Adaptive or Transactional Leadership in Current Higher Education: A Brief Comparison. The International Review of Research in Open and Distributed Learning, 18(3). https://doi.org/10.19173/irrodl.v18i3.3294
Kotter, J. (1996). Leading change (Professional development collection). Boston, Mass.: Harvard Business School Press.
Weiner, B. J. (2009). A theory of organizational readiness for change. Implementation Science, 4, 67. https://doi.org/10.1186/1748-5908-4-67
Please follow the below link to Christina and Leigha’s co-authored post on Virtual On-The-Job Training. Please provide your thoughts and feedback there.
I am in complete agreement with Etchells and his colleagues that digital environments are part of 21st-century life and as such part of the lives of children today.
As a mother of two children ages 11 and 9, I have allowed them more digital access than many of my peers. I believe that it is the content and not the screen that should be monitored. There is harmful media out there and I do not want my kids consuming that. So the rule at home is, the kids have nearly unlimited access to screen time if they are actively creating on their devices, not simply consuming media.
I monitor and limit how much time they can be on Netflix or YouTube, but I do not limit the amount of time they can spend animating on FlipaClip, reading ebooks, building games on Scratch, or using other such applications. They are learning, building art, and developing computational thinking skills.
Until there is evidence-based research that says it’s the screen that is the issue, I won’t be limiting my kids from creating in digital environments.
The history of the Shareable Content Object Reference Model (SCORM) spans the last 22 years of the learning industry, from its initial creation, through widespread and continued adoption, to the realization of SCORM’s limitations and the creation of a successor. In this paper, SCORM is viewed at several points in the model’s timeline and through the lenses of different use cases, with the goal of providing a diverse overview of the history of SCORM.
SCORM began its journey in 1997 through United States presidential Executive Order 13111 (Papazoglakis, 2013, p. 3). The White House Office of Science and Technology and the Department of Defense created the Advanced Distributed Learning (ADL) initiative to work together on standards for eLearning. Part of this initiative included the creation of the Shareable Content Object Reference Model (SCORM), which released its 1.0 version in 1999. The goal was to create a single adoptable reference model for learning content objects that any Learning Management System (LMS) could understand. This would reduce costs and improve efficiency by making learning content objects reusable and sharable. The ADL team worked with academic groups and the eLearning industry to create this protocol.
This standardization was a great step forward; however, the methodologies chosen within SCORM were not entirely new ideas. Several authors note that the protocols within SCORM were largely based on the aviation industry’s 1989 AICC runtime environment.
The creation of SCORM was welcomed enthusiastically. Authors writing within the first five years of SCORM’s existence expressed excitement about what the adoption of a standardized model would mean for the reusability of eLearning content, with Shackelford going so far as to say that “SCORM promises to bring together the best of current standards and provide common ground for eLearning in the future” (Shackelford, 2002, p. 3). The possibilities for diverse learning activities and the future of eLearning seemed attainable now that SCORM had created a method for learning and performance tracking. The excitement was shared beyond a single industry, with Shackelford speaking to the application for academia and Curda and Curda speaking to the military application.
Throughout the mid-2000s, SCORM-compliant LMSs were incorporated into industry on a grand scale, with major universities, corporations, and government entities moving more learning into distributed learning models. SCORM was incredibly successful in meeting the goals of its creation: making eLearning content objects accessible, durable, interoperable, and reusable (Papazoglakis, 2013, p. 17).
However, even as SCORM-compliant learning was dominating the landscape, learning designers were noting limitations. Authors viewing SCORM in the early 2000s could not have known that ten years later mobile applications would dominate most industries, learning included. SCORM could not compete on this new playing field. Murray and Silvers, Papazoglakis, and Lanjuan all note that the runtime environment on which SCORM is based is not conducive to mobile or web-based learning environments.
In addition to the technical limitations, and despite the viewpoint that “Advanced Distributed Learning is by its nature inherently learner-centered” (Curda & Curda, 2003, p. 10), SCORM did not fulfil this ideal, because SCORM in practice was centred on the content, not the learner (Murray & Silvers, 2013, p. 50). Authors writing a decade later agree that learning is not a linear process that happens in one place, or that can be tracked simply with test scores and completion statuses. The complexity of the learner experience could not be captured within the confines of the SCORM protocol, as only limited interactions are trackable, and the learning objects must be placed in an LMS and assigned to a learner.
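To make the narrowness of that data model concrete, the sketch below shows the shape of a SCORM 1.2 runtime conversation. In a real course the `API` object is supplied by the LMS (as `window.API`); the mock object here is a hypothetical stand-in so the sketch is self-contained, not an actual LMS implementation.

```javascript
// Hypothetical mock of the LMS-provided SCORM 1.2 API object
// (normally found at window.API). It simply records what is set.
const API = {
  data: {},
  LMSInitialize() { return "true"; },
  LMSSetValue(key, value) { this.data[key] = value; return "true"; },
  LMSGetValue(key) { return this.data[key] ?? ""; },
  LMSCommit() { return "true"; },
  LMSFinish() { return "true"; },
};

// A SCORM content object can only report a small set of predefined
// data-model elements, such as lesson status and a raw score.
API.LMSInitialize("");
API.LMSSetValue("cmi.core.lesson_status", "completed");
API.LMSSetValue("cmi.core.score.raw", "85");
API.LMSCommit("");
API.LMSFinish("");

console.log(API.data["cmi.core.lesson_status"]); // "completed"
```

Everything the LMS learns about the learner must pass through these few `cmi.core` keys, which illustrates why richer, non-linear learning experiences could not be captured.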
As mentioned previously, the consensus was that SCORM’s runtime environment was not going to be able to grow to overcome the known limitations; the final official version of SCORM, 2004 4th Edition, was released in 2009. Research into a successor to SCORM began in 2011.
The new protocol needed to be able to track interactions outside of a web package or an LMS; it also needed to be flexible, able to track many different types of activities in a customizable way. Experience API, or xAPI, launched its first version in 2013 after two years of versioning under the title Tin Can API. Unlike the LMS-bound SCORM, “the xAPI helps systems express streams of activity information related to what people are doing, very specifically, with all manner of technologies” (Murray & Silvers, 2013, p. 51). This essentially allows the learner’s experience to be tracked from wherever they are interacting, whether that interaction is with a formal learning object, a website help page, or a video. These items and others no longer need to be confined within a learning management system. In alignment with adult learning theory, xAPI gives learners control over how they learn and provides feedback to learners on their achievements across the complete spectrum of their learning. Learner data is tracked using Actor-Verb-Object statements: “I (actor) did (verb) that (object)” (Papazoglakis, 2013, p. 23). These statements can be attached to anything a learner may interact with, and the data is collected in a Learning Record Store (LRS), which can then report that information to an LMS for inclusion on a learner’s profile.
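An xAPI statement is just a small JSON document in that Actor-Verb-Object shape. The sketch below builds one; the actor name, email, and activity URL are hypothetical examples (the verb identifier is a standard ADL verb), and a real activity provider would POST the serialized statement to the LRS.

```javascript
// A minimal xAPI statement: "Jess completed the winch-operation simulation".
// Actor, activity id, and activity name are made-up example values.
const statement = {
  actor: {
    name: "Jess Example",
    mbox: "mailto:jess@example.com",
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" },
  },
  object: {
    id: "http://example.com/activities/winch-operation-sim",
    definition: {
      name: { "en-US": "Winch Operation Simulation" },
    },
  },
};

// An activity provider would send this JSON to the LRS's
// statements endpoint; here we simply serialize it.
const payload = JSON.stringify(statement);
console.log(payload.includes("completed")); // true
```

Because any application that can emit such a statement can report to the LRS, tracking is no longer tied to content packaged inside an LMS.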
Today, SCORM’s functionality is limited but its adoption remains wide. Large corporations, universities, and the military continue to use SCORM as the standard method for tracking learner completion of distributed learning items. While SCORM only allows for limited data points, it performs those limited functions well and without complication. These facts may see SCORM live on in new and existing markets. Lanjuan proposes introducing SCORM compliance into eLearning in China, a market that did not adopt the protocol in the 2000s. With the understanding that SCORM is not ideal for mobile learning, an eLearning standard is still needed in China, and SCORM has the greatest documented support. However, as another author points out, “SCORMifying content is a costly process that demands effort and time” (Papazoglakis, 2013, p. 20). China may find that adopting this protocol is not cost effective in mobile applications.
The views synthesized here show that SCORM continues to meet the mandate of its creation. Its future use case may be limited in a progressively mobile world; however, SCORM is still widely in use today. It should be noted that none of the papers reviewed in this synthesis looked at the state of SCORM or xAPI in 2019; this is an item that requires additional research for inclusion. SCORM was developed as a collaboration between the US military, academia, and the eLearning industry. While SCORM and xAPI are currently in use, the future of eLearning standards continues to evolve, with the same parties working together for advancement.
References
Curda, S., & Curda, L. (2003). Advanced distributed learning: A paradigm shift for military education. Quarterly Review of Distance Education, 4(1), 1-14.
Lanjuan, R. (2017). The realization and test research of standardized mobile learning courseware based on SCORM. Matec Web of Conferences, 139. doi:10.1051/matecconf/201713900113
Murray, K., & Silvers, A. (2013). A SCORM evolution. Training & Development, 67(4), 48–53. Retrieved from https://royalroads.on.worldcat.org/oclc/5347926730
Papazoglakis, P. P. (2013). The past, present and future of SCORM. Academy of Economic Studies. Economy Informatics, 13(1), 16-26.
Shackelford, B. (2002). A SCORM odyssey. Training & Development, 56(8), 30-35.
The claim of no learning benefit has been made and substantiated by Clark (1986). He acknowledges that media have economic benefits but not learning benefits. His conclusions draw on data collected across many different research projects; he analyzed research spanning the 1960s through the 1980s, but the data did not indicate how different teachers instructed. Clark (1986) also mentioned that authentic problems or tasks seem to be the most effective influence on learning. Since he believed that media had no learning benefits, he stressed that a moratorium on further research into media’s influence on learning was necessary (Clark, 1983).
Contrary to Clark’s (1986) research, in the article “The Influence of Technology in the Education Industry”, Eliatamby (2018) says the use of technology is, at its very core, blended learning. At its simplest, blended learning is “the integration of classroom face-to-face learning experiences with online learning experiences” (Garrison & Kanuka, 2004, p. 96). The use of blended learning creates space for students to actively participate in the interplay between their learning environment and their own cognitive processes (Kozma, 1994). The use of technology also allows for learning on the job or real-world learning to take place, and for better generalization of student learning to real-world contexts (Kozma, 1994). This is especially critical in the age of Industry 4.0.
In her article for Campus Technology, Reynard (2019) stresses the importance of understanding that how students think and learn has changed due to ongoing use of technology, and discusses the integration of technology into design for learning. She falls firmly on the side of Kozma (1994), advocating that course design should be done interdisciplinarily, setting out contextual problem-solving tasks for students, with an emphasis on the process of learning as opposed to the product. The use of technology in design for learning is not just a method of delivering information to students, but also a way of building utility with technology. Learning has to leave students equipped for the workplace, with skills that “involve thinking and processing information, including possible diversions of thought, redirection of focus and the integration of new ideas and trends,” and the ability to function within the technological world in which they will be working (Reynard, 2019).
In line with Eliatamby’s take on technology and its role in learning, Dalto (2018) adds that incorporating technology into a blended learning environment boosts learner retention. Dalto touches on technological applications such as mobile learning, AR, VR, and 3D simulated environments. Clark (1994) argued that “. . . the usual uses of a medium do not limit the methods or content it is capable of presenting”, but his argument does not consider immersive environments that did not exist at the time of his writing. These new technologies also allow for a freedom of instruction that Clark did not take into account; they “. . . provide[] the ability to train in situations that would otherwise be too dangerous or expensive in real life” (Dalto, 2018, p. 5).
As Hastings and Tracey suggested in 2005, and even more applicable now, media capabilities have changed dramatically over the last generation, and the focus of the conversation should be not if, but how, media affect learning. “Computers have unique, non replicable capabilities and therefore can support instructional methods that other media cannot” (Hastings & Tracey, 2005). The most important thing about the debate is to acknowledge that the instructional methods and the delivery medium must be aligned to facilitate learning.
Another consideration is raised by Watters in a recent blog post. Commenting on the function of computers in education, Watters quotes Weizenbaum (1995): “It is much nicer, it is much more comfortable, to have some device, say the computer, with which to flood the schools, and then to sit back and say, ‘You see, we are doing something about it, we are helping,’ than to confront ugly social realities” (2019, para. 10). Indeed, Watters’s account of Sesame Street moving from PBS to HBO in 2015, and then in October 2019 to HBO Max, echoes Weizenbaum’s 1995 observation, as this move restricts access due to socio-economic barriers. It could be argued that Sesame Street has moved so far from its original goal, which was to “…create a show for public (not commercial) television that would develop school readiness of viewers age 3 to 5, with particular emphasis on the needs of low-income children and children of color” (2019, para. 11), that it would appear Sesame Street has ‘sold out’. The implication is that it sold out in favour of higher profit rather than remaining accessible to its original, marginalised audience. Instead, the programming is available only to those who have the means to pay for it.
It is possible that Clark would agree that Weizenbaum is correct in his observation that computers can be used as a superficial solution to a much deeper problem. Kozma, on the other hand, might suggest that educators must consider media’s impact on educational outcomes while also exploring the far-reaching effects as technology continues to advance. Regardless, the question of whether media will, or will not, influence learning is also about the accessibility of media.
References
Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21–29.
Eliatamby, M. (2018, July 2). The influence of technology in the education industry [Blog post]. Retrieved from https://theknowledgereview.com/the-influence-of-technology-in-the-education-industry
Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education, 7(2), 95–105. Retrieved from https://www.researchgate.net/publication/222863721_Blended_Learning_Uncovering_Its_Transformative_Potential_in_Higher_Education
Hastings, N. B., & Tracey, M. W. (2005). Does media affect learning: Where are we now? TechTrends, 49(2), 28. https://doi.org/10.1007/BF02773968
Kozma, R. B. (1994). Will media influence learning: Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.
Reynard, R. (2019, May 29). Why integrated instruction is a must for today’s tech-enabled learning [Blog post]. Retrieved from https://campustechnology.com/articles/2019/05/29/why-integrated-instruction-is-a-must-for-todays-tech-enabled-learning.aspx
Dalto, J. (2018). AR, VR and 3-D can make workers better. ISE: Industrial and Systems Engineering at Work, 50(9), 42–47. Retrieved from https://royalroads.on.worldcat.org/oclc/7862472750
Watters, A. (2019, October 04). Hewn, no. 324. [blog post]. Retrieved from https://hewn.substack.com/p/hewn-no-324