One of the cornerstone readings I will use to explore and focus my academic endeavours is Fowler’s article on virtual reality and pedagogy (2015). This article focuses on two affordances: representational fidelity, the program’s ability to deliver experiences to a multitude of senses, and learner interaction, the richness of the different interactions that results in an embodied experience (Dalgarno & Lee, 2010). However, I question the necessity of representational fidelity as a vital affordance in virtual reality experiences. For example, a person who experiences virtual reality does not need a visually stimulating world to become immersed in the simulation. To this person, it is more important that the artificial world behaves as expected. High-fidelity graphics and haptic touch feedback can undoubtedly add to the experience, but the core affordance is the ability to create a virtual reality in which the user feels they are there (Southgate, 2020, p. 121). This leads me to my second avenue: the pedagogical implications of using virtual reality as an educational tool.
Fowler asserts that by marrying Dalgarno and Lee’s learning affordances with a pedagogical framework, learning specifications can be addressed in a model he calls the enhanced model of learning in 3D virtual learning environments (2015). He used Mayes and Fowler’s framework (1999), which is akin to the revised version of Bloom’s taxonomy, to map specific stages of learning onto their technological affordances in virtual reality (Fowler, 2015, p. 418). However, such an approach sterilizes the learner’s experience into specific learning specifications and does not take into account the diverse nature of learning or the shifts that virtual reality may afford. I view this approach more as a stepping stone to give stability to the hesitant and unsure, creating a sense of normality in the chaos of change for would-be innovators to grasp as they navigate the emerging technology.
References
Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10–32. https://doi.org/10.1111/j.1467-8535.2009.01038.x
Fowler, C. (2015). Virtual reality and learning: Where is the pedagogy? British Journal of Educational Technology, 46(2), 412–422. https://doi.org/10.1111/bjet.12135
Mayes, J., & Fowler, C. (1999). Learning technology and usability: A framework for understanding courseware. Interacting with Computers, 11(5), 485–497. https://doi.org/10.1016/S0953-5438(98)00065-4
Southgate, E. (2020). Virtual reality in curriculum and pedagogy: Evidence from secondary classrooms. Routledge.
Excellent use of this course assignment to further support your thesis, Mike.
My daughter (age 9) tried our Oculus Quest 2 for the first time the other day. As soon as she realized she was in a simulated environment (just the home screen), she was fully immersed, paying close attention to every detail on the screen, and even closer attention to interactive objects. To me, this supports the notion that VR does not have to include the most advanced, state-of-the-art graphics or capabilities to fully immerse students. Further, my daughter’s fascination with the simplest aspects of the Quest 2 home screen also suggests that VR is potentially an easy sell with students, a thought that only helps your cause of promoting VR adoption in educational institutions (still a very challenging road, though).
Does any of your research to date include hard stats on learning efficacy via VR? How is learning efficacy being measured (e.g., assessments vs. qualitative means)? I can’t imagine cost is a viable reason not to adopt the technology, considering how affordable headsets are these days – plus, with the Quest headsets being fully wireless, even gaming towers are no longer needed for an excellent VR experience.
-Jonathan
Jon,
From my readings, there is evidence of learning, but once you start to explore the idea of efficacy, you need to explore assessment, and that is where it gets complicated. The concept of assessment is often tied up in traditional pen-and-paper methods. Even though this type of assessment is accepted in most academic circles, I would argue it is not a realistic form of assessment. Assessment, as commonly practiced, sterilizes a student’s learning into a simple number and standardizes it against the group while ignoring the context and human elements of that learning. When dealing with VR, it becomes hard not to include the experience itself, and part of that experience is the hard-to-define human element sanitized by the traditional process. Thus I would argue that for authentic assessment to occur, current practices need to change: assessment must be modernized to include not only the sterilized pen-and-paper information but also the skills, critical thinking, and human elements that determine real-life circumstances before we can accurately determine learning efficacy in VR.
Agreed re: assessment practices needing updating.
In the VR realm, wouldn’t assessment be largely influenced by the field of study? For instance, engineering students could be assessed on their ability to produce or reproduce a critical build of some sort, while medical students could be assessed on their ability to perform a variety of practical, hands-on tasks, etc.
The problem, I would assume, pertains to measuring learning outcomes in more subjective, theoretical subject matters, like the arts. I remember you mentioning a while back that it is near impossible, at least with current technology, to accurately measure learning criteria such as effort, motivation, etc. For these, all we have are qualitative measurement strategies.
Out of curiosity, do you have a specific subject matter you’re looking to integrate into VR?
Jon,
Yep, I do agree that summative or product-based assessment depends dramatically on the field of study. However, we must look deeper at what learning is: it is not a final product but a complex cognitive process. We often quantify learning for ease of understanding; it gives students a single number or letter to represent their achievement and simplifies comparison between peers. Again, though, does this truly represent the learning process? I suppose we could argue it is a good representation of mastery of the material, but the application or “doing” portion of this assessment has always left a bitter taste in my mouth. Even a completed critical project is, at best, a snapshot of learning. Yes, it is better than nothing, but it should not be deemed the definitive means of defining understanding, learning, application, or even mastery of subject matter; yet that is exactly what we do. There always seems to be a disconnect in education, or perhaps an assumption that by learning the material you innately know how to apply it in a given context. However, the material and context we use are often wholly divorced from what is actually happening in the field. I am not saying I have any answers; as you astutely pointed out, we cannot measure these intangible elements.
As for me, I am focusing on K-12 education, which has much less deviation in assessment. Thus, the subject matter has much less influence on the use and assessment of the technology/learning because of the standardized nature of the assessment (to the point that it is not a valid indicator of learning). Specifically for my thesis, I plan to create a science-based learning activity, since VR is the only technology that can cheaply and effectively achieve the immersion needed to explore many intangible scientific concepts.