Post by Dugg Steary and Todd Pezer
Design thinking is a human-centred, iterative process used to build empathy for others' perspectives on a complex problem. The process is applied before searching for solutions to improve the odds of an extraordinary result. Design thinking has been successfully applied by organizations to improve products, services, processes, and education. Our team used the learning tools and design thinking process from the Stanford University d.school (Stanford, 2016) to investigate common challenges within our respective organizations and to develop a prototype solution that would be useful and meaningful for each team member.
Point-of-view
Our team consisted of a pilot educator from within the airline industry and a paramedic educator from a community college. Despite varied competency-based objectives and standards within each organization, both educators identified common challenges during the early stages of the design thinking process.
Two challenges resonated with each educator in both organizations: is the learning material being delivered to students sufficiently relevant and engaging to maintain their interest, and to what degree are students engaging with the artefacts and education resources? After addressing these key components, our team hopes to have a better understanding of the students' level of engagement and will then repeat the design thinking process to develop solutions for improvement.
Prototype Solution
For each delivery model within the respective organizations, including online, blended, and in-class modalities, our team speculates that a disconnect exists between students' actual engagement and the educator's perception of that engagement.
To identify the level of student engagement, the team will develop a multiple-item Likert scale assessment, used both formatively and summatively (Gliem & Gliem, 2003), and incorporate it within, and at the conclusion of, each learning module. The assessment will be embedded within the learning materials, for example in a Captivate, Blackboard, Moodle, or other LMS module. Students will be required to complete the assessment before proceeding through larger modules and again at the conclusion of each learning module. Educator perspectives on student engagement within the module will be measured using a similar assessment tool.
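As an illustration of how the internal consistency of such a multi-item Likert scale might be checked, the sketch below computes Cronbach's alpha in the manner described by Gliem and Gliem (2003). The item wording, response data, and function names are hypothetical and are not taken from the actual prototype.

```python
# Hypothetical sketch: Cronbach's alpha for a multi-item Likert engagement scale,
# following the approach described by Gliem & Gliem (2003).
# Item wording and responses below are illustrative, not from the prototype.

def cronbach_alpha(item_scores):
    """item_scores: list of lists, one inner list of Likert responses per item."""
    k = len(item_scores)            # number of items in the scale
    n = len(item_scores[0])         # number of respondents

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    sum_item_variances = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]  # per-student totals
    total_variance = variance(totals)

    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# Example: four hypothetical 5-point Likert items answered by six students.
responses = [
    [4, 5, 3, 4, 2, 5],   # "The material felt relevant to my role"
    [4, 4, 3, 5, 2, 4],   # "The activities held my attention"
    [3, 5, 2, 4, 1, 5],   # "I interacted with the resources provided"
    [4, 4, 3, 4, 2, 4],   # "I would revisit this module"
]
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```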
A combination report, consisting of a negotiated score determined from both the educator and student assessments, will be generated for each learning module. We anticipate that these reports will provide valuable insight into the level of student engagement compared with the educator's perception of that engagement. Additionally, we expect engagement trends to emerge from the in-situ assessments, in particular patterns outlining the type and quality of materials associated with increased student engagement during the learning module.
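To illustrate one possible shape for such a combination report, the following sketch compares the mean student rating with the educator's rating for a single module and flags large gaps. The module name, ratings, and gap threshold are invented for the example and are not part of the actual design.

```python
# Hypothetical sketch of a per-module combination report: compare the mean
# student engagement rating with the educator's rating and flag large gaps.
# Module name, ratings, and the flag threshold are illustrative assumptions.

GAP_THRESHOLD = 1.0  # assumed: flag modules where perspectives differ by more than 1 Likert point

def combination_report(student_ratings, educator_rating, module_name):
    student_mean = sum(student_ratings) / len(student_ratings)
    gap = educator_rating - student_mean
    return {
        "module": module_name,
        "student_mean": round(student_mean, 2),
        "educator_rating": educator_rating,
        "gap": round(gap, 2),
        "review_needed": abs(gap) > GAP_THRESHOLD,
    }

# Example data (invented): 5-point ratings collected at the end of one module.
report = combination_report(
    student_ratings=[3, 2, 4, 3, 2],
    educator_rating=4,
    module_name="Module 2: Emergency Procedures",
)
print(report)
```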
Possible Challenges
Our team identified that assessing the level of engagement is the first step in a larger initiative. Once the initial findings are assessed, our team plans to expand the prototype to measure the level of engagement of individual students and compare it with other education metrics, including attendance, test scores, and competency attainment. Additional one-on-one interviews related to students' level of engagement within learning modules are anticipated to be beneficial, but are not within the scope of this prototype.
Our team identified that the timely collection and distribution of the combination report will allow educators to self-reflect and adapt subsequent learning modules to improve student engagement. Providing training to improve student engagement would be beneficial for educators within each organization. Comprehensive training on the use of formative and summative assessments would need to be provided for each educator and student to ensure compliance and accuracy of completion.
Thank you for reviewing our design thinking process summary. We look forward to your feedback and insight.
References
Gliem, J. A., & Gliem, R. R. (2003). Calculating, Interpreting, and Reporting Cronbach’s Alpha Reliability Coefficient for Likert-Type Scales. In Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, (pp. 1–88). Columbus, OH. Retrieved from https://scholarworks.iupui.edu/
Mattelmäki, T., Vaajakallio, K., & Koskinen, I. (2014). What Happened to Empathic Design? Design Issues, 30(1), 67–77. https://doi.org/10.1162/DESI_a_00249
Stanford University Institute of Design. (2016). A virtual crash course in design thinking. Retrieved from http://dschool.stanford.edu/dgift/
Stanford University Institute of Design. (2016). The Virtual Crash Course Playbook. Retrieved from http://dschool.stanford.edu/dgift/
Tran, N. (2016). Design Thinking Playbook. Retrieved from https://dschool.stanford.edu/resources/design-thinking-playbook-from-design-tech-high-school
December 3, 2017 at 1:00 pm
Hello Dugg and Todd.
First off, I enjoyed your succinct introduction of the tool and why and how you used the design thinking process in this activity. As I read through your post, I felt supported in understanding your process.
Your speculation that educators may have different perspectives around student engagement than do students was one that I had not considered that deeply before. I thought your idea to include an engagement assessment throughout and at the end of each module was outstanding, especially when coupled with an on-going review of the educator’s perspectives. I wonder if you have determined the options you’d choose to include in the Likert scale? Teasing out engagement seems challenging to me, so I wonder if the scale options would correlate to satisfaction, value or something entirely different?
In your possible challenges you note that comprehensive training on the use of the assessments would be required to ensure compliance and accuracy of completion. As participation in the assessments will be mandatory, do you anticipate that your design would provide this training at the beginning of a learning intervention? Should a student provide wholly positive feedback, or feedback that was deemed inconsistent, would it be important for the educator to follow up and provide additional context, with the goal of helping the assessments become more meaningful for the students?
Thank you for sharing your practical prototype solution and for encouraging me to consider the disconnect between student and educator perspectives of student engagement.
December 8, 2017 at 9:57 pm
Thanks Karen, expect more info on our ‘part-b’ of this assignment, but here’s a quick reply.
A question similar to your first (parts A and B) was posed earlier, and it looks like that needs to be fleshed out as we continue to develop. The questions on the scale would be determined on a course-by-course basis, or at least platform-specific (online/blended/in-class) question banks would be established. The goal of the questions would be to steer the instructor or ID toward specific improvement categories (four in our first draft), to help ensure that the data drives only appropriate intervention. As a result, if the questions can't guide specific improvement, we need better questions!
As for your question regarding the training required, we would offer a short session to explain the process. Having been in courses (both in-class and currently online) where every module ends with an evaluation report, they become an unobtrusive part of the landscape in short order.
I also wonder how meaningful the feedback would be across the student population. I’m sure some would answer anything to get the paperwork over with, and inconsistency would be an issue unless we find some way to….engage them in the engagement measuring process! We’ll be sure to factor your input into our developments from here on, thanks!
December 3, 2017 at 2:24 pm
Dugg and Todd, I found your post very interesting and a great way to begin designing a prototype that encourages intellectual risk taking. Considering an assessment beforehand would be an excellent way for both instructors and learners to get at the heart of how to improve the current state. A few questions arose while I was reading your post. First, I wonder what these assessments would look like? What types of questions would be asked, and how would engagement be assessed? Would it be the number of times a student accessed a learning module or resource, or would it be gauged another way? I would be interested to know what questions would be asked on these assessments, as they would need to differ slightly in terms of how the instructors see engagement versus how the students believe they are engaged. Great post, and I look forward to seeing your response.
December 8, 2017 at 9:28 pm
Thanks for the great, thought-provoking input!
Part of the reason we decided to propose this process was to answer the exact questions you ask. I suspect that the term 'engagement' is so broad that doing the work of defining it on a course-by-course basis would provide the designers and instructors with some great dialogue. When the course is big enough, and/or resources allow, specific research (focus groups, interviews, etc.) may help determine the assessment metrics we hold the course against. Lots to think about; thank you for taking the time to read and provide us with valuable direction to continue developing!
December 5, 2017 at 8:42 pm
Hi Todd and Dugg. You have identified an interesting problem that may contribute to failures in curriculum design and delivery, and an interesting solution. I'm curious about how the results of the assessments would be used. Currently, my students fill out Key Performance Indicator (KPI) surveys at the end of every course. As a professor, I see these results year to year, but I have to take the initiative to compare between years and deduce causes. Many instructors are not data analysts and, with sample sizes on the course-by-course scale, may make incorrect inferences. The smaller sample size could also be an issue if students' answers are potentially identifying, and there may be a power dynamic that would affect the responses.
I wonder, too, if using class time to do additional surveys might, in the short term, negatively impact engagement. I know that we have a challenge getting engagement on the KPI surveys themselves. I have observed that, when students have reason to believe that their feedback will make a difference, their engagement is higher. However, if they feel it is just another bureaucratic function of student life, they may not engage fully in the engagement survey. In my own classroom, I have provided an example of student feedback that led to curriculum changes and I always see a few eyebrows raised. This is merely anecdotal evidence; you may need to include a survey of the survey experience for more reliable data.
December 8, 2017 at 9:17 pm
Hi Mary,
As usual, you raise very good points! The goal of the process would be to concurrently develop an interpretation tool and a solution process (likely a flowchart for quick reference) to prevent the exact problem you describe: instructors gather this data and it either disappears or adds work. Our method provides a path to follow any time there is low engagement or a gap between the engagement scores reported by the instructor and the students.
As for the impact that constant feedback has, I’ve been a student in programs recently where there were constant evaluation reports due (Kirkpatrick) and when the forms were well designed they were minimally intrusive and became just part of the program. Our goal would be to introduce the concept early to normalize it throughout the course, regardless of the way it was delivered.
Excellent food for thought for us to develop the concept further, thanks!
December 6, 2017 at 7:15 am
Hello Dugg and Todd,
You came up with a really different result than the others I've read, and I like where you're going with it. Understanding the gap between perceived (instructor) engagement and actual (student) engagement is an important step in starting to improve any learning process. I'm curious to know if you thought about or discussed what you might consider if you found there to be a large gap. For instance, if your students don't feel like any real engagement was taking place, what might you look at to improve the learning experience?
December 8, 2017 at 9:01 pm
Thanks Adam.
From my perspective, the information from the surveys would serve two short-term goals; then, as more and more data is assembled, new discoveries would likely follow. The two short-term issues to be explored are:
1 – Does high engagement actually correlate to better learning outcomes in every course, or is its contribution dependent on other factors? In essence, does it always matter how engaged they are?
2 – When there is a gap, what can we learn? From this perspective, the larger the gap, the better. We need to understand the student perspective on engagement to better assess how it contributes, and how to improve it.
When poor engagement exists, or a gap is exposed, our model would direct the designers to one of four (so far) categories of contributing factors, with the intention of guiding appropriate improvements. Admittedly, we expect the learning curve to be steep when the data comes in, so the response plan would need to be flexible.
Excellent questions, and exactly what we need to keep refining the plan, thanks!