The History of Programming Education: An Evolving Narrative of Why and How to Learn to Program

The modern world has become increasingly computerized. Computers emerged from universities in the ’60s and ’70s, moving gradually into workplaces, then more rapidly into schools, into homes, and eventually into consumers’ pockets. Mirroring the emergence of computers has been the increasing need to program these devices. However, computer programming is widely considered a hard skill to learn (Mendelsohn, Green, & Brna, 1990; Guzdial, 2002; Kelleher & Pausch, 2005). To overcome this difficulty—both perceptual and technical—researchers and educators developed new educational programming languages to introduce computer code to people of all ages (Mendelsohn et al., 1990). The history of programming education is a narrative of answering two central questions: “Why learn to program?” and “How to learn to program?” Over the past six decades, many solutions have been created to answer the latter question of how to help novices learn programming concepts (Brusilovsky, Calabrese, Hvorecky, Kouchnirenko, & Miller, 1997; Kelleher & Pausch, 2005; Bau, Gray, Kelleher, Sheldon, & Turbak, 2017). However, the question of “why learn to program,” which was central to early research in programming education (Mendelsohn et al., 1990), has shifted in prominence over time. In our increasingly digital world, there has been “a global push to broaden participation in computer science” (Bau et al., 2017, p. 72). Yet, before focusing on how to make programming accessible to everyone, “one of the first questions that must be answered is why novices need to program” (Kelleher & Pausch, 2005, p. 84). The evolving relationship between these two questions, “why learn to program” and “how to learn to program,” has shaped the development of programming education over the past six decades.

The earliest educational programming languages of the ’60s and ’70s sought to introduce learners to the cognitively rewarding world of logic and problem-solving. Seymour Papert, an educator and computer scientist at the Massachusetts Institute of Technology (MIT), believed that learning to program was a way for students to express themselves and “[debug] their own thinking” (Guzdial, 2002, p. 3). The idea that “a programming language is … a medium that creates new ways of dealing with existing knowledge” was shared by several researchers at the time (Mendelsohn et al., 1990, p. 179). With the focus on the cognitive benefits of programming, the mechanics of how to program were secondary to the reason for learning to program. Mendelsohn et al., in researching the early uses of educational programming languages, frame the question of “why learn to program?” as a contrast between the goals of “programming to learn, or learning to program” (p. 179). Logo, the first educational programming language, developed by Papert, Feurzeig, and Solomon, was an example of programming to learn: it helped children make cognitive connections between computer code and problem-solving situations, and could be used to “explore a wide variety of topics from mathematics and science to language and music” (Kelleher & Pausch, 2005, p. 113). From this perspective, the question of “why learn to program?” was a driving force for early educational programming languages, superseding the question of “how to learn to program.” In the following decades, as computers became more central to our lives and our society, their increasing economic importance shifted the emphasis placed on these two questions.
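Logo’s “programming to learn” is easiest to see in its turtle: a learner issues movement commands and then compares the drawn result with the picture in their head. As a rough illustration (this is Python, not Logo itself), a minimal screen-free turtle can be sketched with command names modelled on Logo’s `forward` and `right`; the class and exercise below are invented for this sketch:

```python
import math

class Turtle:
    """A minimal, screen-free turtle: tracks only position and heading."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees; 0 points along the positive x-axis

    def forward(self, distance):
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)

    def right(self, degrees):
        self.heading -= degrees

def draw_square(turtle, side):
    # The classic Logo exercise: four sides, four 90-degree turns.
    for _ in range(4):
        turtle.forward(side)
        turtle.right(90)

t = Turtle()
draw_square(t, 100)
# After a full square, the turtle is back where it started, which a
# learner can verify against their own reasoning about the procedure.
```

Debugging this procedure (for example, forgetting one turn and tracing where the turtle ends up) is exactly the kind of “debugging one’s own thinking” Papert described.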

In the ’80s and ’90s, the narrative for learning to program changed from a cognitive experience to an economic imperative, and approaches to programming education became more visual and more varied. Programming was no longer “an activity practiced only by the few who had access to the still-rare machines” (Guzdial, 2002, p. 2). With computers now in homes, schools, and workplaces, there was a new interest in “making programming accessible to a larger number of people” (Kelleher & Pausch, 2005, p. 83). New careers in technology and software development placed increased importance on the tools available to teach programming concepts. New techniques, such as mini-languages, created a simplified syntax for the express purpose of introducing novices to programming (Brusilovsky et al., 1997). The question of “why learn to program?” was still present, but a more prominent focus was placed not only on “how to learn to program,” but on how these learning languages transferred to the general-purpose languages of the computer industry (Mendelsohn et al., 1990; Brusilovsky et al., 1997; Guzdial, 2002). Through a systematic study of over fifty programming languages for novices, Kelleher and Pausch (2005) identified two primary categories: languages that “teach programming for its own sake” (p. 84), and languages that “empower their users to create interesting programs” (p. 112). With programming becoming economically important for new careers and new avenues of research, the initial idea of programming as a “mental gymnasium” (Mendelsohn et al., 1990, p. 175) fell out of favour, replaced by the increasing need to learn programming for the sake of programming. With the proliferation of the Internet over the next two decades, the focus on how to program became even more prevalent.

The worldwide explosion of the Internet since the ’90s has increased interest in making programming accessible to everyone, and the question of “how to learn to program” has taken centre stage. Endeavours such as’s Hour of Code have created hundreds of apps and activities to introduce programming concepts to students around the world (Bau et al., 2017). Scratch, developed by the MIT Media Lab as a spiritual successor to Logo, is a block-based language, the latest evolution of how to learn to program. Block-based languages aim to lower the barriers to programming by offering a graphical syntax—reducing the need to memorize programming functions—as well as the ability to experiment with code and remix it on-screen (Kelleher & Pausch, 2005; Bau et al., 2017). Research shows that these block-based languages do make programming easier to learn (Bau et al., 2017). However, the answer to the original question, “why learn to program,” has now become: because computers are everywhere. There is a global push to teach computer science concepts in schools, and some researchers suggest that “programming is still not nearly as widely learned as it should be” (Bau et al., 2017, p. 78). Nevertheless, as educational apps proliferate in classrooms around the world, it is essential to look back on the history of programming education and consider the purpose of these apps. Are schools teaching programming to offer cognitively rewarding activities that expand their students’ understanding of the world, or are they teaching programming with the intention of producing future programmers?

The history of programming education since the ’60s has demonstrated incredible ingenuity in answering the question of “how to learn to program.” Yet, throughout these evolving tools and techniques, continuing to ask the question of “why learn to program?” is equally imperative. This question, which motivated the creation of the first educational programming languages, seeks to understand the cognitive benefits of learning about logic and computational thinking. Several researchers and educators who have studied educational programming languages agree that the purpose for learning to program is an area that requires additional research (Mendelsohn et al., 1990; Guzdial, 2002; Bau et al., 2017). In the future narrative of programming education, the questions of “why learn to program?” and “how to learn to program?” must go hand-in-hand. We would be doing a grave disservice to future learners by asking one question without the other.


Bau, D., Gray, J., Kelleher, C., Sheldon, J., & Turbak, F. (2017). Learnable programming: Blocks and beyond. Communications of the ACM, 60(6), 72–80.

Brusilovsky, P., Calabrese, E., Hvorecky, J., Kouchnirenko, A., & Miller, P. (1997). Mini-languages: a way to learn programming principles. Education and Information Technologies, 2(1), 65–83.

Guzdial, M. (2004). Programming environments for novices. Computer Science Education Research, 127–154.

History of computing. (n.d.). In Wikipedia. Retrieved October 9, 2019, from

Kelleher, C., & Pausch, R. (2005). Lowering the barriers to programming: A taxonomy of programming environments and languages for novice programmers. ACM Computing Surveys, 37(2), 83–137.

Mendelsohn, P., Green, T. R. G., & Brna, P. (1990). Programming languages in education: The search for an easy start. Psychology of Programming, pp. 175–200.



How Media Affects Learning

Shared post between Laren Helfer, Sandra Kuipers, Kathy Moore, Mark Regan

Clark (1994) and Kozma (1994) take opposite sides of the debate over whether and how media influences learning. As a team, we were tasked with looking at what is happening in the field to see if or how media affects learning. Here are four articles we found, with our thoughts on the great debate between Clark and Kozma.

3 Ways Big Data is Changing Education Forever

Big data refers to large volumes of data that can be mined to provide a company with valuable, otherwise inaccessible information about its customers. In 3 Ways Big Data is Changing Education Forever, Das (2019) describes how the affordances of big data can be applied to, and are already impacting, education. Because bytes exist only as digital information, the impacts Das discusses apply only to education delivered on a digital platform; instruction delivered via traditional means would not generate data to analyze. If the digital platform (perhaps an LMS or a website) is understood to be the media of instructional delivery, then it is the media itself, the way the instruction is delivered, and not the design of the instruction delivered by the media, that is impacting education. That is, if the media were changed to a non-digital mode of delivery, the potential impacts of big data could not be realized. This is contrary to Clark’s (1994) position that media does not influence learning; that it is merely a vehicle for delivering content, and that it is the design of the content that impacts learning.

Das (2019) points out that assessment and feedback are integral components of the learning process. When content is delivered via a digital media platform, big data can be used to illuminate how a learner interacts with the content (e.g., how many times they return to certain pages, how long they view pages, how long it takes them to answer questions). The analysis of this data can be applied to instructional design. The instructor can provide the analysis as feedback to the student, modify subsequent instruction to better address learning needs, or even design automatic modifications into the software so that the digital course itself can adapt the instruction to the individual learning needs it identifies. The data that enables these insights and interventions could not be obtained if the content were not delivered digitally. Therefore, digital media is necessary for learning to be influenced in exactly this way.

Clark (1994) challenges would-be critics of his arguments to consider, when media is being used instructionally, whether there are any attributes of that “media that are not replaceable by a different set of media and attributes to achieve similar learning results for any given student and learning task” (p. 22). The potential of big data to afford enhanced assessment and feedback opportunities relies on digital media’s capacity to generate data. While this does not require that only one specific type of software or platform be used to deliver content, it does make the choice of media integral to whether or not the learning opportunities afforded by big data can be realized.

The Influences of Technology and Media on Learning Process

In this article, the author seeks to explain the general concepts behind the pros and cons of media usage in learning. The article begins by reflecting that technology is omnipresent in many facets of learning and that the modern technologies we see today, including computers and tablets, are changing the roles of both teachers and learners (Mufarrohah, 2016, para. 1). The article does justice to the dichotomy presented by Clark and Kozma. Kozma (1994) has made the case that media and learning are in a positive relationship, offering more opportunities not only for the learning environment itself but for the teaching process as well. Clark (1994) has taken the position that “there are no learning advantages from using technology and media in the learning process” (Mufarrohah, 2016, para. 3). The article’s conclusion is telling in terms of which side the author leans toward in the great media debate. The author has sought to show the positive learning effects media in general can give the education community. Examples are presented, such as Reeves’ (1998) reflections on cognitive tools and on moving beyond traditional teaching norms, both of which point to the positive effects for which Kozma makes a case in his arguments. Overall, the author has presented both sides in an appropriate and fair manner, but leans toward Kozma’s position that media enhances the learning process and that a positive relationship exists between the two.

Make Personalized Learning a Reality for your Students

In this article, Microsoft presents a vision of personalized learning through collaboration tools, artificial intelligence, and immersive mixed reality. Images of touch-screen devices and colourful overlays of educational content embellish this message. Microsoft suggests that, for students to learn and thrive, they need the latest technologies: that these technologies “can transform a classroom” (Microsoft, 2019, para. 12) and “stimulate learning” (para. 10). The message conveyed is that personalization requires technology. Microsoft suggests that personalization “can be challenging for a teacher” (para. 8): why not solve these problems with artificial intelligence and machine learning? The article’s argument is backed with a glossy PDF of research by Microsoft and McKinsey, presenting data and infographics about the importance of social-emotional skills and critical thinking in future workplaces. Yet, this argument breaks down when critiqued against Clark’s (1994) argument of media vs. method. Do social-emotional skills and critical thinking require OneNote and Microsoft PowerPoint? Clark cautions that “we continue to invest heavily in expensive media in the hope that they will produce gains in learning” (para. 18). However, at the heart of learning is the method of instruction, and the method should not be confounded with the medium. Clark (1994) argues that “all methods required for learning can be delivered by a variety of media and media attributes” (para. 16). With Clark’s argument in mind, one shouldn’t discount educational technology either, yet it should be approached with a critical eye. McLuhan (1964) famously suggested that “the medium is the message,” which Kozma (1994) maintains and Clark disputes. As educators and technologists decide where they align in The Great Media Debate, it’s also important to ask: When does the message itself become lost behind the shiny touch-screen wifi-enabled augmented-reality medium?

Université de Montréal Opens Quebec’s First Virtual Reality Optometry Lab in Partnership with FYidoctors | Visique

This article introduces a new technology that the University of Montreal and FYidoctors | Visique are using to better the education of optometrists.  The media behind the technology is a simulation lab that provides students with experience in a virtual reality environment. The media allows students to work with real patient scenarios, but in the security of a simulated environment, where there is no risk to patient care.  Working in the lab provides students with the learning opportunity to experience everything from common to rare pathologies, allowing them to gain enough experience to be prepared to work on live patients.

The concept behind the lab goes against Clark’s (1994) position that media does not enhance learning. Clark states, “…computer simulation was used to teach students some skills required to fly a plane…people learned to fly planes before computers were developed and therefore the media attributes required to learn were obviously neither exclusive to computers nor necessary for learning to fly” (p. 11); however, just because learning once occurred without media does not mean that media cannot enhance it. The media discussed in this article provides students with a learning experience that was not otherwise available, meaning that without it their education would be missing a vital practical component. While optometrists have always received the education required for the job, this media advances their learning, resulting in better optometrists. If the use of media enhances learning, then there is a strong relationship between the two. As Kozma (1994) states, “[media will] advance the development of our field and contribute to the restructuring of schools and the improvement of education and training” (p. 23). This concept makes media more than a learning tool; it becomes a method critical to learning, as applied in the simulation lab by the University of Montreal and FYidoctors | Visique.



Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.

Kozma, R. B. (1994). Will media influence learning: Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.

Microsoft. (2019, May 2). Make personalized learning a reality for your students. Retrieved from

Mufarrohah, St. (2016, December 09). The influences of technology and media on learning processes [Blog Post]. Retrieved from

Reeves, T.C. (1998). The impact of media and technology in schools. The Journal of Art and Design Education, 4, 58-63. Retrieved from

Université de Montréal Opens Quebec’s First Virtual Reality Optometry Lab in Partnership with FYidoctors | Visique. (2019, October 3). Cision. Retrieved from



Explorations in Paneer and a Web of Life-long Learning

By Lisa Gates and Sandra Kuipers

At first blush, looking up a recipe for paneer (a soft cottage cheese) seems like a simple task, yielding straightforward results. While finding a good paneer recipe is easy, the task is more complex and involved than simply learning how it is made. The internet is full of information: recipes, regional variations, commonalities with other cuisines’ soft cheeses, and the history and etymology of paneer, making it a great example of a topic for life-long learning.

To explore the idea of abundant content online, we picked the topic of “how to make paneer”. We’re both passionate cooks, and paneer is something neither of us had made before; we were both interested to learn more about it. I (Sandra) love to make curries, but living in Asia it’s difficult to buy dairy products. Paneer is a “rich source of high quality animal protein, fat, minerals and vitamins” (Khan & Pal, 2011), so learning to make paneer would be a great way to add a healthy source of protein to my vegetarian curries. Paneer is delicious on its own and is often used as an ingredient in other dishes. Many of the recipes we initially found share similar ingredients and methods, and a quick look at Wikipedia (“Paneer,” 2019) shows that there are many kinds of fresh cheese around the world that are similar to, if not the same as, paneer.

Inspired by the availability of recipes, I (Lisa) decided to gather the ingredients and make a batch of paneer for dinner. Making paneer ended up taking much less time than looking for information about it did. Exploring paneer had me looking at a map of India to find regionally specific recipes and to better understand the parts of the country my students are from. I chose a recipe from Punjab that I may bring to a class potluck. Taking the learning and making it relevant to my life, with real-world application, an emphasis on learner construction (taking information and making one’s own meaning), and a shift from theoretical to practical experience (Ertmer & Newby, 2013), plants this exercise firmly in constructivist territory.

In the case of making paneer, online instructional content appears particularly well suited for short procedural tasks, such as a cooking recipe. Paneer can be made in 30 minutes to an hour, something we didn’t know before starting this activity. The short duration of the learning process, as well as the relatively few steps involved, suggests that using an online source of instruction would likely have a high degree of success. We wondered whether longer, more involved learning processes might not see the same level of success, given the possibility of missing a step or misunderstanding an instruction.

Our research into how to make paneer suggests that the availability of content online is a boon for life-long learning. Weller (2011) emphasizes that “learners need to be able to learn throughout their lives and to be able to learn about very niche subjects” (p. 228). In the case of learning how to make paneer, the abundance of content online makes it easy for someone interested in expanding their culinary repertoire to learn a new cooking process. They could be a professional looking to continuously improve their craft, or an individual interested in replicating their favourite dish. In each case, the availability of content outside of a formal learning setting enables individuals to engage in “innovative explorations, experimentations, and purposeful tinkerings” (Seely Brown & Adler, 2008, as cited in Weller, 2011). These opportunities for informal exploration support the pursuit of life-long learning by providing just-in-time instructional content.

The knowledge of how to make paneer could be thought of as human knowledge, rather than academic knowledge or corporate knowledge. Paneer is thought to originate in the Kusana and Saka Satavahana periods, AD 75–300 (Khan & Pal, 2011), and may have begun as an oral body of knowledge, passed from family to family. The wide availability of paneer recipes online reflects this human origin: there is no copyright or patent that could be applied to this knowledge. We would confidently label this as “abundant content” based on Weller’s (2011) characteristics of a “pedagogy of abundance” (p. 229): content is free, abundant, and varied; sharing is easy and socially based; and content is user-generated. However, an abundance of content doesn’t guarantee success in learning.

Abundant content online can also be overwhelming. Weller (2011) notes that an “excessive abundance constitutes a challenge” (p. 234) and requires different teaching and learning strategies. Learners facing an abundance of content need the skills to search and evaluate the material they find, such as general digital literacy skills and the ability to gauge the relevance of information found in searches. Basic digital literacy skills involve navigating the online environment, including generating relevant keywords for searches. Information evaluation, while not particularly challenging in the search for paneer recipes, can prove extremely important in other realms, such as learning about science, geopolitical issues, or other life-long learning topics. The ability to discern real, well-researched, peer-reviewed information can be paramount to one’s ability to navigate and understand the real world, recognizing and avoiding the rabbit holes of conspiracy theories and junk science. Anderson and Dron (2014) emphasize that “there is a concern that ‘popular’ is not necessarily equal to ‘useful’”. They state:

Content is often curated, mashed-up, re-presented, and constructed or assembled by those in the network. This is a wonderful resource when seen as a co-constructed and emergent pattern of knowledge-building, but without the editorial control that a teacher or guide in a group provides, it can lead to network-think, a filter bubble in which social capital rather than pedagogy becomes the guiding principle. (p. 140)

In our exploration of abundant content, we were easily able to find recipes for how to make paneer, and were even successful in creating a batch of paneer from scratch. However, throughout this exploration, we remain conscious of the different types of knowledge available online, and the possible pitfalls of abundant content. Some learning, such as short recipes and step-by-step instructions, may be better suited to online instruction than other types of learning. Our findings in this activity suggest that it’s important to understand Weller’s (2011) “pedagogy of abundance” (p. 229) when approaching learning online, and not make the assumption that abundant content automatically leads to successful learning.


Anderson, T., & Dron, J. (2014). Teaching crowds: Learning and social media. Athabasca University Press.

Ertmer, P., & Newby, T. (2013). Behaviorism, Cognitivism, Constructivism: Comparing critical features from an instructional design perspective. Performance Improvement Quarterly, 26(2), 43-71.

Khan, S. U., & Pal, M. A. (2011). Paneer production: A review. Journal of Food Science and Technology, 48(6), 645–660.

Weller, M. (2011). A pedagogy of abundance. Spanish Journal of Pedagogy, 249, 223–236.



Understanding Learning through Constructivism

We are born into a complex world. It is a world governed by physical laws, social norms, societal expectations, cultural traditions, and family dynamics. Understanding this world is an equally complex process. Many of these rules are not black and white, and they are not spelled out in a handbook presented to each new member of our species. Learning is a fundamentally human process of unravelling the nuanced, interconnected, often confusing threads that make up our world. Through this process of navigating and unravelling complexity, we construct meaning. This meaning-making process is at the heart of constructivism, which emphasizes that “humans create meaning as opposed to acquiring it” (Ertmer & Newby, 2013, p. 55). I believe constructivism offers an invaluable approach to understanding how we learn and how we can share knowledge.

The feedback loop between what we experience and what we know is how we build mental models of the complex world around us. Piaget (1936) presents this process of constructing meaning as the theory of cognitive development, which forms the underpinnings of constructivism. When we’re young, we touch and probe the world around us, and construct our understanding based on how it reacts to our sticky fingers and inquisitive senses. As we get older, we touch and probe the world through interactions on a cognitive and social level. We ask questions, challenge assumptions, and construct cause-and-effect relationships in our understanding. However, rather than progressing developmentally through cognitive stages, Egan (1997) suggests that this progression occurs through the acquisition of cognitive tools: Somatic, Mythic, Romantic, Philosophic, and Ironic. As individuals construct meaning, they progress from a big, bold, black-and-white understanding of the world towards a more fine-grained and nuanced understanding of the many shades of grey in-between. This progression enables learners to continuously revise and interpret their knowledge by applying different cognitive tools to their experiences.

Each step of the learning process is reinforced through the many experiences that inform our ideas, and although these experiences may be similar, the meaning each learner constructs in their mind is unique. Ertmer and Newby (2013) express that “learners do not transfer knowledge from the external world into their memories; rather they build personal interpretations of the world based on individual experiences and interactions” (p. 55). Their exploration of constructivism as it relates to instructional design suggests that meaningful learning activities need to be rooted in real-world contexts, and that learning needs to be an active experience rather than a passive consumption of facts. This aspect of constructivism reinforces the belief that an instructional designer cannot transfer content to students through lessons, but can create situations in which students’ experiences allow them to solve problems and construct their own understanding.

A constructivist approach is particularly powerful when applied to the learning activities I design for my computer science classes. Jonassen, as cited in Merrill (2002), expresses the need for students to “learn domain content in order to solve the problem, rather than solving the problem as an application of learning” (p. 55). Rather than teaching variables, conditionals, loops, and arrays in a linear, theory-driven approach, I can design more authentic opportunities to learn by creating real-world problems to solve. In my lessons, students aren’t given a step-by-step process to solve a problem. Instead, they work with a set of coding tools, their own understanding, and access to resources to add new concepts to their repertoire. Programming is highly feedback-oriented: try something, see if it works, debug, refine the approach, and continue experimenting until a solution is reached. Programming is also a highly creative process, and constructivism is well suited to “deal with complex and ill-structured problems” (Ertmer & Newby, 2013, p. 57). This problem-solving approach allows students to branch out from their familiar set of skills by tackling problems that require new perspectives and new skills. In a rapidly evolving world, where the technologies we will use a decade from now may not yet exist, it will be essential for students to approach problems where both the processes and the skills required to find a solution are unknown to them.



Egan, K. (1997). The educated mind: How cognitive tools shape our understanding. University of Chicago Press.

Ertmer, P. A., & Newby, T. J. (2013). Behaviorism, Cognitivism, Constructivism: Comparing critical features from an instructional design perspective. Performance Improvement Quarterly, 26(2), 43–71. Retrieved from

Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and Development, 50(3), 43–59.

Piaget, J. (1936). Origins of intelligence in the child. London: Routledge & Kegan Paul.




Cynthia Solomon: A Pioneer of Computer Science Education

Dr. Cynthia Solomon, as a computer scientist and an educator, has helped millions of children discover a love of computer science, and her contributions to education will continue to have an impact on future generations. Her work to make programming concepts accessible to children has furthered the field of educational technology through her research, writing, programming, teaching, consulting, and speaking.

Early in her career, Solomon discovered a passion for introducing children to computer science through activities and metaphors. She collaborated with Seymour Papert and Wally Feurzeig to develop the Logo programming language, the first programming language designed specifically for children. Although Logo itself is text-based, its turtle graphics enable learners to explore procedural thinking through visual output, most famously a turtle drawing a line. Logo has been a fundamental predecessor to modern visual programming languages such as Scratch, which has helped over 40 million children explore and understand programming concepts. Logo’s turtle robots have also inspired a whole new generation of procedural programming apps and robots.

Solomon holds a bachelor’s in history from Radcliffe College, a master’s in computer science from Boston University, and a Ph.D. in education from Harvard University. Her first book, Computer Environments for Children, published in 1988, is a prominent piece of early literature on computers in education. Throughout her career, Solomon collaborated closely with Seymour Papert, Marvin Minsky, Margaret Minsky, and the Massachusetts Institute of Technology (MIT) Artificial Intelligence Lab. Solomon also led the Atari Cambridge Research Laboratory as it designed a “PlayStation of the future” (Infosys Foundation, 2017, para. 13). Solomon’s impact on educational technology continues through speaking engagements and events, such as the inaugural lecture at CrossRoads 2018, as well as through her involvement in the Constructing Modern Knowledge institute and the One Laptop per Child Foundation.

Solomon believes in “transmitting theory into practice” (Solomon, 1988, p. 1). In her career, she not only authored several books and papers while working at eminent research institutes, but she also worked hands-on with students in elementary and secondary schools to teach programming concepts. In an interview about her work with Seymour Papert (Stanford University, 2013), Solomon reminisces about riding unicycles, juggling, and balancing on Bongo Boards. Papert and Solomon sought to spark children’s imagination and understanding by finding procedural activities to help them understand computer science concepts like debugging.

As a newly minted computer science teacher, I was fascinated to learn about Solomon’s impact on educational technology. I share her belief that learning complex programming concepts can be a fun, hands-on, and active experience. As I design lessons and learning activities for my classes, Solomon’s passion for teaching computer science inspires me to continuously look for new ways to involve and engage my students.

Interesting Links


Infosys Foundation (2017, December 6). Q & A with Dr. Cynthia Solomon [Blog post]. Retrieved September 13, 2019.

Solomon, C. (1988). Computer environments for children: A reflection on theories of learning and education. MIT press.

Stanford University (2013, June 27). Cynthia Solomon on Seymour Papert [Video].



Learning from the Past, Looking to the Future

The combined history that Reiser (2001) and Weller (2018) present highlights many advancements in educational technology, along with plenty of dead ends and failures. However, I believe these failures were productive ones. To find approaches that work, educators and technologists need to be willing to experiment, and to accept that not every idea will succeed, or should. The successes, along with the failures, offer many lessons for future endeavours to learn from.

A lesson from the past that struck me as particularly poignant in Reiser’s (2001) article was the importance of not using technology merely to teach technology skills. He noted that as computers were introduced into schools, their application was “far from innovative” (p. 60), often limited to teaching computer-related skills. In my experience working in a K-12 school, I’ve seen this trend as well. For example, a lesson might focus on teaching Adobe Photoshop skills rather than the broader concepts of colour theory, typography, and aesthetics. Treating a particular app as the end point narrows what students can do with their learning. As I design learning activities for my computer science course, I plan to keep this lesson in mind: beyond teaching a specific programming language, which will go in and out of fashion and vary with the goal, I aim to develop activities that foster problem-solving and programmatic thinking skills regardless of the presence or absence of technology.

In Twenty Years of Edtech, Weller (2018) suggests that blogging is “full of potential” (p. 39) and “an ideal educational technology” (p. 48). I feel this lesson still applies to business and higher education, yet it conflicts with the reality I’ve seen in my day-to-day work in K-12 education. In my experience, students turn to blogs as informational artifacts, but they seem increasingly less interested in authoring their own blog posts. With the prevalence of WeChat and WhatsApp, students often engage in closed systems of communication: able to broadcast quickly to a large number of predetermined people, but rarely broadcasting their words publicly on the internet. When they do broadcast, it tends to be in a social media format: Instagram, YouTube, and Twitter. Is this a shift in the way future generations will communicate online? In my school, an effort was made to create and promote classroom blogs as well as student-authored blogs, yet the endeavour rapidly lost momentum. Was this a systemic failure, or a symptom of users’ dwindling interest? These results lead me to wonder about the age demographic behind the majority of blogs on the internet. Are the upcoming generations as interested in blogging as the generations that came before them? What will future histories of edtech say about the importance of blogging in education?

Weller’s closing sentiment in Twenty Years of Edtech was that “nothing much has changed, and many edtech developments have failed to have significant impact” (p. 48). Counter-intuitively, these failed developments make me optimistic about the future of edtech. The more we fail, the more we have tried. The technologies Weller highlighted, failures and otherwise, were increasing in both scope and diversity. I am sure there will be plenty of missteps and unsuccessful technologies in the future, yet each one has the potential to lead to new ideas, or at the very least to reinforce the lessons we must continue to learn from the past.



Reiser, R. A. (2001). A history of instructional design and technology: Part I: A history of instructional media. Educational Technology Research and Development, 49(1), 53–64.

Weller, M. (2018). Twenty years of edtech. EDUCAUSE Review, 53(4), 34–48.



How have humans technologized education?

To explore the history of educational technology, I first set out to define what technology is. My initial internet searches revealed that technology has a variety of dictionary and encyclopedia definitions. Beyond the basic denotations of technology as a “collection of techniques, skills, methods, and processes” (“Technology,” n.d.), I wanted a more human perspective, so I expanded my search to include videos and blog posts. Kevin Kelly, in a 2010 TEDxAmsterdam talk, shared his insights into what technology means, and presented a definition that I found relatable: “Technology is anything useful invented by a mind” (Kelly, 2010). Human Technologies, developed as a core subject at the International College Hong Kong (ICHK), expands on this definition to include cognitive, material, social, spiritual, and somatic technologies (ICHK, n.d.). For example, breathing may be innate to humans, but CPR, Lamaze, and meditation are each somatic technologies.

What does this mean for educational technology? Education, when seen as “the process of facilitating learning” (“Education,” n.d.), doesn’t necessarily require tools and devices. Yet, throughout human history, we have increasingly applied our ingenuity to change the nature of education. Thinking of technology as not just a set of tools but also a set of processes, I wondered: how have humans technologized education?

The Socratic method, from the classical Greek period, represents an example of early cognitive and social technologies in education. By developing a formalized method for debate, Socrates and his contemporaries created an educational technology that uses structured discourse as a way to facilitate learning. This form of debate inspired Plato’s Republic and led to the first concepts of universities (Smith, 1997).

The highly criticized factory model of schooling, with its rigid systems and standardization, is a way of technologizing education to facilitate its application on a mass scale. Rows of desks, chalkboards, and school bells are all examples of material technologies, yet there are also social technologies at work: the teacher-student power structure of a classroom, and the timetable-oriented structure of a school day. Through my research on the topic, I was intrigued to find that Watters (2015) critiques how terms like industrialized education are often used “not so much to explain the history of education, as to try to shape its future” (para. 22).

The certification process is another way humans have technologized education. Receiving credentials through formal schooling can be seen as a social technology, one used to delineate a person’s knowledge into discrete units of education. These credentials are then recognized as interchangeable by many facets of society, creating a “plug-and-play” form of education (Brown & Tannock, 2009).

Exploring education technology—not just as material tools, but also as somatic, social, and cognitive processes—presented an opportunity to consider how education itself is a technology. Is upgrading to “Education 2.0” (Vagelatos, Foskolos, & Komninos, 2010) such a new idea, or are we already many versions into this technological development?



Brown, P., & Tannock, S. (2009). Education, meritocracy and the global war for talent. Journal of Education Policy, 24(4), 377–392.

Education. (n.d.). In Wikipedia. Retrieved September 6, 2019.

International College Hong Kong. (n.d.). Human Technologies | ICHK. Retrieved September 7, 2019.

Smith, M. K. (1997). Plato on education. Retrieved September 7, 2019.

Technology. (n.d.). In Wikipedia. Retrieved September 6, 2019.

TEDxAmsterdam. (2010). Kevin Kelly: Technology’s epic story [Video]. Retrieved September 8, 2019.

Vagelatos, A. T., Foskolos, F. K., & Komninos, T. P. (2010). Education 2.0: Bringing innovation to the classroom. Proceedings of the 14th Panhellenic Conference on Informatics (PCI 2010), 201–204.

Watters, A. (2015). The Invented History of “The Factory Model of Education.” Retrieved September 6, 2019.


