Image copyright TJ Rak. https://tjrak.work/about
For this assignment, I focused on the brilliant and talented Dr. Joy Adowaa Buolamwini. Although Buolamwini’s work does not fall specifically within the education and technology field, it will (hopefully) shape the future of technologies being introduced to our sector. Her research and advocacy are critical to combating ongoing biases and exclusion in tech development, specifically in AI and algorithm design. As Weller (2020) noted in Chapter 25 and the conclusion of 25 Years of Ed Tech, tech-facilitated dystopian scenarios are already occurring (pp. 172-174), and it is the responsibility of educators and learners to critically evaluate the values and unintended harms of the technologies we adopt (p. 189).
As for why I chose Dr. Buolamwini for this post: discovering her work was part of a profound shift in awareness that I stumbled into in 2019. Before that year, it would be fair to say I was a naïve techno-optimist who believed that, with enough time and resources, unimpeded technological innovation would solve humanity’s greatest challenges. I hadn’t yet understood how values and biases become embedded in technology, or considered the perverse incentive structures driving the techno-optimist progress narrative.
About Dr. Buolamwini
Joy Adowaa Buolamwini was born in Edmonton to Ghanaian parents while her father completed his PhD at the University of Alberta (Carnegie Corporation of New York [Carnegie], 2020). She spent her early childhood in Ghana before moving to the United States at the age of four (Carnegie, 2020). As part of her undergraduate work at Georgia Tech, Buolamwini programmed a robot to play peek-a-boo but found that it had difficulty recognizing her face (Carnegie, 2020). She discovered that the AI-based facial detection software struggled to recognize dark-skinned faces (Carnegie, 2020).
Building on her undergraduate findings, Buolamwini attained graduate degrees from Oxford and MIT (Buolamwini, n.d.). Her master’s thesis at MIT identified the issue of algorithmic bias against people of colour (Buolamwini, 2017), while her doctoral thesis established the critical importance of ongoing, systematic auditing of algorithmic systems for bias and quantified the actual harm caused to excoded individuals (Buolamwini, 2022). In her work, she coined the phrase “coded gaze” (Buolamwini, 2022, p. 10) to describe the problematic behaviour caused by prototyping and training algorithms on narrow data sets, particularly white models and attributes.
In a 2024 Forbes article, Buolamwini explains the importance of understanding encoded biases in technologies, particularly AI (Abeywardena, 2024):
AI can harm people significantly over time through perpetuating structural violence, exhibiting bias and lacking accountability in decision-making processes. AI systems can perpetuate structural violence by denying individuals access to essential services like healthcare, housing, and employment. This denial of fundamental needs can cause irreparable harm to individuals for generations.
In addition to her scholarship, Buolamwini founded the Algorithmic Justice League (AJL) to advocate for change and to influence policymakers to enact legislation to regulate the tech industry. Furthermore, she has created an influential TED talk, a Netflix documentary, multiple editorial videos and articles, and a book entitled Unmasking AI.
Finally, Dr. Buolamwini describes herself as “a poet of code on a mission to show compassion through computation” (Buolamwini, n.d.). As a powerful humanist, artist, computer scientist, scholar, and advocate, Dr. Buolamwini’s work will likely positively influence the technologies adopted in education and society in the coming years.
Dr. Buolamwini Resources
- Master’s thesis: https://dspace.mit.edu/handle/1721.1/114068
- Doctoral thesis: https://dspace.mit.edu/handle/1721.1/143396
- Algorithmic Justice League: https://www.ajl.org/
- Poet of Code: https://poetofcode.com/?home
- YouTube channel: https://www.youtube.com/@JoyBuolamwini-PoetofCode
- TED Talk: https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
- Coded Bias Netflix documentary: https://www.netflix.com/title/81328723
- Unmasking AI book: https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/
References
Abeywardena, P. (2024, July 25). Facing up to the threat of AI. Forbes. https://www.forbes.com/sites/pennyabeywardena/2024/07/25/facing-up-to-the-threat-of-ai/
Buolamwini, J. (2017). Gender shades: Intersectional phenotypic and demographic evaluation of face datasets and gender classifiers [Master’s thesis, Massachusetts Institute of Technology]. MIT Libraries. https://dspace.mit.edu/handle/1721.1/114068
Buolamwini, J. (2022). Facing the coded gaze with evocative audits and algorithmic audits [Doctoral thesis, Massachusetts Institute of Technology]. MIT Libraries. https://dspace.mit.edu/handle/1721.1/143396
Buolamwini, J. (n.d.). About – Poet of Code. Poet of Code. https://poetofcode.com/about/
Carnegie Corporation of New York. (2020). 2020 great immigrants: Joy Buolamwini. https://www.carnegie.org/awards/honoree/joy-buolamwini/
Weller, M. (2020). 25 years of ed tech. AU Press. https://doi.org/10.15215/aupress/9781771993050.01