
Would understanding algorithmic biases and behaviours affect social media use? (long version)

Image by Jon Tyson on Unsplash.

When working on activity 5 for unit 1 of LRNT522, I ended up writing more than made sense for a Padlet post. Here is the long version of the post, including more details about the research project design and a test ChatGPT prompt:

It is well established that the behaviours of social media algorithms continue to negatively affect public discourse through the erosion of the epistemic commons (The Consilience Project, 2023). Stark et al.’s 2020 report Are Algorithms a Threat to Democracy? The Rise of Intermediaries: A Challenge for Public Discourse identifies multiple issues with algorithmic behaviour, including gatekeeping, microtargeting, and information filtering. These behaviours have created filter bubbles and echo chambers and amplified incivility and hate speech, resulting in political polarization, fragmentation, and distorted opinions.

In considering these concepts, I wonder whether there is a role for education in restoring the epistemic commons through social media literacy focused on how algorithms function and why they create distorted perceptions of reality. The underlying question: if people understand how and why they are being manipulated, will that change their behaviour and perspective?

From a research design perspective, I would create an e-learning module on algorithmic bias, the attention economy, and their impact on human perception. The module would be written in language consistent with a Flesch Reading Ease score of 60.0–70.0. Depending on the scale of the resources available for the project, sampling might be limited to a specific age cohort or geographic location; with sufficient resources, the project could attempt large-scale, diverse sampling across Canada. Ideally, a mixed-methods approach would be applied, including long-form surveys with questions about a polarized topic and about social media use and experience. Narrow-scope pre- and post-evaluations would be administered to assess knowledge and understanding of the content. A follow-up long-form survey could be administered after a period to investigate whether participants felt the education affected how they use social media and consume digital media.
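The Flesch Reading Ease score mentioned above is computed from average sentence length and average syllables per word. A minimal Python sketch of the standard formula, using a naive vowel-group heuristic for syllable counting (real syllabification is more involved, so treat the output as approximate):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of vowels (including y)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Scores of 60.0–70.0 correspond to "plain English," readable by roughly 13- to 15-year-olds, which is why that band is a reasonable target for a general-audience module.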

So, my research question would be:

Does providing education about algorithmic biases and behaviour affect polarization?

ChatGPT 3.5

Out of curiosity, I used the following prompt in ChatGPT to see what it might suggest, and it wasn’t far from my original question or project design.

Prompt used: propose a research question to evaluate the impact of education about algorithmic bias on public discourse?

ChatGPT’s response was:

Certainly! Here’s a research question that focuses on the impact of education about algorithmic bias on public discourse:

Research Question: How does education about algorithmic bias affect public awareness, perception, and engagement in discussions related to ethical considerations in digital platforms and public discourse?

This question could be approached by conducting surveys or interviews before and after educational interventions to assess changes in awareness and attitudes toward algorithmic bias. Additionally, analyzing online discourse and media coverage surrounding algorithmic bias before and after educational initiatives could provide insights into shifts in public discourse and understanding. (OpenAI, 2024)

References

OpenAI. (2024). ChatGPT 3.5 (June 30 version) [Large language model]. https://chat.openai.com/chat

Stark, B., Stegmann, M. A., Magin, M., & Jürgens, P. (2020, May 26). Are algorithms a threat to democracy? The rise of intermediaries: A challenge for public discourse. Governing Platforms Project. https://www.epc.eu/content/PDF/2020/Governing_Platforms_v3.pdf

The Consilience Project. (2023, February 27). Democracy and the epistemic commons. https://consilienceproject.org/democracy-and-the-epistemic-commons/

Published in LRNT 522
