Parasocial Bonds and Algorithmic Influence on YouTube

As I began this research journey into the intersection of parasocial interaction and algorithmic influence on YouTube-based learning, I expected to uncover nuances in content delivery and learner engagement. What I didn’t anticipate was how quickly the inquiry would expand, raising broader questions about trust, visibility, and equity on platforms like YouTube.

Earlier in my inquiry, I leaned on foundational texts by Selwyn (2010) and Fawns (2022), which helped me view platforms like YouTube not just as tools but as larger systems with their own entangled values, ideologies, and commercial operations. Further research led me to studies by Bucher (2018), Selwyn (2022), and Bishop (2019), which deepened my understanding of algorithmic curation as both a technical and a political force. These articles unpacked how recommendation systems are not merely responsive to user behaviour but actively shape that behaviour by favouring more engaging or profitable content. At the same time, readings on parasocial interaction by Labrecque (2014), Chen (2016), and Beautemps and Bresges (2022) brought to light the emotional dimensions of online learning, showing how perceived closeness to content creators can subtly influence learners’ trust, retention, and motivation.

One of the first tensions I encountered was in understanding parasocial interactions not just as social phenomena but as pedagogical experiences. I questioned whether emotional connections enhanced engagement or distorted perceptions of credibility. Through my readings and a content analysis of popular YouTube channels like English with Emma, Organic Chem Tutor, and Khan Academy, I started to see how parasocial presence could serve as a proxy for trust. These creators, each with millions of subscribers, consistently drew far higher engagement than smaller creators offering similar content. I wondered whether learners were naturally gravitating toward them out of familiarity and trust, or whether YouTube’s algorithm was playing a decisive role in making them more visible.

This observation led me to explore the concept of algorithmic agency: how algorithms not only curate content but also shape user behaviour and learning pathways. Bucher (2018) describes algorithms not merely as technical processes but as agents of power that shape human behaviour in subtle yet profound ways. In my research on YouTube, I examined how the recommendation engine reinforces engagement loops that prioritize emotional resonance and production quality over accessibility or pedagogy. I came to see that YouTube is less about helping learners find what they need and more about predicting what will keep them watching, and that has real implications for what kind of content gets seen, who gets heard, and which voices are left out.
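
To make that objective concrete, here is a minimal Python sketch of what an engagement-first ranking criterion looks like. It is purely illustrative, not YouTube’s actual code; the feature names, weights, and example videos are all invented. The point is structural: a pedagogy signal can exist in the data and still never enter the score.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_time: float  # minutes a model expects the user to watch
    pedagogical_quality: float   # hypothetical expert rating, 0 to 1

def engagement_rank(videos):
    # The objective optimizes watch time alone; pedagogical_quality
    # never enters the score, so it cannot affect visibility.
    return sorted(videos, key=lambda v: v.predicted_watch_time, reverse=True)

candidates = [
    Video("Charismatic explainer", predicted_watch_time=9.2, pedagogical_quality=0.6),
    Video("Rigorous but dry lecture", predicted_watch_time=3.1, pedagogical_quality=0.9),
]
for video in engagement_rank(candidates):
    print(video.title)
# The charismatic video is surfaced first, regardless of which one teaches better.
```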

The more emotionally compelling a creator is, the more likely they are to trigger high watch times and return visits. The algorithm reads these signals and promotes their content further. In turn, learners encounter more of that creator’s videos, strengthening the parasocial bond. This feedback loop creates a powerful structure in which visibility, trust, and engagement are produced through predictive design. Yet this is where my questions about equity deepened. I began to question whether YouTube is as accessible as it seems and whether it genuinely supports diverse learning. A study by Haroon et al. (2023) was especially illuminating here. By creating 100,000 simulated user accounts, the researchers found that YouTube’s algorithm tends to recommend content aligned with a user’s existing ideological leanings. This pattern suggests that the recommendation system may inadvertently create echo chambers, limiting exposure to diverse perspectives and skewing the learning experience. Recognizing this, I began to reflect on whether platforms have a responsibility to ensure that their algorithms promote a more balanced and inclusive educational environment.
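
The logic of that kind of sock-puppet audit can be illustrated with a toy simulation. The sketch below is hypothetical throughout: the `recommend` stand-in and the ideology scores are invented for illustration, not Haroon et al.’s code or YouTube’s API. It simply shows how seeding fresh accounts with one leaning and measuring where recommendations drift can reveal congenial, echo-chamber-like behaviour.

```python
import random

def recommend(history):
    """Hypothetical stand-in recommender: suggests content near the mean
    ideology of what the account has already watched (-1 left .. +1 right)."""
    mean = sum(history) / len(history)
    return max(-1.0, min(1.0, mean + random.gauss(0, 0.1)))

def run_audit(seed_ideology, n_accounts=1000, n_videos=20):
    """Seed fresh accounts with one leaning, follow recommendations,
    and report how far the final video drifted from the seed."""
    drifts = []
    for _ in range(n_accounts):
        history = [seed_ideology]
        for _ in range(n_videos):
            history.append(recommend(history))
        drifts.append(history[-1] - seed_ideology)
    return sum(drifts) / len(drifts)

for seed in (-0.8, 0.0, 0.8):
    print(f"seed {seed:+.1f} -> mean drift {run_audit(seed):+.3f}")
# Drift near zero means recommendations stay close to each account's
# starting leaning -- the congenial pattern the audit looks for.
```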

Things became more complex when I considered the economic side of YouTube. Many creators earn money through ads, sponsorships, or paid memberships, which means their content isn’t shaped only by what’s helpful to learners, but also by what performs well. Decisions about tone, visuals, and how often videos are released may be influenced by the pressures of keeping an audience and making a living. While monetization isn’t inherently bad, it adds tension around who gets promoted and who stays invisible. Consequently, the system may favour those who are more marketable, not necessarily more educational.

In the end, my journey so far has been less about finding clear answers and more about asking better questions. How do emotional bonds and algorithmic systems work together to shape trust in learning? What happens when engagement becomes the main measure of educational success? And how can learners, educators, and designers become more aware of the invisible systems that influence what and how we learn? As I wrap up my research, I aim to address these questions in my final paper, focusing on how trust, visibility, and equity are shaped in digital environments where performance data and predictive analytics determine what content gets seen and valued.

References

Beautemps, J., & Bresges, A. (2022). The influence of the parasocial relationship on the learning motivation in self-regulated learning with YouTube videos. Frontiers in Education, 7, 1021798. https://doi.org/10.3389/feduc.2022.1021798

Bishop, S. (2019). Managing visibility on YouTube through algorithmic gossip. New Media & Society, 21(11–12), 2589–2606. https://doi.org/10.1177/1461444819854731

Bucher, T. (2018). If…then: Algorithmic power and politics. Oxford University Press. https://doi.org/10.1093/oso/9780190493028.001.0001

Chen, C. (2016). Forming digital self and parasocial relationships on YouTube. Journal of Consumer Culture, 16(1), 232–254. https://doi.org/10.1177/1469540514521081

Fawns, T. (2022). An entangled pedagogy: Looking beyond the pedagogy–technology dichotomy. Postdigital Science and Education, 4(3), 711–728. https://doi.org/10.1007/s42438-022-00302-7

Haroon, M., et al. (2023). Auditing YouTube’s recommendation system for ideologically congenial, extreme, and problematic recommendations. Proceedings of the National Academy of Sciences, 120(10), e2213020120. https://doi.org/10.1073/pnas.2213020120

Labrecque, L. (2014). Fostering consumer–brand relationships in social media environments: The role of parasocial interaction. Journal of Interactive Marketing, 28(2), 134–148. https://doi.org/10.1016/j.intmar.2013.12.003

Selwyn, N. (2010). Looking beyond learning: notes towards the critical study of educational technology. Journal of Computer Assisted Learning, 26(1), 65–73. https://doi.org/10.1111/j.1365-2729.2009.00338.x

Selwyn, N. (2022). Making sense of the digital automation of education. Postdigital Science and Education, 4(3), 287–297. https://doi.org/10.1007/s42438-022-00362-9


One thought on “Parasocial Bonds and Algorithmic Influence on YouTube”

  1. Asha,

    Thank you for this rich and thought-provoking reflection. I was especially struck by how you weave together the emotional and algorithmic dimensions of YouTube-based learning, two forces that are often considered separately but clearly operate in tandem, as your post demonstrates so well.

    Your exploration of parasocial interaction as not just a social dynamic but a pedagogical mechanism really resonated with me. It raises important questions about the role of trust in learning, particularly when that trust is shaped by familiarity, visibility, or production value rather than pedagogical substance. The way you describe parasocial presence becoming a proxy for credibility aligns with my own observations in ESL learning spaces, where highly polished content can mask a lack of linguistic nuance or cultural responsiveness.

    Your reference to Bucher’s (2018) notion of algorithmic agency deepens the critique: it’s no longer just about what learners choose, but about how choice itself is shaped. The “feedback loop” you describe between algorithmic visibility and emotional engagement is an incredibly important insight, especially when considering equity. It raises the question: who is being systematically excluded from visibility, not for lack of value, but because they don’t fit the platform’s profitability logic?

    I also appreciated how you didn’t villainize monetization, but rather pointed out the tensions it creates. This is such a needed nuance. As you note, content shaped by monetization isn’t inherently bad, but when financial viability depends on maintaining algorithmic favour, pedagogical priorities can shift or even erode.

    Your final questions are powerful and timely. What happens when trust is manufactured by metrics? When educational legitimacy is tied to engagement data? These questions don’t just affect learners; they challenge all of us involved in designing, curating, or participating in digital education.

    Thank you for sharing your evolving inquiry; it’s clear your research is not only academically grounded but also deeply reflective and socially relevant.
