Algorithmic radicalization

From Wikipedia, the free encyclopedia

Algorithmic radicalization (or radicalization pipeline) is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading them to develop radicalized extremist political views. Algorithms record user interactions, from likes and dislikes to the amount of time spent on posts, to generate an endless stream of media aimed at keeping users engaged. Through echo chamber channels, the consumer is driven toward greater polarization by media preferences and self-confirmation.[1][2][3][4]
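The feedback loop described above can be illustrated with a toy simulation (an illustrative sketch only, not any platform's actual system): a recommender that explores each content category once, then repeatedly serves whichever category earned the most engagement. The category names, engagement values, and the "explore once, then exploit" policy are all hypothetical simplifications.

```python
# Toy sketch (illustrative only, not any real platform's algorithm):
# an engagement-maximizing recommender samples each content category
# once, then repeatedly serves whichever category earned the most
# engagement, narrowing the feed onto the user's preference over time.

CATEGORIES = ["news", "sports", "politics", "music"]

def user_engagement(category, bias="politics"):
    # Hypothetical user who engages far more with one category.
    return 0.9 if category == bias else 0.3

def run_feed(steps=20):
    scores = {}  # learned engagement score per category
    feed = []
    for _ in range(steps):
        if len(scores) < len(CATEGORIES):
            item = CATEGORIES[len(scores)]      # explore each category once
        else:
            item = max(scores, key=scores.get)  # then exploit pure engagement
        scores[item] = user_engagement(item)
        feed.append(item)
    return feed

feed = run_feed()
print(feed[:4])   # exploration phase: one item from each category
print(feed[-3:])  # feed has collapsed onto the user's preferred category
```

Because the ranking objective is engagement alone, with no diversity or quality term, the simulated feed converges on a single category; critics argue real engagement-optimized systems exhibit an analogous narrowing effect.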

Algorithmic radicalization remains a controversial phenomenon, as it is often not in the best interest of social media companies to remove echo chamber channels.[5][6] Though social media companies have acknowledged that algorithmic radicalization exists, it remains unclear how each will manage this growing threat.

Allegations against Facebook

In an August 2019 internal memo leaked in 2021, Facebook admitted that "the mechanics of our platforms are not neutral",[7][8] concluding that optimizing for engagement is necessary to maximize profit. To increase engagement, algorithms found that hate, misinformation, and politics are instrumental in driving app activity.[9] As referenced in the memo, "The more incendiary the material, the more it keeps users engaged, the more it is boosted by the algorithm."[7] According to a 2018 study, "false rumors spread faster and wider than true information... They found falsehoods are 70% more likely to be retweeted on Twitter than the truth, and reach their first 1,500 people six times faster. This effect is more pronounced with political news than other categories."[10]

TikTok Algorithms

TikTok is an app that recommends videos to a user's 'For You Page' (FYP), making every user's page different. Owing to the nature of the algorithm behind the app, TikTok's FYP has been linked to showing more explicit and radical videos over time, based on a user's previous interactions on the app.[11] Since TikTok's inception, the app has been scrutinized for misinformation and hate speech, as those forms of media usually generate more interactions for the algorithm.[12]

As of 2022, TikTok's head of US Security had stated that "81,518,334 videos were removed globally between April - June for violating our Community Guidelines or Terms of Service" to cut back on hate speech, harassment, and misinformation.[13]


Self-radicalization

An infographic from the United States Department of Homeland Security's "If You See Something, Say Something" campaign, a national initiative to raise awareness of homegrown terrorism and terrorism-related crime.

The U.S. Department of Justice defines lone-wolf (self-radicalized) terrorism as "someone who acts alone in a terrorist attack without the help or encouragement of a government or a terrorist organization".[14] Lone-wolf terrorism has been on the rise through social media outlets on the internet and has been linked to algorithmic radicalization.[15] In online echo chambers, viewpoints typically seen as radical are accepted and quickly adopted by other extremists.[16] Forums, group chats, and social media then reinforce these beliefs.[17]

References in Media

The Social Dilemma

"The Social Dilemma" is a 2020 docudrama about how the algorithms behind social media enable addiction and can manipulate people's views, emotions, and behavior to spread conspiracy theories and disinformation. The film repeatedly uses buzzwords such as 'echo chambers' and 'fake news' to argue that psychological manipulation on social media leads to political manipulation. In the film, a fictional teenager named Ben falls deeper into social media addiction after the algorithm determines that his social media page has a 62.3% chance of long-term engagement. More videos appear on Ben's recommended feed, and he becomes increasingly immersed in propaganda and conspiracy theories, growing more polarized with each video.

Possible Solutions

Section 230

Section 230 of the Communications Decency Act of 1996 states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".[18] Section 230 protects platforms from liability or lawsuits over third-party content, such as illegal activity by a user.[18] However, this protection reduces a company's incentive to remove harmful content or misinformation, and the loophole has allowed social media companies to maximize profits by pushing radical content without legal risk.[19]

References

  1. ^ "What is a Social Media Echo Chamber? | Stan Richards School of Advertising". Retrieved 2022-11-02.
  2. ^ "The Websites Sustaining Britain's Far-Right Influencers". bellingcat. 2021-02-24. Retrieved 2021-03-10.
  3. ^ Camargo, Chico Q. "YouTube's algorithms might radicalise people – but the real problem is we've no idea how they work". The Conversation. Retrieved 2021-03-10.
  4. ^ E&T editorial staff (2020-05-27). "Facebook did not act on own evidence of algorithm-driven extremism". Retrieved 2021-03-10.
  5. ^ "How Can Social Media Firms Tackle Hate Speech?". Knowledge at Wharton. Retrieved 2022-11-22.
  6. ^ "Internet Association - We Are The Voice Of The Internet Economy. | Internet Association". 2021-12-17. Archived from the original on 2021-12-17. Retrieved 2022-11-22.
  7. ^ a b "Disinformation, Radicalization, and Algorithmic Amplification: What Steps Can Congress Take?". Just Security. 2022-02-07. Retrieved 2022-11-02.
  8. ^ Isaac, Mike (2021-10-25). "Facebook Wrestles With the Features It Used to Define Social Networking". The New York Times. ISSN 0362-4331. Retrieved 2022-11-02.
  9. ^ Little, Olivia. "TikTok is prompting users to follow far-right extremist accounts". Media Matters for America. Retrieved 2022-11-02.
  10. ^ "Study: False news spreads faster than the truth". MIT Sloan. Retrieved 2022-11-02.
  11. ^ "TikTok's algorithm leads users from transphobic videos to far-right rabbit holes". Media Matters for America. Retrieved 2022-11-22.
  12. ^ Little, Olivia. "Seemingly harmless conspiracy theory accounts on TikTok are pushing far-right propaganda and TikTok is prompting users to follow them". Media Matters for America. Retrieved 2022-11-22.
  13. ^ "Our continued fight against hate and harassment". Newsroom | TikTok. 2019-08-16. Retrieved 2022-11-22.
  14. ^ "Lone Wolf Terrorism in America | Office of Justice Programs". Retrieved 2022-11-02.
  15. ^ Alfano, Mark; Carter, J. Adam; Cheong, Marc (2018). "Technological Seduction and Self-Radicalization". Journal of the American Philosophical Association. 4 (3): 298–322. doi:10.1017/apa.2018.27. ISSN 2053-4477. S2CID 150119516.
  16. ^ Dubois, Elizabeth; Blank, Grant (2018-05-04). "The echo chamber is overstated: the moderating effect of political interest and diverse media". Information, Communication & Society. 21 (5): 729–745. doi:10.1080/1369118X.2018.1428656. ISSN 1369-118X. S2CID 149369522.
  17. ^ Sunstein, Cass R. (2009-05-13). Going to Extremes: How Like Minds Unite and Divide. Oxford University Press. ISBN 978-0-19-979314-3.
  18. ^ a b "47 U.S. Code § 230 - Protection for private blocking and screening of offensive material". LII / Legal Information Institute. Retrieved 2022-11-02.
  19. ^ Smith, Michael D.; Alstyne, Marshall Van (2021-08-12). "It's Time to Update Section 230". Harvard Business Review. ISSN 0017-8012. Retrieved 2022-11-02.