User:Hirsutism/Algorithm (social media)

From Wikipedia, the free encyclopedia
improve references, obviously

Should the scope of this article be expanded to include search engine algorithms, mortgage approval algorithms, online advertising, hire/fire decisions, and more? A large percentage of this article could cover ALL social impacts of artificial intelligence algorithms. Or should the algorithmic bias article handle those other cases?

Should this be a separate article, or should it be a section of the algorithm article? If it stays a separate article, and expands in scope, what name should it take, and how should we limit its scope, to keep it from covering just about any algorithm? Also, if this stays a separate article, there should at least be a summary paragraph in the algorithm article, no?

Social media algorithms can decide to reorder or hide content.

A social media algorithm is a computer algorithm that determines whether and in what order[1] social media content is displayed.[2][3] Social media algorithms are designed to maximize user engagement by trying to predict what content a user would prefer to see,[4] as the more time someone spends on the site, the more ad revenue is earned. Algorithms may improve via machine learning or via manual A/B testing.[5]
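The ranking step can be illustrated with a minimal sketch. The weights and features below are invented for demonstration and do not correspond to any real platform's (closed-source) algorithm; the general idea is simply to score each candidate post by predicted engagement and sort in descending order.

```python
# Hypothetical sketch of engagement-based feed ranking.
# All scoring weights and features here are invented for illustration;
# real platform algorithms are closed-source.

def predicted_engagement(post, user):
    """Toy model: score a post by how likely this user is to engage with it."""
    score = 0.0
    score += 2.0 * len(post["topics"] & user["interests"])   # topical match
    score += 1.5 if post["author"] in user["follows"] else 0.0  # followed author
    score += 0.1 * post["likes"] ** 0.5  # popularity, with diminishing returns
    return score

def rank_feed(posts, user):
    """Order candidate posts by descending predicted engagement."""
    return sorted(posts, key=lambda p: predicted_engagement(p, user), reverse=True)

user = {"interests": {"cooking", "hiking"}, "follows": {"alice"}}
posts = [
    {"author": "alice", "topics": {"hiking"}, "likes": 9},
    {"author": "bob", "topics": {"finance"}, "likes": 100},
    {"author": "carol", "topics": {"cooking", "hiking"}, "likes": 4},
]
feed = rank_feed(posts, user)
print([p["author"] for p in feed])  # prints ['carol', 'alice', 'bob']
```

In this toy model, topical overlap with the user's interests outweighs raw popularity, which is why the widely liked but off-topic post ranks last.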

Social media algorithms are usually closed-source,[6][7] as they are considered trade secrets that confer a competitive advantage. They are also used to limit the reach of spam, an ongoing arms race that would become harder to win if the algorithms were open source.[8] The lack of algorithmic transparency has become an issue in the press, with some calling for regulation of algorithms.[9][10]

Social impacts

The social impacts of these algorithms are wide-ranging, from allegations that social media algorithms are intentionally designed to be addictive,[11] to allegations that they are biased for or against various groups based on race, gender, and political leaning.[12][13][14][15]

There is also a phenomenon called a filter bubble, in which the social media algorithm predicts what a user wants to see so effectively that the user's exposure to other content narrows. They see little to no contrasting viewpoints, and their own viewpoint is only reinforced.[16] Taken to the limit, this may lead to algorithmic radicalization, in which a person's views become progressively more extreme.[17]
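The feedback loop behind a filter bubble can be sketched with a toy simulation (a deliberately simplified, hypothetical model, not any platform's behavior): the recommender shows topics in proportion to learned weights, and each engagement increases the weight of the topic shown, so the feed drifts toward whatever the user already engages with.

```python
# Toy positive-feedback loop illustrating a filter bubble.
# The model, weights, and probabilities are invented for illustration only.
import random
from collections import Counter

random.seed(0)

topics = ["politics_a", "politics_b", "sports"]
weights = {t: 1.0 for t in topics}   # recommender's learned preference weights
user_preference = "politics_a"       # the user reliably engages with this topic

shown = Counter()
for _ in range(1000):
    # Sample a topic to show, proportionally to its current weight
    r = random.uniform(0, sum(weights.values()))
    for topic in topics:
        r -= weights[topic]
        if r <= 0:
            break
    shown[topic] += 1
    # User engages mostly with the preferred topic; engagement is reinforced
    if topic == user_preference or random.random() < 0.1:
        weights[topic] += 0.5

print(shown.most_common())  # the preferred topic comes to dominate the feed
```

Because reinforcement compounds, the preferred topic's share of impressions grows over time even though all topics start with equal weight, which is the essence of the bubble effect described above.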

Other AI algorithms

only keep this section if we decide to expand the scope of the article

Other artificial intelligence algorithms have similar social impacts.

In mortgages and lending, AI algorithms are used to decide whether to lend to someone and what interest rate to set.[18][non-primary source needed]

Search engine algorithms decide whether and in what order to list search results, and face a similar need to limit the impact of spam.

Other uses of AI include hiring and firing, policing, insurance, and healthcare.[19][20]

See also


References

  1. ^
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^
  8. ^
  9. ^ Dickey, Megan Rose (30 April 2017). "Algorithmic Accountability". TechCrunch. Retrieved 4 September 2017.
  10. ^ "Algorithms have gotten out of control. It's time to regulate them". 3 April 2019. Retrieved 22 March 2020.
  11. ^
  12. ^
  13. ^
  14. ^
  15. ^
  16. ^
  17. ^ "Study of YouTube comments finds evidence of radicalization effect". TechCrunch. Retrieved 10 March 2021.
  18. ^ Bartlett, Robert; Morse, Adair; Stanton, Richard; Wallace, Nancy (June 2019). "Consumer-Lending Discrimination in the FinTech Era". NBER Working Paper No. 25943. doi:10.3386/w25943.
  19. ^
  20. ^

Category:Computer programming

Category:Social media

Category:Computing and society