Dynamics of recommender systems at societal scale

Variational approximations to high modernism

July 3, 2023 — January 23, 2025

Tags: classification, collective knowledge, confidentiality, culture, economics, ethics, faster pussycat, game theory, how do science, incentive mechanisms, innovation, language, machine learning, mind, neural nets, NLP, sociology, stringology, technology, UI, wonk
Figure 1: Recommended for you

Placeholder for notes on what happens at large scale when societies’ attention is steered by recommender systems: a case of collective dynamics in the attention economy with notoriously tricky alignment.

1 Matthew effects

“The rich get richer”: baby’s first pathological recommendation problem. It is well known, and repeatedly rediscovered, that naive recommenders make popular things more popular in ways that need not track quality. That is not necessarily a problem in itself (there may be value in collectively attending to a few things rather than many). But it also happens when it is not what we want; in academia, for example, we worry about topocracy.
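
A toy simulation makes the effect concrete. The sketch below is my own toy model, not taken from any of the cited papers: a recommender that exposes items in proportion to their past clicks. Item quality only nudges click-through, so early random luck compounds into a popularity ranking that tracks quality weakly at best.

```python
# Toy model of popularity feedback in a naive recommender (illustrative
# sketch only). Items have fixed true quality; exposure is proportional
# to accumulated clicks, so early luck compounds.
import numpy as np

rng = np.random.default_rng(42)
n_items, n_rounds = 50, 50_000

quality = rng.uniform(0.4, 0.6, size=n_items)  # true click probabilities
clicks = np.ones(n_items)                      # one pseudo-click each, for smoothing

for _ in range(n_rounds):
    # naive exposure rule: show items in proportion to past popularity
    item = rng.choice(n_items, p=clicks / clicks.sum())
    if rng.random() < quality[item]:           # user clicks with prob. = quality
        clicks[item] += 1

# The popularity ordering and the quality ordering come apart: the
# most-recommended items are rarely the genuinely best ones.
top = np.argsort(-clicks)[:5]
quality_rank = np.argsort(np.argsort(-quality))  # 0 = best true quality
print("most-recommended items:", top)
print("their true quality ranks:", quality_rank[top])
```

With quality confined to [0.4, 0.6] the reinforcement loop dominates the signal; widening the quality gap weakens, but does not eliminate, the lock-in.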

Here is a research agenda on such themes: Long-term Dynamics of Fairness Intervention in Connection Recommender Systems

It is quite interesting to me that pathological Matthew effects can occur even if the recommender is identifying some true underlying ‘quality’ measure. Kleinberg and Raghavan (2021):

Algorithmic monoculture is a growing concern in the use of algorithms for high-stakes screening decisions in areas such as employment and lending. If many firms use the same algorithm, even if it is more accurate than the alternatives, the resulting “monoculture” may be susceptible to correlated failures, much as a monocultural system is in biological settings. To investigate this concern, we develop a model of selection under monoculture. We find that even without any assumption of shocks or correlated failures—i.e., under “normal operations”—the quality of decisions may decrease when multiple firms use the same algorithm. Thus, the introduction of a more accurate algorithm may decrease social welfare—a kind of “Braess’ paradox” for algorithmic decision-making.
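
Their mechanism is easy to replicate in a minimal Monte Carlo. The parameterisation below is mine, not the paper’s: two firms hire in sequence from one candidate pool, each ranking candidates by a noisy estimate of true quality. Under “monoculture” both firms share one ranking; under “polyculture” each draws its own.

```python
# Minimal Monte Carlo in the spirit of Kleinberg & Raghavan's two-firm
# selection model (my parameterisation, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

def mean_total_hire_quality(n=10, sigma=1.0, shared=True, trials=20_000):
    total = 0.0
    for _ in range(trials):
        quality = rng.normal(size=n)                   # true candidate quality
        noisy_rank = lambda: np.argsort(-(quality + sigma * rng.normal(size=n)))
        rank1 = noisy_rank()
        rank2 = rank1 if shared else noisy_rank()      # shared vs. independent
        first = rank1[0]                               # firm 1 takes its top pick
        second = next(c for c in rank2 if c != first)  # firm 2 takes its best remaining
        total += quality[first] + quality[second]
    return total / trials

print("shared ranking      :", round(mean_total_hire_quality(shared=True), 3))
print("independent rankings:", round(mean_total_hire_quality(shared=False), 3))
```

With equal noise, independent rankings typically produce better total hires: the second firm brings fresh information instead of taking the first firm’s leavings. Kleinberg and Raghavan show the gap can persist even when the shared algorithm is strictly more accurate than the independent alternatives, which is the Braess-flavoured punchline.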

2 General alignment

Vicki Boykis, The reign of Big Recsys

Recommender systems today have two huge problems that are leading companies (sometimes at enormous pressure from the public) to rethink how they’re being used: technical bias, and business bias.

3 Filter bubble effects

Much has been written on this (Arguedas et al. 2022; Lee et al. 2017; Whittaker et al. 2021; Knudsen 2023).

4 Moral philosophy of

e.g. Lazar et al. (2024):

We argue that existing recommenders incentivise mass surveillance, concentrate power, fall prey to narrow behaviourism, and compromise user agency. Rather than just trying to avoid algorithms entirely, or to make incremental improvements to the current paradigm, researchers and engineers should explore an alternative paradigm: the use of language model (LM) agents to source and curate content that matches users’ preferences and values, expressed in natural language. The use of LM agents for recommendation poses its own challenges, including those related to candidate generation, computational efficiency, preference modelling, and prompt injection. Nonetheless, if implemented successfully LM agents could: guide us through the digital public sphere without relying on mass surveillance; shift power away from platforms towards users; optimise for what matters instead of just for behavioural proxies; and scaffold our agency instead of undermining it.

See also Stray et al. (2022) and Schuster and Lazar (2024).
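
The proposed paradigm is easy to sketch, if not to build. The code below is purely hypothetical and not Lazar et al.’s design: the user states their values in natural language, and a stand-in `llm_complete` callable (any chat-model API would do) scores candidate items against them.

```python
# Hypothetical sketch of the LM-agent recommendation paradigm described by
# Lazar et al. (2024). `llm_complete` is a stand-in for any chat-model API;
# the prompt and helpers are illustrative assumptions, not their design.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    summary: str

def score_item(llm_complete, values: str, item: Item) -> float:
    """Ask the LM how well an item serves the user's stated values."""
    prompt = (
        "The user's stated preferences and values:\n"
        f"{values}\n\n"
        f"Candidate item: {item.title}\n{item.summary}\n\n"
        "On a scale of 0 to 10, how well does this item serve those values? "
        "Answer with a single number."
    )
    try:
        return float(llm_complete(prompt).strip())
    except ValueError:
        return 0.0  # unparseable answer: treat as no match

def recommend(llm_complete, values: str, candidates: list[Item], k: int = 5):
    """Re-rank candidates by stated-value match, not predicted engagement."""
    return sorted(candidates,
                  key=lambda it: score_item(llm_complete, values, it),
                  reverse=True)[:k]
```

The abstract’s worries about candidate generation, computational cost, and prompt injection all live inside `score_item`: the ranking is only as trustworthy as the text the model is asked to read.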


5 Pinterest sounds unusual

Will Oremus, How Pinterest Built One of Silicon Valley’s Most Successful Algorithms

There are troubles that have plagued higher-profile social networks: viral misinformation, radicalisation, offensive images and memes, spam, and shady sites trying to game the algorithm for profit, all of which Pinterest deals with to one degree or another. But the company has taken a different approach than rival platforms: embrace bias, limit virality, and become something of an anti-social network. …

But what if optimising engagement isn’t your ultimate goal? That’s a question some other social networks, such as Facebook and Twitter, have recently begun to ask, as they toy with more qualitative goals such as “time well spent” and “healthy conversations,” respectively. And it’s one that Seyal, Pinterest’s head of core product, says paved the way for the new feature the company is rolling out this week.

One of Pinterest users’ top complaints for years has been a lack of control over what its algorithm shows them, Seyal says. “You’d click on something, and your whole feed becomes that.” The question was how to solve it without putting the algorithm’s efficacy at risk. “Every person who runs a feed for an online platform will say, ‘Oh, yeah, we tried to make it more controllable. But when we tried to launch it, it dropped top-line engagement.’”

Eventually, Seyal says he decided that was the wrong question altogether. Instead, he told the engineers tasked with addressing the user-control problem that they didn’t have to worry about the effects on engagement. Their only job was to find a fix that would reduce the number of user complaints about the feed overcorrecting in response to their behaviour.

6 Experiments

Tournesol

Tournesol is a transparent participatory research project about the ethics of algorithms and recommendation systems.

Help us advance research by giving your opinion on videos you have watched, to identify public-interest content that deserves to be widely recommended.

7 Incoming

  • Dynomight, Algorithmic ranking is unfairly maligned

    I think the solution is to embrace algorithmic ranking, but insist on “control”—to insist that the algorithm serves your goals and not someone else’s.

    How could that happen? In principle, we could all just refuse to use services without control. But I’m skeptical this would work, because of rug-pulls. The same forces that made TikTok into TikTok will still exist, and history is filled with companies providing control early on, getting a dominant position, and then taking the control away. Theoretically everyone could leave at that point, but that rarely seems to happen in practice.

    Instead, I think the control needs to be somehow “baked in” from the beginning. There needs to be some kind of technological/legal/social structure in place that makes rug pulls impossible.

    (A toy sketch of what such user-held control could look like follows this list.)

  • Dylan Hadfield-Menell researches these and related dynamics in depth

  • The Dark Forest and the Cozy Web

  • Why Facebook won’t let you turn off its news feed algorithm
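
As flagged in the Dynomight item above, here is a toy version of what “baked-in” user control might mean mechanically. This is entirely my illustrative sketch, not any platform’s actual design: the platform ships candidates with transparent feature scores, and the ranking weights live on the client, where only the user can edit them.

```python
# Toy version of user-held ranking control (illustrative sketch only).
# The platform supplies candidate posts with transparent feature scores;
# the client ranks them with weights that only the user can change.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    features: dict[str, float]  # e.g. {"recency": 0.9, "engagement_bait": 0.7}

@dataclass
class UserRanker:
    # User-owned weights; a negative weight actively demotes a feature.
    weights: dict[str, float] = field(default_factory=lambda: {
        "recency": 1.0, "from_friends": 2.0, "engagement_bait": -3.0})

    def score(self, post: Post) -> float:
        return sum(self.weights.get(f, 0.0) * v
                   for f, v in post.features.items())

    def rank(self, feed: list[Post]) -> list[Post]:
        return sorted(feed, key=self.score, reverse=True)
```

The hard part is not the sorting but keeping the platform’s feature scores honest over time, which is exactly where Dynomight’s technological/legal/social structure would have to bite.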

8 References

Abdollahpouri, Adomavicius, Burke, et al. 2020. “Multistakeholder Recommendation: Survey and Research Directions.” User Modeling and User-Adapted Interaction.
Arguedas, Robertson, Fletcher, et al. 2022. “Echo Chambers, Filter Bubbles, and Polarisation: A Literature Review.”
Carroll, Dragan, Russell, et al. 2022. “Estimating and Penalizing Induced Preference Shifts in Recommender Systems.”
Dean, and Morgenstern. 2022. “Preference Dynamics Under Personalized Recommendations.”
Eilat, and Rosenfeld. 2023. “Performative Recommendation: Diversifying Content via Strategic Incentives.”
Hron, Krauth, Jordan, et al. 2022. “Modeling Content Creator Incentives on Algorithm-Curated Platforms.”
Kleinberg, and Raghavan. 2021. “Algorithmic Monoculture and Social Welfare.” Proceedings of the National Academy of Sciences.
Knees, Neidhardt, and Nalis. 2024. “Recommender Systems: Techniques, Effects, and Measures Toward Pluralism and Fairness.” In Introduction to Digital Humanism: A Textbook.
Knudsen. 2023. “Modeling News Recommender Systems’ Conditional Effects on Selective Exposure: Evidence from Two Online Experiments.” Journal of Communication.
Lazar, Thorburn, Jin, et al. 2024. “The Moral Case for Using Language Model Agents for Recommendation.”
Lee, Karimi, Jo, et al. 2017. “Homophily Explains Perception Biases in Social Networks.” arXiv:1710.08601 [Physics].
Leqi, Hadfield-Menell, and Lipton. 2021. “When Curation Becomes Creation: Algorithms, Microcontent, and the Vanishing Distinction Between Platforms and Creators.” Queue.
O’Neil. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
Raghavan. 2021. “The Societal Impacts of Algorithmic Decision-Making.”
Schuster, and Lazar. 2024. “Attention, Moral Skill, and Algorithmic Recommendation.” Philosophical Studies.
Stray, Halevy, Assar, et al. 2022. “Building Human Values into Recommender Systems: An Interdisciplinary Synthesis.”
Stray, Vendrov, Nixon, et al. 2021. “What Are You Optimizing for? Aligning Recommender Systems with Human Values.”
Teeny, Siev, Briñol, et al. 2021. “A Review and Conceptual Framework for Understanding Personalized Matching Effects in Persuasion.” Journal of Consumer Psychology.
Whittaker, Looney, Reed, et al. 2021. “Recommender Systems and the Amplification of Extremist Content.” Internet Policy Review.
Xu, Ruqing, and Dean. 2023. “Decision-Aid or Controller? Steering Human Decision Makers with Algorithms.”
Xu, Shuyuan, Tan, Fu, et al. 2022. “Dynamic Causal Collaborative Filtering.” In Proceedings of the 31st ACM International Conference on Information & Knowledge Management. CIKM ’22.