Dynamics of recommender systems at societal scale
Variational approximations to high modernism
July 3, 2023 — January 23, 2025
Placeholder for notes on the large-scale dynamics of societies whose attention is steered by recommender systems: a case of collective dynamics in the attention economy, with notoriously tricky alignment problems.
1 Matthew effects
“The rich get richer”: baby’s first pathological recommendation problem. It is well known, and repeatedly rediscovered, that naive recommenders make popular things more popular in a way that is not necessarily quality-based. That is not necessarily a problem in itself (maybe there is value in collectively choosing a few things rather than many). But it also seems to happen when it is not what we want; in academia, for example, we worry about topocracy.
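A toy illustration of the feedback loop (my own sketch, not from any of the cited works): a Pólya-urn-style simulation in which the recommender surfaces items in proportion to their past clicks. Items here have no intrinsic quality at all, so any concentration of attention is driven purely by early random luck.

```python
import random

def popularity_feedback(n_items=50, n_clicks=20_000, seed=0):
    """Recommend items in proportion to their past click counts.

    Items have no intrinsic quality, so any concentration of attention
    is driven entirely by early random fluctuations (roughly, a Polya urn).
    """
    rng = random.Random(seed)
    clicks = [1] * n_items  # one pseudo-click each, so nothing has zero weight
    for _ in range(n_clicks):
        # surface (and "click") an item with probability proportional to its clicks
        item = rng.choices(range(n_items), weights=clicks)[0]
        clicks[item] += 1
    return clicks

clicks = popularity_feedback()
total = sum(clicks)
shares = sorted((c / total for c in clicks), reverse=True)
print(f"top item's share: {shares[0]:.0%} vs uniform {1 / len(clicks):.0%}")
```

Typically a handful of items end up with a large fraction of the clicks even though all fifty are interchangeable, which is the quality-free Matthew effect in miniature.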
Here is a research agenda on such themes: Long-term Dynamics of Fairness Intervention in Connection Recommender Systems
It is quite interesting to me that pathological Matthew effects can occur even if the recommender is identifying some true underlying ‘quality’ measure. Kleinberg and Raghavan (2021):
Algorithmic monoculture is a growing concern in the use of algorithms for high-stakes screening decisions in areas such as employment and lending. If many firms use the same algorithm, even if it is more accurate than the alternatives, the resulting “monoculture” may be susceptible to correlated failures, much as a monocultural system is in biological settings. To investigate this concern, we develop a model of selection under monoculture. We find that even without any assumption of shocks or correlated failures—i.e., under “normal operations”—the quality of decisions may decrease when multiple firms use the same algorithm. Thus, the introduction of a more accurate algorithm may decrease social welfare—a kind of “Braess’ paradox” for algorithmic decision-making.
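A back-of-the-envelope version of their setup (my own simplification, using additive Gaussian noise rather than the Mallows-model rankings in the paper, so whether the paradox actually appears depends on the parameters chosen): two firms each hire one candidate from a shared pool, either both trusting one common noisy ranking, or each consulting its own independent, noisier ranking.

```python
import random

def noisy_ranking(quality, noise, rng):
    # Rank candidates by true quality plus Gaussian noise -- a crude
    # stand-in for the Mallows-model rankings used in the paper.
    scores = [q + rng.gauss(0, noise) for q in quality]
    return sorted(range(len(quality)), key=lambda i: -scores[i])

def mean_hired_quality(shared, n_candidates=10, algo_noise=0.5,
                       firm_noise=1.0, trials=20_000, seed=0):
    """Average true quality of two firms' hires, under either a shared
    algorithmic ranking or two independent firm-specific rankings."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        quality = [rng.gauss(0, 1) for _ in range(n_candidates)]
        if shared:
            r = noisy_ranking(quality, algo_noise, rng)
            hires = r[:2]  # both firms trust the same ranking
        else:
            r1 = noisy_ranking(quality, firm_noise, rng)
            r2 = noisy_ranking(quality, firm_noise, rng)
            first = r1[0]  # firm 1 takes its own top candidate
            # firm 2 takes its own best candidate still available
            second = next(i for i in r2 if i != first)
            hires = [first, second]
        total += (quality[hires[0]] + quality[hires[1]]) / 2
    return total / trials

print("monoculture:", round(mean_hired_quality(shared=True), 3))
print("independent:", round(mean_hired_quality(shared=False), 3))
```

With this crude noise model the shared, more accurate ranking may well win; the paper's surprising result is that under their ranking model there are regimes where it loses, even though it is individually more accurate than each firm's own ranking.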
2 General alignment
The reign of Big Recsys - by Vicki Boykis
Recommender systems today have two huge problems that are leading companies (sometimes at enormous pressure from the public) to rethink how they’re being used: technical bias, and business bias.
3 Filter bubble effects
Much has been written on this (Arguedas et al. 2022; Lee et al. 2017; Whittaker et al. 2021; Knudsen 2023).
4 Moral philosophy of recommender systems
e.g. Lazar et al. (2024):
We argue that existing recommenders incentivise mass surveillance, concentrate power, fall prey to narrow behaviourism, and compromise user agency. Rather than just trying to avoid algorithms entirely, or to make incremental improvements to the current paradigm, researchers and engineers should explore an alternative paradigm: the use of language model (LM) agents to source and curate content that matches users’ preferences and values, expressed in natural language. The use of LM agents for recommendation poses its own challenges, including those related to candidate generation, computational efficiency, preference modelling, and prompt injection. Nonetheless, if implemented successfully LM agents could: guide us through the digital public sphere without relying on mass surveillance; shift power away from platforms towards users; optimise for what matters instead of just for behavioural proxies; and scaffold our agency instead of undermining it.
5 Pinterest sounds unusual
Will Oremus, How Pinterest Built One of Silicon Valley’s Most Successful Algorithms
There are troubles that have plagued higher-profile social networks: viral misinformation, radicalisation, offensive images and memes, spam, and shady sites trying to game the algorithm for profit, all of which Pinterest deals with to one degree or another. Here the company has taken a different approach than rival platforms: embrace bias, limit virality, and become something of an anti-social network.…
But what if optimising engagement isn’t your ultimate goal? That’s a question some other social networks, such as Facebook and Twitter, have recently begun to ask, as they toy with more qualitative goals such as “time well spent” and “healthy conversations,” respectively. And it’s one that Seyal, Pinterest’s head of core product, says paved the way for the new feature the company is rolling out this week.
One of Pinterest users’ top complaints for years has been a lack of control over what its algorithm shows them, Seyal says. “You’d click on something, and your whole feed becomes that.” The question was how to solve it without putting the algorithm’s efficacy at risk. “Every person who runs a feed for an online platform will say, ‘Oh, yeah, we tried to make it more controllable. But when we tried to launch it, it dropped top-line engagement.’”
Eventually, Seyal says he decided that was the wrong question altogether. Instead, he told the engineers tasked with addressing the user-control problem that they didn’t have to worry about the effects on engagement. Their only job was to find a fix that would reduce the number of user complaints about the feed overcorrecting in response to their behaviour.
6 Experiments
Tournesol is a transparent participatory research project about the ethics of algorithms and recommendation systems.
Help us advance research by giving your opinion on the videos you have watched, in order to identify public-interest content that should be widely recommended.
7 Incoming
Dynomight, Algorithmic ranking is unfairly maligned
I think the solution is to embrace algorithmic ranking, but insist on “control”—to insist that the algorithm serves your goals and not someone else’s.
How could that happen? In principle, we could all just refuse to use services without control. But I’m skeptical this would work, because of rug-pulls. The same forces that made TikTok into TikTok will still exist and history is filled with companies providing control early on, getting a dominant position, and then taking the control away. Theoretically everyone could leave at that point, but that rarely seems to happen in practice.
Instead, I think the control needs to be somehow “baked in” from the beginning. There needs to be some kind of technological/legal/social structure in place that makes rug pulls impossible.
Dylan Hadfield-Menell researches these and related dynamics in depth.