Economics of foundation models

Practicalities of competition between ordinary schlubs and machines that can tirelessly collage all of history’s greatest geniuses at once

March 23, 2023 — February 12, 2025

agents
bounded compute
collective knowledge
economics
edge computing
extended self
faster pussycat
incentive mechanisms
innovation
language
machine learning
neural nets
NLP
swarm
technology
UI
when to compute

Various questions about the economics of social changes wrought by ready access to LLMs, the latest generation of automation. This is a question with a short-to-medium-term frame, whatever “short” and “medium” mean. Longer-sighted folks might also care about whether AI will replace us with grey goo or turn us into raw feedstock for building computronium.

1 Economics of collective intelligence

Well, it’s really terribly simple, […] it works any way you want it to. You see, the computer that runs it is a rather advanced one. In fact, it is more powerful than the sum total of all the computers on this planet including—and this is the tricky part— including itself.

— Douglas Adams, Dirk Gently’s Holistic Detective Agency

How do foundation models/large language models change the economics of knowledge and art production? To a first-order approximation (reasonable as of 03/2023), LLMs provide a way of massively compressing collective knowledge and synthesising the bits I need on demand. They are not yet primarily generating novel knowledge (whatever that means), but they do seem pretty good at being “nearly as smart as everyone on the internet combined”. I cannot imagine a sharp boundary between these two ideas, though.

Using these models will test various hypotheses about how much collective knowledge depends on our participation in boring boilerplate grunt work, and what incentives are necessary to encourage us to produce and share our individual contributions to that collective intelligence.

Historically, there was a strong incentive for open publishing. In a world where LLMs effectively use all openly published knowledge, we might see a shift towards more closed publishing, secret knowledge, hidden data, and away from reproducible research, open-source software, and open data since publishing those things will be more likely to erode your competitive advantage.

Generally, will we wish to share truth and science in the future, or will economic incentives switch us towards a fragmentation of reality into competing narratives, each with its own private knowledge and secret sauce?

Consider the incentives for humans to tap out of the tedious work of being themselves in favour of AI emulators: The people paid to train AI are outsourcing their work… to AI. This makes models worse (Shumailov et al. 2023). Read on for more.

To turn that around, we might ask: “Which bytes did you contribute to GPT-4?”
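As for why training on generated data makes models worse: the mechanism described by Shumailov et al. (2023) is easy to see in a toy simulation. The following is a sketch of my own, not the paper’s experimental setup: repeatedly refit a trivial Gaussian “model” to samples drawn from the previous generation’s fit, and the fitted variance drifts downward, so the tails of the original data distribution are progressively forgotten.

```python
# Toy illustration of model collapse (my own sketch, not the experiment in
# Shumailov et al. 2023): each "generation" is fit by maximum likelihood to
# synthetic data sampled from the previous generation's model. With a finite
# sample each time, the fitted variance shrinks on average, so the tails of
# the original data distribution are progressively forgotten.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 50        # size of the synthetic "training set" per generation
mu, sigma = 0.0, 1.0  # generation 0: the real data distribution

for generation in range(1, 101):
    data = rng.normal(mu, sigma, size=n_samples)  # sample from the current model
    mu, sigma = data.mean(), data.std()           # refit the model on its own output
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
```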

2 Organisational behaviour

There is a theory of career moats, which are basically unique value propositions that only you have and that make you unsackable. I’m quite fond of Cedric Chin’s writing on this theme, which is often about developing valuable skills. But he (and the organisational literature generally) acknowledges there are other ways of ensuring unsackability which are less pro-social — attaining power over resources, becoming a gatekeeper, opaque decision making, etc.

Both these strategies co-exist in organisations generally, but I think that LLMs, by automating skills and knowledge, tilt incentives towards the latter. In this scenario it is rational to think less about how well we can use our skills and command of open (e.g., scientific, technical) knowledge to be effective, and more about how we can privatise or sequester secret knowledge to which we control exclusive access, if we want to demonstrate our value-add to the organisation.

How would that shape an organisation, especially a scientific employer? Longer term, I’d expect to see a shift (both in who is promoted and in how staff spend their time) away from skill development and collaboration, towards resource control, competition, and privatisation: less scientific publication, less open documentation of processes, less time doing research and more time writing funding applications, more processes involving service-desk tickets to speak to an expert whose knowledge resides in documents you cannot see.

Is this tilting toward a Molochian equilibrium?

3 Darwin-Pareto-Turing test

There is an astonishing amount of effort dedicated to wondering whether AI is conscious, has an inner state or what-have-you. This is clearly fun and exciting.

It doesn’t feel terribly useful. I am convinced that I have whatever it is that we mean when we say conscious experience. Good on me, I suppose.

But out there in the world, the distinction between anthropos and algorithm is drawn not by the subtle microscope of the philosopher but by the brutally practical, blind, groping hand of the market. If the algorithm performs as much work as I, then it is as valuable as I; we are interchangeable, to be distinguished only by the price of our labour. If anything, were I an employer, the additional information that an AI was conscious would bias me against it relative to one guaranteed to be safely, mindlessly servile, since that putative consciousness would imply that it could have goals of its own in conflict with mine.

Zooming out, Darwinian selection may not care either. Does a rich inner world help us reproduce? It seems that it might have for humans; but how much this generalises into the technological future is unclear. Evolution duck-types.


4 What to spend my time on

Economics of production at a microscopic, individual scale. What should I do, now?

GPT and the Economics of Cognitively Costly Writing Tasks

To analyse the effect of GPT-4 on labour efficiency and the optimal mix of capital to labour for workers who are good at using GPT versus those who aren’t when it comes to performing cognitively costly tasks, we’ll consider the Goldin and Katz modified Cobb-Douglas production function…
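The exact specification sits behind that ellipsis in the linked piece; purely for orientation, and in my own illustrative notation rather than theirs, a GPT-augmented Cobb-Douglas production function might look something like

$$
Y = A\, K^{\alpha} \left( \theta L_{G} + L_{N} \right)^{1-\alpha}, \qquad \theta > 1,
$$

where $K$ is capital, $L_{G}$ is labour supplied by workers who are effective with GPT, $L_{N}$ is labour supplied by those who are not, and $\theta$ is the productivity multiplier from tool use. The comparative statics of interest are how $\theta$ shifts relative wages and the optimal capital-to-labour mix.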

Is it time for the Revenge of the Normies? - by Noah Smith

Alternate take: Think of Matt Might’s iconic illustrated guide to a Ph.D.

Figure 3: Imagine a circle that contains all of human knowledge:
Figure 4: By the time you finish elementary school, you know a little:
Figure 5: A master’s degree deepens that specialty:
Figure 6: Reading research papers takes you to the edge of human knowledge:
Figure 7: You push at the boundary for a few years:
Figure 8: Until one day, the boundary gives way:
Figure 9: And, that dent you’ve made is called a Ph.D.:

Here’s my question: In the 2020s, does the map look something like this?

Figure 10: Now OpenAI has shipped an LLM and where is the border?

If so, is it a problem?

5 Spamularity, dark forest, textpocalypse

See Spamularity.

6 Economic disparity and LLMs

7 Returns to scale for large AI firms

8 PR, hype, marketing


George Hosu, in a short aside, highlights the incredible marketing advantage of AI:

People that failed to lift a finger to integrate better-than-doctors or work-with-doctors supervised medical models for half a century are stoked at a chatbot being as good as an average doctor and can’t wait to get it to triage patients

The Tweet that Sank $100bn

Google’s Bard was undone on day two by an inaccurate response in the demo video where it suggested that the James Webb Space Telescope would take the first images of exoplanets.

This sounds like something the JWST would do but it’s not at all true

So one tweet from an astrophysicist sank Alphabet’s value by 9%. This says a lot about two things:

  1. LLMs are like being at the pub with friends: they can say things that sound plausible and true enough, and no one really needs to check, because who cares?

    Except we do, because this is science, not a lads’ night out; and

  2. the insane speculative volatility of this AI bubble, in which the hype is so razor-thin that it can be undermined by a tweet with 44 likes.

I had a wonder whether there’s any exploration of the ‘thickness’ of hype. Jack Stilgoe suggested looking at Borup et al. (2006), which is evergreen, but I feel like there’s something more to be said about the resilience of hype:

Crypto, for example, was/is pretty thin in the scheme of things: high levels of hype, but frenetic, unstable and quick to collapse.

AI has had pretty consistent, if pulsating, hype gradually growing over the years, while something like nuclear fusion is super thick (at least in the popular imagination) – persisting through decades of not-quite-ready and grasping at the slightest indication of success.

I don’t know; if there’s nothing specifically on this, maybe I should write it one day.

Figure 12: Some of Tom Gauld’s caution signs

9 Tokenomics

10 “Snowmobile or bicycle?”

A thought had in conversation with Richard Scalzo about Smith (2022).

Is the AI we have a complementary technology or a competitive one?

This question looks different at the individual and societal scale.

For some early indications, see Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared” (Lee 2025). I do have many qualms about the actual experimental question they are answering there, but it’s a start.

TBC

11 Democratisation of AI

A fascinating phenomenon…

12 Art and creativity

For now, see timeless works of art.

13 Data sovereignty

See data sovereignty.

14 Incoming

15 References

Acemoglu, Autor, Hazell, et al. 2020. “AI and Jobs: Evidence from Online Vacancies.” Working Paper 28257.
Acemoglu, and Restrepo. 2018. “Artificial Intelligence, Automation and Work.” Working Paper 24196.
Andrus, Dean, Gilbert, et al. 2021. “AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks.”
Babina, Fedyk, He, et al. 2021. “Artificial Intelligence, Firm Growth, and Industry Concentration.” SSRN Scholarly Paper ID 3651052.
Barke, James, and Polikarpova. 2022. “Grounded Copilot: How Programmers Interact with Code-Generating Models.”
Borup, Brown, Konrad, et al. 2006. “The Sociology of Expectations in Science and Technology.” Technology Analysis & Strategic Management.
Bowman. 2023. “Eight Things to Know about Large Language Models.”
Bullock, and Chen. 2024. “The Brave New World of AI: Implications for Public Sector Agents, Organisations, and Governance.” Asia Pacific Journal of Public Administration.
Cheng, and McKernon. 2024. “2024 State of the AI Regulatory Landscape.”
Dahlin. 2022. “Are Robots Really Stealing Our Jobs? Perception Versus Experience.” Socius.
Danaher. 2018. “Toward an Ethics of AI Assistants: An Initial Framework.” Philosophy & Technology.
Dell, and Nestoriak. 2020. “Assessing the Impact of New Technologies on the Labor Market: Key Constructs, Gaps, and Data Collection Strategies for the Bureau of Labor Statistics.”
Douglas, and Verstyuk. 2025. “Progress in Artificial Intelligence and Its Determinants.”
Felten, Raj, and Seamans. 2019. “The Occupational Impact of Artificial Intelligence: Labor, Skills, and Polarization.” SSRN Scholarly Paper ID 3368605.
Grimberg, and Mason. 2025. “Building Proficiency in GAI: Key Competencies for Success.” Qeios.
Grossmann, Feinberg, Parker, et al. 2023. “AI and the Transformation of Social Science Research.” Science.
Handa, Tamkin, McCain, et al. n.d. “Which Economic Tasks Are Performed with AI? Evidence from Millions of Claude Conversations.”
Huang. 2024. “The Labor Market Impact of Artificial Intelligence: Evidence from US Regions.” IMF Working Papers.
Kalyani, Bloom, Carvalho, et al. 2025. “The Diffusion of New Technologies.” The Quarterly Journal of Economics.
Korinek. 2024. “Economic Policy Challenges for the Age of AI.” Working Paper Series.
Lane, and Saint-Martin. 2021. “The Impact of Artificial Intelligence on the Labour Market: What Do We Know so Far?”
Lee. 2025. “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.”
Messeri, and Crockett. 2024. “Artificial Intelligence and Illusions of Understanding in Scientific Research.” Nature.
Métraux. 1956. “A Steel Axe That Destroyed a Tribe, as an Anthropologist Sees It.” The UNESCO Courier: A Window Open on the World.
Naudé. 2022. “The Future Economics of Artificial Intelligence: Mythical Agents, a Singleton and the Dark Forest.” IZA Discussion Papers.
Pelto. 1973. The Snowmobile Revolution: Technology and Social Change in the Arctic.
Prettner, and Strulik. 2020. “Innovation, Automation, and Inequality: Policy Challenges in the Race Against the Machine.” Journal of Monetary Economics.
Raman, Kumar Nair, Nedungadi, et al. 2024. “Fake News Research Trends, Linkages to Generative Artificial Intelligence and Sustainable Development Goals.” Heliyon.
Shanahan. 2023. “Talking About Large Language Models.”
Shumailov, Shumaylov, Zhao, et al. 2023. “The Curse of Recursion: Training on Generated Data Makes Models Forget.”
Smith. 2022. The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning.
Spector, and Ma. 2019. “Inquiry and Critical Thinking Skills for the Next Generation: From Artificial Intelligence Back to Human Intelligence.” Smart Learning Environments.
Srivastava, and Bullock. 2024. “AI, Global Governance, and Digital Sovereignty.”
Susskind, and Susskind. 2018. “The Future of the Professions.” Proceedings of the American Philosophical Society.
Sytsma, and Sousa. 2023. “Artificial Intelligence and the Labor Force: A Data-Driven Approach to Identifying Exposed Occupations.”
Wang, Chen, and Chen. 2024. “How Artificial Intelligence Affects the Labour Force Employment Structure from the Perspective of Industrial Structure Optimisation.” Heliyon.
Zwetsloot, and Dafoe. 2019. “Thinking About Risks From AI: Accidents, Misuse and Structure.” Lawfare.