Scaling laws for very large neural nets

Theory of trading off budgets for compute, model size, and data

January 14, 2021 — February 24, 2025

bounded compute
functional analysis
machine learning
model selection
optimization
statmech
when to compute

Got good behaviour from a million-parameter model? Want to see if stuff gets weirder as we hit a billion parameters? Turns out it does! It even seems to do so dependably! There’s something philosophically deep here. Why does looking at more stuff seem to bring more computationally complex problems within reach? I don’t know, but I’m keen to carve out some time to solve that.

Brief links on the theme of scaling in the extremely large model/large data limit and what that does to the models’ behaviour. A new front in the complexity and/or statistical mechanics of statistics, and whether neural networks extrapolate.

For how to scale up these models in practice, see distributed gradient descent.

Content on this page hasn’t been updated as fast as the field has been moving; you should follow the references for the latest.

1 Bitter lessons in compute

See optimal cleverness.

2 Big transformers

One fun result comes from transformer language models. An interesting observation back in 2020 was that there seemed to be an unexpected trade-off: you can reach a given loss with less total compute by training a bigger network. I think Kaplan et al. (2020) was ground zero for modern scaling studies, which try to identify and predict optimal trade-offs and ultimate performance under different scaling regimes (of compute, data, and parameters).

nostalgebraist summarises Henighan et al. (2020) and Kaplan et al. (2020):

2.1 L(D): information

OpenAI derives a scaling law called L(D). This law is the best you could possibly do — even with arbitrarily large compute/models — if you are only allowed to train on D data points.

No matter how good your model is, there’s only so much it can learn from a finite sample. L(D) quantifies this intuitive fact (if the model is an autoregressive transformer).
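For reference, the fitted form in Kaplan et al. (2020) is a pure power law in dataset size; the constants below are quoted approximately from that paper:

\[ L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095, \quad D_c \approx 5.4 \times 10^{13} \text{ tokens}. \]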

2.2 L(C): budgeting

OpenAI also derives another scaling law called L(C). This is the best you can do with compute C if you spend it optimally.

What does optimal spending look like? Remember, you can spend a unit of compute on

* a bigger model (N), or
* training the same model for longer (S).

… In the compute regime we’re currently in, making the model bigger is way more effective than taking more steps.
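For reference, the compute-frontier fit and the optimal allocation exponents reported in Kaplan et al. (2020) are, approximately,

\[ L(C_{\min}) \approx \left(\frac{C_c^{\min}}{C_{\min}}\right)^{\alpha_C^{\min}}, \qquad \alpha_C^{\min} \approx 0.050, \]

with the optimal budget split growing roughly as \(N \propto C_{\min}^{0.73}\), \(B \propto C_{\min}^{0.24}\), \(S \propto C_{\min}^{0.03}\): almost all of the extra compute goes into model size rather than extra steps.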

The scaling laws continue to be revised; notably, Hoffmann et al. (2022) argued that compute-optimal training should scale parameters and data roughly in equal proportion, i.e. with far more tokens per parameter than the Kaplan et al. (2020) recipe implies.

2.3 Observational

Ruan, Maddison, and Hashimoto (2025):

Understanding how language model performance varies with scale is critical to benchmark and algorithm development. Scaling laws are one approach to building this understanding, but the requirement of training models across many different scales has limited their use. We propose an alternative, observational approach that bypasses model training and instead builds scaling laws from ~100 publicly available models. Building a single scaling law from multiple model families is challenging due to large variations in their training compute efficiencies and capabilities. However, we show that these variations are consistent with a simple, generalized scaling law where language model performance is a function of a low-dimensional capability space, and model families only vary in their efficiency in converting training compute to capabilities. Using this approach, we show the surprising predictability of complex scaling phenomena: we show that several emergent phenomena follow a smooth, sigmoidal behaviour and are predictable from small models; we show that the agent performance of models such as GPT-4 can be precisely predicted from simpler non-agentic benchmarks; and we show how to predict the impact of post-training interventions like Chain-of-Thought and Self-Consistency as language model capabilities continue to improve.

Figure 2: From Ruan, Maddison, and Hashimoto (2025): Observational scaling laws generalize existing compute scaling laws, which directly relate training compute to downstream capabilities (dashed line), by hypothesizing the existence of a low-rank space of LM capabilities that have a log-linear relationship with compute (centre) and can be extracted directly from standardised LM benchmarks (left). This enables us to get low-cost, high-resolution scaling predictions of LMs’ complex downstream capabilities from their observable standard benchmark metrics, using nearly 100 publicly accessible LMs (left to right).

Standard compute scaling In compute scaling laws, there is a hypothesized power-law relationship between models’ compute measures \(C_m\) (e.g., training FLOPs) and their errors \(E_m\) (e.g., perplexity). Specifically, for a model \(m\) within a family \(f\) (e.g., Llama-2 7B, 13B, and 70B) we hypothesize

\[ \log \left(E_m\right) \approx \beta_f \log \left(C_m\right)+\alpha_f \]

and if this linear fit is sufficiently accurate, we draw inferences about the performance of a model at future compute scales \(C^{\prime}>C\) by extrapolating this relationship. However, fitting such a scaling law can be tricky, as each model family \(f\) and downstream benchmark has its own scaling coefficients \(\beta_f\) and \(\alpha_f\). This means that scaling experiments, especially for post-training analysis, are often fitted on very few (3-5) models sharing the same model family, and any predictions are valid only for a specific scaling strategy used within a model family. Several studies […] have generalized the functional form to analyse the scaling of LMs’ downstream performance (where \(E_m\) is normalised to \([0,1]\)) with a sigmoidal link function \(\sigma\) :

\[ \sigma^{-1}\left(E_m\right) \approx \beta_f \log \left(C_m\right)+\alpha_f \]
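As a concrete illustration, here is a minimal sketch, in Python with made-up compute and error numbers, of fitting the per-family sigmoidal-link law above by least squares and extrapolating to a larger compute budget:

```python
# Minimal sketch of fitting a per-family scaling law with a sigmoidal link.
# All compute budgets and error values below are made up for illustration.
import numpy as np

C = np.array([1e21, 3e21, 1e22, 3e22])  # hypothetical training FLOPs, one model family
E = np.array([0.52, 0.44, 0.37, 0.31])  # hypothetical downstream error, normalised to [0, 1]

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Fit sigma^{-1}(E_m) ~ beta_f * log(C_m) + alpha_f by ordinary least squares.
beta_f, alpha_f = np.polyfit(np.log(C), logit(E), deg=1)

# Extrapolate to a larger compute budget C' > C.
C_new = 1e23
E_pred = inv_logit(beta_f * np.log(C_new) + alpha_f)
print(f"predicted error at {C_new:.0e} FLOPs: {E_pred:.3f}")
```

Swapping `logit(E)` for `np.log(E)` recovers the plain log–log power-law fit from the first equation.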

Observational scaling In our work, we hypothesize the existence of a low-dimensional capability measure for LMs that relates compute to more complex LM capabilities and can be extracted from observable standard LM benchmarks, as illustrated in Figure 2. Specifically, given \(T\) simple benchmarks and \(B_{i, m}\) the error of a model \(m\) on benchmark \(i \in[T]\), we hypothesize that there exists some capability vector \(S_m \in \mathbb{R}^K\) such that,

\[ \begin{aligned} \sigma^{-1}\left(E_m\right) & \approx \beta^{\top} S_m+\alpha \\ S_m & \approx \theta_f \log \left(C_m\right)+\nu_f \\ B_{i, m} & \approx \gamma_i^{\top} S_m \end{aligned} \]

for \(\theta_f, \nu_f, \beta \in \mathbb{R}^K, \alpha \in \mathbb{R}\), and orthonormal vectors \(\gamma_i \in \mathbb{R}^K\).
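A minimal sketch of that recipe, assuming the capability vectors \(S_m\) are extracted with plain PCA over a benchmark-error matrix (synthetic data throughout; the paper’s actual pipeline uses ~100 public models on standard benchmarks):

```python
# Minimal sketch of observational scaling: extract a low-dimensional capability
# vector S_m from benchmark scores, then regress a downstream metric on it.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
num_models, num_benchmarks, K = 100, 8, 3

# Stand-in for the benchmark-error matrix B_{i,m} of ~100 public models.
B = rng.uniform(0.2, 0.9, size=(num_models, num_benchmarks))

# Step 1: low-rank capability vectors S_m (roughly, the paper's gamma_i play
# the role of the orthonormal principal components).
pca = PCA(n_components=K)
S = pca.fit_transform(B)                    # shape (num_models, K)

# Step 2: regress the logit-transformed downstream error of interest on S_m.
E = rng.uniform(0.1, 0.9, size=num_models)  # synthetic downstream errors
reg = LinearRegression().fit(S, np.log(E / (1 - E)))

# Step 3: predict the downstream metric for a held-out model from its
# benchmark scores alone, with no extra training runs.
S_new = pca.transform(B[:1])
E_new = 1 / (1 + np.exp(-reg.predict(S_new)))
print(E_new)
```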

3 Incoming

4 References

Biderman, Schoelkopf, Anthony, et al. 2023. “Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling.” In Proceedings of the 40th International Conference on Machine Learning.
Brill. 2024. “Neural Scaling Laws Rooted in the Data Distribution.”
Douglas, and Verstyuk. 2025. “Progress in Artificial Intelligence and Its Determinants.”
Henighan, Kaplan, Katz, et al. 2020. “Scaling Laws for Autoregressive Generative Modeling.” arXiv:2010.14701 [Cs].
Hoffmann, Borgeaud, Mensch, et al. 2022. “Training Compute-Optimal Large Language Models.”
Hu, Song, Weinstein, et al. 2022. “Training Overparametrized Neural Networks in Sublinear Time.”
Hutter. 2021. “Learning Curve Theory.”
Kaplan, McCandlish, Henighan, et al. 2020. “Scaling Laws for Neural Language Models.” arXiv:2001.08361 [Cs, Stat].
Kirstain, Lewis, Riedel, et al. 2021. “A Few More Examples May Be Worth Billions of Parameters.” arXiv:2110.04374 [Cs].
Kumar, Bradbury, Young, et al. 2020. “Exploring the Limits of Concurrency in ML Training on Google TPUs.” arXiv:2011.03641 [Cs].
Mahowald, Ivanova, Blank, et al. 2024. “Dissociating Language and Thought in Large Language Models.” Trends in Cognitive Sciences.
Muennighoff, Rush, Barak, et al. 2023. “Scaling Data-Constrained Language Models.” Advances in Neural Information Processing Systems.
Naveed, Khan, Qiu, et al. 2024. “A Comprehensive Overview of Large Language Models.”
Owen. 2024. “How Predictable Is Language Model Benchmark Performance?”
Ruan, Maddison, and Hashimoto. 2025. “Observational Scaling Laws and the Predictability of Language Model Performance.” In Advances in Neural Information Processing Systems.
Schaeffer, Miranda, and Koyejo. 2023. “Are Emergent Abilities of Large Language Models a Mirage?” Advances in Neural Information Processing Systems.
Sharma, and Kaplan. 2020. “A Neural Scaling Law from the Dimension of the Data Manifold.” arXiv:2004.10802 [Cs, Stat].
Sorscher, Geirhos, Shekhar, et al. 2023. “Beyond Neural Scaling Laws: Beating Power Law Scaling via Data Pruning.”
Thirunavukarasu, Ting, Elangovan, et al. 2023. “Large Language Models in Medicine.” Nature Medicine.
Togelius, and Yannakakis. 2023. “Choose Your Weapon: Survival Strategies for Depressed AI Academics.”
Wei, Tay, Bommasani, et al. 2022. “Emergent Abilities of Large Language Models.”
Zhang, Warstadt, Li, et al. 2020. “When Do You Need Billions of Words of Pretraining Data?” arXiv:2011.04946 [Cs].
Zhao, Zhou, Li, et al. 2024. “A Survey of Large Language Models.”