ML benchmarks and their pitfalls
On marginal efficiency gain in paperclip manufacture
August 15, 2020 — February 20, 2025
Your baseline
Your baseline has got me feeling fine
It’s filling up my mind
with apologies to Puretone
There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened. – Douglas Adams, The Restaurant at the End of the Universe
Machine learning’s gamified, Goodharted version of the replication crisis is the paper treadmill, wherein something counts as a “novel result” if it performs well on some conventional benchmarks. But how often does that demonstrate real progress, and how often is it just overfitting to the benchmarks?
1 AGI benchmarks
a.k.a. evals.
Phan et al. (2025):
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity’s Last Exam, a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. The dataset consists of 2,700 challenging questions across over a hundred subjects. We publicly release these questions, while maintaining a private test set of held out questions to assess model overfitting.
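The public/private split is the methodologically interesting bit: if a model has memorised leaked benchmark items, its accuracy on the public questions should exceed its accuracy on the held-out private ones. A minimal sketch of that check, assuming item-level results in a made-up format (the field names are my own invention):

```python
import random

def overfitting_gap(results, n_boot=1000, seed=0):
    """Estimate the public-minus-private accuracy gap with a bootstrap CI.

    `results` is a list of dicts like {"split": "public"|"private", "correct": bool}.
    """
    rng = random.Random(seed)
    pub = [r["correct"] for r in results if r["split"] == "public"]
    priv = [r["correct"] for r in results if r["split"] == "private"]
    gap = sum(pub) / len(pub) - sum(priv) / len(priv)
    boots = []
    for _ in range(n_boot):
        p = [rng.choice(pub) for _ in pub]     # resample public items
        q = [rng.choice(priv) for _ in priv]   # resample private items
        boots.append(sum(p) / len(p) - sum(q) / len(q))
    boots.sort()
    return gap, (boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])

# Fake results for illustration only: slightly better on public items.
results = ([{"split": "public", "correct": random.random() < 0.55} for _ in range(500)]
           + [{"split": "private", "correct": random.random() < 0.50} for _ in range(500)])
print(overfitting_gap(results))
```

If the interval for the gap sits comfortably above zero, suspect contamination rather than capability.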
Duchnowski, Pavlick, and Koller (2025):
We introduce the dataset of Everyday Hard Optimization Problems (EHOP), a collection of NP-hard optimization problems expressed in natural language. EHOP includes problem formulations that could be found in computer science textbooks, versions that are dressed up as problems that could arise in real life, and variants of well-known problems with inverted rules. We find that state-of-the-art LLMs, across multiple prompting strategies, systematically solve textbook problems more accurately than their real-life and inverted counterparts. We argue that this constitutes evidence that LLMs adapt solutions seen during training, rather than leveraging reasoning abilities that would enable them to generalize to novel problems.
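The experimental design is the part worth copying: the same underlying instance should be equally solvable whether it is phrased as a textbook exercise or as an everyday story. A toy illustration of such paired prompts for graph colouring (my own invented templates, not EHOP’s):

```python
import itertools
import random

def random_graph(n=6, p=0.4, seed=1):
    """Sample a small Erdős-Rényi graph as an edge list."""
    rng = random.Random(seed)
    return [(i, j) for i, j in itertools.combinations(range(n), 2) if rng.random() < p]

def textbook_prompt(edges, k):
    return (f"Given the graph with edges {edges}, decide whether it is "
            f"{k}-colourable, and if so give a valid colouring.")

def dressed_up_prompt(edges, k):
    # Same instance, costumed as a scheduling story: vertices are meetings,
    # edges are conflicts, colours are rooms.
    conflicts = ", ".join(f"meeting {i} clashes with meeting {j}" for i, j in edges)
    return (f"We have meetings to put into {k} rooms. {conflicts}. "
            f"Clashing meetings need different rooms. Can it be done, and how?")

edges = random_graph()
print(textbook_prompt(edges, 3))
print(dressed_up_prompt(edges, 3))
```

A model that reasons should not care which phrasing it gets; a model that pattern-matches to training data plausibly will.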
2 Gaming, shortcuts
Oleg Trott on How to sneak up competition leaderboards.
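The crudest member of that family of attacks is easy to state: if a leaderboard reports log loss precisely enough, nudging a single prediction and watching which way the score moves reveals that example’s hidden label. A sketch, assuming an unrealistically generous leaderboard with no score rounding or submission limits (the clever part of real attacks is packing many labels into each probe):

```python
import math
import random

def leaderboard_logloss(preds, labels):
    """The only feedback a competitor sees: mean log loss on hidden test labels."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(preds, labels)) / len(labels)

def probe_label(i, labels, eps=0.01):
    """Recover hidden binary label i from two leaderboard queries.

    Submit all-0.5 predictions, then nudge prediction i upward; the direction
    the reported loss moves reveals whether label i is 0 or 1.
    """
    n = len(labels)
    base = leaderboard_logloss([0.5] * n, labels)
    probe = [0.5] * n
    probe[i] = 0.5 + eps
    return 1 if leaderboard_logloss(probe, labels) < base else 0

hidden = [random.randint(0, 1) for _ in range(20)]
recovered = [probe_label(i, hidden) for i in range(len(hidden))]
assert recovered == hidden
```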
Jörn-Henrik Jacobsen, Robert Geirhos, Claudio Michaelis: Shortcuts: How Neural Networks Love to Cheat.
Sanjeev Arora and Yi Zhang’s Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis takes a minimum description length approach to meta-overfitting of models, which I will not summarize except to recommend it for being extremely psychedelic.
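The phenomenon they are bounding is easy to reproduce without any of the MDL machinery: keep reusing one test set to pick a winner among enough models, and the winner looks good even when every model is guessing at random. A toy simulation:

```python
import random

def adaptive_overfit(n_test=1000, n_models=500, seed=0):
    """Select the best of many coin-flip 'models' on one reused test set.

    Each model is literally random guessing (true accuracy 0.5), yet the
    winner's measured accuracy creeps upward with the number of models tried,
    roughly like 0.5 + sqrt(log(n_models) / (2 * n_test)).
    """
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in range(n_test)]
    best = 0.0
    for _ in range(n_models):
        preds = [rng.randint(0, 1) for _ in range(n_test)]
        acc = sum(p == y for p, y in zip(preds, labels)) / n_test
        best = max(best, acc)
    return best

print(adaptive_overfit())             # noticeably above 0.5
print(adaptive_overfit(n_models=5))   # much closer to 0.5
```

Swap “models” for “papers evaluated on the same benchmark” and the treadmill problem from the introduction falls out.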
2.1 Goodhart’s law in particular
Filip Piekniewski on the tendency to select bad target losses for convenience.
Measuring Goodhart’s Law at OpenAI.
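The regressional flavour of Goodhart is simple enough to simulate: a proxy that correlates nicely with the true objective under light selection overstates it badly under heavy selection. A toy sketch:

```python
import random
import statistics

def goodhart_gap(n_candidates, noise_sd=1.0, trials=2000, seed=0):
    """Regressional Goodhart under increasing selection pressure.

    True quality and a noisy proxy are well correlated, but the harder we
    optimise the proxy (more candidates, pick the argmax), the more the
    selected item's proxy score overstates its true quality.
    """
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        true = [rng.gauss(0, 1) for _ in range(n_candidates)]
        proxy = [t + rng.gauss(0, noise_sd) for t in true]
        winner = max(range(n_candidates), key=lambda i: proxy[i])
        gaps.append(proxy[winner] - true[winner])
    return statistics.mean(gaps)

for n in (2, 10, 100, 1000):
    print(n, round(goodhart_gap(n), 2))  # the gap grows with selection pressure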
3 Measuring speed
Lots of algorithms claim to go fast, but on modern hardware that is a complicated claim: caches, code and data layout, and whatever else the machine happens to be doing all contaminate the measurement. Stabilizer attempts to repeatedly randomise memory layout during execution so that layout luck averages out and the comparison is “fair”.
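Stabilizer works below anything we can reach from a script, but the cheap part of the lesson, that one-shot timings are mostly noise, can be illustrated with interleaved repeats and a reported spread. A minimal sketch:

```python
import statistics
import timeit

def compare(fn_a, fn_b, number=1000, repeats=20):
    """Compare two implementations by interleaved, repeated timing.

    A single timing of each function is dominated by warm-up, frequency
    scaling and background load; interleaving repeats and reporting a
    spread is the least we can do.
    """
    times_a, times_b = [], []
    for _ in range(repeats):
        times_a.append(timeit.timeit(fn_a, number=number))
        times_b.append(timeit.timeit(fn_b, number=number))
    for name, ts in (("A", times_a), ("B", times_b)):
        print(f"{name}: min={min(ts):.4f}s  median={statistics.median(ts):.4f}s  "
              f"spread={(max(ts) - min(ts)) / min(ts):.0%}")

compare(lambda: sorted(range(1000, 0, -1)),
        lambda: list(reversed(range(1, 1001))))
```

Even then the two contenders share one machine state; a per-run spread of tens of percent is not unusual, which is exactly why “our method is 8% faster” claims deserve suspicion.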
4 Incoming
MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering | OpenAI
We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering. To this end, we curate 75 ML engineering-related competitions from Kaggle, creating a diverse set of challenging tasks that test real-world ML engineering skills such as training models, preparing datasets, and running experiments. We establish human baselines for each competition using Kaggle’s publicly available leaderboards. We use open-source agent scaffolds to evaluate several frontier language models on our benchmark, finding that the best-performing setup—OpenAI’s o1-preview with AIDE scaffolding—achieves at least the level of a Kaggle bronze medal in 16.9% of competitions. In addition to our main results, we investigate various forms of resource-scaling for AI agents and the impact of contamination from pre-training. We open-source our benchmark code to facilitate future research in understanding the ML engineering capabilities of AI agents.
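For intuition about what “at least a bronze medal in 16.9% of competitions” is measuring, here is a toy version of the scoring step, with the bronze cutoff treated as a plain top-fraction parameter rather than Kaggle’s actual size-dependent medal rules:

```python
def beats_bronze(agent_score, leaderboard_scores, bronze_fraction=0.10,
                 higher_is_better=True):
    """Does an agent's score clear a (hypothetical) bronze-medal cutoff?

    `leaderboard_scores` are the final human scores for one competition.
    Kaggle's real medal rules vary with competition size; here the cutoff is
    simply the score at the top `bronze_fraction` of the leaderboard.
    """
    ranked = sorted(leaderboard_scores, reverse=higher_is_better)
    cutoff_index = max(0, int(len(ranked) * bronze_fraction) - 1)
    cutoff = ranked[cutoff_index]
    return agent_score >= cutoff if higher_is_better else agent_score <= cutoff

# Fraction of (made-up) competitions in which the agent medals:
competitions = [
    {"agent": 0.96, "humans": [0.80, 0.85, 0.88, 0.90, 0.93, 0.95]},
    {"agent": 0.70, "humans": [0.80, 0.85, 0.88, 0.90, 0.93, 0.95]},
]
rate = sum(beats_bronze(c["agent"], c["humans"]) for c in competitions) / len(competitions)
print(f"medal rate: {rate:.1%}")
```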