Superintelligence

Incorporating technological singularities, hard AI take-offs, game-over high scores, the technium, deus-ex-machina, deus-ex-nube, AI supremacy, nerd raptures and so forth

December 2, 2016 — October 7, 2024

economics
faster pussycat
innovation
language
machine learning
mind
neural nets
NLP
technology
Figure 1: Go on, buy the sticker

Small notes on the Rapture of the Nerds. If AI keeps on improving, will explosive intelligence eventually cut humans out of the loop and go on without us? Also, crucially, would we be pensioned in that case?

The internet has opinions about this.

A fruitful application of these ideas is in producing interesting science fiction and contemporary horror.

Figure 2

1 X-risk

x-risk is a term used in, e.g., the rationalist community to discuss the risks of a possible AI explosion.

FWIW: I personally think that (various kinds of) AI x-risk are plausible, and serious enough to worry about, even if they are not the most likely option. If the possibility is that everyone dies, then we should be worried about it, even if it is only a 1% chance.
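
To make that arithmetic concrete, here is a minimal expected-value sketch; the round population figure is my assumption, not from any source:

    # Back-of-envelope expected-value arithmetic for a low-probability,
    # total-loss event. All figures are illustrative assumptions.
    p_catastrophe = 0.01          # the "only 1%" chance above
    population = 8_000_000_000    # round figure for world population

    expected_deaths = p_catastrophe * population
    print(f"{expected_deaths:,.0f}")  # 80,000,000
    # An expected toll on the order of a world war, which is why "only
    # a 1% chance" does not license ignoring the tail.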

I would like to write some wicked tail risk theory at some point.

2 X-risk risk

There are people, accelerationists especially, who think that focussing on x-risk is itself a risky distraction from more pressing problems.

e.g. what if we do not solve the climate crisis because we put effort into the AI risks instead? Or put in so much effort that it slowed down the AI that could have saved us? Or so much effort that we got distracted from other, more pressing risks?

Here is one piece that I found rather interesting: Superintelligence: The Idea That Eats Smart People (I thought that effective altruism meta criticism was the idea that ate smart people?)

Personally, I doubt these need to be zero-sum tradeoffs. Getting the human species ready to deal with catastrophe in general seems like a virtuous goal, and one that would help with both of these problems.

2.1 WTF is TESCREALism?

An article denouncing TESCREALism has gone viral in my circles recently. There is a lot going on there, but the main criticism seems to be that some flavours of longtermism lead to unpalatable conclusions, including excessive worry about AI x-risk. It goes on to frame several online communities which have entertained various longtermist ideas as a “bundle”, which I assume is intended to imply that these groups form a political bloc which encourages or enables accelerationist hypercapitalism.

I am not a fan of TESCREALism as a term.

For one thing, the article leans heavily on genealogical arguments, mostly guilt by association. Many of the movements it names have no single view, or hold diametrically opposed views, on the topic of AI x-risk.

The article is less good at identifying which arguments in particular the author thinks are deficient in the bundle, predominantly leaning on identifying the “bundle” as an outgroup.

If I disagree with some version of longtermism, why not just say I disagree with that? Better yet, why not mention which of the many longtermisms I am worried about? The muddier strategy of the article, disagreeing-with-longtermism-plus-feeling-bad-vibes-about-various-other-movements-and-philosophies-that-have-a-diverse-range-of-sometimes-tenuous-relationships-with-longtermism, doesn’t feel like it is doing much useful work.

I saw this guilt-by-association play out previously with “neoliberalism”, and probably the criticisms of the “woke” “movement” are doing the same thing. Suddenly I am worried that I am making the same mistake myself when talking about neoreactionaries.

Don’t get me wrong, movements are hijacked by bad actors all the time, and it is important to be aware of that; that is not, however, a negation of the arguments made by people in movements, per se.

If they are functioning as a bloc, then… that could be a thing, I guess? I do not find that this particular bloc cleaves reality at the joints, though; I am not sure there is even a movement to hijack in this acronym. Cosmism and effective altruism are not in correspondence with each other, not least because all the Cosmists are dead.

I made a meal of that, didn’t I? Many of my colleagues have been greatly taken by dismissing things as TESCREALism of late, so I think it needs mentioning.

3 In historical context

Figure 3

More filed under big history.

3.1 Most-important century model

4 Models of AGI

Figure 4: I cannot even remember where I got this

5 Technium stuff

More to say here; perhaps later.

6 Aligning AI

Let us consider general alignment, because I have little AI-specific to say yet.

7 Constraints

7.1 Compute methods

We are getting very good at using hardware efficiently (Grace 2013). AI and Efficiency (Hernandez and Brown 2020) makes this clear:

We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
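
As a quick consistency check on those quoted numbers, here is a sketch assuming an 84-month window (2012 to the 2019 measurement) and a Moore’s-law doubling time of 24 months:

    import math

    # Sanity-check the quoted figures against each other.
    months = 7 * 12               # 2012 (AlexNet) to the 2019 measurement
    efficiency_gain = 44          # quoted reduction in training compute

    # Implied halving time for compute at fixed performance
    halving = months / math.log2(efficiency_gain)
    print(f"{halving:.1f} months per halving")  # ~15.4, i.e. roughly 16

    # Moore's law, doubling every ~24 months, over the same window
    print(f"{2 ** (months / 24):.1f}x")         # ~11.3x, the quoted 11x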

See also

7.2 Compute hardware

TBD

8 Omega point etc

Surely someone has noticed the poetical similarities to the idea of noösphere/Omega point. I will link to that when I discover something well-written enough.

Q: Did anyone think that the noösphere would fit on a consumer hard drive?

“Hi there, my everyday carry is the sum of human knowledge.”
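
A hedged back-of-envelope on that quip, using rough, commonly quoted storage figures that should be treated as assumptions:

    # Does the noösphere fit in a pocket? All figures are rough assumptions.
    wikipedia_text_gb = 25     # English Wikipedia, compressed, text only
    microsd_gb = 1_000         # a 1 TB consumer microSD card

    print(microsd_gb / wikipedia_text_gb)  # ~40 compressed Wikipedias per card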

9 Incoming

Figure 5: Tom Gauld

10 References

Acemoglu, Autor, Hazell, et al. 2020. “AI and Jobs: Evidence from Online Vacancies.” Working Paper 28257.
Acemoglu, and Restrepo. 2018. “Artificial Intelligence, Automation and Work.” Working Paper 24196.
———. 2020. “The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand.” Cambridge Journal of Regions, Economy and Society.
Birhane, and Sumpter. 2022. “The Games We Play: Critical Complexity Improves Machine Learning.”
Bostrom. 2014. Superintelligence: Paths, Dangers, Strategies.
Bubeck, Chandrasekaran, Eldan, et al. 2023. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.”
Chalmers. 2016. “The Singularity.” In Science Fiction and Philosophy.
Chollet. 2019. “On the Measure of Intelligence.” arXiv:1911.01547 [cs].
Collison, and Nielsen. 2018. “Science Is Getting Less Bang for Its Buck.” The Atlantic.
Donoho. 2023. “Data Science at the Singularity.”
Efferson, Richerson, and Weinberger. 2023. “Our Fragile Future Under the Cumulative Cultural Evolution of Two Technologies.” Philosophical Transactions of the Royal Society B: Biological Sciences.
Everitt, and Hutter. 2018. “Universal Artificial Intelligence: Practical Agents and Fundamental Challenges.” In Foundations of Trusted Autonomy.
Grace. 2013. “Algorithmic Progress in Six Domains.”
Grace, Salvatier, Dafoe, et al. 2018. “Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts.” Journal of Artificial Intelligence Research.
Harari. 2018. Homo Deus: A Brief History of Tomorrow.
Hernandez, and Brown. 2020. “Measuring the Algorithmic Efficiency of Neural Networks.”
Hildebrandt. 2020. “Smart Technologies.” Internet Policy Review.
Hutson. 2022. “Taught to the Test.” Science.
Hutter. 2000. “A Theory of Universal Artificial Intelligence Based on Algorithmic Complexity.”
———. 2005. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Texts in Theoretical Computer Science.
———. 2007. “Universal Algorithmic Intelligence: A Mathematical Top→Down Approach.” In Artificial General Intelligence.
Jeon, and Van Roy. 2024. “Information-Theoretic Foundations for Machine Learning.”
Johansen, and Sornette. 2001. “Finite-Time Singularity in the Dynamics of the World Population, Economic and Financial Indices.” Physica A: Statistical Mechanics and Its Applications.
Lee. 2020a. “Coevolution.” In The Coevolution: The Entwined Futures of Humans and Machines.
———. 2020b. The Coevolution: The Entwined Futures of Humans and Machines.
Manheim, and Garrabrant. 2019. “Categorizing Variants of Goodhart’s Law.”
Mitchell. 2021. “Why AI Is Harder Than We Think.” arXiv:2104.12871 [cs].
Nathan, and Hyams. 2021. “Global Policymakers and Catastrophic Risk.” Policy Sciences.
Omohundro. 2008. “The Basic AI Drives.” In Artificial General Intelligence 2008: Proceedings of the First AGI Conference.
Philippon. 2022. “Additive Growth.” Working Paper.
Russell. 2019. Human Compatible: Artificial Intelligence and the Problem of Control.
Sastry, Heim, Belfield, et al. n.d. “Computing Power and the Governance of Artificial Intelligence.”
Scott. 2022. “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale.” American Academy of Arts & Sciences.
Silver, Singh, Precup, et al. 2021. “Reward Is Enough.” Artificial Intelligence.
Sornette. 2003. “Critical Market Crashes.” Physics Reports.
Sunehag, and Hutter. 2013. “Principles of Solomonoff Induction and AIXI.” In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 – December 2, 2011. Lecture Notes in Computer Science.
Wong, and Bartlett. 2022. “Asymptotic Burnout and Homeostatic Awakening: A Possible Solution to the Fermi Paradox?” Journal of The Royal Society Interface.
Zenil, Tegnér, Abrahão, et al. 2023. “The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence.”