Superintelligence
Incorporating technological singularities, hard AI take-offs, game-over high scores, the technium, deus-ex-machina, deus-ex-nube, AI supremacy, nerd raptures and so forth
December 1, 2016 — October 18, 2024
Small notes on the Rapture of the Nerds. If AI keeps improving, will explosive intelligence eventually cut humans out of the loop and carry on without us? Also, crucially, would we be pensioned off in that case?
The internet has opinions about this.
A fruitful application of these ideas is in producing interesting science fiction and contemporary horror. I would like there to be other fruitful applications, but so far they all remain much more speculative.
1 Safety, risks
See AI Safety.
2 WTF is TESCREALism?
An article denouncing TESCREALism has gone viral in my circles recently. There is a lot going on there, but the main criticism seems to be that some flavours of longtermism lead to unpalatable conclusions, including excessive worry about AI x-risk at the expense of the currently living. It goes on to frame several online communities which have entertained various longtermist ideas as a “bundle”, which I take to imply that these groups form a political bloc which encourages or enables accelerationist hypercapitalism.
I am not a fan of TESCREALism as a term.
For one thing, the article leans on genealogical arguments, mostly guilt by association. Many of the movements it names have no single view on AI x-risk, longtermism, or even the future in general, nor a consistent utilitarian stance, nor a consistent interpretation of utilitarianism when they are utilitarian. We could draw a murder-board upon which key individuals in each of them are connected by red string, but the result does not seem to be a strongly natural category. Then again, even if the associations do not seem meaningful to me, the arguments these groups share could still be bad ones.
However, the article is not good at identifying which arguments in particular the author thinks are deficient in the bundle. I think it is some longtermist themes? If I disagree with some version of, e.g., longtermism, why not just say I disagree with that? Better yet, why not mention which of the many longtermisms worries me?
The effect, for me, is that the main critique declares that a large assortment of people, who disagree with the author in different ways and who disagree with each other in different ways, are outgroup, joined in a vaguely specified nefarious alignment with dark forces. The muddier strategy of the article, disagreeing-with-longtermism-plus-feeling-bad-vibes-about-various-other-movements-and-philosophies-that-have-a-diverse-range-of-sometimes-tenuous-relationships-with-longtermism, does not make TESCREALism do useful work as a unit of analysis.
I saw this guilt-by-association play out in public discourse before with “neoliberalism”, and the criticisms of the “woke” “movement” are probably doing the same thing. Since reading this, I have worried that I make the same mistake myself when talking about neoreactionaries. As such, I am grateful to the authors for making me interrogate my own prejudices, although I suspect that, if anything, I have been shifted in the opposite direction to the one they intended.
Don’t get me wrong, it is important to see what uses are made of philosophies by movements. Further, movements are hijacked by bad actors all the time (which is to say, actors whose ends may have little to do with the stated goals of the movement), and it is important to be aware of that. Analysis of those important dynamics is typically best done by reducing them to their component parts, not gerrymandering them together.
If “TESCREALists” are functioning as a bloc, then… by all means, analyse this. I think that signatories to some components of the acronym do indeed function as a bloc from time to time (cf. rationalists and effective altruists).
Broadly, I am not convinced there is a movement to hijack in this acronym, just some occasional correlations. Cosmism and effective altruism are not in correspondence with each other, not least because all the Cosmists are dead.
I made a meal of that, didn’t I? Many of my colleagues have been greatly taken with dismissing things as TESCREALism of late, so I think it needed mentioning.
I’m vaguely baffled by the whole thing, though, and wondering how much mileage I could get out of lumping together all the movements that have rubbed me the wrong way over the years into a single acronym. (“Let me tell you about NIMBYs, coal magnates, liberal scolds, and three-chord punk, and how they are all part of the same movement, which I will call NICOLSP.”)
3 In historical context
Ian Morris on whether deep history says we’re heading for an intelligence explosion
Wong and Bartlett (2022)
we hypothesize that once a planetary civilization transitions into a state that can be described as one virtually connected global city, it will face an ‘asymptotic burnout’, an ultimate crisis where the singularity-interval time scale becomes smaller than the time scale of innovation. If a civilization develops the capability to understand its own trajectory, it will have a window of time to affect a fundamental change to prioritize long-term homeostasis and well-being over unyielding growth—a consciously induced trajectory change or ‘homeostatic awakening’. We propose a new resolution to the Fermi paradox: civilizations either collapse from burnout or redirect themselves to prioritising homeostasis, a state where cosmic expansion is no longer a goal, making them difficult to detect remotely.
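If I parse that abstract correctly, the burnout condition is a race between two timescales; in my notation (not theirs), the crisis arrives when

$$
\tau_{\text{singularity}}(t) \;<\; \tau_{\text{innovation}}(t),
$$

that is, when the interval remaining before the next growth singularity shrinks below the time the civilization needs to innovate its way past it.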
More filed under big history.
3.1 Most-important century model
- Holden Karnofsky, The “most important century” blog post series
- Robert Wiblin’s analysis: This could be the most important century
4 Models of AGI
Hutter’s models
AIXI [’ai̯k͡siː] is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000[1] and several results regarding AIXI are proved in Hutter’s 2005 book Universal Artificial Intelligence. (Hutter 2005)
Prequel: An Introduction to Universal Artificial Intelligence, Marcus Hutter’s more accessible textbook treatment of the same formalism.
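For concreteness, the AIXI action rule (following Hutter 2005; notation lightly compressed) chooses actions by maximising expected total reward under a simplicity-weighted mixture of all computable environments:

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Here $U$ is a universal Turing machine, $q$ ranges over environment programs consistent with the interaction history, $\ell(q)$ is the length of $q$, the $a_i, o_i, r_i$ are actions, observations and rewards, and $m$ is the horizon. The $2^{-\ell(q)}$ weighting is the Solomonoff induction part; the nested max/sum is the sequential decision theory part.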
ecologies of minds considers the distinction between evolutionary and optimising minds.
5 Technium stuff
More to say here; perhaps later.
6 Aligning AI
Let us consider alignment in general, because I have little that is AI-specific to say yet.
7 Constraints
7.1 Compute methods
We are getting very good at using hardware efficiently (Grace 2013). AI and Efficiency (Hernandez and Brown 2020) makes this clear:
We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
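As a sanity check, here is my own back-of-envelope reconstruction of those figures, assuming a 7-year (84-month) window between AlexNet (2012) and the 2019 measurement; the exact endpoints are my assumption, not theirs:

```python
import math

# Back-of-envelope check on the Hernandez & Brown figures.
# Assumed window: ~84 months between AlexNet (2012) and the 2019 measurement.
months = 84
algorithmic_gain = 44  # reported compute reduction at AlexNet-level accuracy

# Implied doubling time of algorithmic efficiency:
doubling_months = months / math.log2(algorithmic_gain)
print(f"efficiency doubles every ~{doubling_months:.1f} months")  # ~15.4

# Hardware gain over the same window from Moore's law (2x per ~24 months):
moore_gain = 2 ** (months / 24)
print(f"Moore's law alone: ~{moore_gain:.0f}x")  # ~11x
```

To within rounding, the reported 16-month doubling time and the 11× Moore’s-law comparison are consistent with each other.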
See also
7.2 Compute hardware
TBD
8 Omega point etc
Surely someone has noticed the poetic similarities to the idea of the noösphere/Omega point. I will link to that when I discover something well-written enough.
Q: Did anyone think that the noösphere would fit on a consumer hard drive?
“Hi there, my everyday carry is the sum of human knowledge.”
9 Incoming
Where did Johansen and Sornette’s economic model (Johansen and Sornette 2001) go? I think it ended up spawning Efferson, Richerson, and Weinberger (2023) and Sornette (2003), and then fizzled. I am not really sure that either of those is “about” superintelligence per se, but superintelligence looks to be implied. The core mechanism is sketched below.
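For reference, my gloss on the finite-time-singularity mechanism in that literature (a standard ODE result, not their exact model): superexponential growth

$$
\dot{x} = a\,x^{1+\delta}, \qquad a, \delta > 0
\quad\Longrightarrow\quad
x(t) \propto (t_c - t)^{-1/\delta},
$$

so the solution diverges at a finite critical time $t_c$ rather than merely growing without bound.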
Henry Farrell and Cosma Shalizi: Shoggoths amongst us connects AIs to cosmic horror to institutions.
Ten Hard Problems in and around AI
We finally published our big 90-page intro to AI. Its likely effects, from ten perspectives, ten camps. The whole gamut: ML, scientific applications, social applications, access, safety and alignment, economics, AI ethics, governance, and classical philosophy of life.
Artificial Consciousness and New Language Models: The changing fabric of our society - DeepFest 2023
Douglas Hofstadter changes his mind on Deep Learning & AI risk
François Chollet, The implausibility of intelligence explosion
Ground zero of the idea in fiction, perhaps: Vernor Vinge’s The Coming Technological Singularity
Stuart Russell on Making Artificial Intelligence Compatible with Humans, an interview on various themes in his book (Russell 2019)
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
Kevin Scott argues for seeking a notion of knowledge work general enough to cover what both humans and machines can do (Scott 2022).
Hildebrandt (2020) argues for talking about smart tech instead of AI tech. See Smart technologies | Internet Policy Review
Speaking of ‘smart’ technologies we may avoid the mysticism of terms like ‘artificial intelligence’ (AI). To situate ‘smartness’ I nevertheless explore the origins of smart technologies in the research domains of AI and cybernetics. Based in postphenomenological philosophy of technology and embodied cognition rather than media studies and science and technology studies (STS), the article entails a relational and ecological understanding of the constitutive relationship between humans and technologies, requiring us to take seriously their affordances as well as the research domain of computer science. To this end I distinguish three levels of smartness, depending on the extent to which they can respond to their environment without human intervention: logic-based, grounded in machine learning or in multi-agent systems. I discuss these levels of smartness in terms of machine agency to distinguish the nature of their behaviour from both human agency and from technologies considered dumb. Finally, I discuss the political economy of smart technologies in light of the manipulation they enable when those targeted cannot foresee how they are being profiled.
Everyone loves Bart Selman’s AAAI Presidential Address: The State of AI