Superintelligence
Incorporating technological singularities, hard AI take-offs, game-over high scores, the technium, deus-ex-machina, deus-ex-nube, AI supremacy, nerd raptures and so forth
December 2, 2016 — October 7, 2024
Small notes on the Rapture of the Nerds. If AI keeps on improving, will explosive intelligence eventually cut humans out of the loop and go on without us? Also, crucially, would we be pensioned in that case?
The internet has opinions about this.
A fruitful application of these ideas is in producing interesting science fiction and contemporary horror.
1 X-risk
X-risk (existential risk) is a term used in, e.g., the rationalist community to discuss the risks of a possible AI explosion.
FWIW: I personally think that (various kinds of) AI x-risk are plausible, and serious enough to worry about, even if they are not the most likely outcome. If the possibility is that everyone dies, then we should worry about it, even if it is only a 1% chance.
I would like to write some wicked tail risk theory at some point.
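A toy expected-value calculation (my illustrative numbers, not an estimate of anything) shows why even small probabilities of total loss are hard to dismiss:

```python
# Toy numbers for illustration only.
p_catastrophe = 0.01          # a "mere" 1% chance
population = 8e9              # everyone

# Expected loss = probability x magnitude.
expected_lives_lost = p_catastrophe * population
print(f"{expected_lives_lost:,.0f}")  # 80,000,000
```

Of course, naive expected value is exactly the kind of reasoning that serious tail-risk theory complicates, which is part of why the topic deserves its own treatment.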
2 X-risk risk
There are people, accelerationists especially, who think that focussing on x-risk is itself a risky distraction from more pressing problems.
e.g. what if we do not solve the climate crisis because we put effort into the AI risks instead? Or so much effort that it slowed down the AI that could have saved us? Or so much effort that we got distracted from other, more pressing risks?
Here is one piece, that I found rather interesting: Superintelligence: The Idea That Eats Smart People (I thought that effective altruism meta criticism was the idea that ate smart people?)
Personally, I doubt these need to be zero-sum tradeoffs. Getting the human species ready to deal with catastrophe in general seems like a virtuous goal, and one that would help with both of these problems.
2.1 WTF is TESCREALism?
An article denouncing TESCREALism has gone viral in my circles recently. There is a lot going on there, but the main subject of criticism seems to be that some flavours of longtermism lead to unpalatable conclusions, including excessive worry about AI x-risk. It goes on to frame several online communities which have entertained various longtermist ideas as a “bundle”, which I take to imply that these groups form a political bloc which encourages or enables accelerationist hypercapitalism.
I am not a fan of TESCREALism as a term.
For one thing, the article leans heavily on genealogical arguments, mostly guilt by association. Many of the movements it names have no single view on the topic of AI x-risk, or hold diametrically opposed views.
The article is less good at identifying which arguments in particular the author thinks are deficient in the bundle, leaning predominantly on identifying the ‘bundle’ as an outgroup.
If I disagree with some version of longtermism, why not just say I disagree with that? Better yet, why not mention which of the many longtermisms I am worried about? The muddier strategy of the article, disagreeing-with-longtermism-plus-feeling-bad-vibes-about-various-other-movements-and-philosophies-that-have-a-diverse-range-of-sometimes-tenuous-relationships-with-longtermism, doesn’t feel like it is doing much useful work.
I saw this guilt-by-association play out previously with “neoliberalism”, and probably the criticisms of the “woke” “movement” are doing the same thing. Suddenly, I worry that I am making the same mistake myself when talking about neoreactionaries.
Don’t get me wrong, movements are hijacked by bad actors all the time, and it is important to be aware of that; that is not, however, a negation of the arguments made by people in movements, per se.
If they are functioning as a bloc, then… that could be a thing, I guess? I do not find that this particular bloc cleaves reality at the joints, though; I am not sure there is even a movement to hijack in this acronym. Cosmism and effective altruism are not in correspondence with each other, not least because all the Cosmists are dead.
I made a meal of that, didn’t I? Many of my colleagues have been greatly taken by dismissing things as TESCREALism of late, so I think it needs mentioning.
3 In historical context
Ian Morris on whether deep history says we’re heading for an intelligence explosion
Wong and Bartlett (2022)
we hypothesize that once a planetary civilization transitions into a state that can be described as one virtually connected global city, it will face an ‘asymptotic burnout’, an ultimate crisis where the singularity-interval time scale becomes smaller than the time scale of innovation. If a civilization develops the capability to understand its own trajectory, it will have a window of time to affect a fundamental change to prioritize long-term homeostasis and well-being over unyielding growth—a consciously induced trajectory change or ‘homeostatic awakening’. We propose a new resolution to the Fermi paradox: civilizations either collapse from burnout or redirect themselves to prioritizing homeostasis, a state where cosmic expansion is no longer a goal, making them difficult to detect remotely.
More filed under big history.
3.1 Most-important century model
- Holden Karnofsky, The “most important century” blog post series
- Robert Wiblin’s analysis: This could be the most important century
4 Models of AGI
Hutter’s models
- AIXI [’ai̯k͡siː] is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000 and several results regarding AIXI are proved in Hutter’s 2005 book Universal Artificial Intelligence. (Hutter 2005)
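For concreteness, the AIXI action rule can be sketched like so (a simplified rendering of Hutter’s formulation; symbols roughly follow his book):

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\big[ r_k + \cdots + r_m \big]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here \(U\) is a universal monotone Turing machine, \(q\) ranges over programs for \(U\), and \(\ell(q)\) is the length of \(q\); the weight \(2^{-\ell(q)}\) is the Solomonoff prior, so environments with shorter descriptions count for more, and the agent maximizes expected reward over that whole mixture of environments.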
Prequel: An Introduction to Universal Artificial Intelligence
ecologies of minds considers the distinction between evolutionary and optimising minds.
5 Technium stuff
More to say here; perhaps later.
6 Aligning AI
Let us consider general alignment, because I have little AI-specific to say yet.
7 Constraints
7.1 Compute methods
We are getting very good at efficiently using hardware (Grace 2013). AI and efficiency (Hernandez and Brown 2020) makes this clear:
We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
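As a sanity check (my arithmetic, not OpenAI’s), the headline numbers in that quote are mutually consistent:

```python
import math

# Claim: compute to reach AlexNet-level ImageNet accuracy halves every 16
# months, for a total ~44x reduction between 2012 and the 2020 report.
doubling_time_months = 16
total_factor = 44

# How many months of halving does a 44x reduction imply?
months = doubling_time_months * math.log2(total_factor)
print(f"{months:.0f} months")  # ~87 months, i.e. mid-2012 to late 2019

# Hardware alone, at a Moore's-law-ish 2x per 24 months, over the same span:
hw_factor = 2 ** (months / 24)
print(f"~{hw_factor:.0f}x from hardware")  # ~12x, near the quoted 11x
```

So the 16-month halving time and the 44x total are two views of the same trend line, and the hardware-only baseline is roughly an order of magnitude behind it.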
See also
7.2 Compute hardware
TBD
8 Omega point etc
Surely someone has noticed the poetical similarities to the idea of noösphere/Omega point. I will link to that when I discover something well-written enough.
Q: Did anyone think that the noösphere would fit on a consumer hard drive?
“Hi there, my everyday carry is the sum of human knowledge.”
9 Incoming
Where did Johansen and Sornette’s economic model go? Johansen and Sornette (2001)? I think it ended up spawning Efferson, Richerson, and Weinberger (2023) and Sornette (2003). I am not really sure that either of those is “about” superintelligence per se, but it looks a little like superintelligence might be implied.
Henry Farrell and Cosma Shalizi: Shoggoths amongst us connects AIs to cosmic horror to institutions.
Ten Hard Problems in and around AI
We finally published our big 90-page intro to AI: its likely effects, from ten perspectives, ten camps. The whole gamut: ML, scientific applications, social applications, access, safety and alignment, economics, AI ethics, governance, and classical philosophy of life.
Artificial Consciousness and New Language Models: The changing fabric of our society - DeepFest 2023
Douglas Hofstadter changes his mind on Deep Learning & AI risk
François Chollet, The implausibility of intelligence explosion
Ground zero of the idea in fiction, perhaps, Vernor Vinge’s The Coming Technological Singularity
Stuart Russell on Making Artificial Intelligence Compatible with Humans, and interview on various themes in his book (Russell 2019)
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
Kevin Scott argues for trying to find a unifying notion of what knowledge work is to unify what humans and machines can do (Scott 2022).
Hildebrandt (2020) argues for talking about smart tech instead of AI tech. See Smart technologies | Internet Policy Review
Speaking of ‘smart’ technologies we may avoid the mysticism of terms like ‘artificial intelligence’ (AI). To situate ‘smartness’ I nevertheless explore the origins of smart technologies in the research domains of AI and cybernetics. Based in postphenomenological philosophy of technology and embodied cognition rather than media studies and science and technology studies (STS), the article entails a relational and ecological understanding of the constitutive relationship between humans and technologies, requiring us to take seriously their affordances as well as the research domain of computer science. To this end I distinguish three levels of smartness, depending on the extent to which they can respond to their environment without human intervention: logic-based, grounded in machine learning or in multi-agent systems. I discuss these levels of smartness in terms of machine agency to distinguish the nature of their behaviour from both human agency and from technologies considered dumb. Finally, I discuss the political economy of smart technologies in light of the manipulation they enable when those targeted cannot foresee how they are being profiled.
Everyone loves Bart Selman’s AAAI Presidential Address: The State of AI