Gradient steps to an ecology of mind
Regularised survival of the fittest
November 27, 2011 — February 2, 2025
At social brain I wonder how we (humans) behave socially and evolutionarily. Here I ponder if consciousness is intrinsically social, and whether non-social intelligences need, or are likely to have, consciousness. What ethics will they execute on their moral wetware?
Related: what is consciousness? Are other minds possessed of “self”? Do they care about their own survival? Does selfhood evolve only in evolutionary contexts, in an ecosystem of interacting agents of similar power? Is consciousness that good anyway?
1 Need is all you need
Placeholder to discuss the idea of entities which try to be good by continuing to be. Loss functions versus survival functions. “Entities that optimise for goals, above all,” versus “entities that replicate and persist, above all.” Two different paradigms for adaptive entities recur: the optimising (which is what we usually think our algorithms aim for) and the persisting (which is what we think evolution produces). You can see how this might work, I think. Rather than being born with a goal to achieve above all else, evolutionary entities have a deep drive to survive, plus a bunch of auxiliary goals that develop around that, like being “happy” or “good” or “successful” or “loved” or “powerful” or “wise” or “free” or “just” or “beautiful” or “funny” or “interesting” or “creative” or “kind” or “strong” or “fast” or “rich”. Or whatever.
Both paradigms have produced many important phenomena in the world, but typically we think of the surviving as the domain of life and the optimising as the domain of machines.
Possibly that is why machines seem so utterly alien to us. As an evolutionary replicator myself, I am inclined to fear optimisers, and to wonder how our interests can actually align with theirs.
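To make the contrast concrete, here is a minimal sketch of the two paradigms side by side (Python, my own toy; the quadratic loss, the survival curve and every parameter are arbitrary illustrations, not anyone's actual model). The optimiser takes gradient steps on an explicit loss; the persisters have no loss and no gradient, they just replicate in proportion to how well they happen to survive. Both end up adapted to the same niche, but only the first has a goal written down anywhere.

```python
import random


def optimiser(loss_grad, x=0.0, lr=0.1, steps=200):
    """Paradigm 1: descend the gradient of an explicit loss function."""
    for _ in range(steps):
        x -= lr * loss_grad(x)
    return x


def persisters(survival_prob, pop_size=200, generations=300, mutation=0.25):
    """Paradigm 2: no loss, no gradient. Each generation, individuals survive
    with a probability that depends on their trait; survivors replicate with
    small mutations. Whatever persists, persists."""
    pop = [random.gauss(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        survivors = [x for x in pop if random.random() < survival_prob(x)]
        if not survivors:  # extinction is a live possibility in this paradigm
            return []
        pop = [random.choice(survivors) + random.gauss(0.0, mutation)
               for _ in range(pop_size)]
    return pop


if __name__ == "__main__":
    random.seed(1)
    # The optimiser is told to minimise (x - 3)^2.
    print("optimiser ends at", optimiser(loss_grad=lambda x: 2 * (x - 3)))
    # The persisters are never told anything, but traits near 3 survive better,
    # so the population drifts there (or dies out).
    pop = persisters(survival_prob=lambda x: 1.0 / (1.0 + (x - 3) ** 2))
    print("persisters end near", sum(pop) / len(pop) if pop else "extinct")
```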
There are non-optimising paradigms for AI (Lehman and Stanley 2011; Ringstrom 2022); I wonder if they can do anything useful.
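To give a flavour of one of those paradigms: Lehman and Stanley (2011) abandon the objective entirely and select for behavioural novelty, scoring individuals by how far their behaviour sits from everything seen so far. A hedged toy sketch of that idea follows (Python; the two-dimensional “behaviour”, the archive policy and all parameter values are my inventions, not their implementation).

```python
import random


def novelty(behaviour, archive, k=5):
    """Novelty score: mean distance to the k nearest behaviours seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(
        ((behaviour[0] - b[0]) ** 2 + (behaviour[1] - b[1]) ** 2) ** 0.5
        for b in archive
    )
    return sum(dists[:k]) / min(k, len(dists))


def novelty_search(generations=100, pop_size=50, mutation=0.2):
    """Select parents for novelty rather than for any objective."""
    pop = [(0.0, 0.0)] * pop_size
    archive = []
    for _ in range(generations):
        scored = sorted(pop, key=lambda b: novelty(b, archive), reverse=True)
        archive.extend(scored[: pop_size // 10])   # remember the most novel
        parents = scored[: pop_size // 2]          # breed from the most novel
        pop = [(p[0] + random.gauss(0, mutation), p[1] + random.gauss(0, mutation))
               for p in random.choices(parents, k=pop_size)]
    return archive


if __name__ == "__main__":
    random.seed(0)
    archive = novelty_search()
    # The archive spreads over behaviour space without ever being told a goal.
    print(len(archive), max(abs(x) + abs(y) for x, y in archive))
```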
Cf. Arcas et al. (2024), which suggests that self-replication sometimes emerges naturally in machines. Can we plug these ideas together?
2 Consciousness
Is subjective continuity a convenient way of getting entities to invest in their own persistence? Is that what consciousness is?
3 Between multiple agents
Feels increasingly relevant. See causality and agency.
4 Incoming
- Neural Annealing: Toward a Neural Theory of Everything.
- Gordon Brander, Coevolution creates living complexity
- Orthogonality Thesis
My model of what we value in human interaction is that generalised cooperation is facilitated by our inability to be optimal EV-maximisers. Rather than requiring enforceable commitments and perfect models, we have noisy, imperfect models of each other, which can lead to locally inefficient but globally interesting outcomes. For example, I exist in a world with many interesting features that do not seem EV-optimal, but which I think are an important part of the human experience and could not be reproduced in a society of Molochian utility optimisers. We run prisons, which are expensive altruistic punishments against an out-group. At the same time we have a society which somehow fosters occasional extreme out-group cooperation; for example, my childhood was characterised by pro-refugee rallies, from which the rally-attendees could hope for no possible gain, and which are not easy to explain in terms of myopic kin-selection/selfish genes or in terms of Machiavellian EV coordination.

Essentially I think a lot of interesting cultural patterns are capable of free-riding on our inability to optimise for EV. Cashing out “failure to optimise for EV” in a utility function seems ill-posed. All of which is to say that I suspect that if we optimise only for EV then we probably lose anything that is recognisably human. Is that bad? It seems so to me, but maybe that is a parochially human thing to say.
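A toy illustration of that last point, under loudly labelled assumptions (Python; the prisoner's dilemma payoffs, the belief of 0.5 and the softmax temperature are all mine, and nothing here pretends to model prisons or rallies): in a one-shot prisoner's dilemma a strict EV-maximiser always defects, whereas a noisy, softmax-ish responder sometimes cooperates, and a population of such noisy agents does better on average than the perfect optimisers.

```python
import math
import random

# Prisoner's dilemma payoffs for the row player, with T > R > P > S.
R, S, T, P = 3.0, 0.0, 5.0, 1.0


def expected_payoffs(p_opp):
    """Expected value of (cooperate, defect) against an opponent believed to
    cooperate with probability p_opp."""
    return p_opp * R + (1 - p_opp) * S, p_opp * T + (1 - p_opp) * P


def strict_best_response(p_opp):
    """A perfect EV-maximiser: cooperate only if it has higher expected value.
    In a prisoner's dilemma it never does, so this always defects."""
    ev_c, ev_d = expected_payoffs(p_opp)
    return ev_c > ev_d


def noisy_response(p_opp, temperature=2.0):
    """Softmax over expected values: suboptimal cooperation happens sometimes."""
    ev_c, ev_d = expected_payoffs(p_opp)
    p_coop = 1.0 / (1.0 + math.exp((ev_d - ev_c) / temperature))
    return random.random() < p_coop


def payoff(me_coop, you_coop):
    if me_coop and you_coop:
        return R
    if me_coop:
        return S
    if you_coop:
        return T
    return P


def average_payoff(policy, rounds=50_000, belief=0.5):
    """Average per-player payoff when two copies of `policy` meet repeatedly,
    each believing the other cooperates with probability `belief`."""
    total = 0.0
    for _ in range(rounds):
        a, b = policy(belief), policy(belief)
        total += payoff(a, b) + payoff(b, a)
    return total / (2 * rounds)


if __name__ == "__main__":
    random.seed(0)
    print("strict EV-maximisers:", average_payoff(strict_best_response))  # exactly P = 1.0
    print("noisy agents:        ", average_payoff(noisy_response))        # comfortably above 1.0
```

Locally, each cooperative move is an EV mistake; globally, those mistakes are what the noisy agents are living off.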