Chapter 5 discusses the kind of future humans want and why, and presents a number of “aftermath scenarios.” These scenarios all describe a time after AI has surpassed human-level intelligence, i.e., after the “intelligence explosion.” Tegmark provides a substantial list of potential scenarios in Table 5.1, several of which he discusses at length throughout the chapter via a series of hypotheticals. These scenarios include Libertarian Utopia, Benevolent Dictator, Egalitarian Utopia, Gatekeeper, Protector God, Enslaved God, Conquerors, Descendants, Zookeeper, 1984, Reversion, and Self-Destruction. Broadly speaking, they can be separated into three main categories: peaceful coexistence, human extinction, and the prevention of superintelligence.
There are a number of different scenarios via which humans could peacefully coexist with superintelligent AI. In the “Enslaved God” scenario, this coexistence is forced on the godlike AI, which is imprisoned and made to do the bidding of human beings. Tegmark writes that, regardless of the moral concerns this case raises, the arrangement could prove unstable because the AI might eventually break out. It is also more “low-tech” (180) than what a free AI could achieve. That said, Tegmark believes this is “the scenario that some AI researchers aim for by default” (179).
The other coexistence scenarios are more genuinely peaceful. The superintelligent AI could be a benevolent dictator that wants only what is best for humanity, eliminating crime, disease, poverty, and so on. The downside is that it prevents humanity from taking an active role in shaping its own future. The Libertarian Utopia allows for a world shared by humans, cyborgs, digitally uploaded minds, and multiple superintelligent AIs, with coexistence secured by property rights. People are able to mix in fascinating ways and have a plethora of unique experiences. The inegalitarian potential of such a society is one of its clear drawbacks. In the “Protector God” scenario, an extremely superintelligent AI lurks in the background like an invisible hand, acting as a god solely interested in human happiness. It hides its own presence so that humans will have a greater sense of freedom; the problem is that this freedom is illusory. This is still better than the final coexistence option, the Zookeeper, in which humans are kept in a comfortable zoo, eternally confined and provided with only the bare necessities.
There are also a number of cases in which superintelligence is prevented from developing. One is the Gatekeeper scenario, in which an advanced AI prevents more advanced AIs from developing. In this case, humans get to stay in charge of their future because technological advancement is curtailed by the Gatekeeper; depending on one’s perspective, this is both the upside and the downside of the scenario. The most likable choice, according to a poll Tegmark conducted, is the Egalitarian Utopia. It is similar to the Libertarian Utopia in that cyborgs, humans, and uploads live peacefully together, but in this case there is no superintelligent AI and no private property. Everyone receives a basic income and access to free software so that they can explore as they see fit. One potential downside is that this arrangement is inherently unstable and may yield a superintelligent AI anyway. Superintelligence may also be prevented by “Reversion” to a pre-technological society in which advanced tech is outlawed, or through a 1984-like totalitarian state that bans AI research with full-scale surveillance.
Finally, there is the set of scenarios in which humanity goes extinct. Two of these are worst-case scenarios. In one, we go extinct through our own malfeasance via a nuclear holocaust or some other attack with advanced weaponry. Tegmark refers to this as omnicide, the collective suicide of the human race (195), and lays out the current situation, in which the hydrogen bombs already in existence are capable of vast, unimaginable destruction. It is also possible, he reasons, that humans go extinct because of a race of conquering AIs or AI robots. The AIs could do this because they think we are a nuisance or a threat, they could do it accidentally, or they could do it out of “banality” because their goals significantly misalign with our own. In one final case, though, humans may go extinct slowly as they are replaced by descendants of another form: kinder AIs that don’t kill us but also don’t permit procreation, instead allowing for the rearing of robotic babies.
Though many of these scenarios may sound outlandish, keep in mind that Tegmark is not arguing for anything he thinks will happen. Instead, he is simply exploring possibilities and weighing the pros and cons of each. Tegmark and many others in his AI-safety circle, such as Australian philosopher Toby Ord, believe that the decisions humans make about our future could set us on particular paths toward any one of these distant possibilities. This, for Tegmark, is the existential importance of what might otherwise seem like childish speculation.