In this epilogue, Tegmark shares the story of the founding and work of his nonprofit organization, the Future of Life Institute (FLI), which he co-founded with his wife, Meia Chita-Tegmark, and a Harvard student, Viktoriya Krakovna. Created in 2014, the institute was designed with the goal of using “technological stewardship” to improve quality of life. Much of FLI’s work is aimed at avoiding future catastrophes, whether from nuclear war, misdirected AI, or other causes.
Tegmark discusses a trip to the London Science Museum during which he becomes deeply emotional about the history of technological progress and the potential of a “poetically tragic” future in which humans are completely replaced by machines (320). He describes his resolve to focus on FLI and AI-safety research and mentions two conferences on AI safety, one in Puerto Rico in 2015 and another in Asilomar in 2017. Of the former, he writes, “Our goal was to shift the AI-safety conversation from worrying to working: from bickering about how worried to be, to agreeing on concrete research projects that could be started right away to maximize the chance of a good outcome” (321). The aim was to build consensus and unity, and Tegmark describes securing funding for future projects as the “ultimate moonshot triumph.”
Tegmark then discusses meeting Elon Musk, known at the time primarily for his roles at SpaceX and Tesla, and how Musk eventually donated a large sum of money to FLI. Tegmark writes that the announcement of this donation was the climax of the Puerto Rico conference, whose main purpose was to make AI safety more mainstream.
In the two years between the Puerto Rico and Asilomar conferences, there were significant advances in AI-safety research at DeepMind, Google, IBM, OpenAI, and elsewhere. Many people from both academia and industry came together for FLI’s Asilomar conference in 2017. The purpose of this conference, beyond extensive networking, was to generate a set of principles for AI research reflecting broad consensus across the field. The attendees produced 23 principles covering research issues, ethical values, and “longer-term” issues (331).
Tegmark concludes by asking readers to consider how they can make a positive contribution to the future, arguing that adopting a positive mindset is the first step. “Why sacrifice something you have,” he asks, “if you can’t imagine the even greater gain that this will provide? This means that we should be imagining positive futures not only for ourselves, but also for society and for humanity itself” (334). He notes that better, more democratic, and more egalitarian societies are more likely to produce an AI revolution with a positive outcome.