Max Tegmark

Life 3.0: Being Human in the Age of Artificial Intelligence

Nonfiction | Book | Adult | Published in 2017

Chapter 3 Summary & Analysis: “The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs”

Chapter 3 is a survey analysis of potential near-future scenarios. It focuses on the advancement of artificial intelligence and its impact on various sectors of human life, including warfare, economics, the future of hiring, law, and healthcare.

The chapter begins with a guiding existential question: “What does it mean to be human in the present day and age?” (82). Tegmark proceeds to discuss breakthroughs in modern AI development, especially in a field called deep reinforcement learning. Such AIs, like those from DeepMind, learn by receiving positive reinforcement for achieving tasks related to their programmed goals. When discussing the success of another AI that has advanced through deep reinforcement learning, AlphaGo, Tegmark notes that the achievements of such AIs often look like they’re based on intuition. For Tegmark, the “marriage of intuition and logic” provides some AI with the ability to be genuinely creative. He believes that there are many examples of AI taking “baby steps” toward appearing completely human in its ability to converse (90).

Tegmark predicts that the near future will be very different and will challenge much of what it means to be human even before AGIs reach human levels of ability in complex situations. He explores how societies can update their laws, weaponry, and economic futures, noting that “the challenges transcend all traditional boundaries—both between specialties and between nations” (93). Current and future technological breakthroughs will require novel ways of responding to crises. Tegmark believes that despite the innumerable benefits AI will offer, there could be accidents on an extremely large scale. When dealing with potential risks, “we should become more proactive than reactive” (94). He notes that there are four interconnected steps for technical protocol in AI safety: verification, validation, security, and control (94).

He then discusses sectors of human life that will be transformed by AI, beginning with space exploration. The first phase of AI safety protocol, verification, is essential in space exploration because the slightest oversight can cause a mission failure. Tegmark briefly discusses financial markets and the need for validation: Whereas verification makes sure something is constructed accurately, validation questions whether the right kind of machine was created. Tegmark notes that in manufacturing, AI robots have helped to radically reduce deaths in factories, despite the occasional fatal accident. These accidents, he believes, could have been avoided with better validation systems. He also discusses the safety benefits of self-driving cars but notes that in this instance, verification and validation need to be supplemented with control, i.e., the “ability for a human operator to monitor the system and change its behavior if necessary” (99). He continues this discussion of control in a section on the energy sector. In the past, several accidents in energy could have been avoided by better interfaces; machines must be made to interact well with the humans who operate them, or there could be communication breakdowns in control.

He notes that there have been significant advances in healthcare because of AI involvement, which is rapidly approaching human-level ability in some areas of diagnosis. Robots could also become great surgeons. He makes the case for a “moral imperative for developing better AI for medicine” since the number of deaths due to hospital care (including damaged or faulty equipment) is staggeringly high (102). Then, in a section on the communications industries, he notes the relevance of the fourth element of AI safety protocol, security. Security is aimed at preventing “deliberate malfeasance,” that is, the intentional bad action of criminals (103). He notes the importance of making AIs extremely secure before critical tasks can be trusted to them. Maintaining AI security is a challenge developers have not yet mastered: He writes, “In the ongoing computer-security arms race between offense and defense, there’s so far little indication that defense is winning” (104).

Turning to law, Tegmark notes the possibility of “robojudges” that, as a matter of mere mathematical function, determine cases without the impediment of human biases or errors. He thinks that this could finally ensure real equality under the law (105). He then discusses the importance of having more technologically minded people in the legal community, as well as potential legal controversies, like the debate between privacy and freedom of information. We may also find ourselves asking one day what sort of rights various machines should be granted. For instance, should a self-driving car be granted its own insurance policy? “If it sounds far-off,” he warns, “consider that most of our economy is already owned by another form of non-human entity: corporations” (109).

Tegmark also discusses weapons systems from a variety of angles, noting the rising impact of drone warfare and that drones can use algorithms to assess targets. Human-operated drones, he observes, will not ultimately be as efficient as autonomously flown drones. He recounts a couple of sobering cases in which near nuclear crises were barely averted thanks to the heroism of a lone person. This, presumably, is meant to indicate the ongoing need for human control over systems that may malfunction or misinterpret data. Tegmark’s biggest moral concern with weaponry is the generation of another arms race. He includes an open letter that he penned with British computer scientist Stuart Russell regarding their concerns with autonomous weapons. He states that many experts have noted the value in banning biological and chemical weapons and discusses meeting Henry Kissinger. Another area of concern is cyberwarfare, which may become more powerful and dangerous as societies become more completely automated.

Tegmark notes that the labor market will also be radically changed. The more robots take over the workplace, the more humans are liberated from jobs of bare drudgery. Despite this, Tegmark is concerned with rising levels of income inequality and believes that advancing technology could make this problem worse if it is not addressed in the right way. Digitalization adds to the “edge of capital over labor” as fewer employees are required to maintain and move products (120). Given the changing labor market, Tegmark says that he’s encouraging his own children to pursue careers with skills that “machines are currently bad at” so that they’ll be more likely to have career stability (121). Eventually, humans may have a harder time finding work in any field. He reflects on historical examples, including the shift from blue-collar to white-collar jobs and the replacement of horses by cars, which, ominously (given the parallel he’s drawing), led to the slaughter of millions of horses.

He also investigates the idea of universal basic income. As he notes has already become the case with social media, encyclopedias, videoconferencing, and more, “technological progress can end up providing many valuable products and services for free even without government intervention” (127), and he expects this trend to continue. Tegmark believes there is both a moral and practical argument for greater financial equality and the democracy in which it results. He views liberation from work as potentially very meaningful: “once one breaks free of the constraint that everyone’s activities must generate income, the sky’s the limit” (129).
