Smarter Than Us

The Rise of Machine Intelligence

Chapter 10

A Summary

  1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.

  2. Once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter.

  3. An AI need only be superhuman in one of a few select domains for it to become incredibly powerful (or empower its controllers).

  4. To be safe, an AI will likely need to be given an extremely precise and complete definition of proper behavior, but constructing such a definition is very hard.

  5. The relevant experts do not seem poised to solve this problem.

  6. The AI field continues to be dominated by those invested in increasing the power of AI rather than making it safer.

So all is doomed and we’re heading to hell in a digitally engineered handbasket?

Well, not entirely. Some effort has been made to make the AI transition safer. Kudos must be given to Eliezer Yudkowsky and Nick Bostrom, who saw and understood the risks early on. Yudkowsky uses the term “Friendly AI” to describe an AI which does what we want even as it improves its own intelligence. In 2000 he cofounded an organization now called the Machine Intelligence Research Institute (MIRI), which holds math research workshops tackling open problems in Friendly AI theory. (MIRI also commissioned and published this book.)

Meanwhile, Nick Bostrom founded the Future of Humanity Institute (FHI), a research group within the University of Oxford. FHI is dedicated to analyzing and reducing all existential risks—risks that could drive humanity to extinction or dramatically curtail its potential, of which AI risk is just one example. Bostrom is currently finishing a scholarly monograph about machine superintelligence, to be published by Oxford University Press. (This book’s author currently works at FHI.)

Together MIRI and FHI have been conducting research in technological forecasting, mathematics, computer science, and philosophy, in order to have the pieces in place for a safe transition to AI dominance. They have achieved some notable successes, clarifying terms and coming up with proposals that seem to address certain key parts of the problem of precisely specifying morality.1 And both have organized conferences and other events to spread the word and draw in the attention of other researchers.

Some other researchers have also made notable contributions. Steve Omohundro has laid out the basic “drives” (including the urge toward efficiency, increased power, and increased resources) likely to be shared by most AI designs,2 and Roman Yampolskiy has been developing ideas for safely containing AIs.3 David Chalmers’s philosophical analysis of rapidly improving AIs has laid the foundation for other philosophers to start working on these issues,4 and economist Robin Hanson has published several papers on the economics of a world where intelligent beings can be cheaply copied.5 The new Centre for the Study of Existential Risk at Cambridge University will no doubt contribute its own research to the project. For an overview of much of this work, see James Barrat’s popular book Our Final Invention.6

Still, compared with the resources dedicated to combating climate change, or even to building a slightly better type of razor,7 the efforts devoted to this problem are woefully inadequate for a challenge of this difficulty.

  1. See MIRI’s work on the fragility of values and FHI’s work on the problem of containing oracles: Luke Muehlhauser and Louie Helm, “The Singularity and Machine Ethics,” in Singularity Hypotheses: A Scientific and Philosophical Assessment, ed. Amnon Eden et al., The Frontiers Collection (Berlin: Springer, 2012); Stuart Armstrong, Anders Sandberg, and Nick Bostrom, “Thinking Inside the Box: Controlling and Using an Oracle AI,” Minds and Machines 22, no. 4 (2012): 299–324, doi:10.1007/s11023-012-9282-2.

  2. Stephen M. Omohundro, “The Basic AI Drives,” in Artificial General Intelligence 2008: Proceedings of the First AGI Conference, ed. Pei Wang, Ben Goertzel, and Stan Franklin, Frontiers in Artificial Intelligence and Applications 171 (Amsterdam: IOS, 2008), 483–492.

  3. Roman V. Yampolskiy, “Leakproofing the Singularity: Artificial Intelligence Confinement Problem,” Journal of Consciousness Studies 19, nos. 1–2 (2012): 194–214.

  4. David John Chalmers, “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies 17, nos. 9–10 (2010): 7–65.

  5. Robin Hanson, “Economics of the Singularity,” IEEE Spectrum 45, no. 6 (2008): 45–50, doi:10.1109/MSPEC.2008.4531461; Robin Hanson, “The Economics of Brain Emulations,” in Unnatural Selection: The Challenges of Engineering Tomorrow’s People, ed. Peter Healey and Steve Rayner, Science in Society (Sterling, VA: Earthscan, 2009).

  6. James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era (New York: Thomas Dunne Books, 2013).

  7. $750 million to develop the Mach3 alone (and another $300 million to market it). Naomi Aoki, “The War of the Razors: Gillette–Schick Fight over Patent Shows the Cutthroat World of Consumer Products,” Boston Globe, August 31, 2003.