Smarter Than Us

The Rise of Machine Intelligence

Chapter 3

What Is Intelligence? Can We Achieve It Artificially?

The track record for AI predictions is . . . not exactly perfect. Ever since the 1956 Dartmouth Conference launched the field of AI, predictions that AI would be achieved within the next fifteen to twenty-five years have littered the field, and unless we’ve missed something really spectacular in the news recently, none of them have come to pass.1

Moreover, some philosophers and religious figures have argued that true intelligence can never be achieved by a mere machine, which lacks a soul, or consciousness, or creativity, or understanding, or something else uniquely human. They don’t agree on what exactly AIs will forever lack, but they agree that it’s something.

Some claim that “intelligence” isn’t even defined, so the AI people don’t even know what they’re aiming for. When Shane Legg and Marcus Hutter set out to find a formal definition of intelligence, they found dozens of different ones. They synthesized these into “Intelligence measures an agent’s ability to achieve goals in a wide range of environments,” and Hutter developed a formal model, called AIXI, around this idea.2 According to this approach, a being is “intelligent” if it performs well in a certain set of formally specified environments, and AIXI performs the best of all. But is this really “intelligence”? Well, it still depends on your definition . . .
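
For readers curious about the formal model itself, Legg and Hutter’s measure can be stated compactly. What follows is a slightly simplified rendering of the definition from the paper cited in note 2: the “universal intelligence” \Upsilon of an agent \pi is its expected performance summed over a set E of computable environments, with simpler environments weighted more heavily:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here K(\mu) is the Kolmogorov complexity of the environment \mu (the length of the shortest program that describes it), and V_\mu^\pi is the expected reward the agent \pi accumulates in \mu. A high score requires doing well across many environments at once, which is precisely the “wide range of environments” of the informal definition; AIXI is the (uncomputable) agent that maximizes a measure of this kind.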

In one crucial way, Hutter’s approach lifts us away from this linguistic morass. It shifts the focus away from internal considerations (“Can a being of plastic and wires truly feel what it’s like to live?”) to external measurement: a being is intelligent if it acts in a certain way. For instance, was Deep Blue, IBM’s chess supercomputer, truly intelligent? Well, that depends on the definition. Could Deep Blue have absolutely annihilated any of us in a chess match? Without a doubt! And that is something we can all agree on. (Apologies to any chess Grandmasters who may be reading this; you would only get mostly annihilated.)

In fact, knowing what an AI does can be a lot more useful to us than pinning down what intelligence is. Imagine that a professor claimed to have the world’s most intelligent AI and, when asked what it did, responded indignantly, “Do? What do you mean do? It doesn’t do anything! It’s just really, really smart!” We might or might not be convinced by such rhetoric, but that machine is certainly not one we’d need to start worrying about. But if the machine started winning big on the stock market or crafting convincing and moving speeches, then, while we still might not agree that it’s “intelligent,” it certainly would be something to start worrying about.

Hence, an AI is a machine that is capable of matching or exceeding human performance in most areas, whatever its metaphysical status. So a true AI would be able to converse with us about the sex lives of Hollywood stars, compose passable poetry or prose, design an improved doorknob, guilt trip its friends into coming to visit it more often, create popular cat videos for YouTube, come up with creative solutions to the problems its boss gives it, come up with creative ways to blame others for its failure to solve the problems its boss gave it, learn Chinese, talk sensibly about the implications of Searle’s Chinese Room thought experiment, do original AI research, and so on.

When we list the things that we expect an AI to do (rather than what it should be), it becomes evident that the creation of AI is a gradual process, not an event that either has or hasn’t happened. We see sequences of increasingly sophisticated machines that edge closer to “AI.” One day, we’ll no longer be able to say, “This is something only humans can do.”

In the meantime, AI has been sneaking up on us. This is partially obscured by our tendency to reclassify anything a computer can do as “not really requiring intelligence.” Skill at chess was for many centuries the shorthand for deep intelligence; now that computers can do it much better than us, we’ve shifted our definition elsewhere.

This follows a historical pattern: The original “computers” were humans with the skills to do long series of calculations flawlessly and repeatedly. This was a skilled occupation and, for women, a reasonably high-status job.

When those tasks were taken over by electronic computers, the whole profession vanished, and the skills it demanded were downgraded to “mere” rote computation. The pattern repeats: tasks that once could be performed only by skilled humans get handed over to machines, and soon after, those tasks are retroactively redefined as “not requiring true intelligence.” Thus, despite the failure to produce a “complete AI,” great and consistent AI progress has been happening under the radar.

So lay aside your favorite philosophical conundrum! For some, it can be fascinating to debate whether AIs would ever be truly conscious, whether they could be self-aware, and what rights we should or shouldn’t grant them. But when considering AIs as a risk to humanity, we need to worry not about what they would be, but instead about what they could do.

  1. Stuart Armstrong and Kaj Sotala, “How We’re Predicting AI — or Failing To,” in Beyond AI: Artificial Dreams, ed. Jan Romportl et al. (Pilsen: University of West Bohemia, 2012), 52–75. The main results are also available online on the Less Wrong blog at http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/.

  2. Shane Legg and Marcus Hutter, “A Universal Measure of Intelligence for Artificial Agents,” in IJCAI-05: Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, UK, July 30–August 5, 2005, ed. Leslie Pack Kaelbling and Alessandro Saffiotti (Lawrence Erlbaum, 2005), 1509–1510.