Smarter Than Us

The Rise of Machine Intelligence

Chapter 4

How Powerful Could AIs Become?

So it’s quite possible that AIs will eventually be able to accomplish anything that a human can. That in itself is no cause for alarm: we already have systems that can do that—namely, humans. And if AIs were essentially humans, but with a body of silicon and copper rather than flesh and blood, this might not be a problem for us. This is the scenario in the many “friendly robot” stories: the robot is the same as us, deep down, with a few minor quirks and special abilities. Once we all learn to look beyond the superficial differences that separate us, everyone can hold hands and walk together toward a rosy future of tolerance and understanding.

Unfortunately, there is no reason to suspect that this picture is true. We humans are fond of anthropomorphizing. We project human characteristics onto animals, the weather, and even rocks. We are also universally fond of stories, and relatable stories require human (or human-ish) protagonists with understandable motivations. And we enjoy conflict when the forces are somewhat balanced, where it is at least plausible that any side will win. True AIs, though, will likely be far more powerful and far more inhuman than any beings that have populated our stories.

We can get a hint of this by looking at the skills of our current computers. Once they have mastered a skill, they generally become phenomenally good at it, extending it far beyond human ability. Take multiplication, for instance. Professional human calculators can multiply eight-digit numbers together in about fifty seconds; supercomputers can do this millions of times per second. If you were building a modern-day Kamikaze plane, it would be a mistake to put a human pilot in it: you’d just end up with a less precise cruise missile.
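The scale of that gap is easy to check on any ordinary machine. A minimal Python sketch (the timing loop and figures are illustrative, and an interpreted language is far slower than the compiled code supercomputers run):

```python
import time

def multiplications_per_second(n=1_000_000):
    """Roughly benchmark eight-digit multiplications on one CPU core."""
    a, b = 12345678, 87654321  # two eight-digit numbers
    start = time.perf_counter()
    for _ in range(n):
        a * b
    elapsed = time.perf_counter() - start
    return n / elapsed

rate = multiplications_per_second()
# A record-holding human calculator manages roughly one such product
# per fifty seconds, i.e. about 0.02 per second.
print(f"~{rate:,.0f} multiplications/second, "
      f"about {rate / 0.02:,.0f}x a champion human calculator")
```

Even this naive loop in a slow interpreted language lands millions of multiplications per second; optimized numerical code widens the gap by several more orders of magnitude.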

It isn’t just that computers are better than us in these domains; it’s that they are phenomenally, incomparably better than us, and the edge we’ve lost will never be regained. The last example I could find of a human beating a chess computer in a fair game was in 2005.1

Computers can’t reliably beat the best poker players yet, but it’s certain that once they can do so (by reading microexpressions, figuring out optimal betting strategies, etc.) they will quickly outstrip the best human players. Permanently.

In another field, we now have a robot named Adam that in 2009 became the first machine to formulate scientific hypotheses and propose tests for them—and it was able to conduct experiments whose results may have answered a long-standing question in genetics.2 It will take some time before computers become experts at this in general, but once they are skilled, they will soon become very skilled.

Why is this so? Mainly because of focus, patience, processing speed, and memory. Computers far outstrip us in these capacities; when it comes to doing the same thing a billion times while keeping all the results in memory, we don’t even come close. What skill doesn’t benefit from such relentless focus and work? When a computer achieves a reasonable ability level in some domain, superior skill isn’t far behind.
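A toy version of “do the same thing millions of times while keeping every result in memory”: exhaustively computing Collatz chain lengths, caching each intermediate answer so it is never recomputed. A human could not sustain this kind of repetition; a laptop finishes in seconds. (The task and figures here are my own illustration, not from the text.)

```python
def collatz_steps(limit=500_000):
    """Chain lengths (steps to reach 1) for every integer below limit."""
    steps = {1: 0}  # every result kept in memory and reused
    for n in range(2, limit):
        chain = []
        m = n
        while m not in steps:       # follow the sequence until a cached value
            chain.append(m)
            m = m // 2 if m % 2 == 0 else 3 * m + 1
        base = steps[m]
        # walk back up the chain, recording each new result
        for i, v in enumerate(reversed(chain), start=1):
            steps[v] = base + i
    return steps

steps = collatz_steps()
longest = max(range(2, 500_000), key=steps.__getitem__)
print(longest, steps[longest])  # the starting value with the longest chain
```

The point is not the particular problem but the working style: relentless, error-free repetition with perfect recall of every prior result, which is precisely the regime where computers dominate.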

Consider what would happen if an AI ever achieved the ability to function socially—to hold conversations with a reasonable facsimile of human fluency. For humans to increase their social skills, they need to go through painful trial-and-error processes, scrounge hints from more articulate individuals or from television, or try to hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment, and with perfect memory. But it could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. Imagine a human who, every time they opened their mouth, had spent a solid year pondering and researching whether their response would be maximally effective. That is what a social AI would be like.

With the ability to converse comes the ability to convince and to manipulate. With good statistics, valid social science theories, and the ability to read audience reactions in real time and with great accuracy, AIs could learn how to give the most convincing and moving of speeches. In short order, our whole political scene could become dominated by AIs or by AI-empowered humans (somewhat akin to how our modern political campaigns are dominated by political image consultants—though AIs would be much more effective). Or, instead of giving a single speech to millions, the AI could carry on a million individual conversations with the electorate, swaying voters with personalized arguments on a plethora of hot-button issues.

This is not the only “superpower” an AI could develop. Suppose an AI became adequate at technological development: given the same challenge as a human, with the same knowledge, it could suggest workable designs and improvements. But the AI would soon become phenomenally good: unlike humans, the AI could integrate and analyze data from across the whole Internet. It would do research and development simultaneously in hundreds of technical subfields and relentlessly combine ideas between fields. Human technological development would cease, and AI or AI-guided research technologies would quickly become ubiquitous.

Alternatively, or additionally, the AIs could become skilled economists and CEOs, guiding companies or countries with an intelligence no human could match. Already, relatively simple algorithms make more than half of stock trades,3 and humans barely understand how they work—what returns on investment could be expected from a superhuman AI let loose in the financial world?

If an AI possessed any one of these skills—social abilities, technological development, economic ability—at a superhuman level, it is quite likely that it would quickly come to dominate our world in one way or another. And as we’ve seen, if it ever developed these abilities to the human level, then it would likely soon develop them to a superhuman level. So we can assume that if even one of these skills gets programmed into a computer, then our world will come to be dominated by AIs or AI-empowered humans. This doesn’t even touch upon the fact that AIs can be easily copied and modified or reset, or that AIs of different skills could be networked together to form “supercommittees.” These supercommittees would have a wide variety of highly trained skills and would work together at phenomenal speeds—all without those pesky human emotions and instincts that can make human committees impotent morasses of passive-aggressive social conflict.

But let’s not conclude that we are doomed just yet. After all, the current leaders of Russia, China, and the United States could decide to start a nuclear war tomorrow. But just because they could, doesn’t mean that they would. So would AIs with the ability to dominate the planet ever have any “desire” to do so? And could we compel them or socialize them into good behavior? What would an AI actually want?

  1. David Levy, “Bilbao: The Humans Strike Back,” ChessBase, November 22, 2005.

  2. Ross D. King, “Rise of the Robo Scientists,” Scientific American 304, no. 1 (2011): 72–77, doi:10.1038/scientificamerican0111-72.

  3. Based on statistics for the year 2012 from TABB Group, a New York and London-based capital markets research and strategic advisory firm.