There’s a lot of controversy over robots these days. The NY Times reports, “Robots have once again gripped the nation’s imagination, stoking fears of displaced jobs and perhaps even a displaced human race.” Yesterday, the Atlantic ran a story asserting that people are freaking out, creating an “artificial crisis” over AI.
60 Minutes stirred the pot with a news segment that went viral when they interviewed MIT professors Erik Brynjolfsson and Andrew McAfee on how robots are partly to blame for our jobless recovery:
Andrew McAfee: Our economy is bigger than it was before the start of the Great Recession. Corporate profits are back. Business investment in hardware and software is back higher than it's ever been. What's not back is the jobs.
Steve Kroft: And you think technology and increased automation is a factor in that?
Erik Brynjolfsson: Absolutely.
Is this just an artificial crisis, or is AI something that we should take seriously?
If we focus exclusively on the hardware aspect of AI—that is, robotics—there are reasons to be skeptical. As the Atlantic explains, “let's calm our warm-blooded nerves by remembering that the current stock of humanoid robots is still remarkably primitive, as Brynjolfsson and McAfee acknowledge themselves. They look creepy. They struggle with people skills. They fall down stairs. They're bad at problem-solving. They're not very creative.”
This line of argument against the threat of AI is not uncommon. Unfortunately, it’s also very wrong.
In discussing the current or future impact of machine intelligence—whether on the economy, the stock market, or with military weapons, for example—it is important to distinguish between hardware and software. Take, for example, the millions of algorithms evolving in the stock market today. What do they look like? Or how about self-replicating cyberweapons like Stuxnet and Flame, used to attack Iran? How are they with people skills, problem solving, and creativity?
Another thing: It is terribly misleading to associate progress on AI with our ability to create something that looks, walks, or talks like us. This is where most people get side-tracked.
A good analogy that I’ve mentioned before is the historical attempt to master flight. Many well-meaning but unfortunate inventors once thought the key to flying was strapping on wings and imitating a bird. Given how much of a failure that turned out to be, we likewise shouldn’t judge the nature of AI by whether it resembles our physical likeness. Software, of course, is immaterial.
So, I repeat the question: Is this just an artificial crisis—as the Atlantic and others assert—or is AI something that we should take seriously?
As WSJ reporter Scott Patterson writes in one of my favorite books, Dark Pools: High-Speed Traders, A.I. Bandits, and the Threat to the Global Financial System (pick up a copy if you haven't already):
“The algorithms were changing so rapidly, devouring one another so viciously in the daily microsecond skirmishes of the Algo Wars, that the market seemed poised on the edge of either a mind-blowing evolutionary leap—or a cataclysmic implosion. Its own architects...could barely keep pace with the changes. It was a lab experiment in real time, with no turning back. Mathematicians, computer programmers, and physicists were conducting a grand experiment on the global financial system—one of the most chaotic, unpredictable forces on the planet..."
When we consider just how much influence the machine of global finance has over the world, the greatest threat posed by AI is not to replace our jobs locally, but to replace our worldwide influence in one of the most sensitive networks we've ever created—a network more strategic, concentrated, and vital than any military alliance between countries.
With money, trade, and the fate of nations now lying in the market’s hands, the real question we should be asking is: if AI takes over global finance, what does that mean for the rest of the world?