Why drones make us nervous

Nov 18, 2013


In his TED Talk, "The kill decision shouldn't belong to a robot," Daniel Suarez described the rise of drones, automated weapons and AI-powered intelligence-gathering tools. Here, he goes further, describing no less than a coming "automation revolution."

Drones are in the news these days. More than any other technology, they capture the zeitgeist of the early 21st century. The controversy over drones in combat and the proposed use of drones by law enforcement might make headlines, but there’s also growing concern as remotely piloted drones transition to semi- and fully autonomous roles — operating without direct human supervision.

Of course, not all drones are destined for war or surveillance. Advocates see a positive future for mobile robotics in precision agriculture, search-and-rescue, environmental monitoring, logistics, hazardous-material handling and mapping, with uses limited only by human imagination. Then there are the folks who just want to play with robots: the drone enthusiasts. These entrepreneurs and hobbyists point to a future that's not so frightening, one where humans figure out how to incorporate autonomous machines into our society without much drama.

But take the temperature of the general public on the subject of autonomous vehicles, and you will find unease. That’s one reason why the robotics industry avoids using the “D-word” and encourages newer, less emotionally freighted terms like “Unmanned Aircraft Systems.”

Yet, I would argue that autonomous drones and cars aren’t really where most of the automation is occurring. They’re just the physical manifestation of a much larger trend — the tip of a technological iceberg passing beyond humanity’s bow, and one that we’re rightly uneasy about.

What we’re concerned about is an automation revolution — every bit as transformational as the industrial revolution before it. The agent of this change is narrow AI software [1], a tool that can be leveraged to raise individual human whim to economies of scale. Technology is, after all, merely the physical manifestation of the human will, and when it comes to AI agents, that human can be digitally magnified a billionfold. Whether you’re a high-frequency Wall Street trader, a malware author, a medical researcher, a marketer, an astronomer, a dictator or a drone builder, narrow AI is the workhorse of the automation age. It is narrow AI software that imbues silicon with agency. And such narrow AI agents are increasingly everywhere in our society — a situation that risks tilting centuries-old human social arrangements on their head.

And still, it’s largely drones that get the press. Drones are a lightning rod for the automation revolution because they are its most visible manifestation. They can be pointed to overhead or spotted on the highways, and their growing use in various settings is plain to see.

Meanwhile, their virtual kin — innumerable and relatively invisible software agents — already manage broad swaths of human society as stand-ins for human actors. That transition has gone almost unnoticed. Software algorithms now handle our stock market trading, logistics, electrical power grid management, banking, communications, medical diagnosis, mapping analytics and much more. The human logic behind such decision-making has been codified into algorithms that greatly increase speed and efficiency.

That’s why there’s no turning back.

We are at the dawn of an Automation Age, and narrow (or weak) AI software agents are its hallmark. Narrow AI is distinctly different from the sort of human-level (or strong) AI we know from science fiction. Your narrow AI-powered GPS unit doesn’t contemplate the why of your proposed trip to Reseda. It simply creates the most efficient route to get there.
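The GPS example is narrow AI in miniature: a well-defined optimization solved mechanically, with no understanding of the trip's purpose. A minimal sketch of the idea, using Dijkstra's shortest-path algorithm over a hypothetical road network (the graph, junction names and travel times here are invented for illustration):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: finds the lowest-cost route to the goal
    without any notion of *why* the trip matters."""
    # Priority queue of (cost so far, node, path taken).
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None  # no route exists

# Hypothetical road network: travel times in minutes between junctions.
roads = {
    "Home":    {"Freeway": 10, "Surface": 5},
    "Freeway": {"Reseda": 15},
    "Surface": {"Reseda": 25},
}

print(shortest_route(roads, "Home", "Reseda"))
# (25, ['Home', 'Freeway', 'Reseda'])
```

The algorithm dutifully picks the freeway because 10 + 15 beats 5 + 25; it has no concept of Reseda, traffic, or the driver. That gap between competence and comprehension is what distinguishes narrow AI from the strong AI of science fiction.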

That efficiency is marvelous when it instantly delivers search results or alerts you to fraudulent use of your credit card, but not so marvelous when it makes widespread surveillance not only practical but downright cost-effective. Or when it creates a million-computer-strong botnet controlled by one or two people. So while the Singularity [2] will no doubt be an issue of concern to coming generations, for now we’ve got more pressing concerns — namely, how we’re going to grapple with highly capable, cost-effective, scalable, and yet relatively dumb narrow-AI automation proliferating all around us.

What will be the outcome for human society as it competes with sub-Singularity AI? That’s what the philosophers, technologists, sociologists, engineers, artists, politicians, activists, economists and many more must ponder and debate in coming years.

Automation — both in its cyber and robotics incarnations — is not going away. It is now a permanent fixture of our civilization. And even if modern industrial society were to break down, the survivors would be frantically working to get their computer networks up and running again as soon as possible. Their robots, too. Our machines are simply that useful. No — our narrow AI friends are here to stay.

Instead, just as our ancestors domesticated vicious dogs and dangerous large herbivores, so too will we humans need to safely domesticate narrow AI organisms. Doing so will involve building social structures to resist the power-centralizing effect of such easily multiplied, mindlessly obedient and inscrutable digital constructs. Failure to adjust to the rapid spread of narrow AI throughout society will come with stark consequences.

And those societies that attempt to resist progress, rejecting narrow AI automation, will be at a competitive disadvantage against those who successfully use it. On the other hand, those societies that implement automation without careful consideration of the consequences will be building an ecosystem for technological domination by the very few — because the same dynamic by which AIs increase productivity through centralized control can be used to undermine the checks and balances of democratic social systems.

But that is the charge of our times: To compete in the future we must learn to ingest increasing levels of software automation into the corpus of democratic life without fundamentally distorting the body politic. Previous generations had their challenges, and this appears to be ours.

And while we humans might not process facts with the swift precision of an optimized algorithm, one thing at which we excel is adaptation. And the sooner we understand the challenge presented to us by the automation age, the sooner we can start adapting to it.


[1] Narrow (or weak) AI can be defined as an information system designed to solve specific, reasonably well-defined problems. Whereas generalized (or strong) AI is an information system designed to autonomously learn new tasks and adapt to changing environments, narrow AI is unlikely ever to lead to human-level artificial intelligence because the tasks to which it is put are greatly simplified models of the real world.

[2] For the Amish among us, the Singularity is “… a theoretical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence that will ‘radically change human civilization, and perhaps even human nature.’” From Eden, Amnon; Moor, James; Søraker, Johnny; Steinhart, Eric, eds. (2013). Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer. p. 1.

Daniel Suarez is a former systems consultant and the author of sci-fi thrillers focused on technology-driven change, including Daemon, FreedomTM, Kill Decision and the upcoming Influx. Find Daniel on Twitter: @itsDanielSuarez; read an excerpt from Kill Decision, or read a Q&A with Daniel Suarez.