Earlier this year, 116 technology luminaries signed an open letter (PDF) imploring the United Nations to ban “lethal autonomous weapons systems,” warning that they would “permit armed conflict to be fought at a scale greater than ever.” According to the Independent, it “marks the first time that artificial intelligence (AI) experts and robotics companies have taken a joint stance on the issue.”
Not all observers are as concerned; Andrew Ng, until recently Baidu’s chief scientist, concludes that “worrying about killer robots is like worrying about overpopulation on Mars—we’ll have plenty of time to figure it out.”
In the early years of the 21st century, few topics have generated more intense interest, or elicited more spirited debate, than AI, beginning with the very understanding of the term: one observer quipped this March that “there are about as many definitions of AI as researchers developing the technology.” Robbie Whiting, a founder of the brand consulting firm Junior, contends that “AI is not a buzzword, and it is going to change the world.”
While one should use the term rigorously and be mindful of hyperbole, AI is already reshaping domains as varied as transportation (including autonomous vehicles), finance, and health care. Facebook’s chief technology officer believes it “can solve problems that scale to the whole planet.” Elon Musk, meanwhile, contends that it poses “a fundamental risk to the existence of human civilization.” Most technologies are neither intrinsically beneficial nor harmful; instead, we need to consider who is using them, and to what ends.
There is little dispute that AI is progressing far more rapidly than efforts to comprehend its complex nature, numerous dimensions, and far-reaching national security consequences. A recent report (PDF) by Gregory Allen and Taniel Chan, then graduate students at Harvard University, called on the U.S. government to establish “something like a RAND Corporation for AI.”
As RAND researchers, we subscribe to the Harvard team’s comparison of the challenge of AI to that of nuclear weapons; during the Cold War, RAND thinkers revolutionized how we think about security, deterrence, and survival.
Consider four arenas in which AI’s net impact is likely to be significant but uncertain.
Kai-Fu Lee, the chairman of Sinovation Ventures, assesses that AI “is poised to bring about a wide-scale decimation of jobs” while concentrating an ever-greater proportion of wealth into the hands of companies that develop and/or adopt it. Others respond that such fears have attended every disruptive technology, dating back to the printing press in the 15th century.
The Economist, for its part, reassures readers that AI “is creating demand for work,” with growing numbers of individuals around the world “supplying digital services online.” Which companies and countries will flourish in the AI era? Which sectors will be eliminated, modified, and/or created? How will the nature of work change?
Proponents of armed drones contend that such weapons can strike targets with far greater accuracy than humans; the larger a role they play in combat theaters, the thinking runs, the less frequently service members would have to deploy into harm’s way.
But what if such weapons become sufficiently advanced that they operate independently, without human direction? Would removing humans from the conduct of war unleash another, even less constrained, arms race?
An open letter published during the 2015 International Joint Conference on Artificial Intelligence warned that autonomous weapons “require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.” Will an era of increasingly automated conflict be, on balance, more peaceful or more violent?
RAND researchers have called for an analytical framework and an international approach to address the use of long-range armed drones in counter-terrorism and targeted killing.
Policymakers can easily be overwhelmed by the number of choices they must make and the range of stimuli they encounter—exponentially greater in today’s era of social media than in decades past. Such information overload would be difficult to manage during one crisis, let alone multiple crises.
A recent article in POLITICO Magazine broached the idea of having a computer “chug through all the decisions a president has to make—not to make the final choices itself, but to help guide the human commander in chief.”
But while AI carries an aura of infallibility, a recent RAND study highlights the risk of algorithmic biases in filtering the news we consume, influencing the dispensation of criminal justice, and even affecting the provision of Social Security benefits. Which decisions should be entrusted to AI? Which should remain in human hands? Which should be given to human-AI teams?
The world has grown accustomed to AI that can perform spectacular computational feats and defeat human beings in popular board games (it was just over 20 years ago that IBM’s supercomputer Deep Blue famously defeated chess grandmaster Garry Kasparov). How will its continued progression impinge on humans’ creative space?
AI researcher Jesse Engel believes it will “transform the creative process of humans…by augmenting it with smarter tools that enable new avenues of expression.” Others are not as sanguine. Atlantic journalist Adrienne LaFrance notes that AI can already “flirt,” “write novels,” and “forge famous paintings with astounding accuracy.” What does it mean to be creative? Even more basically, what does it mean to be human?
Discussions of AI often veer toward extremes, whether the promise of a utopia free of human suffering or the danger of a dystopia where robots enslave their human creators. More balanced, rigorous analysis is needed to help shape policies that mitigate its risks and maximize its benefits. Steps should be taken to overcome concerns that AI will outpace the ability of the government, and society, to adapt.
How might AI affect vital U.S. national interests? Which types of AI, if any, should be deemed strategic technologies, subject to government constraints? Where should market forces play the biggest role, where are existing policy frameworks adaptable for new technologies, and where might new approaches (within this country or internationally) make sense?
While AI still makes for great science fiction, these questions are real, and they are pressing.
-Marjory S. Blumenthal, Andrew Parasiliti, Ali Wyne