
Tackling near and far AI threats at once

By Seth Baum | October 6, 2016

Artificial intelligence experts are divided over the threat of superintelligent computers. One group argues that even though these machines may be decades or centuries away, the scale of the catastrophe they could cause is so great that we should act now to prevent them from taking over the world and killing everyone. Another group dismisses the fear of superintelligence as speculative and premature, preferring to focus on existing and near-future AI. In fact, these two sides may have more common ground than they think. We do not have to choose between long-term and short-term AI threats; there are actions we can take now that address both at once.

A superintelligent computer would be one that was smarter than humans across a wide range of cognitive domains. Concern about superintelligence dates to Irving Good’s 1966 paper “Speculations concerning the first ultraintelligent machine,” which posited that such AI would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” More recently, philosopher Nick Bostrom (in his book Superintelligence) and tech celebrities like Bill Gates and Elon Musk have thrust the issue into the spotlight, arguing that superintelligence poses a grave danger we should worry about right now.

All this focus on superintelligence, though, has been poorly received by many people in the AI community. Existing AI technology is nowhere close to being superintelligent, but it does pose current and near-term challenges. That makes superintelligence at best irrelevant and at worst “a distraction from the very real problems with artificial intelligence today,” as Microsoft principal researcher Kate Crawford put it in a recent op-ed in The New York Times. Near-term AI problems include the automation of race and gender bias, excessive violence from military robots, and injuries from robots used in medicine, manufacturing, and transportation, particularly self-driving cars.

If short-term and long-term AI were fundamentally distinct issues, it might be worth debating which is more worthy of attention. They are not so distinct, though, and there are at least three ways to address them concurrently.

First, the norms of the AI community need to move from an emphasis on technology for its own sake to an emphasis on technology for the benefit of society. The distinguished AI researcher Stuart Russell calls for a shift so that “alignment of AI systems with human objectives is central to the field.” A lot of current AI work focuses on tasks like image and speech recognition, which do have applications that can benefit society, but the research itself is ultimately aimed at improving AI capabilities rather than at any larger goal. The same holds for work that aims to build superintelligence without careful attention to how to prevent it from getting out of control: some researchers concentrate mainly on the task at hand rather than on its wider consequences. As long as the field is focused on building AI for its own sake, those who worry about short-term issues and those who worry about long-term ones both face an uphill battle.

Second, following from the shift in norms described above, there should be more technical research on how to make AI safer and more beneficial for society. For example, in the near term, society may want to prevent drones programmed to kill terrorists from also killing civilians, while in the long term, we may want to prevent a superintelligence designed to trade on the stock market from physically attacking humans. In both cases, we want to avoid a negative side effect caused by a machine that was designed to achieve some other goal. Avoiding negative side effects is an open technical challenge in AI research.
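To make the side-effect problem concrete, one illustrative formulation explored in AI safety research is to subtract a penalty for unintended changes to the world from the agent's task reward. The sketch below is a toy illustration of that idea, not a method described in this article; the function names, the "do nothing" baseline, and the penalty weight are all hypothetical simplifications.

```python
def task_reward(outcome):
    """Hypothetical reward for the assigned task (say, trading profit)."""
    return outcome.get("profit", 0.0)

def side_effect_penalty(outcome, baseline):
    """Count features of the world that differ from a 'do nothing' baseline,
    ignoring the feature the agent was actually asked to change."""
    return sum(
        1.0
        for key in baseline
        if key != "profit" and outcome.get(key) != baseline[key]
    )

def combined_objective(outcome, baseline, penalty_weight=10.0):
    """Task reward minus a weighted penalty for unintended changes.
    Choosing the weight and the baseline well is part of why this
    remains an open research problem."""
    return task_reward(outcome) - penalty_weight * side_effect_penalty(outcome, baseline)

# Toy example: the trading agent made money but also cut power to a hospital.
baseline = {"profit": 0.0, "hospital_power": "on", "market_open": True}
outcome = {"profit": 5.0, "hospital_power": "off", "market_open": True}
print(combined_objective(outcome, baseline))  # 5.0 - 10.0 * 1 = -5.0
```

Even in this toy form, the hard questions are visible: which features count as side effects, what the baseline should be, and how heavily to weight the penalty. Those are research questions, not implementation details, which is exactly why more technical work in this direction would serve both the short-term and long-term agendas.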

Third, governments should create policies to address AI issues. There is already some de facto AI policy in place, for example, legacy transportation laws that apply to self-driving cars. However, novel applications of AI create a need for dedicated regulations. Unfortunately, success in this area is impeded by weak links between government and the scientists and entrepreneurs developing artificial intelligence. In contrast to other fields, such as biotechnology and nuclear engineering, computer science has less of a tradition of policy engagement. The problem is compounded by the libertarian ideology that pervades much of the tech world, fostering the outlook that less regulation is always better. The lucrative salaries offered by private AI companies, meanwhile, tend to deter experts from going into government. As a result of all this, governments have little capacity to make smart policy on artificial intelligence. This hurts society, which needs laws that guide AI in more beneficial directions, and could also hurt the AI sector by leading to onerous restrictions where none are needed. The better the links between government and those who work in the field, the better policy will be, which will help mitigate both short- and long-term threats.

Instead of debating which is the more serious problem—a far-future superintelligence that could kill us all versus short-term AIs that perpetuate social ills—we can, and should, tackle both.

