Artificial Stupidity

By Thomas Gaulkin | September 11, 2018

I learned a few things from reading an excerpt from Yuval Noah Harari’s book, 21 Lessons for the 21st Century, published in the October issue of The Atlantic. One is that it took a Google machine-learning program just four hours to teach itself chess, once the pinnacle of centuries of human intellectual effort, and to master it well enough to easily defeat the top-ranked computer chess engine in the world.

Another is that artificial intelligence systems may be inherently anti-democratic and anti-human. New heights of computing power and data processing make it more efficient to centralize systems in authoritarian governments, Harari says, and will render humans increasingly irrelevant. “By 2050,” he writes, “a useless class might emerge, the result not only of a shortage of jobs or a lack of relevant education but also of insufficient mental stamina to continue learning new skills.”

In other words, we might just be too dumb to keep up.

Harari thinks we can avoid the worst outcomes by encouraging decentralization of data and continuing to work on our own intelligence as much as we do on the artificial kind. “If we invest too much in AI and too little in developing the human mind, the very sophisticated artificial intelligence of computers might serve only to empower the natural stupidity of humans, and to nurture our worst (but also, perhaps, most powerful) impulses.”

But it might be easier to simply limit the cognitive abilities of artificial intelligences to more human levels. In a white paper published online, two computer scientists who study AI safety break down 14 known constraints on human intelligence and suggest which of them would work best to limit an AI’s most harmful tendencies. “Humans have clear computational constraints (memory, processing, computing, and clock speed) and have developed cognitive biases,” the authors write. “In order to build a safe [artificial general intelligence], some of those biases may need to be replicated.”
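The general idea behind such constraints can be sketched in code. The class and parameter names below are illustrative inventions, not anything taken from the white paper: a minimal toy wrapper that forgets old observations (bounded memory), throttles how often it may act (a capped “clock speed”), and limits how far ahead it may reason (shallow search).

```python
import time
from collections import deque


class ConstrainedAgent:
    """Toy sketch of an agent with deliberately human-like limits:
    bounded working memory, a capped decision rate, and shallow lookahead.
    (Hypothetical illustration only; not from the paper.)"""

    def __init__(self, memory_limit=7, max_actions_per_sec=2.0, search_depth=3):
        # Working memory: once full, the oldest observation is forgotten.
        self.memory = deque(maxlen=memory_limit)
        # Minimum seconds between actions -- the agent's "clock speed".
        self.min_interval = 1.0 / max_actions_per_sec
        # Maximum lookahead depth the agent's planner is allowed to use.
        self.search_depth = search_depth
        self._last_action = 0.0

    def observe(self, fact):
        # Appending past the limit silently drops the oldest fact.
        self.memory.append(fact)

    def act(self, choose):
        # Throttle the decision rate to the configured interval.
        wait = self.min_interval - (time.monotonic() - self._last_action)
        if wait > 0:
            time.sleep(wait)
        self._last_action = time.monotonic()
        # The planner sees only what fits in working memory,
        # and only to a fixed search depth.
        return choose(list(self.memory), self.search_depth)
```

A caller would pass its own decision function to `act`; the wrapper guarantees that the function never sees more context, or runs more often, than the configured limits allow.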

More than a few computer scientists doubt that solution can succeed, since a mediocre artificial intelligence might still find ways to overcome the limits we place on it. And even if it did work, other studies suggest that impaired robots are fully capable of leading humans to their doom anyway.

One recent experiment reported in the Washington Post demonstrated that elementary school students are subject to peer pressure from robots. When a group of children was posed a simple question alongside robots programmed to answer incorrectly, the children copied the robots’ wrong answers three-quarters of the time.

An earlier study (with the reassuring title “Overtrust of Robots in Emergency Evacuation Scenarios”) revealed adults will look to robots for guidance even when they know those robots have already made potentially lethal mistakes, like leading them toward fire instead of away from it.

Both children and adults may be responding in these unfortunate ways because of something called “automation bias” — a general belief that robots are inherently smarter than humans. “They imbue them with all these amazing and fanciful properties,” Alan Wagner, one of the authors of the adult study, told the Post. “It’s a little bit scary.”

For now, these findings remain mostly in the realm of lab results and speculation. Data dictatorships and robots that eclipse or mislead humans can’t undermine our species yet. But the threat is real. Online bots, often deliberately designed to seem less intelligent (or more humanlike?), are already influencing key decisions made by human populations—from swinging elections to rejecting vaccines.

And we haven’t even gotten to the existing threats to our own intelligence. According to a paper published last month in the Proceedings of the National Academy of Sciences, prolonged exposure to air pollution in China is causing significant cognitive deficits. Correlating air quality indices with results from standardized tests taken throughout the country in 2010 and 2014, the researchers found a widespread drop in test scores, especially on the verbal portion. They estimate that, on average, improving air quality in China to current US Environmental Protection Agency standards would raise scores by the equivalent of a full year of education.

So maybe the more urgent problem isn’t how to keep robots from becoming too smart, but how to keep humans from becoming too dumb.



