
Artificial intelligence: Can we control it?

By Alex Hearn | July 18, 2016

Nick Bostrom, the founding director of the Future of Humanity Institute at Oxford University, says the last invention humanity ever needs to make might just be the last choice humanity makes. In his 2014 book Superintelligence, Bostrom warned long and loud that advances in artificial intelligence (AI) could lead to machines that surpass humans in general intelligence and, perhaps, then find humanity superfluous. Other researchers have derided Bostrom’s warnings as alarmist, suggesting, as Financial Times innovation editor John Thornhill writes, “that we remain several breakthroughs short of ever making a machine that ‘thinks,’ let alone surpasses human intelligence.” But Bostrom’s views—which encompass both the obvious promise and potential danger of AI—have gained widely publicized support from Stephen Hawking, Bill Gates, and Elon Musk. In this piece, Thornhill describes Bostrom’s thinking at length and places it in the context of a wave of worldwide investment in AI research and development.


Publication Name: Financial Times
