The possibility and risks of artificial general intelligence

By Phil Torres, April 29, 2019

This article surveys why artificial general intelligence (AGI) could pose an unprecedented threat to human survival. If we fail to solve the "control problem" before the first AGI is created, the default outcome could be total human annihilation. Since an AI arms race would almost certainly compromise safety precautions during AGI research and development, such a race could prove fatal not just to states but to the entire human species. In a phrase, an AI arms race would be profoundly foolish: it could compromise the entire future of humanity.
