Roughly five dozen artificial intelligence researchers from five continents—including recognizable figures such as UC Berkeley’s Stuart Russell—last week issued an open letter objecting to the establishment of something called the Research Center for the Convergence of National Defense and Artificial Intelligence.
The center—established at Korea Advanced Institute of Science and Technology (KAIST), in cooperation with defense contractor Hanwha Systems—rang alarm bells because, in the researchers’ view, its purpose was to “accelerate the arms race to develop [autonomous] weapons.” The researchers therefore vowed that they would “boycott all collaborations with any part of KAIST until such time as the president of KAIST provides assurances, which we have sought but not received, that the center will not develop autonomous weapons lacking meaningful human control.” The researchers would not, they continued, “visit KAIST, host visitors from KAIST, or contribute to any research project involving KAIST.”
Soon after the open letter was issued, KAIST’s president stated that the institute indeed would refrain from research into weapons lacking meaningful human control. Did that end it? Not immediately. Toby Walsh, a boycott organizer from the University of New South Wales, was quoted as saying that the president’s statement left “some questions unanswered,” and that he intended to consult with his co-signatories before deciding on next steps. Consultation has now occurred and—as reported by Reuters—the boycott is over.
Meanwhile, according to Matthew Hutson of Science, at least one computer science researcher (Ronald Arkin of the Georgia Institute of Technology) found it “a bit extreme” to boycott an entire university because of research conducted at a single laboratory. Maybe Arkin had a point—but with this week’s UN meetings on autonomous weapons likely to accomplish little of note, perhaps only extreme actions have a chance to slow the development of “killer robots.”