
Is your robot up to it? DARPA wants machines that can explain how conditions affect their own performance

February 5, 2019

Credit: Matt Field. Based in part on a photo by Rico Shen CC BY-SA 4.0.

As a competent marriage counselor might say, the foundation of any good relationship is communication—a principle the US military’s research wing is taking to heart with a new program aimed at developing autonomous systems that are not just machines, but “trusted partners” with greater self-awareness.

Autonomous technologies such as driverless cars cannot tell the humans they interact with much about how well they perform in different conditions. Humans can: a coach, for instance, might let a player who reports a sore arm sit out a game. Similarly, during bad weather, a passenger might choose a driverless car that performs better in the rain over one that doesn't, if only the cars could communicate their abilities. The Defense Advanced Research Projects Agency (DARPA) aims to develop that skill.

“If the machine can say, ‘I do well in these conditions, but I don’t have a lot of experience in those conditions,’ that will allow a better human-machine teaming,” DARPA program manager Jiangying Zhou says in a release about the new Competency-Aware Machine Learning program.

Machine learning is a subset of artificial intelligence in which computers apply algorithms to data and progressively improve at tasks such as prediction or classification. Many in the field refer to the technology behind this process as a “black box”: machine learning systems often cannot readily explain the “why” behind a given result. DARPA’s new program, announced last week, is in keeping with the agency’s goal of developing artificial intelligence technologies that can open up the “black box” of a machine’s decision-making.

“Autonomous systems cannot provide real-time feedback when changing conditions such as weather or lighting cause their competency to fluctuate,” the DARPA release states. The machine’s inability to convey its limitations “reduces trust and undermines team effectiveness.”
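The kind of self-report DARPA describes could be sketched, in a highly simplified form, as a system that tracks how often its predictions were correct under each operating condition and flags conditions where it has little experience. This is an illustrative toy, not DARPA's actual approach; the class name, conditions, and sample threshold below are all hypothetical:

```python
# Illustrative sketch: a competency tracker that records, per labeled
# condition (e.g. "clear weather", "heavy rain"), how often a system's
# predictions were correct, then reports its estimated competency --
# or its lack of experience -- for a given condition.

class CompetencyTracker:
    def __init__(self, min_samples=30):
        self.stats = {}                 # condition -> [correct, total]
        self.min_samples = min_samples  # below this, experience is "low"

    def record(self, condition, was_correct):
        correct, total = self.stats.get(condition, [0, 0])
        self.stats[condition] = [correct + int(was_correct), total + 1]

    def report(self, condition):
        correct, total = self.stats.get(condition, [0, 0])
        if total < self.min_samples:
            return (f"I don't have much experience in {condition} "
                    f"({total} samples).")
        return (f"I do well in {condition}: {correct / total:.0%} "
                f"accuracy over {total} samples.")


tracker = CompetencyTracker(min_samples=5)
for _ in range(20):
    tracker.record("clear weather", was_correct=True)
tracker.record("heavy rain", was_correct=False)

print(tracker.report("clear weather"))  # many samples: reports accuracy
print(tracker.report("heavy rain"))     # few samples: flags inexperience
```

A real system would need calibrated uncertainty estimates rather than raw accuracy counts, but the interface is the point: a human teammate queries the machine's competency for the current conditions before delegating a task.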


The military is relying ever more heavily on artificial-intelligence-based systems like autonomous swarming drones and so-called “computer vision” systems that can interpret what’s in video or photos. In the case of autonomous systems, the ability of a machine to explain itself will allow human operators to use it more “efficiently and effectively,” the release states.


