U.S. Army researchers enhance AI critical to soldier-machine teamwork


Artificial intelligence can achieve remarkable results, but it cannot always work alone. Researchers have identified two key components of successful human-machine collaboration that may enhance how the U.S. Army will fight in the future.


U.S. Army researchers seek to enhance multi-domain operations by reducing uncertainty in human-machine collaborations and applying explainable artificial intelligence models onto intelligence, surveillance and reconnaissance networks. (Picture source: U.S. Army/Sgt. Steven Lewis)


To achieve dominance in what is known as multi-domain operations, warfighters will need a layered intelligence, surveillance and reconnaissance, or ISR, network that maintains a functional relationship between autonomous sensors, human intelligence and friendly special operations forces. Multi-domain operations, known as MDO, is a joint warfighting concept that foresees conflict occurring in multiple domains: land, air, sea, cyber and space. The concept has many nuances, but basically describes how the Army, as part of the joint force, will solve the problem of layered standoff in all domains.

Given the demands of this environment, soldiers will need to place enormous trust not only in their allies, but also in machine agents. For humans and machines to operate effectively, artificial intelligence and machine learning agents must demonstrate explainability and tellability. "Soldiers need to gain the most value that they can from AI-based ISR assets developed by their partners," said Dr. Alun Preece, co-director of the Crime and Security Research Institute at Cardiff University in the United Kingdom. "To do this, they will need an ISR asset to provide a useful degree of transparency and rationale for its outputs--the assets can't be black boxes."

Explainability refers to the level of understanding that one can draw from the agent, while tellability pertains to the quality of that information's delivery. The former instills confidence and the latter improves operational agility and performance.

Encapsulating these ideas is the concept of coalition situational understanding, or CSU, which applies the notion of explainable artificial intelligence, or XAI, to complex coalition tasks. "We are exploring what it means for there to be synergy between soldiers and AI assets in MDOs involving joint, interagency and multinational coalitions," Preece said. "In particular, we seek to understand how XAI helps Soldiers be better able to deal with uncertainty in tactical situational understanding."

A CSU-layered model developed by the team depicts how human-machine collaborations distribute resources across multiple partners and technologies. The bottom layer holds information collected from physical sensors and human agents. The top layer gathers the resources obtained from the layers below and uses reasoning to provide predictions for the future.

This conceptual architecture illustrates the level of understanding associated with each CSU problem. A problem at the lower levels may only require the detection, identification or localization of objects, while higher levels entail determining threats, intents or anomalies. "The model covers different kinds of decision-making collaborations between soldiers and AI assets," Preece said. "It examines how explanations can be 'layered' to provide increasing levels of detail from high-level rationales to lower-level insights into how the AI asset reached its decisions."

With the accelerating rates of urbanization around the world, the CSU model will help support warfighters as they transition into data-packed urban environments and eventually full-scale smart cities that operate on their own human-machine network.

"Future military operations in complex urban environments will involve teams of humans and autonomous agents--software and robotic--with near-pear adversaries present in the physical, cyber and information domains," said Tien Pham, a collaborating senior scientist at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "It is important for us to think about research within this distributed coalition context and develop explainable AI technologies that will enable coalition situation understanding for decision-making."

Artificial intelligence and machine learning systems must account for decision-makers without technical training in information science in order to most effectively ease their burdens, researchers said.

While limited in its current incarnation, the integration of human and machine agents from across coalition partners into dynamic teams--a concept Pham calls human-agent knowledge fusion, or HAKF--shows promise in improving the decision-making process for the warfighter. "HAKF supports explainability and tellability naturally as conversational processes between human and machine agents," Pham said. "This enables AI agents to provide explanations of results arising from complex machine learning tasks and to receive knowledge that modifies their models or knowledge bases."
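A minimal sketch of the two-way exchange Pham describes (the class, method names and data below are hypothetical, invented for illustration): the agent explains its results, and the human can tell it knowledge that modifies its knowledge base.

```python
# Hypothetical sketch of human-agent knowledge fusion as a conversation:
# the agent explains its output, and the human supplies corrections
# that update the agent's knowledge base. Everything here is invented.

class MachineAgent:
    def __init__(self):
        self.knowledge = {"convoy_pattern": "reconnaissance"}

    def classify(self, observation: str) -> tuple[str, str]:
        """Return a result together with an explanation of how it arose."""
        label = self.knowledge.get(observation, "unknown")
        explanation = f"'{observation}' matched knowledge-base entry -> {label}"
        return label, explanation

    def tell(self, observation: str, label: str) -> None:
        """Accept human-supplied knowledge that modifies the agent's model."""
        self.knowledge[observation] = label

agent = MachineAgent()
label, why = agent.classify("night_movement")  # no entry yet: "unknown"
print(label, "|", why)

agent.tell("night_movement", "resupply")       # human corrects the agent
label, _ = agent.classify("night_movement")
print(label)                                   # now classified as "resupply"
```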

This research was conducted as part of the United States-United Kingdom Distributed Analytics and Information Science International Technology Alliance program. Preece and Pham serve as the UK Academic Technical Area Lead and the U.S. government Technical Area Lead for the program, respectively.
___________________________________

CCDC Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command. As the Army's corporate research laboratory, ARL discovers, innovates and transitions science and technology to ensure dominant strategic land power. Through collaboration across the command's core technical competencies, CCDC leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more lethal to win our nation's wars and come home safely. CCDC is a major subordinate command of the U.S. Army Futures Command.

