How can we build robust, assured, and therefore trustworthy artificial intelligence (AI)-based systems?
That question lies at the heart of DARPA’s Assured Neuro Symbolic Learning and Reasoning (ANSR) program.
“Informally, trust is an expression of confidence in an autonomous system’s ability to perform an underspecified task,” said Dr. Alvaro Velasquez, DARPA’s ANSR Program Manager. “Ensuring autonomous systems will operate safely and perform as intended is integral to trust, which is key to the Defense Department’s success in adopting autonomy. We believe that integrating data-driven neural learning and traditional symbolic reasoning is the key to achieving this trust.”
DARPA selected teams to explore diverse hybrid architectures that integrate data-driven machine learning with symbolic reasoning, a problem-solving approach that uses symbols or abstract representations of information and applies explicit rules to reach conclusions.
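To make the hybrid idea concrete, here is a minimal, hypothetical sketch, not drawn from the ANSR program or any selected team's design: a data-driven scorer proposes an action with some confidence, and a symbolic rule layer accepts or vetoes it against explicit, human-readable constraints. All function names, rules, and thresholds below are illustrative assumptions.

```python
# Hypothetical neuro-symbolic sketch: a learned scorer proposes,
# symbolic rules dispose. Every name and rule here is illustrative.

def neural_score(features):
    # Stand-in for a trained neural network: a fixed linear model
    # mapping sensor features to a confidence clamped to [0, 1].
    weights = [0.6, 0.3, 0.1]
    z = sum(w * x for w, x in zip(weights, features))
    return max(0.0, min(1.0, z))

SYMBOLIC_RULES = [
    # Each rule is (name, predicate); every predicate must hold on the
    # current state for the proposed action to be certified.
    ("altitude_safe", lambda s: s["altitude_m"] > 100),
    ("speed_in_envelope", lambda s: 0 <= s["speed_mps"] <= 250),
]

def decide(features, state, threshold=0.5):
    """Execute only if the learned confidence clears the threshold
    AND every symbolic safety rule is satisfied; otherwise abstain."""
    confidence = neural_score(features)
    violated = [name for name, rule in SYMBOLIC_RULES if not rule(state)]
    if confidence >= threshold and not violated:
        return "execute", confidence, violated
    return "abstain", confidence, violated

decision, conf, violated = decide(
    features=[0.9, 0.8, 0.7],
    state={"altitude_m": 150, "speed_mps": 120},
)
print(decision, round(conf, 2), violated)  # execute 0.85 []
```

The point of the split is auditability: the symbolic layer's verdict can be traced to named rules, so a veto is explainable even when the neural confidence is not.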