Teaching AI What It Should and Shouldn’t Do


U.S. Air Force Master Sgt. Anthony Madrid, 316th Training Squadron signals intelligence flight chief, checks the quality of intelligence gathering work from Airman 1st Class Owen Arthur, 316th TRS student, during the capstone exercise, Operation Loneshark, at Goodfellow Air Force Base, Texas, Oct. 11, 2022 (U.S. Air Force photo by Senior Airman Abbey Rieves).

December 3, 2024 | Originally published by Defense Advanced Research Projects Agency (DARPA) on September 27, 2024

Thanks to the rapid growth of large language models (LLMs), artificial intelligence (AI) agents have quickly been integrated into many facets of everyday life – from drafting documents to generating artwork to providing research assistance. But verifying the accuracy or appropriateness of an AI's response is not always easy. For AI systems to be trusted partners with humans in situations where safe and ethical decisions are paramount, further work is needed to efficiently translate knowledge about human intent, laws, policies, and norms into logical programming languages that an AI can understand.

To achieve this goal, DARPA announced its new Human-AI Communications for Deontic Reasoning Devops program, or CODORD for short. Deontics, a philosophical term, refers to obligations, permissions, and prohibitions. Devops refers to the combination of software development and IT operations, including development that continues during operations. CODORD seeks to translate deontic knowledge expressed by humans in natural language (e.g., spoken or written English, French, or German) automatically into a highly expressive logical programming language. If successful, CODORD will vastly reduce the cost and time needed to transfer massive amounts of human-generated knowledge about obligations, permissions, and prohibitions into logical languages.
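DARPA has not specified the target logical language, but a small sketch can illustrate the kind of translation CODORD envisions. The Python below is a minimal, hypothetical example (the rule names, the policy statements, and the conflict-resolution policy are all illustrative assumptions, not part of the program): it encodes three natural-language policy statements as machine-checkable deontic rules and asks whether a proposed action is obligated, permitted, or prohibited in a given situation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Modality(Enum):
    """The three deontic modalities CODORD is concerned with."""
    OBLIGATION = auto()
    PERMISSION = auto()
    PROHIBITION = auto()

@dataclass(frozen=True)
class Rule:
    """One deontic statement: a modality applied to an action,
    guarded by a condition on the situation (the context)."""
    modality: Modality
    action: str
    applies: Callable[[dict], bool]

# Hypothetical rules, as they might be rendered from natural-language
# policy statements such as:
#   "Sharing raw data is prohibited unless it has been sanitized."
#   "Operators are permitted to share sanitized data."
#   "Operators are obligated to log every sharing decision."
RULES = [
    Rule(Modality.PROHIBITION, "share_data",
         lambda ctx: not ctx.get("sanitized", False)),
    Rule(Modality.PERMISSION, "share_data",
         lambda ctx: ctx.get("sanitized", False)),
    Rule(Modality.OBLIGATION, "log_decision",
         lambda ctx: True),
]

def judge(action: str, ctx: dict) -> str:
    """Return the deontic status of an action in a given context.
    Conflict policy (an assumption for this sketch, not CODORD's):
    an applicable prohibition overrides any permission; absent an
    explicit permission or obligation, default to 'not permitted'."""
    applicable = [r for r in RULES if r.action == action and r.applies(ctx)]
    if any(r.modality is Modality.PROHIBITION for r in applicable):
        return "prohibited"
    if any(r.modality is Modality.OBLIGATION for r in applicable):
        return "obligated"
    if any(r.modality is Modality.PERMISSION for r in applicable):
        return "permitted"
    return "not permitted"

print(judge("share_data", {"sanitized": False}))  # -> prohibited
print(judge("share_data", {"sanitized": True}))   # -> permitted
print(judge("log_decision", {}))                  # -> obligated
```

Even this toy example surfaces a design question: when a prohibition and a permission both apply, which wins? Here prohibition simply takes precedence; a deployed system would need the priorities and exceptions that richer deontic logics provide, along with a reliable way to derive rules like these from ordinary human language in the first place – which is exactly the translation step CODORD targets.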