Reliability of a military system refers to its ability to complete a specific mission without
failure. Failure modes in traditional acquisition systems often involve hardware, software,
and human-systems integration. These failure modes are generally well-understood and can
be mitigated early in design. As artificial intelligence (AI) and autonomous products are
incorporated into military systems, mission completion also depends on the system's
robustness and resilience in the changing environments of a contested battlefield. Issues
with AI model robustness, the data pipeline, adversarial attacks, and human-machine
teaming can result in mission failures that fall outside these traditional failure modes.
This webinar presents an approach for assessing reliability risks in AI products. It begins with
a discussion of the new, less well understood AI failure modes. Product developers can
mitigate these new failure modes by employing sound engineering practices in several key
areas. These areas are broken down into detailed elements, forming the basis for an AI
reliability risk assessment tool. An evaluation of the developer’s planned and completed
activities against these elements can indicate areas of risk that may need further mitigation.
This approach can be used by AI product developers and integrators, as well as evaluators
responsible for assessing AI products.
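The element-based evaluation described above can be illustrated with a minimal sketch. The area names, elements, and completion flags below are hypothetical placeholders, not the actual categories or elements of the assessment tool; the sketch only shows the general idea of flagging areas where too few elements have been addressed.

```python
# Hypothetical sketch of an element-based AI reliability risk screen.
# Each key area is broken into detailed elements; each element records
# whether the developer's planned or completed activities address it.
assessment = {
    "model robustness": {
        "distribution-shift testing": True,
        "stress testing under noise": False,
    },
    "data pipeline": {
        "data provenance tracked": True,
        "label quality audited": True,
    },
    "human-machine teaming": {
        "operator trust calibration": False,
        "handoff procedures defined": False,
    },
}

def risk_areas(assessment, threshold=0.5):
    """Flag areas where the fraction of unaddressed elements
    meets or exceeds the threshold."""
    flagged = []
    for area, elements in assessment.items():
        gap = sum(not done for done in elements.values()) / len(elements)
        if gap >= threshold:
            flagged.append((area, gap))
    return flagged

# Areas with half or more of their elements unaddressed are flagged
# as risks that may need further mitigation.
print(risk_areas(assessment))
```

A real assessment would weight elements by criticality and distinguish planned from completed activities, but the tally above captures the core mechanism: evaluating developer activities against a checklist of elements to surface areas of risk.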