The Responsible Artificial Intelligence (RAI) Toolkit provides a voluntary process for identifying, tracking, and improving the alignment of AI projects with RAI best practices and the U.S. Department of Defense's (DoD's) AI ethical principles, while capitalizing on opportunities for innovation. It enables personnel working throughout an AI system's life cycle to assess the system's alignment with those principles and to address any concerns the assessment identifies.
The RAI Toolkit is built around the SHIELD assessment, an acronym for the six sequential activities that form the core RAI work on a project:
- Set Foundations – Identify the relevant RAI, ethical, legal, and policy foundations for a project, along with potential issues (statements of concern [SOCs]) and opportunities.
- Hone Operationalizations – Operationalize the foundations and SOCs into concrete assessments.
- Improve and Innovate – Leverage mitigation tools to make progress toward meeting the foundations and addressing the SOCs.
- Evaluate Status – Evaluate the extent to which the foundations are being met and the SOCs are being addressed.
- Log for Traceability – Document the preceding activities to ensure traceability.
- Detect via Continuous Monitoring – Continuously monitor the system for any degradation in performance (see the sketch after this list).
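To make the final step concrete, here is a minimal, hypothetical sketch of what continuous performance monitoring could look like in practice. The class, metric, and threshold below are illustrative assumptions for this example only; they are not part of the RAI Toolkit or any DoD system.

```python
from collections import deque


class PerformanceMonitor:
    """Illustrative monitor that flags degradation when a rolling
    accuracy average drops more than `tolerance` below a baseline.
    (Hypothetical sketch; not part of the RAI Toolkit.)"""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window_size: int = 100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.window = deque(maxlen=window_size)  # most recent outcomes

    def record(self, prediction_correct: bool) -> None:
        """Log one prediction outcome from the deployed system."""
        self.window.append(1.0 if prediction_correct else 0.0)

    def degraded(self) -> bool:
        """True once rolling accuracy falls below baseline - tolerance."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet for a stable estimate
        rolling = sum(self.window) / len(self.window)
        return rolling < self.baseline - self.tolerance


# Example: a model validated at 92% accuracy is monitored in production.
monitor = PerformanceMonitor(baseline_accuracy=0.92)
for outcome in [True] * 80 + [False] * 20:  # simulated prediction stream
    monitor.record(outcome)
    if monitor.degraded():
        print("Alert: performance degradation detected; revisit SHIELD steps.")
        break
```

In a real deployment, a detected degradation would feed back into the earlier SHIELD activities, such as re-evaluating status and logging the finding for traceability.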
The Toolkit also includes a DoD-specific risk assessment resource – the DoD AI Guide on Risk (DAGR).
Note: Registration is not required for this webinar; simply click the link below at the scheduled day and time.
https://dtic.webex.com/dtic/j.php?MTID=m0f80aafb6f9f002abea8e11c70173854