Existing machine-learning (ML) models harbor many inherent weaknesses that leave the technology open to spoofing, corruption, and other forms of deception. Attacks on artificial intelligence (AI) algorithms could produce a range of negative effects, from manipulating a content recommendation engine to disrupting the operation of a self-driving vehicle. As ML models become increasingly integrated into critical infrastructure and systems, these vulnerabilities grow only more worrisome. DARPA’s Guaranteeing AI Robustness against Deception (GARD) program aims to get ahead of this safety challenge by developing a new generation of defenses against adversarial attacks on ML models.
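
To make the threat concrete, the sketch below shows one of the simplest and best-known adversarial attacks, the fast gradient sign method (FGSM), applied to a placeholder PyTorch classifier. The tiny network, random input, and epsilon value are illustrative assumptions only; this is not a GARD technique, which targets a far broader class of attacks and defenses.

```python
# Illustrative sketch (not a GARD artifact): the fast gradient sign method (FGSM),
# one of the simplest adversarial attacks on a classifier.
# The stand-in model and random "image" below are placeholder assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
model.eval()

x = torch.rand(1, 1, 28, 28)   # stand-in image with pixel values in [0, 1]
y = torch.tensor([3])          # its true label

def fgsm(model, x, y, epsilon=0.1):
    """Perturb x by a small step in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # A small step along the sign of the input gradient is often enough
    # to flip a vulnerable model's prediction while the change stays
    # nearly imperceptible to a human viewer.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

x_adv = fgsm(model, x, y)
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The point of the example is only that a perturbation too small for a person to notice can change a vulnerable model's output; it is this whole class of manipulation that GARD's defenses are meant to address in a principled way.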