As artificial intelligence (AI) becomes increasingly integrated into modern warfare, ensuring its security and resilience is critical to national defense.
Like any new technology, AI has weaknesses. Researchers have demonstrated that AI-enabled systems can be tricked or manipulated through various "attacks." However, most of these demonstrations have been conducted under laboratory conditions, where researchers have complete control over the data and full access to the AI's inner workings. As a result, the findings do not necessarily reflect how well such attacks would perform in real-world military operations. DARPA experts say this gap in understanding must be remedied to appropriately mitigate adverse downstream effects on operational systems.
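For readers unfamiliar with such attacks, the core idea can be sketched on a toy linear classifier. Everything below is an illustrative assumption for exposition only, not drawn from any DARPA system or real-world model: a small, deliberately structured perturbation to the input flips the model's prediction even though the input barely changes.

```python
import numpy as np

# Toy linear classifier (illustrative weights, not from any real system):
# predicts class 1 when the score w.x + b is positive.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A "clean" input that the model classifies as class 1.
x = np.array([1.0, 0.2, 0.4])

# Gradient-sign-style attack: for a linear model, the gradient of the
# score with respect to the input is just w, so stepping each feature
# by -eps * sign(w) lowers the score fastest per unit of perturbation.
eps = 0.4
x_adv = x - eps * np.sign(w)

print(predict(x))      # clean input: class 1
print(predict(x_adv))  # perturbed input: prediction flips to class 0
```

Real attacks on deployed systems face far harsher constraints (no gradient access, noisy sensors, physical-world perturbations), which is precisely the gap between laboratory demonstrations and operational conditions described above.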