Hours before dawn, under the veil of a new moon, two figures in military fatigues grapple like Greco-Roman wrestlers within the razor-wire perimeter of the Blue Grass Army Depot in Richmond, Kentucky.
Their movements are deliberate but discreet, each man maneuvering for leverage beneath the orange glow of the floodlights lining the depot’s security fence.
In the distance, a patrolling sentry squints, straining to make sense of the dimly lit commotion. He thumbs the two-way radio on his pistol belt but hesitates, worried he may frustrate his supervisor with an inaccurate report.
But before the fight can go to ground — and before the sentry can reassess and request support — a bright green reticle highlights the entwined bodies on a control room monitor several miles away. Scylla, the artificial intelligence algorithm powering the depot’s security architecture, has made sense of what the sentry cannot.
Scylla knows instantly that the image captured by the depot’s security cameras depicts a struggle. In seconds, the algorithm references a database of friendly and malicious faces, identifies the belligerents, and fires a report to the watch captain: “An on-duty military police officer is trying to subdue an intruder — a known bad actor with presumed hostile intent.”
This fictional scenario, described by Drew Walter, deputy assistant secretary of defense for nuclear matters, illustrates AI’s demonstrated capabilities and underscores its potential in the realm of physical security. Last month, the Defense Department tested Scylla at the depot. Walter described the platform as a “considerable advancement in our ability to safeguard critical assets.”