SUMMARY
Military commanders have used information throughout warfare to influence, mislead, disrupt, or otherwise affect the enemy’s decision-making and capabilities. This article discusses the history of information operations (IOs) and the enduring importance of incorporating actions in the information environment into military strategy.
The Evolution of Military IOs
Ancient IOs relied on human intellect and psychology. As early as the 5th century BC, Sun Tzu recognized the importance of employing spies and couriers to collect intelligence on the adversary before engaging in battle [1]. According to Sun Tzu, “All warfare is based on deception,” highlighting the importance of information and psychological warfare in ancient military operations [2]. Military deception is one of the oldest information-related capabilities and is leveraged to this day by U.S. military IOs [3].
As communications evolved in the Middle Ages, more advanced societies recognized that just about any physical tool could be used to affect the information environment. Medieval armies advanced information propagation and security with carrier pigeons, visual signals, and early cryptography. During the Crusades, European and Muslim forces used intricate networks of spies and informants to collect intelligence on enemy movements and intentions. Most notably, Muslim military commander Sultan Saladin deployed spies to track the Crusaders’ activities while also planting false information about the size and location of his main elements [4]. Such information tactics aided his military while significantly degrading the Crusaders’ ability to deploy forces effectively.
The printing press revolutionized knowledge distribution throughout the Renaissance and afterward. Governments and military commanders realized the power of public opinion and used propaganda to demoralize their foes. Psychological operations became a systematic military practice in this age, making narrative control as crucial as combat control. Commanders had long exploited disinformation to discourage those under siege and deceive the opposition about reinforcements and supply lines. A notable earlier example is the siege of Orleans during the Hundred Years’ War, where Joan of Arc’s presence and disinformation about the French force’s strength and morale helped relieve the siege [5]. Another is how Genghis Khan’s Mongol forces applied psychological warfare to undermine opponent resistance before an invasion by spreading dread of their savagery [6].
IOs changed considerably in the 20th century, especially during the World Wars. World War I saw newspapers, posters, and films used for propaganda to affect public opinion and morale. Radio increased the reach of psychological operations. In 1917, British intelligence intercepted and decrypted the Zimmermann Telegram, a secret communication from Germany to Mexico proposing a military alliance. The British then released the telegram to U.S. President Woodrow Wilson. U.S. public exposure to this telegram played a significant role in swaying public opinion and contributed to the nation’s decision to enter the war against Germany [7].
In World War II, the Allies conducted Operation Bodyguard, one of the most famous IOs, which deceived the Axis forces about the schedule and location of the D-Day invasion. Phantom armies, fabricated radio traffic, and deceptive reconnaissance images increased the confusion among German decision-makers [8]. Operation Fortitude, a component of Bodyguard, involved creating a fictitious First United States Army Group in southeastern England to suggest an invasion at Pas de Calais, France (Figure 1) [9].
Television and early computer technologies changed IOs during the Cold War. Both sides engaged in substantial propaganda, espionage, and counterintelligence as information warfare advanced. The U.S. government used Voice of America and Radio Free Europe as crucial tools to broadcast news and pro-Western narratives to Eastern Europe and the Soviet Union [10]. These broadcasts aimed to counter communist propaganda narratives and promote Western values and perspectives behind the Iron Curtain. In contrast, the Soviet Union ran extensive radio propaganda campaigns aimed at both domestic and international audiences, promoting communist ideology and countering Western influence, and funded speakers, academics, and other activists in the West to undermine allegiance to classical Western values [11].
One of the most notable military IOs during the Vietnam War was Operation Eldest Son. Conducted by U.S. Special Forces and Central Intelligence Agency operatives, the operation involved sabotaging enemy ammunition so that it looked like standard munitions supplied by China or the Soviet Union but detonated inside the weapon when fired, injuring or killing the shooter [12]. Operatives secretly inserted the sabotaged rounds into enemy supply routes or left them behind during retreats. The operation aimed to reduce enemy morale and trust by sowing doubts about the munitions. Disinformation pamphlets and television and radio broadcasts warned Vietnamese forces of the hazards of “poor” munitions supplied by Chinese and Soviet suppliers [13], planting distrust among adversary groups and boosting the operation’s psychological effect.
The Space Race was also a significant part of Cold War IOs. Technological achievements, including the Soviet Union’s launch of Sputnik, Yuri Gagarin’s spaceflight, and the U.S. Apollo moon landings, were heavily publicized on television to demonstrate each superpower’s technological and ideological superiority.
Paralleling technological advances in the late 1990s and since the turn of the century, information operations have advanced exponentially. The internet, social media, and modern computers, including mobile platforms, changed how military commanders employ IOs [14]. These platforms increased the reach and precision of information, laid the foundation for cyber warfare, and exponentially increased the speed of spreading digital propaganda over social media.
For example, Operation Glowing Symphony was a significant U.S. cyber operation that marked a notable shift in the approach and tactics employed in cyber and information warfare. Conducted as part of the broader campaign against the Islamic State (ISIS), it was also notable as an official acknowledgment by the U.S. government that it employs offensive cyber capabilities [15]. The primary objective of Glowing Symphony was to disrupt the extensive and sophisticated media network ISIS used to spread its messaging and information. ISIS had been effectively using the internet and social media for information dissemination, recruitment, and radicalization. U.S. Cyber Command dismantled these capabilities by targeting servers, websites, and data centers used by ISIS, disrupting and degrading ISIS’s ability to spread its information, recruit members through social media, and carry out its IOs over digital communication networks [16]. A critical aspect of this operation was gaining access to and control over ISIS’s network, which allowed U.S. cyber forces not only to disrupt ISIS’s ability to spread information but also to implant and execute friendly information aimed at the same audiences that ISIS was targeting [17].
Introduction
Recent years have seen an unprecedented explosion in AI across a broad range of applications, including computation, generative AI, research, and other fields. The rise of AI has the potential to revolutionize military IOs as well. AI can do the following:
- Transform how militaries conduct information warfare, offering unprecedented capabilities and new challenges.
- Generate vast amounts of information aligned to a single narrative that can sway the opinions of the masses in extremely short amounts of time.
- Analyze massive volumes of satellite photos, real-time signal intercepts, and open-source intelligence data.
- Dramatically improve situational awareness and decision-making, though at the risk of oversaturating decision-makers with information.
- Generate content to disseminate over all mediums, making counter-information nearly impossible due to the sheer volume of information the target receives.
- Generate deceptive information, leading adversarial AI to derive wrong conclusions and mislead the decision-maker.
- Analyze massive databases to determine target groups’ psychological characteristics, enabling deeper insight and more focused psychological operations.
AI systems may forecast the future based on historical data and current patterns. Leveraging such technology helps predict adversary maneuvers, grasp complicated conflict zone patterns, and prepare for numerous enemy courses of action.
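The forecasting idea above can be illustrated with a deliberately minimal sketch: a first-order Markov model that tallies transitions between observed adversary actions and predicts the most likely next move. This is a hypothetical toy, not any fielded system; the action names and function names are invented for illustration, and a real predictive model would use far richer features than action sequences alone.

```python
from collections import Counter, defaultdict

def train_transitions(sequences):
    """Count how often each observed action is followed by each other action.

    `sequences` is a list of historical action sequences (lists of strings).
    Returns a mapping: action -> Counter of next actions.
    """
    transitions = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Return the historically most frequent follow-on action, or None."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# Illustrative (invented) historical patterns: after probing, the adversary
# tends to jam; after jamming, it most often strikes.
history = [
    ["probe", "jam", "strike"],
    ["probe", "jam", "withdraw"],
    ["probe", "jam", "strike"],
]
model = train_transitions(history)
```

Even this crude frequency model captures the core idea the paragraph describes: historical patterns constrain the space of likely enemy courses of action, giving planners a prioritized list of branches to prepare against.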
Adversarial and malicious actors may use AI-generated deepfakes and synthetic media to construct misinformation or disinformation campaigns that are hard to spot, affecting public opinion and degrading morale (e.g., Figure 2). Synthetically produced information could endure for decades as if it documented actual events, producing long-term collateral effects that are difficult to challenge and uproot and raising moral and ethical implications.
Applications of AI in IOs: A Cyber Fortress Case Study
The Cyber Fortress exercise, a critical training and preparedness event held in Virginia, represents a significant evolution in defending critical infrastructure (e.g., Figure 3). Cyber Fortress creates an interagency response framework incorporating local, state, and federal cooperation and international collaboration. This approach includes partnerships with commercial private entities, state and federal governments, and the military to enable a unified response across multiple domains. Cyber Fortress pushes beyond cyber defense and everyday operations into more complex scenarios involving extensive IOs. The exercise is designed to stress-test processes and institutions, preparing them for cyber response scenarios that may arise in the real world, including in the information domain. Distinguished by its focus on integrating substantial IOs through the Information Operations Network (ION), the exercise extends beyond conventional cyber defense, presenting a multifaceted approach to cybersecurity and information warfare.
The Cyber Fortress exercise’s ION is a groundbreaking innovation in cyber warfare training. It represents a simulated digital environment, often called a “fake internet,” meticulously designed to mirror the intricate and multifaceted digital ecosystem of the “real internet.” This sophisticated simulation includes various components, such as replicated news websites, social media platforms, and other information dissemination outlets, creating a highly authentic and immersive backdrop for the exercise.
ION’s role in Cyber Fortress is pivotal. It is a dynamic battleground where “Red” and “Blue” Teams launch information campaigns at participants while their cyber counterparts fire ones and zeros over wires. The network is not just a static backdrop but a fully interactive landscape that responds and evolves based on the actions of the exercise’s participants. This level of realism is essential for training personnel in the nuances of modern digital warfare, where the lines between virtual and physical confrontations are increasingly blurred.
AI is central to the operation of ION. Red Teams leverage the system for offensive roles, while Blue Teams use it to conduct defense.
Red Team AI-Driven IO Campaigns
In the Cyber Fortress exercise, the Red Teams employ AI to execute intricate and aggressive IOs, demonstrating the evolving nature of digital warfare. Their overarching goal is to disseminate disruptive information to sow seeds of distrust and panic among the public. Several innovative AI applications augment the sophistication of their tactics, each designed to exploit the vulnerabilities inherent in the information ecosystem.
The Red Team leverages AI-driven algorithms and linguistic expertise to create messages in multiple languages, embedding cultural and ethnic nuances. This strategy ensures the messages are not merely translated for linguistic accuracy but are also culturally relevant, resonating deeply with various ethnic groups in the United States. AI systems generate more persuasive and impactful content by understanding and tapping into cultural idioms, historical contexts, and social nuances.
This multilingual capability is crucial in a country as ethnically diverse as the United States. It allows the Red Teams to effectively target specific communities, potentially creating rifts and exacerbating existing tensions. In this context, AI demonstrates a sophisticated understanding of the cultural dynamics and targeted information’s role in influencing public opinion.
Another critical aspect of the Red Teams’ strategy is using AI for adaptive messaging and real-time feedback. AI systems monitor the public’s reaction to the disseminated content and adjust the messaging accordingly. If a particular narrative is gaining traction or causing the desired level of disruption, AI algorithms can amplify it. Conversely, if a message is not having the intended effect, AI can quickly alter the approach, testing different narratives and strategies to achieve the desired impact. This adaptive approach is crucial in maintaining the momentum of the information campaign. It allows the Red Teams to stay one step ahead, continually refining their tactics in response to public reaction and feedback.
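The amplify-or-pivot loop described above is structurally similar to a multi-armed bandit: pick a narrative, observe an engagement signal, and shift weight toward what works. The epsilon-greedy sketch below is a hypothetical illustration of that feedback loop for exercise analysis; the class name, narrative labels, and engagement metric are all invented, and nothing here reflects an actual Cyber Fortress tool.

```python
import random

class AdaptiveMessenger:
    """Hypothetical sketch of adaptive messaging: epsilon-greedy selection
    among candidate narratives based on observed engagement feedback."""

    def __init__(self, narratives, epsilon=0.1, seed=0):
        self.narratives = list(narratives)
        self.epsilon = epsilon          # fraction of the time spent exploring
        self.rng = random.Random(seed)  # seeded for reproducible exercises
        self.counts = {n: 0 for n in self.narratives}
        self.scores = {n: 0.0 for n in self.narratives}

    def choose(self):
        # Explore occasionally (or when there is no feedback yet);
        # otherwise amplify the narrative with the best average engagement.
        if self.rng.random() < self.epsilon or not any(self.counts.values()):
            return self.rng.choice(self.narratives)
        return max(self.narratives,
                   key=lambda n: self.scores[n] / max(self.counts[n], 1))

    def record_feedback(self, narrative, engagement):
        # `engagement` is a scalar signal, e.g., shares or sentiment shift.
        self.counts[narrative] += 1
        self.scores[narrative] += engagement
```

After enough feedback rounds, `choose()` overwhelmingly returns whichever narrative has drawn the strongest reaction, which is exactly the amplification dynamic the exercise trains defenders to recognize and disrupt.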
Red Teams also employ AI as chat conversation generators to post comments across various digital platforms. These comment chats mimic human interaction, engaging in online conversations and debates to further influence public opinion. These AI-generated discussions amplify the disinformation campaigns’ reach by actively participating in social media dialogues, forums, and comment sections, giving an illusion of grassroots support or opposition to viewpoints. This tactic effectively manipulates the perceived public consensus, swaying opinions and deepening divisions within the digital discourse.
AI’s role extends beyond content creation to strategically disseminating this tailored content. The Red Teams use AI to identify and utilize various digital platforms within the ION, from social media networks to news websites, ensuring that their disruptive messages achieve maximum reach and impact. This approach mirrors real-world information warfare tactics, exploiting diverse communication channels to spread disinformation and propaganda.
Blue Team AI-Driven IO Campaigns
In the dynamic arena of the Cyber Fortress exercise, the Blue Teams are also exploring AI to counteract the sophisticated IOs launched by their Red Team counterparts. Their multipronged approach uses the latest advancements in AI to generate rapid responses, analyze data, and detect falsified content.
The main effort of the Blue Teams’ strategy is the rapid generation of content for public messaging. AI tools swiftly produce accurate and reliable information to counteract the Red Team’s disinformation campaigns. This quick response capability is critical in mitigating the impact of false narratives. The AI systems are sophisticated enough to parse immense volumes of misinformation, distill facts, and craft timely and factual responses. These AI-driven content generation tools can analyze the trending topics and prevalent narratives from the Red Team and instantly generate counternarratives. Such information battles ensure that the public has access to balanced information, helping to prevent the spread of harmful misinformation.
Blue Teams adeptly employ AI for rapid content generation, strategic communication, and crucial language translation tasks. They utilize advanced AI algorithms to translate an extensive volume of articles and digital content across various languages. This ION capability is essential in identifying and analyzing potentially damaging narratives and disinformation campaigns orchestrated by the Red Teams. By breaking language barriers, the AI systems enable the Blue Teams to comprehensively monitor and counteract misinformation across a diverse linguistic spectrum, ensuring a thorough and effective response to their adversaries’ multifaceted information warfare tactics.
During Cyber Fortress, the Information Operations Support Cell (IOSC) serves as the analytical and strategic hub for the Blue Teams. Service members of the IOSC serve in various technical civilian roles and bring decades of experience and expertise in AI and other relevant technologies. The IOSC oversees the information environment, where AI plays a crucial role in sifting through the vast ocean of data produced by various information outlets. Analyzing this data, the IOSC identifies patterns, trends, and strategies shaping the Blue Team’s counterinformation campaigns.
The AI systems in the IOSC utilize advanced algorithms for natural language processing, sentiment analysis, and pattern recognition. This highly sophisticated approach enables them to quickly discern the underlying strategies of the Red Team’s campaigns, such as target demographics, message frequency, and thematic content. Understanding these aspects allows the Blue Teams to tailor their countermeasures more effectively, ensuring rapid and strategically targeted responses. This capability is vital in maintaining the integrity of information within the exercise. By quickly identifying and addressing deceptive content, the Blue Teams help safeguard the digital information landscape from being corrupted by falsified narratives. This task is particularly challenging given the sophistication of modern deepfake technology, which requires equally advanced AI tools to combat.
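To make the analysis pipeline above concrete, here is a deliberately rudimentary sketch of campaign analysis over a message feed: tag each message with a theme, count theme frequency, and compute a crude negativity score. Real IOSC tooling would rely on trained NLP and sentiment models; the keyword sets, theme names, and function below are invented purely to illustrate the shape of such a pipeline.

```python
import re
from collections import Counter

# Invented, illustrative keyword sets -- a real system would use trained
# classifiers rather than hand-picked words.
THEMES = {
    "infrastructure": {"grid", "power", "water", "outage"},
    "elections": {"ballot", "vote", "fraud", "polls"},
}
NEGATIVE = {"panic", "collapse", "fraud", "outage", "danger"}

def analyze_feed(messages):
    """Tally thematic content and a crude negativity score for a feed."""
    theme_counts = Counter()
    sentiment = 0
    for msg in messages:
        tokens = set(re.findall(r"[a-z]+", msg.lower()))
        for theme, keywords in THEMES.items():
            if tokens & keywords:          # message touches this theme
                theme_counts[theme] += 1
        sentiment -= len(tokens & NEGATIVE)  # each alarm word lowers the score
    return {"themes": dict(theme_counts), "sentiment": sentiment}
```

Aggregating these per-message tags over time reveals the target demographics, message frequency, and thematic content that the paragraph describes, which is what lets the Blue Teams shape a targeted countermeasure rather than a blanket response.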
Beyond reactive measures, the Blue Teams also use AI to develop proactive strategic communication plans. By understanding the information environment and tactics used by the Red Teams, AI tools help craft comprehensive communication strategies. These strategies counter existing misinformation and build resilience within the masses against future disinformation campaigns.
AI does not operate in a vacuum in the Blue Teams. It works in tandem with human analysts who provide context, judgment, and creative thinking that AI alone cannot achieve. This collaboration ensures that the counterinformation campaigns remain grounded in ethical and practical considerations, balancing the efficiency of AI with the nuanced understanding of human operators. The Cyber Fortress exercise also serves as a training ground for the Blue Teams to adapt and improve their AI tools. Through iterative deployment, analysis, and refinement cycles, AI systems become more adept at handling the intricacies of information warfare. This continuous learning aspect of AI is crucial in keeping pace with the evolving tactics of the Red Teams.
Understanding that different demographics consume information differently, the Blue Teams use AI to customize the dissemination of their content. AI algorithms determine the most effective channels and formats for different audiences, ensuring that counternarratives reach the right people in the right way. AI in IOs brings with it a host of ethical considerations. The Blue Teams are conscious of the ethical implications of using AI, particularly regarding privacy, transparency, and accountability. Cyber Fortress lays the foundation for ensuring that AI utilization in IOs adheres to strict ethical guidelines, many of which are still unknown and under development.
Emerging Technologies and Other AI Applications
The future of AI in military IOs is at a revolutionary juncture, with emerging technologies set to enhance capabilities and reshape strategic landscapes significantly. Advanced AI algorithms can process vast amounts of data and generate sophisticated psychological profiles, predictive models, and automated information campaigning. These models can forecast potential threats and generate computerized responses to information campaigns, offering military strategists unprecedented insight and foresight.
One of the most groundbreaking advancements is using AI in deep learning and neural networks. This technology enables the creation of vast amounts of highly realistic synthetic media, which gives psychological operations a strategic advantage. Additionally, AI-driven natural language processing and generation tools are becoming sophisticated enough to autonomously create and distribute convincing narrative content at a scale and speed unmatchable by human operatives.
Strategically, AI’s continued integration into military operations will profoundly influence global geopolitics. AI-enhanced information campaigns could lead to a new form of warfare where digital battles occur, impacting public opinion and national policies without physical confrontation. Countries with advanced AI capabilities might gain significant leverage in international relations, potentially leading to a new arms race focused on technological supremacy.
Moreover, AI systems’ automated monitoring and analysis capabilities are crucial for detecting disinformation and unusual activity early. By continuously scanning digital communications and media, these AI systems can identify and flag potential threats or misinformation campaigns, allowing for rapid response and countermeasures. This automated vigilance enhances defense capabilities and ensures the integrity and effectiveness of IOs. As such, the future of AI in military IOs is not just about advanced technology but also about maintaining the information advantage for strategic decision-makers and countering emerging digital threats in an increasingly interconnected world.
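A simple way to picture the continuous-scanning idea above is volume-based anomaly detection: flag a topic when its message volume spikes far above its rolling baseline. The sketch below is a hypothetical stand-in for such monitoring; the class, window size, and threshold are invented for illustration, and production systems would combine many signals beyond raw volume.

```python
from collections import deque

class VolumeMonitor:
    """Hypothetical sketch: flag a topic when its per-interval message
    volume spikes above a multiple of the rolling average."""

    def __init__(self, window=24, threshold=3.0):
        self.threshold = threshold
        self.history = deque(maxlen=window)  # recent per-interval counts

    def observe(self, count):
        """Record one interval's count; return True if it looks anomalous."""
        if len(self.history) >= 3:  # need a minimal baseline first
            baseline = sum(self.history) / len(self.history)
            flagged = baseline > 0 and count > self.threshold * baseline
        else:
            flagged = False  # not enough history to judge
        self.history.append(count)
        return flagged
```

A burst that trips the threshold is exactly the kind of early warning the paragraph describes: it does not prove a disinformation campaign, but it tells human analysts where to look first.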
Conclusions
Military commanders have always relied on information control and manipulation to amplify the effects of the maneuver element. From the tactical deceit of ancient generals to the exquisite cyber operations of the modern world, information control remains vital.
Integrating AI into U.S. military IOs is not simply an evolution but a necessary transition in contemporary multidomain operations. AI becomes vital when information volume and complexity surpass human-processing skills. AI’s capacity to create, analyze, and distribute vast amounts of material faster than humans makes it essential for future IOs. U.S. forces must exploit AI’s potential to counter disinformation while navigating the ethical implications. The U.S. military must adopt AI technologies if the country wants to maintain and preserve its strategic edge and keep its information campaigns successful and robust against digital arms race rivals who are also leveraging these tools. The strategic use of AI will shape military IOs, allowing the United States to challenge sophisticated threats and influence operations with unparalleled efficiency and scale.
Acknowledgments
The authors would like to acknowledge invaluable AI assistance from ChatGPT, which provided critical insights and guidance throughout the writing process. AI’s ability to process and generate informative content has been indispensable in shaping this article.
The authors would also like to sincerely thank all the human readers and colleagues who provided feedback and suggestions. Your perspectives and insights have been invaluable in ensuring the comprehensiveness and accuracy of this article.
References
- Warner, M. “The Divine Skein: Sun Tzu on Intelligence.” Intelligence and National Security, vol. 21, no. 4, pp. 483–492, 2006.
- Tzu, S. The Art of War. China, 5th century B.C.
- Kearney, K. “Denial and Deception—Network-Centric Challenge.” Unpublished research paper, U.S. Naval War College, 1999.
- Craig, J. S. Peculiar Liaisons: In War, Espionage, and Terrorism in the Twentieth Century. Algora Publishing, 2005.
- Williams, G. “Manipulation and the Maid.” Medieval Warfare, vol. 4, no. 2, pp. 25–32, 2014.
- Narula, S. “Psychological Operations (PSYOPs): A Conceptual Overview.” Strategic Analysis, vol. 28, no. 1, pp. 177–192, 2004.
- Hughes, C. R. “Fighting the Smokeless War: ICTs and International Security.” China and the Internet, Routledge, pp. 139–161, 2003.
- Bendeck, W., C. Elkington, and O. McConnell. “Diversion and Deception in Warfare,” 2016.
- Donovan, M. J. “Strategic Deception: Operation Fortitude.” U.S. Army War College, 2002.
- Puddington, A. Broadcasting Freedom: The Cold War Triumph of Radio Free Europe and Radio Liberty. University Press of Kentucky, 2000.
- Lovell, S. Russia in the Microphone Age: A History of Soviet Radio, 1919–1970. Oxford: OUP, 2015.
- Plaster, J. L. “Wreaking Havoc One Round at a Time.” American Rifleman, pp. 68–72, 2008.
- Stanton, S. L. Green Berets at War: U.S. Army Special Forces in Southeast Asia, 1956–1975. Ivy Books, 1999.
- Yavetz, G., and N. Aharony. “Social Media for Government Information Dissemination: Content, Characteristics and Civic Engagement.” Aslib Journal of Information Management, vol. 73, no. 3, pp. 473–496, 2021.
- Temple-Raston, D. “How the U.S. Hacked ISIS.” National Public Radio, 26 September 2019.
- Donnelly, C., and M. Stolz. “JTF-ARES as a Model of a Persistent, Joint Cyber Task Force.” European Conference on Cyber Warfare and Security, vol. 22, no. 1, pp. 169–176, 2023.
- Randall, J. D. “Crossing Borders in Cyberspace: Regulating Military Cyber Operations and the Fallacy of Territorial Sovereignty.” Army Law, p. 82, 2021.
Biographies
Joseph Vossler is a senior cybersecurity engineer currently serving in the Virginia National Guard’s IOSC. He has over 14 years of experience in cybersecurity, intelligence, and data science, including a significant tenure in the U.S. Army as an operations and intelligence professional. SFC Vossler’s extensive background encompasses roles in cyber network defense, incident response, intelligence analysis, and data science/ML, notably with ec3 Federal Services, Raytheon, Booz Allen Hamilton, U.S. Army Cyber Command, and the Narcotics and Transnational Crime Support Center.
Gerald Mazur has held tactical, operational, and policy cyberspace and IOs positions with the U.S. Department of Defense (DoD). He has led teams at the U.S. Cyber Command, delivering effects in support of combatant commanders; contributed to DoD Joint Cyberspace Operations doctrine; and held command and key staff positions in the Army National Guard’s 91st Cyber Brigade. COL Mazur holds an M.S. in telecommunications and computer forensics from George Mason University.
Andre Slonopas is a U.S. Army Cyber and IO Officer and former Presidential Management Fellow, with a strong background spanning research, academia, and government. He has led significant software projects, authored numerous publications on cyber and hardware security, and is a committed advocate for the AI and energy revolution. MAJ Slonopas holds a Ph.D. in aerospace engineering from the University of Virginia.
Danyl Miller serves as a senior cyberspace and electromagnetic activities Command Sergeant Major (CSM) for the 124th Cyber Protection Battalion of the Virginia Army National Guard and a strategic business analyst for the Joint Force Headquarters – DoD Information Network, with over 30 years of military experience. CSM Miller holds an M.S. in organizational leadership from Excelsior University.
Edward Olbrych serves as an interdisciplinary engineer at the U.S. Army Command, Control, Communications, Computers, Cyber, Intelligence, Surveillance, and Reconnaissance Center focusing on cyber electromagnetic activity technology. He is also an operations planner for the U.S. Army National Guard, where he has contributed to planning and executing multiple joint international and interagency exercises. CPT Olbrych’s background includes contributing to defense projects in software engineering and cybersecurity at Lockheed Martin and serving as a lead developer for Task Force Echo under the 780th Military Intelligence Brigade and a team leader in the 134th Cyber Security Company.
Aaron Sweeney is a principal technical specialist at Microsoft and soldier in the 91st Cyber Brigade, with over 20 years of experience in cybersecurity. SFC Sweeney’s focus is on developing military cyber exercises and architecting secure cloud deployments for Fortune 500 companies and other high-impact stakeholders.
Jacob Strahan works as the cyber resiliency program manager for the Virginia Department of Emergency Management. As an information effects (IO) planner, he has extensive experience in integrating IO effects into domestic and international cyber exercises. SFC (Ret.) Strahan previously served as the noncommissioned officer in charge of the Information Operations Support Center, 91st Cyber Brigade, Virginia Army National Guard.