Architects of Influence: America’s Information Warfare Blueprint

The modern battlespace now includes the cognitive and digital domains, where narratives shape public perception, policy, and national security. The United States recognizes the need for an updated information warfare (IW) strategy to prevail in this arena, but it lacks a practical, unified approach to implementation.1 Too often, cybersecurity and IW strategies are subject to a phenomenon where organizations “wait until it hurts” to act on their vulnerabilities, whether these weaknesses are software-based or lurking somewhere within their own institutional processes.2

The day is already well-spent; it is time to stand up a unified IW operational framework proof-of-concept (PoC) – led by Army Cyber Command (ARCYBER) – that integrates the Cybersecurity and Infrastructure Security Agency (CISA) for defensive coordination and United States Cyber Command (USCYBERCOM) for offensive operations. Both efforts would function under a new National Information Warfare Council to centralize planning and execution.

Of the armed services, the Army is uniquely positioned, in both technological capability and talented personnel, to be the premier operational force that bridges strategic vision with real-world application. Working with interagency partners, the Army can implement an IW operational framework PoC and achieve both defensive resilience and offensive dominance in the information space. Critically, it can do so in a way that addresses ethical and legal considerations, sustaining long-term bipartisan support to earn – and keep – public trust.

Operational Framework

Successful implementation of an IW strategy must be grounded in established Army and Joint doctrine. ADP 3-13 Information and JP 3-04 Information in Joint Operations provide a framework of core information activities – Enable, Protect, Inform, Influence, Attack, and Integrate – that guide operations in the information domain.3 JP 3-04 integrates this framework to explain the critical requirement of achieving information advantage, defined as “the operational advantage gained through the joint force’s use of information for decision making and its ability to leverage information to create effects on the Information Environment.”4 In practice, this means U.S. forces fight for, defend, and exploit information to outpace adversaries’ decision cycles. From there, they achieve decision dominance, which is the ability to make better, faster decisions than the foe.5

Information advantage and decision dominance require advanced technological capabilities with high computational power, along with an operational framework that employs those capabilities to enable faster decision-making. The Army views an operational framework as a “cognitive tool” that lets commanders “visualize and describe the application of combat power, in time, space, purpose, and resources, as they develop the concept of operations.”6 This bridges the gap between strategic objectives and tactical execution, just as an exploit PoC demonstrates how an adversary can abuse a vulnerability, prompting organizations to implement patches.

Following Army doctrine, this IW operational framework PoC must cover the area of operations (AO), the physical arrangement of forces, the designation of decisive, shaping, and sustaining operations within the AO, and the prioritization of resources through main and supporting efforts. At the strategic level, the AO naturally encompasses the entire cyberspace domain. The physical arrangement of forces leverages existing structures, with CISA designated for domestic defensive cybersecurity operations and USCYBERCOM tasked with executing offensive IW missions. However, the PoC requires a mechanism for organizing CISA’s defensive and USCYBERCOM’s offensive efforts into unified action, which is why a National Information Warfare Council (NIWC) must be established to lead that unification.7

Clearly defining decisive operations (priority cyber engagements), shaping operations (supportive intelligence and preparatory cyber actions), and sustaining operations (long-term cyberinfrastructure and resilience) requires coherent, effective employment of defensive and offensive resources.8 NIWC would clarify CISA and USCYBERCOM roles and responsibilities, facilitate strategic coordination between defensive and offensive IW operations, simplify implementation, and maximize operational effectiveness within cyberspace.

Defensive Resilience

The vital moment to counter disinformation is when a malicious actor attempts to post false content.9 That decisive point is where defensive operations must reside for the United States to gain the information advantage, not in a response effort after disinformation has already influenced public discourse. Once published, rapid dissemination of disinformation dramatically increases its impact, making subsequent mitigation efforts significantly less effective.10

Case in point: an NCC Group security analysis recommends using “[dis]information screeners to block users from creating and posting” deceptive ads, images, or posts.11 Building on that analysis, CISA – acting as the domestic defensive organization – must take the lead in establishing security systems that leverage AI as an intermediary at the decisive point. AI is uniquely suited to evaluate content in real time as it is submitted to social media platforms, before it reaches a broader audience. By using advanced AI models that detect indicators of propaganda, disinformation patterns, or manipulative rhetoric, disinformation can be identified and labeled automatically.12

These AI-generated informational labels or warnings would then accompany the posts, clearly outlining detected manipulative tactics and providing users context on how that post may attempt to shape their bias.13 This AI-driven, proactive labeling strategy enhances resilience by educating users at the decisive point, enabling CISA to confront disinformation before it can gain traction. By implementing AI intervention at this decisive point, the strategy not only preserves freedom of speech but significantly strengthens societal resilience against malign-influence operations, aligning fully with First Amendment values and bolstering collective digital literacy.14

Offensive Dominance

AI can be employed similarly on the offensive side, whether as a standalone system or integrated into the defensive PoC. As a PoC for real-time counter-disinformation operations, the United States can deploy AI-driven analytics inspired by MIT’s Reconnaissance of Influence Operations (RIO) system. During the 2017 French elections, RIO processed 28 million tweets and identified propaganda accounts with 96% precision, demonstrating the feasibility of using machine learning to detect coordinated bot activities and inauthentic account clusters.15 USCYBERCOM and ARCYBER teams can apply similar tools to rapidly detect emerging disinformation threats, prioritize their severity, and swiftly initiate countermeasures, ranging from targeted factual rebuttals to cyber disruptions of adversarial infrastructure.
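As a rough illustration of one coordination signal such tools look for (and only that: this is not RIO's actual statistical model, and the thresholds are assumed for the example), consider grouping accounts that post identical text within a short time window, a common copy-paste amplification pattern:

```python
from collections import defaultdict

def coordinated_clusters(posts, window_s=60, min_accounts=3):
    """Flag texts posted verbatim by several accounts within window_s seconds.

    posts: list of (account, text, unix_timestamp) tuples.
    The copy-paste heuristic and both thresholds are illustrative only;
    real systems combine many behavioral features.
    """
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))

    clusters = []
    for text, hits in sorted(by_text.items()):
        hits.sort()  # order by timestamp
        accounts = {acct for _, acct in hits}
        burst = hits[-1][0] - hits[0][0] <= window_s
        if len(accounts) >= min_accounts and burst:
            clusters.append((text, sorted(accounts)))
    return clusters
```

Flagged clusters would feed an analyst queue for severity triage, not trigger automatic action.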

Complementing detection capabilities, proactive engagement through AI-augmented messaging can counteract adversary-driven disinformation at scale. As part of the PoC, this involves AI-driven counter-disinformation bot networks transparently deployed to disseminate truthful content and rapidly debunk false narratives.16 These large-scale bot networks, vetted by human operators, can respond to disinformation in real time, flooding compromised platforms with verified facts and context. In effect, this would neutralize the propaganda’s impact while improving the information advantage for the United States within a given AO.17 AI-assisted content creation further enhances strategic communication, enabling rapid crafting and cultural adaptation of persuasive, truthful narratives and ensuring swift, credible responses during crises.18

Operational execution of these offensive measures would fall under USCYBERCOM, leveraging its authorities and expertise in persistent engagement and “defend forward” strategies. Past successful interventions, such as the preemptive disruption of Russia’s Internet Research Agency during the 2018 U.S. midterm elections, highlight the potential effectiveness of integrated cyber operations and narrative-driven campaigns.19

ARCYBER has the technological resources and talent necessary to disable adversary propaganda networks, manipulate their algorithmic targeting, and coordinate tightly controlled offensive actions.20 That makes ARCYBER the ideal lead executor for this centralized operational structure, ensuring offensive IW remains coherently aligned with broader national security objectives and USCYBERCOM strategy.

Legal and Ethical Considerations

While this operational framework PoC offers practical solutions for achieving an information advantage, any solution must adhere to U.S. law, democratic norms, and the Law of Armed Conflict (LOAC). Unlike its adversaries, the United States operates under legal and ethical constraints that uphold the legitimacy of its actions and protect the freedoms it aims to defend.21 To adhere to these principles, any operational framework PoC must conform to the First Amendment and ensure the ethical application of AI in technical capabilities while balancing security against domestic values.

The ethical use of AI in information warfare requires safeguards against bias, misclassification, and undue influence. AI systems used for disinformation detection must be explainable, externally audited, and supervised by human analysts to prevent errors that could stifle legitimate discourse.22 Additionally, threat monitoring must respect privacy laws, focusing on aggregate trends and public data rather than intrusive surveillance. Offensive IW operations must comply with LOAC and the Smith-Mundt Act, ensuring proportionality, necessity, and transparency, and they must embed legal, ethical, and oversight mechanisms at every level so that U.S. influence operations counter adversarial threats while preserving the freedoms they are designed to protect.
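One way to make the human-supervision and auditability requirements concrete is a review gate: no AI-generated flag takes effect without an analyst's verdict, and every decision, confirmed or rejected, is appended to an audit trail. A minimal sketch, with all field names assumed for illustration:

```python
import json
import time

def apply_flag_with_review(flag, analyst_verdict, audit_log):
    """Apply an AI-generated disinformation flag only after human review.

    flag: dict holding the model's output, including a plain-language
    rationale (the explainability requirement). The schema and field
    names are illustrative assumptions, not a real system's interface.
    """
    record = {
        "post_id": flag["post_id"],
        "model_score": flag["score"],
        "model_rationale": flag["rationale"],
        "analyst_verdict": analyst_verdict,   # "confirm" or "reject"
        "reviewed_at": time.time(),
    }
    # Append-only JSON records support the external-audit requirement.
    audit_log.append(json.dumps(record, sort_keys=True))
    return analyst_verdict == "confirm"       # label applied only on confirm
```

Logging rejections as well as confirmations is deliberate: auditors need the false-positive rate, not just the actions taken.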

The PoC solution to this problem is content-neutral interventions, where AI-driven transparency measures flag potential propaganda but do not censor content.23 The DoD Cyber Workforce Framework (DCWF) identifies the Data Steward work role as the executor for implementing this solution, where Data Stewards are responsible for maintaining ethical standards within U.S. AI models, collaborating with appropriate personnel to protect data privacy, and ensuring all applications of AI are transparent for legal auditing.24

Since government involvement must be limited, transparent, and accountable, the Data Steward work role addresses this requirement by supporting Congressional oversight, independent audits, and public-facing methodologies to prevent bias and overreach. Focusing on public education, cognitive bias awareness, and content labeling, the Data Steward is an integral part of the IW operational framework PoC. This inclusion prioritizes resilience over suppression, ensuring citizens remain informed without compromising their right to free speech.25

Conclusion

This IW operational framework PoC demonstrates a unified U.S. Information Warfare strategy anchored in Army and Joint doctrine, effectively bridging strategic intent with operational execution. It identifies roles and responsibilities for CISA and USCYBERCOM and establishes the NIWC to coordinate a unified approach to IW.

In addition, it leverages CISA to lead the development of advanced AI-driven capabilities that enhance U.S. defensive resilience by proactively identifying and mitigating disinformation at its decisive point. Offensive dominance is then achieved through USCYBERCOM and ARCYBER integration of agile counter-disinformation measures that extend U.S. influence in the cyberspace domain. Moreover, this PoC emphasizes strict adherence to ethical and legal standards, safeguarding First Amendment rights, ensuring transparency, and maintaining bipartisan support through the Data Steward role. Ultimately, this unified approach provides a comprehensive, ethically grounded framework to bolster national security, uphold democratic values, and decisively counter adversarial information warfare by achieving information advantage and decision dominance.

  1. Craig Douglas Albert, Samantha Mullaney, Joseph Huitt, Lance Y. Hunter, and Lydia Snider, “Weaponizing Words: Using Technology to Proliferate Information Warfare,” Cyber Defense Review (Fall 2023): 15-29. ↩︎
  2. Cristian Cornea, “How to detect CVE-2021-22986 RCE,” Pentest-Tools.com, April 29, 2024. ↩︎
  3. Headquarters, Department of the Army, “ADP 3-13 Information,” Army Publishing Directorate, 2023. ↩︎
  4. Joint Chiefs of Staff, “JP 3-04 Information in Joint Operations,” Joint Doctrine Publications, 2022. ↩︎
  5. Headquarters, Department of the Army, “Army Multi-Domain Transformation, Ready to Win in Competition and Conflict,” Army Publishing Directorate, 2021. ↩︎
  6. Headquarters, Department of the Army, “ADP 3-0 Operations,” Army Publishing Directorate, 2019. ↩︎
  7. Peter Wilcox, “The United States National Security Council Needs an Information Warfare Directorate,” The Strategy Bridge, 2019. ↩︎
  8. Ibid. ↩︎
  9. Lucy H. Butler, Toby Prike, and Ullrich K. H. Ecker, “Nudge-based misinformation interventions are effective in information environments with low misinformation prevalence,” PubMed Central, 2024. ↩︎
  10. John J. Heslen, “Neurocognitive hacking: A new capability in cyber conflict?” Politics and the Life Sciences (2020): 87-100. ↩︎
  11. Swathi Nagarajan, “Vaccine Misinformation Part 1: Misinformation Attacks as a Cyber Kill Chain,” NCC Group, November 9, 2021. ↩︎
  12. Linda Slapakova, “Towards an AI-Based Counter-Disinformation Framework,” RAND Corporation, March 29, 2021. ↩︎
  13. Craig Douglas Albert, Ahmed Aleroud, Yufan Yang, Abdullah Melhem, and Josh Rutland, “Twitter Propaganda Operations: Analyzing Sociopolitical Issues in Saudi Arabia,” Social Media + Society (2023): 1-22. ↩︎
  14. Slapakova, “Towards an AI-Based Counter-Disinformation Framework.” ↩︎
  15. Steven T. Smith, Edward K. Kao, Erika D. Mackin, Danielle C. Shah, Olga Simek, and Donald B. Rubin, “Automatic detection of influential actors in disinformation networks,” PNAS, January 7, 2021. ↩︎
  16. Robert J. Ross and Josh Rutland, “A Military of Influencers: The U.S. Army, Social Media, and Winning Narrative Conflicts,” Cyber Defense Review (Fall 2022): 213-225. ↩︎
  17. Lance Y. Hunter, Craig D. Albert, Josh Rutland, Kristen Topping, and Christopher Hennigan, “Artificial intelligence and information warfare in major power states: how the US, China, and Russia are using artificial intelligence in their information warfare and influence operations,” Defense & Security Analysis (March 2024): 235-269. ↩︎
  18. Ross and Rutland, “A Military of Influencers.” ↩︎
  19. Courtney Kube and Ken Dilanian, “Trump approved operation that disabled Russian troll farm during 2018 midterms,” NBC, February 26, 2019. ↩︎
  20. U.S. Army Cyber Command, “Army Cyber Command leaders outline Theater Information Detachment concept,” October 22, 2024. ↩︎
  21. Qiao Liang and Wang Xiangsui, Unrestricted Warfare (Beijing: China’s People’s Liberation Army, 1999). ↩︎
  22. Beena Ammanath, Trustworthy AI (Wiley, March 2022). ↩︎
  23. IBM, “What is explainable AI?”, n.d. ↩︎
  24. U.S. Department of Defense, “DoD Cyber Workforce Framework,” n.d. ↩︎
  25. John S. Hammond, Ralph L Keeney, and Howard Raiffa, “The Hidden Traps in Decision Making,” Harvard Business Review (September-October 1998). ↩︎