An engraving of a centaur by the Italian Renaissance artist Marcantonio Raimondi. In modern-day debates over the role of AI in warfighting, the idea of a centaur has been used positively to describe machines assisting a human operator, with the human retaining moral and cognitive agency. Credit: Metropolitan Museum of Art.
Machines are helping humans think on the modern battlefield, but are they also shaping how their human operators think? In Ukraine and Gaza, AI-enabled decision support tools are enabling warfighting at astonishing speeds by automating parts of human cognition. Instead of hundreds of humans spending thousands of hours reading and analyzing reports, AI-enabled decision support tools sift through large amounts of intelligence data – reports, video, and signals information – to identify and recommend targets for action.
But reporting from the war in Gaza suggests that maintaining human control over AI-enabled decision support tools may be more complex than keeping a human on the loop (i.e., allowing human operators the opportunity to override AI targeting decisions). Human behavior when teaming with AI-enabled decision support systems raises difficult questions about whether humans can avoid dependence in an environment of sophisticated cognitive automation. We consider some of those questions and provide a framework for how to think about maintaining human cognitive autonomy in the context of AI-enabled warfare.
We argue that optimizing AI-enabled decision support tools for human cognitive autonomy in warfare would ensure that outputs are aligned with human-defined ends while allowing maximum benefit from machine automation.
As AI-enabled machines take on more cognitive tasks from humans, should we be worried about becoming cognitively dependent on them? Should our goal be to maintain full cognitive autonomy? If we accept some cognitive dependence on machines because of real benefits, how can we know when we have become “too” dependent? How do we build a machine that improves outcomes but prevents dependence and preserves cognitive autonomy?
Centaur Or Minotaur?
In 2016, Paul Scharre suggested future human-machine teaming would take the form of a centaur: a human brain enabled by machine power. Humans guide and control the machines that increasingly execute complex warfighting tasks at beyond-human speed. In centaur warfare, the human retains the roles of essential operator and moral agent, leaving the machine to help where feasible and appropriate.1
In 2023, Sparrow and Henschke countered that AI-enabled cognition tools will become so good that armies will have AI generals as the brains making decisions that humans execute.2 Instead of humans on the loop, we will have machines on the loop. Instead of a centaur, human-machine teams become minotaurs: a machine brain enabled by human power.3
The problem is that the way those systems are designed, and the context in which they operate, increase human cognitive dependence on them, meaning humans may unintentionally give up meaningful control of AI-enabled systems.
Cognitive Dependence In War
Militaries are racing to integrate AI into decision-making systems to help humans make better decisions faster in war.4 These systems are called AI-enabled decision support tools. With these tools, machines sift through massive amounts of intelligence data to identify, select, and recommend targets for action, sometimes to lethal effect. Future cognitive tasks could include directing the actions of human forces in combat and coordinating multiple lethal systems to destroy targets.
AI-enabled decision support tools, like the U.S. military’s Maven Smart System, have been used in Ukraine and in the Middle East to help identify, track, and prioritize targets.5 Similar systems have been used to more dramatic effect in the Israel Defense Forces’ (IDF) operations in Gaza. Reporting on the use of these systems suggests troubling emerging behavior in the human operators who are supposed to oversee these AI-enabled tools.
Use of AI-enabled decision support tools in Gaza suggests that under certain circumstances, human operators can become overly dependent on AI-enabled systems, and that dependency can result in actions that may violate policy and perhaps international humanitarian law. Reports have centered primarily on two IDF systems, Gospel and Lavender. Both systems scrape massive data repositories to identify targets for military action. Gospel predicts which structures are used by Hamas, while Lavender predicts who is a Hamas operative and where they might be located. That information is then used to strike those targets as part of the larger military operation.
Both tools have increased the speed and volume of IDF targeting operations in Gaza.6 Prior to the introduction of AI-enabled decision support tools, the IDF targeting process required 20 intelligence officers to produce 50-100 targets in 300 days; Gospel produced 200 targets in 10-12 days.7 The Lavender system identified 37,000 suspected militants in the first few weeks of the conflict and is reportedly at least 90 percent accurate.8
Some operators acknowledged that they preferred to follow the machine because it was statistically accurate and “…the machine did it coldly. And that made it easier.”9 Human operators took only 20 seconds, on average, to approve AI targeting decisions.10 In the presence of machine speed and accuracy, and under cognitive pressure to do more, system operators stopped thinking and let the machines think for them. Why? Because cognitive dependence is inherently about who or what controls the conditions and actions.
Control is the capacity to exert influence over internal states, external states, and desired outcomes.11 However, the amount of control we want can depend on the context. In a low-stakes situation (e.g., choosing which cereal to eat when you are late for work), you will accept less control because most available choices are reasonable and the risk of accepting a low-preference selection is minimal. In a high-risk situation (e.g., targeting adversaries in a village full of noncombatants), you will prefer more control to mitigate risk, at greater cost in speed and cognition.
Control over some parts of a situation may also be more necessary than others. Control over defining the desired state may be critical, while control over the process to achieve that state may be a place to assume risk in exchange for speed. We may be willing to make this trade-off on the assumption that we retain control over validating that the output matches the desired state.
The examples from Gaza suggest that humans may have traded direct control of parts of the targeting process for speed and cognitive efficiency, while still maintaining the illusion of control through a human on the loop.
Human operators must be able to maintain control over the outcomes in war. To do that, they must manage their dependence on machines so that they retain sufficient cognitive autonomy to guide the war towards human-defined outcomes. Understanding the trade-off between cognitive dependence and autonomy can better ensure human control of warfare when humans employ AI-enabled decision support tools.
Cognitive Autonomy for Meaningful Control
We define cognitive autonomy as the ability to perform the mental processes of acquiring, storing, manipulating, and retrieving information independently, in support of decision-making.12 Cognitive autonomy exists on a spectrum; the opposing states at either end, complete autonomy and complete dependence, are neither achievable nor desirable.
We limit our use of the word autonomy to cognition in a narrow context.13 Loss of control does not necessarily mean that the situation is deteriorating; rather, the nature of that control is changing. In a situation where a human and a machine both exert influence, control leans toward being human-defined or machine-defined, depending on which exerts the greater influence. The difficulty is in determining how much autonomy is sufficient in a given context.
Evaluating Cognitive Autonomy and Dependence
Applying the idea of cognitive autonomy to human-machine interaction is fairly novel, but developmental psychologists have long used the concept to study how humans seek increasing control over their environment as they age into adulthood.14 They use a framework called Cognitive Autonomy and Self-Evaluation to determine when children become cognitively autonomous from their parents and whether persons with mental disabilities can function on their own.15
We suggest a framework of six categories for evaluating cognitive autonomy and dependence of humans on a given AI-enabled decision support tool: a) cognitive capacity; b) conceiving a state of being; c) applying value; d) forecasting outcomes; e) understanding influence; and f) controlling choice architecture. We refer to it as the Cognitive Autonomy Variable (CAV) framework.
A human operator’s cognitive autonomy or dependence is a function of six variables. The composite relationship among those functions suggests whether the human is becoming more dependent on, or more autonomous from, the decision support tool. (A minimal scoring sketch in code follows the variable descriptions below.)
Cognitive Autonomy Variables
Cognitive capacity: Not to be confused with a measure of intelligence, this variable considers the resources available for cognition. Greater cognitive capacity enables greater capacity in the other variables. In each of the other variables, the human is either more or less reliant on the machine for that function.
Conceiving a state of being: This category captures human creativity and the ability to perceive relationships among pieces of information in order to imagine and describe a state that differs from the one currently in place.
The heart of any war is the belief that the current state of being is so undesirable that lethal force on a national scale has become necessary. War is a function of human conceptions that things should be a certain way. Whether this conception is machine-defined or human-defined will indicate how dependent humans have become on the tool.
Applying value: Value is where human cognition determines desire and preferences. When conceiving of new states of being, the human applies value to determine which state is more desirable than another.
Applying value to conception of a desired state is closely associated with why humans do what they do. In warfare the ‘why’ must be well-understood and tied to a larger human-centered purpose. Our ability to hold on to the purpose behind a decision will help us ensure that a system is on track to accomplish the correct objective function we have defined. When a deviation in the process leads to a different end state, the ability to reverse the undesired state is crucial.
Forecasting outcomes: Forecasting, the ability to predict outcomes from decisions and values, is essential to making choices that match desired states. Without forecasting, there is only random choice.
Machines are starting to excel at considering far larger amounts of information than humans can when forecasting future outcomes. This makes it useful to employ machines for predictive analytics to down-select options in favor of those most likely to yield the desired end state.
Understanding influence: There are external and internal influences on cognition. Understanding the nature and prevalence of influences in any given context allows humans to preserve as much cognitive autonomy as they can: by tempering or enabling influence, by accepting some influences over others, and by understanding (to the extent possible) how influences are affecting cognition.
In warfare, not only will there be neutral influences from the information environment, there will also be influences designed to sway decision-making. An adversary may attempt to deceive by confusing data collection and poisoning collected data, or a decision support tool may attempt to influence its operator to meet a misaligned goal.
Controlling choice architecture: Choice architecture is the array of available choices.16 It is the structuring of the decision environment to guide, constrain, and influence decisions. A choice cannot be made if it is not available, and the volume and types of choices available can shift behavior toward specific outcomes.17
One’s ownership of a system is ultimately reduced as automation takes over reasoning rather than strictly automating the process by which an action is executed. This becomes a concern especially when systems control choice architecture: if a system controls the amount and variety of choices presented to us, it can steer us toward choices we might not have made independently.
The reality in high-stakes decision-making situations is that while the pressure often reduces cognitive capacity, the gravity of the decisions demands a higher level of cognition. On the one hand, there’s a need to make decisions quickly to respond effectively to the situation. On the other, there’s a competing need to consider each decision, given the potentially severe consequences of any mistakes.
Simplifying this architecture can potentially speed up decision-making but might also lead to oversights or errors. The dilemma becomes whether to delegate critical, high-stakes decisions to algorithms that can act faster but may lack deep understanding, or to maintain human oversight, accepting slower decision-making processes in exchange for more contextually-aware outcomes.
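To make the framework concrete, here is a minimal sketch, in Python, of how the six variables might be combined into a single gauge. Everything in it is an illustrative assumption rather than a fielded design: the CAVScores name, the 0-to-1 scoring, and the equal weighting would all need validated measurement instruments behind them.

```python
from dataclasses import dataclass

@dataclass
class CAVScores:
    """Hypothetical 0-1 scores for each Cognitive Autonomy Variable,
    where 1.0 means the function is fully human-performed and 0.0
    means it has been fully delegated to the machine."""
    cognitive_capacity: float
    conceiving_state: float
    applying_value: float
    forecasting_outcomes: float
    understanding_influence: float
    choice_architecture: float

    def autonomy_index(self) -> float:
        """Average the six variables into one gauge. A weighted average
        could instead privilege the variables (e.g., applying value)
        that matter most in a given operational context."""
        scores = (
            self.cognitive_capacity,
            self.conceiving_state,
            self.applying_value,
            self.forecasting_outcomes,
            self.understanding_influence,
            self.choice_architecture,
        )
        return sum(scores) / len(scores)

# Example: an operator who has delegated forecasting and choice
# architecture to the tool but retains value judgments.
operator = CAVScores(0.8, 0.9, 0.9, 0.3, 0.6, 0.2)
print(f"Autonomy index: {operator.autonomy_index():.2f}")  # 0.62
```

An equal-weight average is only the simplest starting point; a usable tool would weight the variables by context, likely privileging applying value and controlling choice architecture in high-stakes targeting.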
Applying The CAV Framework
We offer three principles to consider in decision support tool design, along with suggestions for what practical implementation might look like:
First, the design of any AI-enabled decision support tool should consider how humans will interact with machines that can replace some human cognition—particularly how cognitive bias will manifest when humans interact with machines in the expected context.
Second, the ratio of cognitive autonomy to dependence should be recognizable and measurable: We do not intend this as a binary marker but rather as a gauge to inform operators when their interactions with the tool indicate that reliance on the tool is increasing and may exceed desired thresholds.
Third, cognitive autonomy should be optimizable by human control of the tool. The human operator must be able to modify system functions when cognitive dependence exceeds desired limits and more cognitive autonomy is optimal for the context.
These principles could be implemented through tracking systems in the machine that translate into awareness for the operator. Activity logs and performance metrics that compare present user behavior to historic behavior might alert an operator that they have started to rely on the tool much more than they have in the past.
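As a minimal sketch of what such a log-based gauge could look like, assume the tool records how long an operator spends reviewing each recommendation; the function, data, and threshold below are all illustrative assumptions, not a fielded design.

```python
from statistics import mean

def reliance_rising(historic_times: list[float],
                    recent_times: list[float],
                    threshold: float = 0.5) -> bool:
    """Flag rising reliance when the operator's recent average review
    time per AI recommendation drops well below their own historic
    baseline. `threshold` is the fraction of the baseline below which
    we warn; 0.5 flags reviews taking less than half the usual time."""
    return mean(recent_times) < threshold * mean(historic_times)

# Example: an operator who once spent roughly five minutes per target
# now spends about 20 seconds, echoing the reporting from Gaza.
if reliance_rising([280, 310, 295, 330], [22, 18, 25, 19]):
    print("Warning: review time far below baseline; reliance on the tool is rising.")
```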
The system could also communicate uncertainty clearly, in ways that encourage reflection in the operator. In the case of target identification, uncertainty could be communicated by modality rather than as a composite: confidence from full-motion video analysis is 80 percent, from the electromagnetic spectrum 50 percent, and from text sources 60 percent, for instance. This interface would at least make the human fully aware of the limitations in the system and prompt reflection on what those numbers mean.
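A sketch of what that modality-level display might look like, using the illustrative numbers above (the function name and modality labels are assumptions):

```python
def report_confidence(by_modality: dict[str, float]) -> str:
    """Render per-modality confidence rather than one composite score,
    so the operator can see exactly where the assessment is weak."""
    lines = [f"  {source}: {conf:.0%}" for source, conf in by_modality.items()]
    return "Confidence by source:\n" + "\n".join(lines)

print(report_confidence({
    "full-motion video": 0.80,        # strongest corroboration
    "electromagnetic spectrum": 0.50, # weakest; invites operator scrutiny
    "text sources": 0.60,
}))
```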
The user interface could also be constructed to encourage engagement and cooperation with the tool, rather than simple input and output. An AI-enabled decision support tool might recommend a target by asking: “Is target #2 with identification confidence levels x and y suitable for action?” This could have important implications for how the operator views their interaction with the tool and their responsibilities.
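In code, that interaction pattern might look like the following sketch (the function and its wording are hypothetical): the tool poses its recommendation as a question and requires a typed judgment rather than a one-keystroke approval.

```python
def confirm_target(target_id: int, video_conf: float, text_conf: float) -> bool:
    """Pose the recommendation as a question demanding an explicit,
    typed judgment from the operator, with deferral as the default."""
    prompt = (
        f"Is target #{target_id} with identification confidence levels "
        f"{video_conf:.0%} (video) and {text_conf:.0%} (text) suitable "
        f"for action? Type 'yes' to approve, anything else to defer: "
    )
    return input(prompt).strip().lower() == "yes"

# The operator must affirmatively type approval; anything else defers.
if confirm_target(2, 0.80, 0.60):
    print("Target approved by operator.")
else:
    print("Target deferred for further review.")
```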
There are many other ways these tools could be designed to encourage cognitive autonomy when required and allow some dependence when acceptable. Developers should start considering concepts now, but further research must be done to develop this CAV system into a usable tool—particularly if the goal is to make a useful testing and evaluation framework that can validate appropriate human judgment in operational systems.
References
1 Paul Scharre, “Centaur Warfighting: The False Choice of Humans vs. Automation,” Temple International and Comparative Law Journal 30, no. 1 (2016).
2 Robert J. Sparrow and Adam Henschke, “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teams,” The U.S. Army War College Quarterly: Parameters 53, no. 1 (2023).
3 Heather M. Roff and Richard Moyes, “Meaningful Human Control, Artificial Intelligence, and Autonomous Weapons,” Briefing Paper for Delegates (Convention on Certain Conventional Weapons, Meeting of Experts on Lethal Autonomous Weapons, April 11-15, 2016); “DoD Directive 3000.09, Autonomy in Weapons Systems,” Office of the Under Secretary of Defense for Policy, U.S. Department of Defense, January 25, 2023.
4 David E. Sanger, “In Ukraine, New American Technology Won the Day. Until It Was Overwhelmed,” New York Times, April 25, 2024.
5 “DoD Directive 3000.09,” U.S. Department of Defense.
6 Geoff Brumfiel, “Israel is Using an AI System to Find Targets in Gaza. Experts Say It’s Just the Start,” NPR, December 14, 2023.
7 Ibid.
8 Yuval Abraham, “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza,” +972 Magazine, April 3, 2024; see also “The IDF’s Use of Data Technologies in Intelligence Processing,” Israel Defense Forces, June 18, 2024.
9 Bethan McKernan and Harry Davies, “‘The Machine Did It Coldly’: Israel Used AI to Identify 37,000 Hamas Targets,” Guardian, April 3, 2024.
10 Abraham, “Lavender,” 2024.
11 Lauren A. Leotti, Sheena S. Iyengar, and Kevin N. Ochsner, “Born to Choose: The Origins and Value of the Need for Control,” Trends in Cognitive Sciences 14, no. 10 (2010).
12 “What is Cognition?” Cambridge Cognition, August 19, 2024.
13 John Christman, “Autonomy in Moral and Political Philosophy,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta (Stanford, Calif.: Metaphysics Research Lab, 2020); Troy E. Beckert, “Cognitive Autonomy and Self-Evaluation in Adolescence: A Conceptual Investigation and Instrument Development,” North American Journal of Psychology 9, no. 3 (2007).
14 Ibid.
15 Evan F. Risko and Sam J. Gilbert, “Cognitive Offloading,” Trends in Cognitive Sciences 20, no. 9 (2016).
16 Richard V. Adkisson, review of Nudge: Improving Decisions About Health, Wealth, and Happiness, by Richard H. Thaler and Cass R. Sunstein, Social Science Journal 45, no. 4 (December 2008).
17 Ibid.

