Military AI optimists predict that future AI will assist with, or even make, command decisions. Enough of this optimism is shared by government decision-makers to have produced a recent crop of "AI strategy" documents and even planned systems, such as the US Joint All-Domain Command and Control (JADC2) system. Rather than heralding an era of canny and rapid machine commanders or advisors to human leaders, AI technologies may at best amount to something resembling the satirical 'model of a modern Major-General' in Gilbert and Sullivan's 1879 comic opera The Pirates of Penzance: a narrow intelligence that possesses empirical knowledge yet is woefully ill-equipped in the skills that command in war actually demands.
This paper argues that, at the fundamental levels of logic and theory, these predictions are dangerously wrong. The nature of war means command decisions will always rely on abductive logic. Commanders, even at the supposedly "simpler" tactical levels of warfare, constantly face unprecedented situations, compounded by conditions of enormous uncertainty. Indeed, Clausewitz and other military theorists urge commanders to embrace this condition and turn it to their advantage against a foe; doing so is a mark of military genius.
Meanwhile, machine learning (or 'narrow AI') relies on inductive logic. While some AI practitioners claim their models can perform abductive reasoning, we follow Erik Larson in arguing that the capabilities they describe are not true abduction, because they lack the essential conjectural component. Abduction cannot be reduced to induction (or indeed to deduction). Types of logic are not interchangeable, so AI's limited utility in command, both tactical and strategic, is not something that more data or more computing power can solve. It follows that many defense and government leaders are proceeding on a false view of the nature of AI and of war itself, and military AI policies built on such dubious theories and observations are unlikely to achieve what the optimists predict.