Militaries are going autonomous. But will AI lead to new wars? A tour of recent research

The invasion of Ukraine in February 2022 has resulted in hundreds of thousands of casualties and provided a sickening laboratory for the development of the technology of war. Since then, major advances have been made in unmanned drones and, more generally, in lethal autonomous weapon systems (LAWS), defined by their ability to search for and engage targets without a human operator.

Although the conflict has not yet produced the queasy sight of a fully autonomous battlefield, a conversion to fully autonomous forces is being actively pursued, according to assessments by analysts such as Kateryna Bondar of the Center for Strategic and International Studies. "We strive for full autonomy," Mykhailo Fedorov, the deputy prime minister of Ukraine, told the Guardian in a June article. Proof-of-concept demonstrations of basic fully autonomous capabilities have existed for at least several years.

Others have long called for regulations or bans on LAWS. "Human control over the use of force is essential," said United Nations Secretary-General António Guterres at a meeting in May. "We cannot delegate life-or-death decisions to machines." However, substantive, binding regulations have yet to be adopted by any of the nations leading the development of LAWS, as surveyed in a September 2025 book by Matthijs Maas.

Large-scale deployment of LAWS therefore looks increasingly likely, even though researchers like Maas caution against seeing autonomous warfare as inevitable. "The military AI landscape at present is at a crossroads," Maas wrote. Regulations remain a possibility after deployment, or in response to the stigmatization of the technology that deployment might provoke.

Nonetheless, the reality that AI is likely to go to war has driven researchers to expand from a "prevailing preoccupation" with how AI will be used—for example, in the form of LAWS—to whether this use will significantly alter geopolitical norms. This was the intriguing argument made by scholars Toni Erskine and Steven Miller in a January article, as well as in articles in an accompanying issue of the Cambridge Forum on AI: Law and Governance.

Amongst scholars, this shift from seeing LAWS as tools to seeing them as strategic influences has been neither uniform nor entirely new. Research on military AI and LAWS is spread across many sectors of academic study. Nonetheless, it is possible to sketch how and why such a shift has happened, and to explain some of the findings of the new research.

Surprisingly, some scholars have come to somewhat comforting conclusions. For example, in a July 2025 study from the RAND Corporation, the authors assessed that AI is not likely to lead to major new wars. "AI’s net effect may tend toward strengthening rather than eroding international stability," the authors wrote.

These recent studies rely on arguments that deserve further interrogation. But first, it is worth lingering on the broader transition under way in research on AI and LAWS.

Military AI: Not an issue for AI research? 

Shifts in the study of military AI have been brought about, in part, by rapid advances in what AI can do. These advances have naturally been spearheaded by AI technology corporations and by technical AI researchers.

Increasingly, the technical community has embraced the development of AI for military purposes, or has at least ignored military AI's potential problems. This has directly fed the need for other, non-technical researchers to take up the strategic impacts of military AI as a pressing issue.

However, there have been a few notable cases where scientists sought to cast military AI as a problem to be solved in technical research. For example, in 2015, the researcher Roman Yampolskiy of the University of Louisville argued that the most problematic AIs, from the point of view of societal safety, would not be those with mistakes in their design. Rather, they would be those intentionally designed for military use, or "dangerous by design."

Yampolskiy's logic was straightforward. All things being equal, non-military and military AI systems would face similar risks of design mistakes. But military systems would remain inherently unsafe even if every such mistake were avoided. And mistakes in military AI systems would lead to far worse safety problems.

At that time, the area of technical research focused on making AI systems safe, known as AI safety, had produced little published work. Nonetheless, within the emerging field, Yampolskiy noted that the majority of AI safety work focused on AI systems suffering from design flaws, rather than on systems that were dangerous by design.

This tendency to sidestep military AI as a technical problem has persisted to the present. Recent reports that may be taken as indicative of the field, such as the 2026 International AI Safety Report, or The Singapore Consensus on Global AI Safety Research Priorities, released in May 2025, do not directly mention military AI as a primary issue to be addressed in technical research.

Instead, the tendency in AI has been to delegate problems with military AI to other fields. One such problem is the ability of military AI to cause unjust wars. In AI safety, this is most often regarded as a form of misuse or malicious use. And it is often assumed that misuse risks are not addressable within technical research, because the technology is assumed to be unavoidably dual-use.

On the other hand, amongst technical researchers, there has been a growing tendency to propose AI alignment methods that might conflict with military applications. These include proposals to align AI to pluralistic societies, or to legal directives; or to build AI that learns to align itself, for example, by performing ethical learning processes. 

Separately, as reviewed by Erskine and Miller, the largest AI technology companies in the US have embraced the emergence of an "AI arms race." They have done so, for example, by making agreements to build and provide leading models for defense contractors and the US government. These actions contradict their own safety research, which tends to identify loss-of-control risks as highly problematic—risks that would be severely amplified by military uses.

Broader inquiry into military AI uses

The actions and inactions of AI technologists have led military AI to be primarily taken up as an issue outside of technical AI research. Disciplines where it has received the most significant attention include political science, international relations, law, and philosophy.

Each of these areas has engaged differently with the subject. In his book Architectures of Global AI Governance, Maas provides a historical perspective: "Questions about robotic weapons systems were first raised in civil society as early as the mid-’00s. However, these issues ultimately did not gain significant uptake then, as the technology was at the time seen as too futuristic." 

In philosophy, AI gained attention in the early 2000s as an emerging technology that might be particularly impactful. In 2001, the philosopher Nick Bostrom argued that AI posed extinction-level risks deserving of greater study, launching a research program on global catastrophic risks (GCRs) that would become influential. However, as reviewed in an article by Maas from 2023, "With only a few exceptions, existing GCR research has paid relatively little attention to the ways in which military uses of AI could result in catastrophic risk."

Military AI has perhaps received the greatest attention in the domains of political science and international relations. Already by 2017, political scientists were arguing that AI was set to play a critical role in the future of war. One example of such scholarship was a report produced by two authors from the Belfer Center in collaboration with the US federal agency IARPA. 

In that report, the authors argued that AI systems—especially LAWS—would inevitably become the dominant tools of warfare, being both faster and cheaper than human combatants. "The applications of AI to warfare and espionage are likely to be as irresistible as aircraft," they wrote. However, they focused mainly on whether and how AI would be deployed, rather than on how this deployment might affect geopolitical or national security calculations.

Within these fields, research into military AI has advanced at a steady tempo. But, as first argued by Erskine and Miller in an earlier paper, from May 2024, this research has been overly focused on narrow questions of the use of AI in war.

As they wrote: "The focus of academics and policy makers has been overwhelmingly directed towards the use of AI-enabled systems in the conduct of war." They argued that a shift was needed, "from the decisions of soldiers involved in selecting and engaging targets" to "state-level decision making on the very initiation of war."

The authors further argued that AI's impacts on strategy, or the resort to war, were arguably of greatest importance. On this point, they quoted an earlier paper by other authors: "If the possibility that a machine might be given the power to ‘decide’ to kill a single enemy soldier is fraught with ethical and legal debates, what are we to make of the possibility that a machine could ultimately determine whether a nation goes to war?"

Erskine and Miller proposed splitting these neglected impacts into two categories: first, the ways that AI could inform decisions to go to war, for example, by providing recommendations and predictions for human leaders; and second, the ways that AI could itself directly decide to go to war, for example, by automatically enacting defensive operations in response to incoming cyberattacks. The latter category naturally includes the possibility of humans losing control of military AI.

A shift to a strategic focus

Given the obvious importance of the strategic impacts of AI, it is worth asking why they did not receive significant attention much earlier.

In their most recent paper, Erskine and Miller attribute this deficit to several factors. First, they describe AI technology companies as disrupting conventional strategic processes. By offering a technology that can choose to go to war, or strongly influence the decisions of those who do, the AI corporations have become akin to nation-state actors themselves. This is a significant novelty for national security processes, they argue, which has forced researchers to play catch-up.

More pointedly, Erskine and Miller claim that there has been a blind spot on the topic of strategic impacts in the domain of international relations. As an example, they consider a multi-stakeholder meeting, the Responsible AI in the Military Domain (REAIM) summit, held in February 2023 at The Hague. A significant outcome of this meeting was a declaration signed in November 2023 by 51 countries, including the US, reflecting a notable degree of consensus.

However, they describe the Declaration from the first REAIM summit as almost entirely focused on military AI as a form of "weapons system," with little to no reference to the strategic impacts of such weapons. Further, they note that these capabilities are framed as something to be regulated by international law on the conduct of war, such as international humanitarian law, rather than by law that applies to the resort to war, such as the United Nations Charter.

They note several exceptions where the impacts of AI on strategy have received greater focus. One of these is what the authors describe as the emerging "AI-nuclear weapons nexus," where impacts have been more extensively addressed in both scholarship and diplomacy.

For example, at a meeting between the Presidents of the US and China in November 2024, the leaders affirmed that decisions to wage nuclear war will remain in human hands for the foreseeable future. Erskine and Miller describe a "rapidly emerging literature on AI and nuclear weapons" as seeking to make progress on related problems.

AI as revolutionary—but not destabilizing?

The last year has seen numerous publications that seek to address questions about strategic impacts from the automation of war. In a number of cases, authors argue that even if AI proves a revolutionary technology for fighting wars, it need not disrupt the geopolitical balance of power. 

One notable study comes from Zachary Burdette and colleagues at RAND, who focused on arguably the most salient question: whether AI will lead to major new wars. Their conclusion was negative. "The risk that AI will directly trigger a major war appears low," the authors wrote.

They based their assessment on a careful analysis of what happens when AI is inserted into the existing 'logic' of going to war. Let us unpack this a little further. 

Between great powers, one nation goes to war against another only when the balance of incentives weighs heavily in its favor. For example, if one great power gained a significant first-mover advantage in the development of LAWS, it might consider going to war. However, it would still be strongly disincentivized by other factors. A rival great power would possess a catastrophic retaliatory nuclear capability, and a rival could be expected to catch up quickly in military AI development, thereby neutralizing any first-mover advantage.

For these reasons, even if AI provides a revolutionary warfighting capability, for example, through "hyper-intelligent drone swarms" or extremely low-cost fighting forces, the logic of deploying those forces (such as it is) can still weigh strongly against the initiation of new wars.
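
To make the shape of this deterrence argument concrete, here is a minimal toy sketch in Python. It is not drawn from the RAND study; the expected-value structure simply mirrors the trade-off described above, and all of the numbers are hypothetical.

```python
# A purely illustrative toy model of the "logic of going to war" described above.
# The structure (first-mover gains vs. retaliation and catch-up) mirrors the
# argument in the text; all numbers are hypothetical, not taken from the RAND study.

def expected_value_of_attack(first_mover_gain, p_retaliation, retaliation_cost):
    """Expected payoff of striking now with a temporary AI advantage."""
    return first_mover_gain - p_retaliation * retaliation_cost

def expected_value_of_waiting(first_mover_gain, catch_up_rate):
    """Expected payoff of not attacking: the advantage simply erodes."""
    return first_mover_gain * (1 - catch_up_rate)

if __name__ == "__main__":
    gain = 10.0               # hypothetical value of a LAWS first-mover advantage
    p_retaliation = 0.3       # chance a rival's nuclear deterrent is used
    retaliation_cost = 100.0  # catastrophic cost of nuclear retaliation
    catch_up_rate = 0.8       # fraction of the advantage a rival erases by catching up

    attack = expected_value_of_attack(gain, p_retaliation, retaliation_cost)
    wait = expected_value_of_waiting(gain, catch_up_rate)
    print(f"Attack: {attack:+.1f}, Wait: {wait:+.1f}")
    # With these made-up values, attacking scores -20.0 and waiting +2.0:
    # even a real AI advantage does not overcome the disincentives.
```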

The authors briefly survey other problematic scenarios, too, such as the risk of AI itself causing wars. But they dismiss this possibility without justification. They also do not consider the possibility that autonomous forces might be deployed by a country against its own civilian population.

While the analysis of this single study is naturally limited in scope, other researchers have also expressed skepticism, in other writings, that military AI will prove a highly destabilizing factor for global peace and security.

In a November article, Alex Weisiger of the University of Pennsylvania directly considers how the development of artificial general intelligence (AGI) might lead to the outbreak of new wars. He envisions AGI as enabling a full conversion to autonomous fighting forces that are both faster and more "intelligent" than humans.

Weisiger argues that such advantages would not be decisive, first because of the very nature of war, which he describes as a highly competitive process that unfolds as an exercise in game theory rather than as a fully deterministic calculation. For that reason, he argues, an aggressor—even with an AGI advantage—would open itself to significant risks of losses, which would strongly discourage it from going to war. Weisiger does not consider the possibility that human personnel losses might be avoided by replacing personnel entirely with robots and drones.
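
As a rough illustration of this uncertainty argument (a toy sketch, not Weisiger's own model), consider a Monte Carlo simulation in which an aggressor's AGI-enabled forces win each engagement with high probability, yet the campaign as a whole can still fail. The campaign structure, win probability, and loss tolerance below are all hypothetical.

```python
import random

# Toy illustration of the point that war is not a deterministic calculation:
# even an aggressor that wins most engagements faces a meaningful chance of a
# costly overall defeat. Not Weisiger's model; all parameters are hypothetical.

def campaign_fails(p_win_engagement=0.7, engagements=5, losses_tolerated=2, rng=random):
    """Return True if the aggressor loses more engagements than it can tolerate."""
    losses = sum(1 for _ in range(engagements) if rng.random() > p_win_engagement)
    return losses > losses_tolerated

def estimate_failure_rate(trials=100_000):
    rng = random.Random(0)  # fixed seed for reproducibility
    failures = sum(campaign_fails(rng=rng) for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    # With a 70% chance of winning each of 5 engagements and a tolerance of 2
    # losses, roughly 16% of simulated campaigns still end in failure.
    print(f"Estimated chance of a failed campaign: {estimate_failure_rate():.1%}")
```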

Weisiger notes that an AGI-equipped aggressor would also possess significant speed advantages, allowing it to "outgun" its rivals. However, being outgunned is a common feature of contemporary international politics, and he concludes that this would therefore not be a disruptive factor. "The Iraq War over twenty years ago already provided the lesson that militaries unprepared for modern battle should not attempt to fight a conventional war against an elite opponent," he wrote.

In other words, in a world of uneven military AGI advantages, we do not necessarily see more wars. Rather, surrender more often becomes the only option.

A theme common to both articles is that the disruptiveness of LAWS may be constrained, in the coming years, by the timeless logic of going to war. However, neither study focuses intensively on more transformative scenarios, such as the possibility of AGI becoming a distinctive strategic actor in its own right, or of military AGI becoming a major new source of global catastrophic disasters.

Still, it may comfort some to know that AI will not necessarily bring a firestorm to their doorstep, especially if that doorstep is not already located in a conflict zone.

Author's note: No AI was used in writing or editing this article.