Published: March 31, 2026
Technological advances and rising military expenditures in recent years have accelerated the development of Lethal Autonomous Weapon Systems (LAWS). Though the technology is still in its infancy, it has already begun to transform modern warfare. Fully evolved LAWS would select and engage targets precisely and independently, without exposing soldiers to battlefield dangers. A 2025 Congressional Research Service report, Defense Primer: U.S. Policy on LAWS, defines LAWS as “a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.” US Department of Defense Directive 3000.09, Autonomy in Weapon Systems (2023), defines LAWS as systems that, once activated, “can select and engage targets without further intervention by a human operator.” This concept, known as “human out of the loop” or “full autonomy,” involves target selection and engagement based on inputs from artificial intelligence (AI), big data analytics, and sensor-based identification.
According to DataM Intelligence, the global autonomous weapons market reached USD 14.2 billion in 2024 and is expected to grow to USD 33.47 billion by 2032, a compound annual growth rate of 11.39 percent over 2025-2032. Simultaneously, global civil society initiatives are advocating a ban on fully autonomous systems. In October 2012, Human Rights Watch and allied non-governmental organizations launched the Stop Killer Robots campaign, a coalition of over 180 organizations across 65 countries calling for new international law on autonomy in weapon systems to ensure that life-and-death decisions are never delegated to machines.
Concerns have arisen over unsupervised use and the potential for system errors that could cause unintended civilian casualties, escalate conflicts, and threaten global peace and security. The growing integration of autonomous weapon systems into combat has already been highlighted by their reported use in the Ukraine conflict and in Gaza. A February 2025 report by the Foundation for Political, Economic and Social Research, Deadly Algorithms: Destructive Role of Artificial Intelligence in Gaza War, revealed that Israel employed the AI-based systems Lavender and Habsora to identify and attack human targets. The report states that Lavender can approve targets within 20 seconds, often without substantive human review, and that since October 2023 the system has compiled a list of 37,000 individuals labeled as suspected Hamas members without verification of their military profiles.
Since 2014, states parties to the United Nations Convention on Certain Conventional Weapons (UN CCW) have debated the regulation of LAWS. In May 2024, Steve Goose, arms campaign director at Human Rights Watch, warned that “the world is approaching a tipping point for acting on concerns over autonomous weapons systems,” underscoring the urgency of an international legal instrument. On 2 December 2024, the UN General Assembly adopted Resolution A/RES/79/62 on LAWS by 166 votes in favor, 3 against, and 15 abstentions. The resolution marked a decisive step in acknowledging global concerns over autonomous weapon systems, affirmed the applicability of international humanitarian law (IHL), and called for further consultations in 2025. The first UNGA meeting on autonomous weapons, held on 12-13 May 2025 and attended by 96 countries as well as representatives of the International Committee of the Red Cross (ICRC) and civil society, reinforced momentum to prohibit and regulate LAWS. On that occasion, UN Secretary-General António Guterres called for a legally binding instrument banning LAWS by 2026, describing them as “politically unacceptable and morally repugnant.”
Despite global concerns, progress on a legally binding treaty on LAWS remains elusive owing to the divergent strategic interests of major powers. The US continues to resist a new binding framework, arguing that national weapons review mechanisms are adequate and seeking to preserve strategic and technological flexibility. While the US maintains that it does not currently possess LAWS, senior military leaders have acknowledged that Washington may be compelled to develop them if adversaries do so. Russia has opposed any binding treaty, while China supports negotiations within the CCW and the development of norms “when conditions are ripe.” The European Union, in contrast, advocates a legally binding international instrument, emphasizing Meaningful Human Control (MHC) and compliance with IHL. The EU’s approach seeks to differentiate between systems that incorporate human oversight and those that operate without it.
The integration of artificial intelligence into weapon systems also poses a growing challenge to nuclear deterrence and strategic stability. On the sidelines of the Asia-Pacific Economic Cooperation (APEC) Summit in Peru in November 2024, then US President Joe Biden and China’s President Xi Jinping jointly affirmed that decisions on the use of nuclear weapons should remain under human control, recognizing the catastrophic risks of automation in nuclear decision-making. However, as AI rapidly improves surveillance, missile guidance, and targeting systems, it is unclear whether this restraint will hold.
Integrating AI into nuclear forces may destabilize deterrence dynamics by compressing decision-making time and amplifying the risk of algorithmic bias in early-warning systems, raising the threat of false nuclear alarms. Cold War history reminds us that human judgment, central to nuclear stability, has repeatedly averted catastrophe. During the Cuban Missile Crisis, the B-59 submarine incident of 27 October 1962 brought the two superpowers close to a nuclear exchange when a Soviet submarine commander considered launching a nuclear-tipped torpedo under the mistaken belief that hostilities had commenced; the refusal of Vasily Arkhipov to authorize the attack prevented a potential nuclear war. Similarly, in 1983, Stanislav Yevgrafovich Petrov, a lieutenant colonel in the Soviet Air Defense Forces, chose to disregard a false early-warning alert indicating an incoming US nuclear strike, averting a global nuclear disaster. Such decision-making underscores the indispensable role of human rationality in nuclear command-and-control systems.
As LAWS present multifaceted threats to international peace and security, states need to consider negotiating a legally binding instrument that ensures MHC over autonomy in weapon systems. Enhanced transparency, accountability, and rigorous weapons reviews are essential to prevent destabilization and to ensure that technological progress does not outpace the human element in the use of force. Confidence-building measures, such as transparency in military AI, the establishment of international verification mechanisms, and a moratorium on the development and deployment of LAWS, could help mitigate future dangers.
Jawad Ali Shah is a Research Officer at the Center for International Strategic Studies Sindh (CISSS), Pakistan. He holds a BS in International Relations from the University of Sindh, Jamshoro, Pakistan. His research focuses on emerging military technologies and on South Asian nuclear deterrence and strategic stability dynamics. The views are the author’s own.

