ESPIRIDI Podcast
Strategic Policy Framework for Regulating Lethal Autonomous Weapons Systems (LAWS)

A Substack Policy Brief by ESPIRIDI

Policy Recommendations by: Bodil Valero, Robi Sen, Fabrizio Degni, Neha Natesh & Lynn Frederick Dsouza - G100 Security & Defence Wing x WICCI National Aviation Council - LAWS Team (Submitted to the SIPRI-UNODA LAWS Team).

The global security architecture is entering a decisive inflection point. The rapid development of Lethal Autonomous Weapons Systems (LAWS) has outpaced the normative and legal regimes designed to govern armed conflict. This Strategic Policy Framework advances a clear thesis: voluntary norms are no longer sufficient. By 2026, the international community must transition toward legally binding multilateral treaty mechanisms to regulate, constrain, and, where necessary, prohibit autonomous lethal decision-making.

From Normative Ambiguity to Binding Law

Existing discussions under the United Nations framework and the Convention on Certain Conventional Weapons have generated valuable dialogue. However, consensus-based deliberation has not produced enforceable guardrails. ESPIRIDI argues that the absence of binding obligations risks creating regulatory arbitrage, where states pursue competitive autonomy without uniform compliance thresholds.

The principle at stake is simple but foundational: meaningful human control must remain embedded in all lethal decision cycles. Without this, the diffusion of responsibility between programmer, commander, manufacturer, and machine produces what we define as algorithmic impunity — a structural accountability vacuum incompatible with international law.

Preserving Compliance with International Humanitarian Law

The core legal benchmark remains International Humanitarian Law. LAWS must demonstrably comply with the principles of distinction, proportionality, necessity, and precaution. Systems that independently select and engage targets without contextual human judgment risk violating these doctrines, particularly in complex civilian-dense environments.

The framework proposes:

  • Codified requirements for human-in-the-loop or human-on-the-loop control in all lethal engagements

  • Mandatory Article 36-style weapons reviews with expanded AI audit criteria

  • Independent technical verification and multilateral inspection protocols

These measures move beyond declaratory commitments toward verifiable compliance.

Gendered Risk and Algorithmic Bias

A critical dimension often marginalised in security discourse is the gendered impact of AI-enabled targeting. Empirical evidence across civilian AI applications demonstrates that skewed datasets generate discriminatory error rates. When transposed into military systems, such biases can produce catastrophic misidentification of women and children as lawful targets, undermining protected-person status under IHL.

ESPIRIDI calls for:

  • Mandatory dataset transparency disclosures

  • Gender-disaggregated validation testing

  • Bias impact assessments embedded within procurement pipelines

  • Inclusion of women and Global South technologists in weapons review boards

Security innovation must not replicate structural inequities in digital form.

Technical Safeguards Against Escalatory Failure

Autonomous systems deployed in contested environments face degraded communications, spoofing, and electronic warfare blackouts. In such scenarios, unbounded autonomy increases escalation risk.

Accordingly, ESPIRIDI recommends mandatory technical safeguards, including:

  • Hard-coded kill-switch mechanisms with human override authority

  • Immutable forensic logging for post-strike accountability

  • Fail-safe reversion to non-lethal or dormant states during signal loss

  • Strict cyber-resilience standards to prevent adversarial manipulation

These safeguards must be treaty-mandated, not manufacturer-optional.
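To make the interplay of these safeguards concrete, the sketch below models a fail-safe controller in Python. It is purely illustrative, not a description of any fielded system: the class `SafeguardController`, its heartbeat-timeout parameter, and the use of a SHA-256 hash chain are hypothetical choices, chosen to show how reversion-to-dormant on signal loss, a human override (kill switch), and tamper-evident forensic logging might interlock.

```python
import hashlib
import json
import time
from enum import Enum, auto


class Mode(Enum):
    ACTIVE = auto()
    DORMANT = auto()  # fail-safe, non-lethal state


class SafeguardController:
    """Illustrative sketch: reverts to DORMANT when the command link is
    lost and keeps a hash-chained (tamper-evident) event log."""

    def __init__(self, link_timeout_s: float = 5.0):
        self.mode = Mode.ACTIVE
        self.link_timeout_s = link_timeout_s
        self.last_heartbeat = time.monotonic()
        self.log = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def _record(self, event):
        # Each entry commits to the previous entry's hash, so any later
        # alteration of the log breaks the chain and is detectable.
        entry = {"event": event, "prev": self._prev_hash}
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._prev_hash
        self.log.append(entry)

    def heartbeat(self):
        # Called whenever a valid command-link signal is received.
        self.last_heartbeat = time.monotonic()

    def human_override(self):
        # Kill switch: an operator forces the system into the safe state.
        self.mode = Mode.DORMANT
        self._record("human_override:kill_switch")

    def tick(self):
        # Periodic check: signal loss triggers fail-safe reversion.
        if time.monotonic() - self.last_heartbeat > self.link_timeout_s:
            if self.mode is not Mode.DORMANT:
                self.mode = Mode.DORMANT
                self._record("signal_loss:revert_dormant")
        return self.mode

    def verify_log(self):
        # Recompute the chain; returns False if any entry was tampered with.
        prev = "0" * 64
        for entry in self.log:
            body = {"event": entry["event"], "prev": entry["prev"]}
            h = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or entry["hash"] != h:
                return False
            prev = entry["hash"]
        return True
```

The design point the sketch illustrates is the treaty-relevant one: the safe state is the default the system falls back to on any loss of human connection, and the log is verifiable by a third party after the fact rather than trusted on the manufacturer's word.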

Prohibition of Biometric Targeting

Perhaps the most urgent normative boundary concerns biometric targeting. The use of facial recognition or behavioural biometrics to autonomously identify and eliminate individuals erodes human dignity and due process at scale.

ESPIRIDI advocates for a total international prohibition on biometric targeting in weapons systems. Automation must never convert identity into a kill variable.

The 2026 Imperative

The regulatory window is narrowing. By 2026, technological maturity may outstrip diplomatic capacity. A binding treaty framework is no longer aspirational — it is strategically necessary to prevent destabilising autonomy races and systemic erosion of accountability.

ESPIRIDI’s position is pragmatic, not reactionary. We recognise the dual-use nature of AI and its legitimate defensive applications. However, lethal autonomy without enforceable guardrails threatens to recalibrate warfare beyond human judgment and legal oversight.

The future of armed conflict must remain governed by law, not code alone.

#ESPIRIDI #MDISFRC #G100 #Security #Defence #WICCINAC #LAWS #AutonomousWeapons #MeaningfulHumanControl #InternationalHumanitarianLaw #AlgorithmicAccountability #AIGovernance #DefensePolicy #HumanRightsInAI #StopBiometricTargeting #GlobalSecurity #TechEthics #Treaty2026 #WomenPeaceSecurity #ResponsibleAI #Multilateralism

For more information please contact: Lynn Frederick Dsouza, Founder & Director — ESPIRIDI, Email: lynn.dsouza@espiridi.com or visit espiridi.com
