Op-Ed: Technologies That Kill and the Companies That Design Them

In this blog post for the Free University of Amsterdam, PLATFORM WARS' Marijn Hoijtink and Martine Jaarsma explore how the design and development of military AI systems influence their compliance with international humanitarian law.

Marijn Hoijtink, Martine Jaarsma

July 3, 2025

In a new blog post for the Free University of Amsterdam, Marijn Hoijtink and Martine Jaarsma examine the role that the design processes of military technologies within private corporations play in legal analyses of military AI.

As wars in Ukraine and Gaza unfold, weaponized artificial intelligence (AI) has become a reality, transforming the nature of modern warfare. AI-powered systems, such as drones and algorithmic targeting tools like Lavender, are accelerating the pace of attacks and increasing civilian harm.

Legal scholars have begun examining AI under international humanitarian law (IHL), but much of this focus remains on deployment. While deployment remains important, Hoijtink and Jaarsma argue that we must also look further upstream at the design processes, organizational cultures, and private sector involvement that shape these technologies before they are used. Understanding how military AI is built is just as critical as understanding how it is deployed.

Read the full blog here.
