The growing reliance of militaries on artificial intelligence (AI) and machine learning (ML) technologies means that private software companies are assuming an increasingly central role in the conception and development of the tools of contemporary warfare. Yet while most of the existing debate on algorithmic warfare has focused on autonomous weapons systems, the rise of AI-enabled software capabilities has received comparatively little attention. In this article, we examine how developers of AI- and ML-based military decision-support systems visually promote their software products.
Building on insights from Critical Security Studies and Science and Technology Studies, we argue that ‘virtual military demonstrations’, as we label this practice, facilitate technology companies’ claims to epistemic authority over the future of war. This allows commercial actors to represent algorithmic warfare as a strategic and moral imperative for the survival of Western democracies. Through detailed studies of virtual demonstrations by Palantir and Anduril, two US-based defence tech companies, we illustrate how algorithmic warfare is visually and discursively represented as a clean, controllable, and precise business, disconnected from the lived experiences of innocent victims and their environments. We conclude that such obfuscation of the realities of warfare has important implications which warrant further scrutiny.