A recent confrontation between the US government and the AI company Anthropic illustrates how private technology firms are becoming central actors in the governance of military artificial intelligence. As Anna Nadibaidze and Robin Vanderborght argue, the dispute reveals broader dynamics shaping how AI is integrated into military infrastructures and how decisions about its use are increasingly negotiated between governments and private technology companies.
The conflict emerged after the US government reportedly demanded that Anthropic remove its internal “red lines” restricting the military use of its large language model Claude. These restrictions prohibit the use of the system for domestic mass surveillance and for fully autonomous weapons that can select and attack targets without human oversight. When Anthropic refused to abandon these limits, the US government reacted by ordering federal agencies to stop using its products and designating the company as a potential national security supply-chain risk.
This dispute highlights how the governance of military AI is increasingly shaped by interactions between governments and private technology firms rather than through formal international regulation. Technology companies are not merely suppliers of digital tools; they actively shape the possibilities of military AI by designing systems in particular ways, setting usage restrictions, and promoting narratives about the future of warfare. At the same time, the case illustrates the limits of corporate autonomy when national security priorities are at stake.
A second takeaway concerns the shifting meaning of “responsible AI.” Recent US policy signals a departure from earlier attempts to develop international norms for responsible military AI. Instead of emphasizing ethical constraints or governance frameworks, the current approach focuses on accelerating the integration of AI across military operations. In this context, “responsible AI” is increasingly interpreted as deploying AI as extensively as possible within the defense sector.
Finally, the authors argue that debates about autonomous weapons often overlook how AI is already embedded in military decision-making processes. Rather than focusing exclusively on fully autonomous “killer robots,” it is crucial to examine how AI systems are integrated into targeting processes, intelligence analysis, and operational planning. Large language models like Claude may not directly execute lethal actions, but they can still shape military decisions by providing recommendations or prioritizing targets within decision-support systems.
The dispute between Anthropic and the US government therefore reveals deeper structural changes in the relationship between technology companies and military power. The central challenge is no longer whether AI will become part of warfare (it already has), but how these technologies should be governed when both states and private technology companies play decisive roles in shaping their use.
Original article:
Nadibaidze, A., & Vanderborght, R. Three Takeaways from the US Military–Anthropic Dispute.
https://www.autonorms.eu/three-takeaways-from-the-us-military-anthropic-dispute/
Summary written by:
Vinoja Thevarajah

