Anthropic's Claude AI reportedly used in U.S. military operation against Nicolás Maduro
Reports from the Wall Street Journal and Axios indicate that the U.S. military used Anthropic's Claude artificial intelligence model during an operation to arrest Nicolás Maduro. The operation, which took place last month, included the bombing of multiple locations in Caracas, Venezuela. The Wall Street Journal notes that the model's exact application remains unclear, while Axios reports that it was used during the active phase of the operation rather than only in preparation.

The revelations have caused political and institutional friction within the United States over the military's use of commercial AI tools. Anthropic's internal guidelines explicitly prohibit the use of its technology to facilitate violence, develop weapons, or conduct mass surveillance, and the Pentagon is reportedly reviewing its partnership with the company in light of the reports.

The case highlights the ongoing tension between AI companies seeking government contracts and their stated ethical boundaries. Neither the U.S. military nor Anthropic has publicly detailed the model's specific role in the mission.