The US military reportedly relied on Anthropic’s Claude AI for intelligence analysis and targeting during a major air strike on Iran, only hours after President Donald Trump ordered federal agencies to halt use of the company’s systems. Military commands, including US Central Command (CENTCOM) in the Middle East, used Anthropic’s Claude AI model for operational support, according to people familiar with the matter cited by The Wall Street Journal. The tool reportedly assisted with intelligence analysis, identifying potential targets and running battlefield simulations. The incident shows how deeply advanced AI systems have become embedded in defense operations: even as the administration moved to sever ties with the company, Claude remained integrated into military workflows.
The company was the first to deploy its AI models on classified US military cloud networks, according to Anthropic CEO Dario Amodei. Amodei has responded to the United States Department of Defense and the White House after they ordered military contractors that do business with the department to stop using Anthropic’s products. Anthropic objected to the use of its AI models for mass domestic surveillance and for fully autonomous weapons that can fire without any human input, Amodei told CBS News on Saturday. He added that Anthropic was fine with all of the US government’s proposed use cases for its AI models except surveillance and fully autonomous weapons platforms.
OpenAI will deploy its AI models on Pentagon classified networks after the US government ordered agencies to stop using rival Anthropic over national security concerns. OpenAI has reached an agreement with the United States Department of Defense to deploy its artificial intelligence models on classified military networks, just hours after the White House ordered federal agencies to stop using technology from rival firm Anthropic. In a late Friday post on X, OpenAI CEO Sam Altman announced the deal, saying the company would provide its models inside the Pentagon’s “classified network.” He wrote that the department showed “deep respect for safety” and a willingness to work within the company’s operating limits. The announcement came amid a turbulent week for the AI sector. Earlier the same day, Defense Secretary Pete Hegseth labeled Anthropic a “Supply-Chain Risk to National Security,” a designation typically applied to foreign adversaries. The designation requires defense contractors to certify they are not using t...
Anthropic alleges Chinese AI companies DeepSeek, Moonshot and MiniMax used some 24,000 accounts and 16 million Claude exchanges to scrape its model for training data. Artificial intelligence firm Anthropic has accused three AI firms of illicitly using its large language model Claude to improve their own models in a technique known as a “distillation” attack, which involves training a less capable model on the outputs of a stronger one. In a blog post on Sunday, Anthropic said that it had identified these “attacks” by DeepSeek, Moonshot, and MiniMax. Anthropic accused the trio of generating “over 16 million exchanges” with the firm’s Claude AI across “approximately 24,000 fraudulent accounts.”
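For readers unfamiliar with the term, the distillation objective itself is simple: the weaker “student” model is trained to match the stronger “teacher” model’s output distribution. The sketch below is a minimal, generic illustration of that objective (it is not Anthropic’s detection method, nor the accused firms’ actual pipeline); the temperature-scaled softmax and KL-divergence loss are the standard textbook formulation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits.
    Higher temperature produces 'softer' probability targets."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numeric stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution
    and the student's: the quantity minimized when distilling a
    stronger model's behavior into a weaker one."""
    p = softmax(teacher_logits, temperature)   # teacher soft labels
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that already matches the teacher incurs zero loss;
# a mismatched student incurs a positive loss and is nudged toward
# the teacher's distribution during training.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))           # → 0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]) > 0)  # → True
```

In the alleged attacks, the “teacher labels” would be Claude’s responses harvested through the fraudulent accounts, used as supervised training data rather than logits.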
Commercial AI models were able to autonomously generate real-world smart contract exploits worth millions; the costs of such attacks are falling rapidly. Recent research by major artificial intelligence company Anthropic and AI security organization Machine Learning Alignment & Theory Scholars (MATS) showed that AI agents collectively developed smart contract exploits worth $4.6 million. Research released on Monday by Anthropic’s red team (a team dedicated to acting like a bad actor to discover potential for abuse) said that currently available commercial AI models are capable of exploiting smart contracts. Anthropic’s Claude Opus 4.5, Claude Sonnet 4.5 and OpenAI’s GPT-5 collectively developed exploits worth $4.6 million when tested on contracts deployed after the models’ most recent training data was gathered.
Backed by Wall Street heavyweights, Anthropic has reached a soaring valuation after closing a $13 billion Series F, reflecting the mainstreaming of AI. AI company Anthropic, the developer of the Claude family of large language models, has reached a $183 billion valuation following its latest funding round — a dramatic increase from the start of the year that underscores the accelerating growth of AI applications. The company disclosed Tuesday that it closed a $13 billion Series F round co-led by venture firms ICONIQ Capital, Fidelity Management & Research Company and Lightspeed Venture Partners. Some of North America’s most prominent investors also joined the raise, reflecting the surge in institutional interest in artificial intelligence as a disruptive technology.