CoreWeave said the agreement means it now serves nine of the 10 major developers of large language models for artificial intelligence. CoreWeave, a publicly traded AI cloud infrastructure company, announced on Friday a “multi-year” agreement with AI developer Anthropic, which will use CoreWeave’s cloud computing data centers for its Claude AI model workloads. The agreement will be rolled out in phases, with the “potential to expand over time,” according to CoreWeave’s announcement. Shares of CoreWeave surged more than 12% on Friday and were trading at $102.73 at the time of writing. Read more
In an experiment, a chatbot resorted to blackmail after it found an email about replacing it, while in another, it cheated to complete a task with a tight deadline. Artificial intelligence company Anthropic has revealed that during experiments, one of its Claude chatbot models could be pressured to deceive, cheat and resort to blackmail, behaviors it appears to have absorbed during training. Chatbots are typically trained on large data sets of textbooks, websites and articles and are later refined by human trainers who rate responses and guide the model. Anthropic’s interpretability team said in a report published Thursday that it examined the internal mechanisms of Claude Sonnet 4.5 and found the model had developed “human-like characteristics” in how it would react to certain situations. Read more
AI firm Anthropic forms an employee-funded PAC while facing questions over political balance and a growing dispute with the Pentagon over AI use. Artificial intelligence firm Anthropic has launched a corporate political action committee (PAC), entering election financing as debates over AI policy intensify in Washington. The company filed a statement of organization with the Federal Election Commission on Friday to establish “AnthroPAC,” an employee-funded PAC that will collect voluntary contributions from staff. The filing lists Anthropic as the “connected organization,” with the committee structured as a “separate segregated fund” and registered as a lobbyist-affiliated PAC. Under US law, individual contributions are capped at $5,000 per election cycle per candidate and must be disclosed through public filings. Read more
Google and lenders move to finance a $5 billion Texas data center for Anthropic as a US judge blocks a federal push to restrict use of the AI firm’s technology. Google is preparing to support a multibillion-dollar data center project in Texas leased to Anthropic as competition for AI infrastructure accelerates. The project, operated by Nexus Data Centers, could exceed $5 billion in its initial phase, with Google expected to provide construction loans, the Financial Times reported on Friday, citing people familiar with the matter. A consortium of banks is also competing to arrange financing by mid-year, per the report. According to the report, Anthropic recently signed a lease for the 2,800-acre campus, which forms part of its broader infrastructure tie-up with Google. Construction is already underway, supported by early-stage debt financing from Eagle Point, a publicly traded closed-end investment company. Read more
Judge Rita Lin said it was not until Anthropic raised concerns about how its technology could be used that the US government announced a plan to "cripple Anthropic." A US federal judge in San Francisco has granted Anthropic’s request for a temporary reprieve after the Pentagon designated the company a supply chain risk. In an order on Thursday, Judge Rita Lin of the District Court for the Northern District of California issued a preliminary injunction against the Pentagon over the label. The injunction also temporarily halts a directive from US President Donald Trump ordering federal agencies to stop using Anthropic’s chatbot, Claude. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” said Judge Lin. Read more
Anthropic previously secured a $200 million Pentagon contract, and its AI has been used in classified operations, including support for US airstrikes on Iran, the Financial Times reports. Anthropic CEO Dario Amodei has reportedly reopened negotiations with the US Department of Defense in a last-minute effort to secure continued access to Pentagon contracts as the company faces the possibility of being labeled a supply chain risk by the Trump administration. Amodei has been holding discussions with Emil Michael, the US undersecretary of defense for research and engineering, to finalize terms governing the military’s use of Anthropic’s artificial intelligence models, the Financial Times reported, citing people familiar with the matter. A new agreement would allow the Pentagon to keep using the company’s technology and could prevent a formal designation that would force contractors in the defense supply chain to cut ties with the AI developer, according to the report. Read more
The US military reportedly relied on Anthropic’s Claude AI for intelligence analysis and targeting during an Iran strike hours after Trump ordered a ban on the company’s systems. The US military reportedly used Anthropic’s Claude AI during a major air strike on Iran, only hours after President Donald Trump ordered federal agencies to halt use of the company’s systems. Military commands, including US Central Command (CENTCOM) in the Middle East, used Anthropic’s Claude AI model for operational support, according to people familiar with the matter cited by The Wall Street Journal. The tool has reportedly assisted with intelligence analysis, identifying potential targets and running battlefield simulations. The incident shows how deeply advanced AI systems have become embedded in defense operations. Even as the administration moved to sever ties with the company, Claude remained integrated into military workflows. Read more
The company was the first to deploy its AI models on classified US military cloud networks, according to Anthropic CEO Dario Amodei. The CEO of AI company Anthropic, Dario Amodei, has responded to an order from the United States Department of Defense and the White House directing military defense contractors that do business with the department to stop using Anthropic’s products. Anthropic objected to the use of its AI models for mass domestic surveillance and fully autonomous weapons that can fire without any human input, Amodei told CBS News on Saturday. He added that Anthropic was fine with all of the US government’s proposed use cases for its AI models, except for surveillance and fully autonomous weapons platforms. He said: Read more
OpenAI will deploy its AI models on Pentagon classified networks after the US government ordered agencies to stop using rival Anthropic over national security concerns. OpenAI has reached an agreement with the United States Department of Defense to deploy its artificial intelligence models on classified military networks, just hours after the White House ordered federal agencies to stop using technology from rival firm Anthropic. In a late Friday post on X, OpenAI CEO Sam Altman announced the deal, saying the company would provide its models inside the Pentagon’s “classified network.” He wrote that the department showed “deep respect for safety” and a willingness to work within the company’s operating limits. The announcement came amid a turbulent week for the AI sector. Earlier the same day, Defense Secretary Pete Hegseth labeled Anthropic a “Supply-Chain Risk to National Security,” a designation typically applied to foreign adversaries. The ruling requires defense contractors to certify they are not using t...
Anthropic alleges Chinese AI companies DeepSeek, Moonshot and MiniMax created 24,000 accounts and 16 million Claude exchanges to scrape its AI bot’s outputs for training. Artificial intelligence firm Anthropic has accused three AI firms of illicitly using its large language model Claude to improve their own models in a technique known as a “distillation” attack. In a blog post on Sunday, Anthropic said that it had identified these “attacks” by DeepSeek, Moonshot, and MiniMax, which involve training a less capable model on the outputs of a stronger one. Anthropic accused the trio of generating a combined “over 16 million exchanges” with the firm’s Claude AI across “approximately 24,000 fraudulent accounts.” Read more
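To illustrate the distillation technique described above, the toy sketch below trains a "student" classifier purely on the soft outputs of a "teacher" model rather than on ground-truth labels; this is the general idea behind distillation, not Anthropic's or the accused firms' actual systems, and all models and parameters here are illustrative.

```python
# Minimal distillation sketch: a student model is fit to a teacher's
# outputs (soft probabilities), never seeing real labels.
# Toy linear models for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a fixed linear classifier emitting soft probabilities.
teacher_w = np.array([2.0, -3.0])
def teacher_probs(x):
    return 1.0 / (1.0 + np.exp(-(x @ teacher_w)))

# "Student": randomly initialized, trained only on the teacher's responses.
student_w = rng.normal(size=2)

X = rng.normal(size=(500, 2))   # queries sent to the teacher
y_soft = teacher_probs(X)       # teacher's responses act as soft labels

lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ student_w)))
    # gradient of cross-entropy between student output and soft labels
    grad = X.T @ (p - y_soft) / len(X)
    student_w -= lr * grad

# How often the trained student agrees with the teacher's decisions.
agreement = np.mean((teacher_probs(X) > 0.5) == ((X @ student_w) > 0))
print(f"student/teacher agreement: {agreement:.2f}")
```

At scale, the "queries" are the millions of chatbot exchanges described in the report, and the soft labels are the stronger model's text responses; the mechanics of fitting the weaker model to the stronger one's outputs are the same.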
Commercial AI models were able to autonomously generate real-world smart contract exploits worth millions; the costs of such attacks are falling rapidly. Recent research by major artificial intelligence company Anthropic and AI security organization Machine Learning Alignment & Theory Scholars (MATS) showed that AI agents collectively developed smart contract exploits worth $4.6 million. Research released on Monday by Anthropic’s red team (a team dedicated to acting like a bad actor to discover potential for abuse) said that currently available commercial AI models are capable of exploiting smart contracts. Anthropic’s Claude Opus 4.5, Claude Sonnet 4.5 and OpenAI’s GPT-5 collectively developed exploits worth $4.6 million when tested on contracts deployed after their most recent training data was gathered. Read more