
“We Cannot in Good Conscience Agree”: Anthropic Defies Pentagon Over AI Weapons

2026/02/27 20:18
3 min read

TLDR

  • The Pentagon is demanding Anthropic remove safety guardrails from its Claude AI so it can be used for any lawful purpose, including autonomous weapons and mass surveillance.
  • Anthropic CEO Dario Amodei refused, saying such uses could “undermine democratic values.”
  • The Pentagon set a Friday 5 p.m. deadline for Anthropic to comply or be cut from defense contracts.
  • The Pentagon threatened to invoke the Defense Production Act to force Anthropic’s compliance and to label the company a “supply chain risk.”
  • Anthropic rejected revised contract language sent Wednesday night, saying it made “virtually no progress.”

Anthropic CEO Dario Amodei has refused to drop safety limits on the company’s Claude AI, putting a major government contract at risk. The Pentagon set a Friday deadline demanding Anthropic agree to “any lawful use” of its technology.

The dispute centers on two specific concerns: using Claude for mass domestic surveillance and powering fully autonomous weapons. Anthropic says both uses have never been part of its Pentagon contracts and should not be added now.

Amodei met with Defense Secretary Pete Hegseth earlier this week. The meeting ended without agreement, and the Pentagon sent revised contract language on Wednesday night.

The Pentagon has not held back on threats. It warned it would cut Anthropic from defense contracts and label the company a “supply chain risk” — a designation typically reserved for suppliers in hostile nations.

A senior Pentagon official also told Reuters that Defense Secretary Hegseth would consider invoking the Defense Production Act. That law can force a company to serve national defense needs, even without its consent. Some legal experts have questioned whether that use of the act would be lawful.

What Anthropic Says About AI Weapons and Surveillance

Anthropic said it supports AI for lawful foreign intelligence work, but not domestic surveillance.

The Business Risk for Anthropic

The financial stakes are real. The Pentagon has signed $200 million ceiling agreements with major AI labs including Anthropic, OpenAI, and Google over the past year.

If labeled a supply chain risk, defense contractors like Lockheed Martin could be barred from using Anthropic’s tools on Pentagon projects. The defense industrial base includes around 60,000 contractors.

Amodei said Anthropic offered to work with the Pentagon on R&D to improve AI reliability for defense use, but that offer was not accepted.

As of Thursday, the two sides remained at an impasse with the 5 p.m. Friday deadline still in place.

The post “We Cannot in Good Conscience Agree”: Anthropic Defies Pentagon Over AI Weapons appeared first on CoinCentral.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com to request its removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.