US Court Rejects Trump Administration Anthropic Ban

A U.S. federal court in San Francisco has halted key actions taken by the Trump Administration against Anthropic, granting the artificial intelligence firm temporary relief. The ruling blocks a Pentagon designation that labeled the company a supply chain risk and suspends a directive that ordered federal agencies to stop using its AI system, Claude.

Judge Rita Lin issued the preliminary order, stating that the government’s actions lacked statutory support. The court found that the measures appeared punitive rather than grounded in genuine national security concerns.

Anthropic Secures A Court Order Against Trump Administration Measures

The court order prevents enforcement of restrictions imposed by the Trump Administration. These restrictions followed a Pentagon decision to classify Anthropic as a supply chain risk. The directive also required federal agencies to cease using the company’s chatbot.

Judge Lin stated that no law supports labeling a U.S. company as an adversary based on disagreement with U.S. government policy. She described the measures as “arbitrary” and an abuse of discretion. In addition, the ruling highlighted that such actions could damage the company’s position in the enterprise AI market.

According to Menlo Ventures data, Anthropic held a 32% share of the enterprise AI sector in 2025. This placed it ahead of competitors, including OpenAI at 25%. The court noted that a government-wide ban could reduce that position.

The injunction follows a lawsuit filed by Anthropic on March 9 in Washington, D.C. The firm contended that Defense Secretary Pete Hegseth overstepped his authority. Specifically, the lawsuit challenged the national security designation and related restrictions.

Contract Dispute and Policy Differences Drive Conflict

The dispute dates back to an agreement between Anthropic and the Pentagon in July 2025. Under the contract, Claude was set to become the first frontier AI model approved for classified networks. However, negotiations broke down in February after the Pentagon requested revised terms.

The Defense Department requested unrestricted military use of Claude for all lawful purposes. Anthropic rejected these conditions, maintaining that its technology should not support lethal autonomous weapons or mass domestic surveillance.

Following this breakdown, the administration escalated actions against the firm. On February 27, President Trump ordered federal agencies to stop using Anthropic products. The Pentagon also moved to classify the company as a supply chain risk.

During a March 24 hearing in San Francisco, Judge Lin questioned government lawyers on their motives. She asked whether Anthropic faced penalties for publicly criticizing the Pentagon’s position. The court later concluded that the actions resembled First Amendment retaliation.

Additional Developments as US Considers Iran War De-escalation

The court decision arrives as the Trump Administration evaluates its broader foreign policy stance. As CoinGape reported, the U.S. is preparing for peace talks in the Iran war, with President Trump considering winding down military efforts. Officials are now exploring negotiations as the conflict enters its fourth week and impacts global markets, including oil prices.

According to Axios, intermediaries such as Egypt, Qatar, and the United Kingdom have exchanged messages between the U.S. and Iran. These communications reveal that Iran is open to negotiations, although conditions remain strict.

Iran has reportedly requested a ceasefire, guarantees against renewed conflict, and compensation. In contrast, the U.S. seeks several commitments. These include a pause in missile development, zero uranium enrichment, and limits on proxy financing.

Source: https://coingape.com/breaking-us-court-rejects-trump-administration-anthropic-ban/