AI Daily Digest – March 02, 2026

Good morning. The US military used the same AI it just banned to help plan strikes on Iran, OpenAI rushed a Pentagon deal and is now defending the fine print, and Anthropic’s Claude just became the #1 app in America. Here’s what happened 👇


1. The US Used Anthropic’s AI for Iran Strikes — Hours After Banning It

On Friday, President Trump announced a ban on all federal use of Anthropic’s Claude AI, calling the company’s leaders “leftwing nut jobs” and directing every agency to phase it out within six months. Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk,” meaning no military contractor can do business with the company either.

Then, on Saturday, the US launched a major air assault on Iran — using Claude for intelligence assessments and target identification. The same tool Trump had just publicly banned was helping plan the strikes. As the Wall Street Journal reported: “Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.”

The six-month phaseout — instead of Trump’s initial demand to “IMMEDIATELY CEASE” — likely exists precisely because the military already depends on Claude for operations like this.

Why it matters: The gap between the political statement and the operational reality tells you everything. AI isn’t a nice-to-have for the military anymore — it’s embedded in how operations actually work. Banning it by tweet doesn’t change that.

Source: The Verge | Wall Street Journal


2. OpenAI Rushed a Pentagon Deal — And Admitted It

While Anthropic was getting banned, OpenAI was signing on the dotted line.

Sam Altman announced a new agreement letting OpenAI’s models be used on the Pentagon’s classified network. He said the deal includes the same red lines Anthropic wanted — no mass surveillance of Americans, no AI making kill decisions without a human involved. OpenAI also says it keeps control of its own safety rules and will have its own engineers on-site at the Pentagon.

Sounds good on paper. But critics quickly pointed out that the deal’s fine print references existing surveillance laws the NSA has used to collect Americans’ data through overseas channels. And Altman himself admitted: “This was definitely rushed. The optics don’t look good.”

His reasoning? “We really wanted to de-escalate things.” He’s asking the Pentagon to offer the same deal to all AI companies — including Anthropic.

Why it matters: When AI contracts shape how wars are fought, “the optics don’t look good” isn’t reassuring. The question isn’t what the blog post says — it’s what the actual agreement allows.

Source: TechCrunch | OpenAI Blog


3. Anthropic’s Claude Hits #1 in the App Store

Sometimes standing up for your principles is also great marketing.

Anthropic’s Claude app surged past ChatGPT to claim the #1 free app position in Apple’s US App Store on Saturday — a spot it still held on Sunday morning. According to Sensor Tower data, Claude was barely in the top 100 at the end of January. It climbed into the top 20 in February, hit #6 on Wednesday, #4 on Thursday, and #1 by Saturday.

Anthropic says daily signups have set a new all-time record every day this past week. Free users are up more than 60% since January. Paid subscribers have more than doubled this year. The company’s refusal to comply with the Pentagon’s demands — and the very public fallout — seems to have turned a policy stance into a consumer movement.

Why it matters: For years, AI companies have debated whether safety principles help or hurt the business. Anthropic just got its answer: taking a public stand on AI ethics can make you the most downloaded app in America.

Source: TechCrunch


Quick Hits

  • The Federal Reserve doesn’t know what to do about AI and jobs. Fed officials are split — some think AI will make things cheaper, others worry it’ll eliminate jobs without creating new ones. Fed Governor Lisa Cook basically said: if AI takes your job, lower interest rates won’t fix it. The layoffs at Block made this feel a lot less theoretical. (Reuters)

  • Amazon is pouring another $21 billion into Spain for AI data centers. That brings the company’s total investment in Spain to $33.7 billion — a sign the global AI infrastructure buildout is accelerating, not slowing down. (Reuters)

  • ChatGPT now has 900 million weekly active users. That’s more than double the 400 million reported just months ago, and the surge coincides with OpenAI’s record $110 billion funding round valuing the company at $840 billion. (TechCrunch)

  • Nvidia is building a new chip to make AI answers faster. Partnering with startup Groq in a $20 billion deal, Nvidia plans to unveil the new platform next month. The goal: speed up inference, the stage where a model actually generates responses to your ChatGPT prompts. (Reuters)


That’s it for today. The Anthropic-Pentagon saga just revealed something most people hadn’t fully grasped: AI is already woven into military operations so deeply that you can’t rip it out by executive order — and the companies building it are now being forced to decide what kind of world they want their tools to create.

Forward this to someone who needs to stay in the loop.
