AI Daily Digest – March 03, 2026

Good morning. The Supreme Court just settled the AI copyright question, ChatGPT is losing users at a historic rate over the Pentagon deal, and drone strikes in the Middle East hit Amazon’s data centers for the first time ever. Here’s what happened 👇


1. Supreme Court: AI-Generated Art Can’t Be Copyrighted. Case Closed.

The US Supreme Court declined to hear an appeal from computer scientist Stephen Thaler, who has been fighting since 2019 to copyright an image created entirely by his AI system. The image, called A Recent Entrance to Paradise, was generated by an algorithm Thaler built — with no human creative input. The Copyright Office rejected it, a district court upheld the rejection, and a federal appeals court agreed. Now the Supreme Court has refused to even hear the case.

The ruling that stands: “Human authorship is a bedrock requirement of copyright.” If a machine made it and no human shaped the creative choices, it doesn’t get legal protection. Period.

This follows the Copyright Office’s guidance from last year that AI-generated artwork based on text prompts alone isn’t copyrightable either.

Why it matters: If you’re using AI to generate images, text, or music for your business, you don’t own what comes out — legally, nobody does. You can still use AI as a tool in your creative process, but the human has to be making meaningful creative decisions, not just typing a prompt and hitting enter.

Source: The Verge | Reuters


2. ChatGPT Uninstalls Surge 295% as Users Flee to Claude

The Pentagon-OpenAI deal isn’t just a PR problem — it’s costing OpenAI actual users. According to app analytics data reported by TechCrunch, ChatGPT uninstalls surged 295% in the days following the announcement of OpenAI’s military agreement. Meanwhile, Claude’s downloads have been climbing all week, and the app remains near the top of the App Store after hitting #1 over the weekend.

TechCrunch separately published a guide titled “Users are ditching ChatGPT for Claude — here’s how to make the switch,” which tells you everything about the current mood. Anthropic has also rolled out a new memory import tool that makes it easy to bring your data over from other AI platforms — perfectly timed.

Why it matters: This is the first time a major AI company has lost significant users over a political decision rather than a product one. People aren’t leaving because Claude is better at coding — they’re leaving because they don’t want their AI provider working with the military on classified operations. That’s a brand new dynamic in the AI market.

Source: TechCrunch


3. Drone Strikes Hit Amazon Data Centers in the Middle East — a First

Iranian drones struck Amazon Web Services data centers in the UAE and Bahrain, marking the first time a major US tech company’s cloud infrastructure has been damaged by military action. Two AWS facilities in the UAE were directly hit, and a third in Bahrain sustained damage from a nearby strike. The result: structural damage, power outages, fire suppression flooding, and a “prolonged” recovery timeline.

The outage disrupted cloud services across the region, including banking platforms. AWS told customers to back up data and shift operations to unaffected regions.

The timing is stark: US tech giants have been pouring billions into the Gulf as a regional AI computing hub. Microsoft alone has committed $15 billion to UAE data centers by 2029. A Washington think tank warned last week that adversaries could target “data centers, energy infrastructure supporting compute, and fiber chokepoints” — and that’s exactly what happened.

Why it matters: The AI boom depends on physical infrastructure — actual buildings, cables, and power supplies in actual places. When those places become conflict zones, the cloud isn’t as untouchable as the name implies. Companies and governments betting on Middle East AI hubs are now facing a risk they didn’t price in.

Source: Reuters


Quick Hits

  • AI can now identify anonymous social media users. Researchers found that LLMs can unmask pseudonymous accounts with up to 90% precision by analyzing writing patterns across platforms — no structured data needed, just free text. The researchers warn this “invalidates the assumption” that pseudonymity provides adequate privacy. (Ars Technica)

  • Cursor hits $2 billion in annualized revenue. The AI coding assistant doubled its revenue run rate in just three months, with corporate customers now making up 60% of sales. The $29 billion startup is fending off competition from Claude Code and OpenAI’s Codex. (TechCrunch)

  • More US agencies dropping Anthropic. The State Department, Treasury, and HHS have all moved to end use of Anthropic products, switching to OpenAI and other providers under the White House directive. (Reuters)


That’s it for today. The AI industry used to argue about whose model was smarter — now the fight is about who your AI provider works with, who owns what AI creates, and whether the buildings that power it all can survive a war.

Forward this to someone who needs to stay in the loop.
