Good morning. Nvidia actually beat the already-sky-high numbers Wall Street was expecting, the Pentagon gave Anthropic a Friday deadline to hand over unrestricted military control of its AI or get blacklisted, Burger King is now using AI to monitor whether your cashier said “please,” and YouTube is feeding AI-generated slop to kids after CoComelon ends. Here’s what happened 👇
1. Nvidia Just Posted $68 Billion in One Quarter
The results are in. Nvidia reported $68.1 billion in revenue for its most recent quarter — up 73% from the same period last year and ahead of the $66.1 billion Wall Street was expecting. Of that, $62 billion came from the data center business alone, with $51 billion in GPU compute and $11 billion in networking. Full-year revenue: $215 billion.
CEO Jensen Huang didn’t hold back on the call: “The demand for tokens in the world has gone completely exponential. I think we’re all seeing that, to the point where even our six-year-old GPUs in the cloud are completely consumed and the pricing is going up.” He also addressed the sustainability questions analysts keep asking about tech companies’ massive AI spending: “In this new world of AI, compute is revenue. Without compute, there’s no way to generate tokens. Without tokens, there’s no way to grow revenues.” The company also disclosed it’s in talks to invest up to $30 billion in OpenAI — though it emphasized there’s “no assurance” the deal will close.
On China: despite the U.S. government lifting some export restrictions, Nvidia reported zero revenue from Chinese customers so far — and the CFO flagged that domestic Chinese chip companies like Moore Threads are gaining ground.
Why it matters: Nvidia’s numbers are the clearest real-time signal of whether AI spending is slowing down or not. The answer, for now, is not.
Source: TechCrunch | Perplexity Discover
2. Anthropic vs. The Pentagon — And a Friday Deadline
This is the highest-stakes AI ethics story we’ve seen yet. The Department of Defense gave Anthropic an ultimatum this week: grant the U.S. military unrestricted access to its Claude AI — no guardrails, no restrictions — or be banned from all government contracts.
Here’s what triggered it: Claude has been deployed on the Pentagon’s classified networks through a $200 million contract (Anthropic is currently the only AI company running on those classified systems, via a Palantir partnership). The standoff reportedly started after the military used Claude during the operation to capture former Venezuelan President Nicolás Maduro in January. Anthropic wasn’t consulted about that use. The company then pushed back, asking the Pentagon to agree to two specific restrictions: don’t use Claude for mass surveillance of American citizens, and don’t let Claude make final targeting decisions in military strikes without human review.
The Pentagon’s response: those guardrails could prevent the military from acting in a crisis. Defense Secretary Pete Hegseth has been blunt: “We will not employ AI models that won’t allow you to fight wars.” He gave Anthropic until Friday at 5pm to comply. If Anthropic refuses, the Pentagon is considering invoking the Defense Production Act to force compliance — or declaring Anthropic a “supply chain risk” to push it out of government entirely.
Why it matters: This is the first direct public clash between an AI company’s safety principles and a government’s demand for unrestricted control. Whatever happens by Friday sets a precedent — either companies can hold their ethical lines with government customers, or they can’t.
3. Burger King Is Listening to Its Employees — Via AI
Burger King launched an OpenAI-powered voice chatbot called “Patty” that lives inside the headsets employees wear while working. It’s not just a helpful assistant — Patty is also evaluating whether employees are being friendly enough with customers.
The chain trained its AI system to recognize specific words and phrases: “welcome to Burger King,” “please,” “thank you.” Managers can ask the AI how their location is scoring on friendliness. Burger King’s chief digital officer called it “a coaching tool” and says they’re also “iterating” on capturing the tone of conversations, not just the words. Beyond the friendliness monitoring, Patty answers employee questions (how many bacon strips on the Maple Bourbon Whopper?), alerts managers when kitchen equipment goes down, and automatically updates digital menus and kiosks within 15 minutes when an item goes out of stock. Patty is currently piloting in 500 restaurants, with the full BK Assistant platform set to roll out to all U.S. restaurants by the end of 2026.
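Burger King hasn’t published how the scoring actually works, but the phrase-recognition approach described above can be sketched in a few lines. Everything here — the phrase list, the weights, the function name — is a hypothetical illustration, not Patty’s real logic:

```python
# Hypothetical sketch of keyword-based "friendliness" scoring over a
# transcribed headset interaction. Phrases and weights are assumptions;
# the real system reportedly also tries to capture tone, not just words.
FRIENDLY_PHRASES = {
    "welcome to burger king": 2,
    "please": 1,
    "thank you": 1,
}

def friendliness_score(transcript: str) -> int:
    """Sum the weights of each friendly phrase found in a transcript."""
    text = transcript.lower()
    return sum(
        weight * text.count(phrase)
        for phrase, weight in FRIENDLY_PHRASES.items()
    )

print(friendliness_score("Welcome to Burger King! Please pull forward. Thank you!"))
```

Even this toy version shows why pure keyword matching is a blunt instrument: it counts substrings (“pleased” would match “please”) and can’t tell a warm greeting from a sarcastic one — which is presumably why the company says it’s “iterating” on tone.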
Burger King is still testing AI drive-thru ordering separately, in fewer than 100 locations — noting it’s “still a risky bet” and “not every guest is ready for this.”
Why it matters: When the AI monitoring your mood at work is the same AI monitoring your customers’ experience, the line between helpful tool and performance surveillance gets very thin very fast.
Source: The Verge
4. YouTube’s Algorithm Is Feeding AI Slop to Kids
After your kid finishes watching CoComelon, Bluey, or Ms. Rachel on YouTube, what does the algorithm recommend next? According to a New York Times investigation published today, more than 40% of the Shorts automatically recommended after those channels “appeared to contain AI-generated visuals.”
These videos look like children’s content: colorful, with recognizable characters and simple songs. But they’re AI-generated — often low-effort content produced at mass scale to capture ad revenue from kids’ watch time. YouTube doesn’t require these videos to be labeled as AI-generated. The platform places the entire burden of filtering this content on parents, not on itself.
Why it matters: Your kids are already in an algorithm-driven environment. The difference now is that a large chunk of what the algorithm serves them isn’t made by humans at all — and there’s no label telling anyone that. If you have young kids who use YouTube, this is a reason to check what they’re actually watching, not just what channel they started on.
Source: The Verge | New York Times
Quick Hits
- Anthropic acquired a computer-use AI startup called Vercept: Vercept built software for AI agents that can control computers — clicking, typing, navigating apps. The acquisition came after Meta reportedly poached one of Vercept’s founders, accelerating the deal. It’s a direct fit for Anthropic’s Claude Computer Use push. (TechCrunch)
- U.S. rare earth shortages are deepening as Chinese suppliers halt production: China just restricted exports of several rare earth minerals critical for AI chips and advanced electronics. U.S. suppliers are struggling to find alternatives at scale, and several have paused production. The AI chip supply chain has another vulnerability — this one geopolitical, not technical. (Perplexity Discover)
- Instagram now alerts parents when teens search for suicide or self-harm content: Instagram is rolling out alerts to connected parent accounts when teens search for those terms, with resources provided to both. It’s a reactive fix to years of criticism about the platform’s effect on teen mental health, and it marks a notable shift toward algorithmic accountability for younger users. (TechCrunch)
That’s it for today. Three of today’s four big stories are about the same thing: who controls AI when it’s already inside your life — your workplace headset, your kid’s screen, your country’s military systems. The question isn’t theoretical anymore.
Forward this to someone who needs to stay in the loop.