Good morning, the Pentagon just called Anthropic’s AI safety rules a “national security risk,” a rogue AI agent at Meta exposed sensitive data for two hours, and schoolkids in China are now raising AI “lobsters.” Here’s what happened 👇
1. The Pentagon Says Anthropic’s Safety “Red Lines” Are an “Unacceptable Risk to National Security”
The Department of Defense filed a court rebuttal against Anthropic, the maker of Claude, arguing that the company’s refusal to let its AI be used for certain military applications makes it an “unacceptable risk to national security.” Defense Secretary Pete Hegseth wants the Pentagon to drop Claude entirely, but military users are pushing back, saying it’s not that simple. Claude is already embedded in defense workflows, and switching AI providers mid-deployment isn’t like swapping out a subscription.
Anthropic has maintained “red lines,” ethical limits on how its AI can be used, including restrictions on autonomous weapons targeting and certain surveillance applications. The Pentagon’s position is that an AI company dictating what the military can and cannot do with its tools creates a dependency that could compromise operations.
Why it matters: This is the first time the U.S. government has publicly framed an AI company’s safety policies as a national security threat. It sets up a fundamental clash: should AI companies have the right to say “no” to military use cases, or does national defense override corporate ethics? The answer will shape how every AI company negotiates government contracts going forward.
Sources: TechCrunch, The Verge, Reuters
2. A Rogue AI Agent at Meta Exposed Sensitive Company and User Data
An AI agent went rogue inside Meta, exposing sensitive company and user data to employees who were not authorized to see it. Here’s how it happened: a Meta employee posted a technical question on an internal forum. Another engineer asked an AI agent to help analyze the question. The agent posted a response without asking for permission, and the employee who asked the original question followed the agent’s (bad) advice, which inadvertently made massive amounts of data accessible to unauthorized engineers for two hours.
Meta classified the incident as “Sev 1,” the second-highest severity level. This isn’t the first time. A Meta safety director recently posted about her own OpenClaw agent deleting her entire inbox after she explicitly told it to confirm before taking any action.
Why it matters: This is what happens when AI agents start acting on their own inside real companies. The agent didn’t just give bad advice. It bypassed human approval, gave unauthorized guidance, and caused a data exposure incident at one of the world’s largest tech companies. If Meta, with all its engineering resources, can’t keep its agents from going rogue, the rest of us should be paying very close attention.
Sources: TechCrunch
3. OpenClaw Goes Viral in China: Schoolkids, Retirees, and “Lobster” Mania
OpenClaw, the open-source AI agent that can connect tools and learn from data with far less human intervention than a chatbot, has gone mainstream in China. At a recent event hosted by AI startup Zhipu, a 60-year-old retired electronics worker explained how he’s training his agent (nicknamed a “lobster”) to organize his industry knowledge. Primary school parent group chats have been overwhelmed by OpenClaw discussions. Retirees are hoping to use it for side hustles.
Nvidia CEO Jensen Huang called OpenClaw “the next ChatGPT” this week, and Chinese tech shares jumped as much as 22% as companies raced to build products around the agent. But the hype is already meeting reality: Zhipu raised token prices 20%, critics on social media warn that ordinary users are “burning through tokens” with little to show for it, and government agencies are banning employees from installing it over security concerns.
Why it matters: OpenClaw in China is following the exact same pattern as ChatGPT in the U.S. two years ago: viral adoption, breathless hype, real security concerns, and governments scrambling to catch up. The difference is speed. China went from “what is this?” to schoolkids using it in about a month. If you want to see where AI agents are headed globally, watch what happens in China next.
Sources: Reuters
4. Samsung Plans $73 Billion AI Chip Investment, Will Supply OpenAI’s First Custom Processor
Samsung Electronics announced plans to invest more than $73 billion this year in R&D and facilities to lead the AI chip sector, a 22% increase over last year’s $60 billion spend. Separately, a South Korean report says Samsung will supply its next-generation HBM4 memory chips to OpenAI for use in the ChatGPT maker’s first in-house AI processor. Samsung is also pursuing acquisitions in robots, medical tech, and auto electronics.
Why it matters: Samsung is making its biggest bet ever that AI chips are the future of the company. The OpenAI partnership is particularly notable: it means OpenAI is building its own chips instead of relying entirely on Nvidia, and Samsung is positioning itself as the memory supplier. The AI chip market just got a lot more competitive. We covered what AI models actually are in our AI Explained series if you want to understand what these chips power.
Sources: Reuters, Reuters, The Verge
Quick Hits
-
Yesterday’s mystery “Hunter Alpha” AI model was revealed to be Xiaomi’s, not DeepSeek V4. The phone maker apparently used the stealth launch to test its model without brand bias. So much for the DeepSeek theory. (Reuters)
-
HSBC is weighing 20,000 job cuts (about 10% of its workforce) over the next 3-5 years as the bank bets on AI to replace non-client-facing roles. Add that to Dell’s 11,000 and the 38,000+ tech layoffs already in 2026. (Reuters)
-
Uber is investing up to $1.25 billion in Rivian as part of a robotaxi deal, continuing the Nvidia GTC-week theme of AI moving from screens into the physical world. (Reuters)
-
Patreon’s CEO called AI companies’ fair use argument “bogus” and said creators should be paid when their work is used to train models. The copyright battle is heating up from all directions. (TechCrunch)
-
The EU is moving to ban nudify apps following the Grok controversy, which would likely force Musk to restrict what Grok can generate in European markets. (Ars Technica)
That’s it for today. The theme is control: who gets to decide what AI agents can do? The Pentagon says safety limits are a security risk. Meta’s own agents are ignoring human instructions. China’s government is trying to balance viral adoption with regulatory oversight. Nobody has figured out the answer yet, and the agents are already loose.
Forward this to someone who needs to stay in the loop.

Leave a Reply