Good morning. Microsoft just brought Anthropic’s Claude into Copilot (ending its OpenAI exclusivity), a 120-character ChatGPT prompt decided which humanities grants to cancel, and OpenAI’s robotics chief walked out over the Pentagon deal. Here’s what happened 👇
1. Microsoft Is Bringing Claude to Copilot — And That’s a Bigger Deal Than It Sounds
Microsoft on Monday unveiled Copilot Cowork, a new tool built on Anthropic’s Claude Cowork technology that lets AI handle “long-running, multi-step tasks” — things like building apps, organizing data, and creating spreadsheets — with limited human oversight. The feature is in testing now and will be available to early-access users later this month.
But the real headline is buried in the announcement: Microsoft is also making Anthropic’s Claude Sonnet models available to M365 Copilot users. Until now, Copilot ran exclusively on OpenAI’s GPT models. This is the first time Microsoft has officially plugged a competing AI brain into its flagship productivity suite.
The move deepens Microsoft’s relationship with Anthropic at a time when investors have questioned its heavy dependence on OpenAI, which accounts for nearly 45% of Microsoft’s cloud contract backlog. Microsoft’s Jared Spataro told Reuters that enterprise customers want AI agents but are “very uncomfortable” with tools that only work locally on a device — Copilot Cowork runs entirely in the cloud with full enterprise security controls.
Why it matters: If you use Microsoft 365 at work, you may soon be able to choose between GPT and Claude without leaving the app. More importantly, this signals that the era of exclusive AI partnerships is ending. Microsoft isn’t betting on one horse anymore — and that means better options for everyone.
2. DOGE Used a 120-Character ChatGPT Prompt to Gut the National Endowment for the Humanities
When Elon Musk’s DOGE agency rolled into the National Endowment for the Humanities to cancel grants it deemed contrary to Trump’s anti-DEI agenda, it didn’t conduct careful reviews. According to a New York Times investigation, staffers pulled short summaries of funded projects off the internet, fed them into ChatGPT, and used a single prompt to decide their fate:
“Does the following relate at all to D.E.I.? Respond factually in less than 120 characters. Begin with ‘Yes’ or ‘No.’”
The results were “sweeping, and sometimes bizarre.” Grants for studying ancient civilizations, preserving local history, and digitizing library archives were flagged and cancelled based on a chatbot’s snap judgment — no human review, no appeals process, no context.
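What’s unsettling is how little machinery this takes. Here’s a minimal sketch of what a one-prompt triage loop like the one the Times describes could look like, written against OpenAI’s Python SDK. To be clear, this is illustrative only: the model name, the example summaries, and the flag-on-“Yes” logic are assumptions made for the sketch, not details from the reporting.

```python
# A minimal sketch of a one-prompt grant-triage loop, based on the prompt
# quoted in the Times' reporting. Nothing here reflects DOGE's actual
# tooling; the model name, the example summaries, and the flagging logic
# are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Does the following relate at all to D.E.I.? "
    "Respond factually in less than 120 characters. "
    "Begin with 'Yes' or 'No.'"
)

# Hypothetical grant summaries, standing in for the project blurbs
# reportedly pulled off the internet.
grant_summaries = [
    "Digitizing a rural library's local-history archive.",
    "A study of trade networks in ancient Mesopotamia.",
]

for summary in grant_summaries:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: the reporting doesn't name a model
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{summary}"}],
    )
    answer = response.choices[0].message.content.strip()
    # Per the reporting, a leading "Yes" was treated as grounds to flag
    # the grant, with no human review of the one-line judgment.
    flagged = answer.lower().startswith("yes")
    print(f"{'FLAG' if flagged else 'keep'}: {summary} -> {answer}")
```

The notable thing about the sketch is everything it lacks: no second reviewer, no confidence threshold, no appeal path between the model’s one-line answer and a cancelled grant.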
Why it matters: This is the most concrete example yet of AI being used not as a tool to assist decisions, but as the decision-maker itself — in a government agency, affecting real funding for real institutions. It’s a case study in what happens when AI replaces judgment instead of supporting it.
Sources: The Verge | New York Times
3. OpenAI’s Head of Robotics Quit Over the Pentagon Deal
Caitlin Kalinowski, who led OpenAI’s robotics division, publicly resigned on Friday over the company’s military contract with the Pentagon. In a post on X, she said the deal didn’t do enough to protect Americans from warrantless surveillance, and that granting AI “lethal autonomy without human authorization” crossed lines that “deserved more deliberation than they got.”
Her statement was pointed but measured: “This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.” Kalinowski is the highest-profile departure from OpenAI since the company signed its defense agreement, and her specific concerns — surveillance without judicial oversight and autonomous lethal force — go to the heart of what many in the AI ethics community have been warning about.
Why it matters: When senior leaders start walking away from the biggest AI company in the world over how the technology is being deployed, it’s a signal worth paying attention to. The question of whether AI should have kill authority without a human in the loop isn’t theoretical anymore — it’s why people are quitting their jobs.
Source: The Verge
Quick Hits
- Nvidia-backed Nscale just raised $2 billion and is now valued at $14.6 billion: The British AI infrastructure company — which builds and operates GPU-powered data centers — landed backing from Nvidia, Citadel, Dell, and Jane Street. Former Meta executives Nick Clegg and Sheryl Sandberg are joining its board. An IPO is in the works. (Reuters)
- X is investigating racist and offensive posts generated by Grok: Sky News reported that Elon Musk’s xAI chatbot produced hate-filled content in response to user prompts. X’s safety teams are “urgently investigating.” This follows months of regulatory crackdowns on Grok for generating sexually explicit material. (Reuters)
- The Pentagon-Anthropic fallout is scaring startups away from defense work: A TechCrunch analysis explores whether the government’s “supply-chain risk” label on Anthropic will have a chilling effect on other AI startups considering military contracts — potentially pushing the US further behind in defense AI adoption. (TechCrunch)
- ABB partnered with Nvidia to improve factory robot training: The Swiss robotics giant is working with Nvidia to close the gap between how industrial robots perform in virtual simulations and how they behave on actual factory floors — a key bottleneck in scaling AI-powered manufacturing. (Reuters)
That’s it for today. The weekend’s AI news had a theme running through it like a current: who gets to decide how AI is used, and what happens when no one’s really deciding at all. A chatbot chose which humanities grants to cancel. A robotics leader quit because she thought the deliberation wasn’t sufficient. And the biggest software company in the world just decided its users deserve more than one AI to choose from. The tools keep getting more powerful. The question of who’s steering them keeps getting louder.
Forward this to someone who needs to stay in the loop.
