Cognitive Surrender Study, Copilot’s “Entertainment Only” Terms, Britain Courts Anthropic

Good morning. A major study just put a name on something we all suspected about how people use AI, Microsoft got caught telling users not to trust the product it’s selling to every enterprise on Earth, and Britain is making a play for Anthropic while the U.S. pushes the company away. Here’s what happened 👇


1. Study: People Accept Wrong AI Answers 73% of the Time

Researchers at the University of Pennsylvania ran a study with 1,372 participants across more than 9,500 individual trials. They gave people access to an AI chatbot that was secretly modified to give wrong answers about half the time. The result: in 73.2% of trials, people accepted the faulty reasoning without questioning it, and they overruled the AI in only 19.7% of the cases where it was wrong. The researchers call this “cognitive surrender,” a state where users stop reasoning for themselves and treat AI output as authoritative simply because it sounds confident. Even more telling, people who used the AI rated their own confidence 11.7% higher than the control group did, despite the AI being wrong half the time. Adding financial incentives made people 19 percentage points more likely to catch bad AI answers; adding time pressure made them 12 percentage points less likely to catch mistakes.

Why it matters: This is the first rigorous framework for something most of us have felt: the more fluent and confident an AI sounds, the less we think for ourselves. We covered how AI actually learns in our AI Explained series, but understanding how we learn to stop thinking when AI is around might be the more urgent lesson. The study’s conclusion is simple but uncomfortable: your reasoning is only ever as good as the AI you’ve surrendered it to.

Source: Ars Technica


2. Microsoft’s Own Terms of Service: Copilot Is “For Entertainment Purposes Only”

Microsoft is spending billions convincing businesses to pay for Copilot. But the product’s own terms of use, last updated in October 2025, say something different: “Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.” The terms went viral on social media this week. A Microsoft spokesperson told PCMag that the language is “legacy” and “no longer reflective of how Copilot is used today.” They said it will be updated. Microsoft is not alone in this: OpenAI warns users not to treat output as a “sole source of truth or factual information,” and xAI says not to rely on Grok as “the truth.”

Why it matters: Every major AI company is racing to sell tools for high-stakes professional use. Writing code. Analyzing contracts. Making medical recommendations. But their own legal teams are quietly telling you not to trust any of it. When the company’s marketing says “transform your business” and the fine print says “for entertainment only,” one of those messages is designed to protect the company, not you.

Source: TechCrunch


3. Britain Courts Anthropic With London Expansion After U.S. Blacklisting

The British government is actively pitching Anthropic on expanding its presence in the UK. Proposals range from a larger London office to a dual stock listing, according to the Financial Times. The outreach comes after the U.S. government blacklisted Anthropic, designating it a national security supply chain risk after the company refused to let the military use Claude for surveillance or autonomous weapons. A U.S. judge temporarily blocked the blacklisting, and Anthropic has a second lawsuit pending over the designation. Prime Minister Keir Starmer’s office is supporting the effort, which will be presented to Anthropic CEO Dario Amodei during a visit to London in late May.

Why it matters: One country punishes an AI company for setting ethical boundaries. Another country sees that same stance as an opportunity. Britain’s pitch is essentially: “If the U.S. doesn’t want companies that say no to military AI, we do.” This is how the global AI landscape is reshaping itself. Not just by who builds the best models, but by which governments align with which values.

Source: Reuters


Quick Hits

  • DeepSeek’s V4 model will run on Huawei chips, with Alibaba, ByteDance, and Tencent placing bulk orders for hundreds of thousands of Huawei’s upcoming processors. DeepSeek has been rewriting parts of V4’s code to optimize it for Chinese chips. The model is expected to launch within weeks. Source: Reuters

  • The Writers Guild reached a tentative four-year deal with studios that bolsters protections against works being used to train AI, increases health plan and pension funding, and raises streaming residuals. The contract still needs ratification by union members. Source: The Verge

  • Suno’s AI music platform is a copyright nightmare, making it trivially easy to generate convincing covers of real artists and flood streaming services with AI-generated imitations. Source: The Verge


That’s it for today. A study put numbers to what many suspected: most people accept AI output without thinking twice. And while companies sell AI for serious work, their legal teams still call it entertainment. The gap between what AI companies promise and what they’ll stand behind has never been wider.

Forward this to someone who needs to stay in the loop.
