
  • Anthropic Mythos, Intel Joins Musk Terafab, Google AI Overviews Wrong


    Good morning, Anthropic dropped a new frontier model into the hands of 12 companies to hunt zero-day vulnerabilities, Intel signed on to Elon Musk’s most ambitious chip project yet, and a fresh test of Google’s AI Overviews puts the error rate at 10 percent. Here’s what happened 👇


    1. Anthropic Quietly Hands “Mythos” to Microsoft, Apple, and Amazon for Cybersecurity Work

    Anthropic on Tuesday released a preview of a new frontier model called Mythos as part of a security initiative it is calling Project Glasswing. Twelve partner organizations are getting first access: Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks among them. The model is being used to scan first-party and open source software for code vulnerabilities, and Anthropic claims that in just the past few weeks, Mythos has already identified “thousands of zero-day vulnerabilities, many of them critical,” with some of the bugs sitting undiscovered in code for one to two decades. A previously leaked internal memo described Mythos as “one of the most powerful” models the company has ever built, and Anthropic says it has been in “ongoing discussions” with federal officials about deploying it, though those talks are complicated by the company’s active legal fight with the Trump administration.

    Why it matters: This is the first real glimpse of what frontier AI looks like when you point it at the security of the software the world actually runs on. Decades-old bugs that humans missed are now being surfaced in weeks. That cuts both ways. The same capability that helps Microsoft patch Windows also helps an attacker find the same hole first. Anthropic chose to give it to defenders, but the gap between defensive and offensive use of these models is shrinking by the month.

    Source: TechCrunch


    2. Intel Joins Elon Musk’s Terafab to Build the Chips Powering Humanoid Robots

    Intel announced Tuesday that it is joining Terafab, Elon Musk’s chip-manufacturing megaproject with SpaceX and Tesla, with the stated goal of producing one terawatt of compute per year for AI and robotics. The handshake came after Intel CEO Lip-Bu Tan hosted Musk at Intel’s campus over the weekend. Musk has previously laid out plans to build two advanced chip factories in Austin, Texas: one to power Tesla cars and humanoid robots, the other to feed AI data centers in space. Intel’s stock jumped more than 2 percent on the news. For Intel, which lost $10.32 billion in its foundry business last year, the deal is a lifeline for its turnaround story and a chance to prove its 18A manufacturing tech can win the largest customers.

    Why it matters: A terawatt of compute per year is a figure that did not exist in the chip industry before this week. For scale, the entire global semiconductor industry today produces a small fraction of that. Musk is betting that humanoid robots and orbital data centers will need so much silicon that the only way to get there is to build the factories himself. Intel just bet its turnaround on that future being real.

    Source: Reuters


    3. Google AI Overviews Tell “Millions of Lies Per Hour,” New Study Finds

    A New York Times analysis published Tuesday tested Google’s AI Overviews using SimpleQA, a benchmark with more than 4,000 verifiable factual questions, and found the system gets roughly 1 in 10 answers wrong. Extrapolated across all Google searches, that works out to tens of millions of incorrect answers per day. The study, run with help from AI startup Oumi, showed accuracy improved from 85 percent under Gemini 2.5 to 91 percent after the Gemini 3 update, but the misses are striking. Asked when Bob Marley’s home became a museum, AI Overviews picked the wrong year from a Wikipedia page that listed two. Asked when Yo-Yo Ma was inducted into the Classical Music Hall of Fame, it cited the organization’s own website and then claimed the hall does not exist. Google pushed back, saying the SimpleQA test “has serious holes” and does not reflect what people actually search for.
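
    For a sense of where the headline numbers come from, here is a back-of-envelope version of that extrapolation in Python. Only the 91 percent accuracy figure comes from the study; the daily search volume and the share of searches that trigger an AI Overview are illustrative assumptions, not reported figures.

    # Back-of-envelope extrapolation of the study's headline claim.
    # Only the accuracy figure is from the study; the rest are assumptions.
    searches_per_day = 8.5e9   # assumed global Google searches per day
    overview_share = 0.10      # assumed share of searches that show an AI Overview
    error_rate = 1 - 0.91      # 91% accuracy after the Gemini 3 update (from the study)

    wrong_per_day = searches_per_day * overview_share * error_rate
    print(f"~{wrong_per_day / 1e6:.0f} million wrong answers per day")  # tens of millions
    print(f"~{wrong_per_day / 24 / 1e6:.1f} million per hour")          # millions per hour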

    Why it matters: Nine out of ten sounds great until you realize Google handles billions of searches a day. AI Overviews now sits at the very top of the results page, ahead of the blue links it cites, which means most users never check the source. The product is designed to make you stop reading right there. When the answer is wrong, that confidence becomes a problem. And the trade-off Google made is buried in the model selection: the fast, cheap Gemini Flash model handles most queries, not the more accurate Pro model, because speed wins on a search page.

    Source: Ars Technica


    Quick Hits

    • Anthropic also expanded its compute deal with Google and Broadcom this week, locking in 3.5 gigawatts of new TPU capacity coming online in 2027. The expansion is part of Anthropic’s $50 billion U.S. infrastructure commitment, and comes as the company’s run-rate revenue surges past $30 billion. Source: TechCrunch

    • Uber became the latest major company to switch to Amazon’s custom AI chips, using AWS Trainium2 to train the AI models that power its ride and delivery business. Another sign that Nvidia’s grip on AI training is loosening at the top of the market. Source: Reuters

    • PIMCO is weighing a $14 billion debt deal to finance Oracle’s new Michigan data center, according to Bloomberg. AI infrastructure is now being financed at scales that used to belong to oil pipelines and toll roads. Source: Reuters

    • Atlassian launched visual AI tools and third-party agents inside Confluence, letting AI agents from outside vendors operate directly inside the workspace where teams already write docs and run projects. Source: TechCrunch


    That’s it for today. Yesterday OpenAI was pitching the public on robot taxes and a four-day workweek. Today Anthropic is quietly handing decade-old security bugs to Microsoft, Intel is signing onto a one-terawatt chip project, and Google is being told its flagship AI is wrong tens of millions of times a day. The public-facing story and the actual buildout are running on different tracks, and the buildout is moving faster.

    Forward this to someone who needs to stay in the loop.


  • The Last Generation of Great Engineers May Have Already Been Born


    AI won’t make everyone equal. It will make the gap between exceptional and average wider than ever.


    The Reality

    There’s a comforting story floating around the tech world right now. It goes like this: AI tools will level the playing field. Junior developers will code like seniors. Non-technical people will build like engineers. Everyone will be elevated.

    Max Brodeur-Urbas, founder of Gumloop, an automation platform processing 4 million workflows daily for Instacart, Shopify, and DoorDash, sees something different happening.

    “It’s possible that the last generation of great engineers has been born,” he says. “Because there was this era of actually needing to understand what’s going on and then getting accelerated by AI. But now people can skip the understanding part and just accelerate.”

    The people who learned the fundamentals first, who understand why things work and not just how to make them work, are now getting turbocharged by AI. They’re becoming exceptional at a speed that wasn’t possible before.

    Everyone else is generating slop.


    The Shift

    AI creates a fork in the road, not a rising tide.

    On one path: people who use AI as a learning tool. They pause. They try to understand the problem. They ask AI to teach them what they don’t know. They build on genuine comprehension.

    On the other path: people who use AI as a shortcut. Their website works. The feature does what they wanted. But they never took the time to understand why it worked, what could break, or what knock-on effects it might create.

    The Old Way: Everyone needed to learn the fundamentals. The bar was high. Progress was slow but solid.
    The New Reality: You can skip the fundamentals entirely. Your code compiles. Your app runs. But you’re building on a foundation you can’t see, can’t debug, and can’t improve.

    “It’s so easy to just not want to understand why something works,” Max says. “Your website worked, the feature did what you wanted, but you didn’t take the time to really dig into why.”

    This is the trap. AI makes it effortless to skip understanding. And skipping understanding feels productive in the moment. You shipped the feature. You launched the product. But when something breaks in a way the AI can’t fix, you’re stuck.

    “If you can actually have the determination to pause, try to understand the problem, have AI teach you the things you don’t understand, you’ll become exceptional even faster than before. And then the average person will just kind of fall to the slop.”


    What To Do Next

    The next time AI gives you a solution that works, don’t move on. Spend five minutes understanding why it works.

    Ask the AI to explain its reasoning. Change one variable and see what breaks. Read the output instead of just running it.

    This is the new competitive advantage. Not using AI. Everyone will use AI. The advantage is using AI while maintaining the discipline to actually learn from it.

    The engineers, marketers, analysts, and operators who do this will pull ahead so fast that the gap becomes permanent. The ones who don’t will produce work that looks right on the surface and falls apart under pressure.


    The One Thing to Remember

    AI doesn’t level the playing field. It amplifies the gap. The people who understand the fundamentals and use AI to go faster will become unreachable. The people who skip understanding will produce impressive-looking work that breaks at the first unexpected input.


    This insight comes from “50 AI Agents Running My Company Is a Lie” featuring Max Brodeur-Urbas, founder of Gumloop. The AI Shift curates wisdom from AI leaders for busy professionals navigating the AI era. Are you using AI to learn faster, or to skip learning entirely?


  • OpenAI’s Robot Tax Vision, Sam Altman Trust Crisis, Iran Targets Stargate


    Good morning, OpenAI published a utopian policy blueprint for the AI economy, The New Yorker landed a 100-source investigation the same day questioning whether Sam Altman can be trusted to deliver any of it, and Iran posted satellite imagery of the Stargate data center with a threat attached. Here’s what happened 👇


    1. OpenAI’s Wish List: Robot Taxes, Public Wealth Funds, and a Four-Day Workweek

    OpenAI released a 30-page policy document titled “Industrial Policy for the Intelligence Age” laying out how it thinks governments should handle the economic fallout of superintelligent AI. The proposals are striking because they read more like a Bernie Sanders white paper than a Silicon Valley wish list. OpenAI suggests shifting the tax burden from labor to capital, floating a “robot tax” that would tax AI systems at the same payroll rates paid on the human workers they replace. It proposes a Public Wealth Fund that would give every American an automatic stake in AI companies, with returns distributed directly to citizens. It calls for subsidized 32-hour, four-day workweek pilots with no loss in pay, expanded retirement matches, employer-covered childcare, and portable benefits that follow workers across jobs. The document acknowledges that AI-driven growth could “hollow out the tax base that funds Social Security, Medicaid, SNAP, and housing assistance” if nothing changes.
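
    The robot-tax idea in particular is easy to put rough numbers on. A minimal sketch, assuming the current US employer-side FICA rate and an invented displacement scenario; none of these figures appear in OpenAI’s document.

    # Rough illustration of the "robot tax" mechanics as proposed: tax an AI
    # deployment at the payroll rates its displaced workers would have generated.
    # The headcount and salary are invented for illustration.
    replaced_workers = 100
    avg_salary = 60_000             # assumed average salary of the displaced roles
    employer_payroll_rate = 0.0765  # US employer-side FICA (Social Security + Medicare)

    robot_tax = replaced_workers * avg_salary * employer_payroll_rate
    print(f"annual robot tax owed: ${robot_tax:,.0f}")  # $459,000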

    Why it matters: This is the $852 billion company that built ChatGPT openly admitting that the current economic model cannot survive the technology it is selling. When the people building the thing tell you it will gut the tax base, replace the workers, and require a robot tax to fix it, that is not a marketing pitch. That is a confession dressed up as policy. The question is whether anyone in Washington is going to take a redistribution agenda seriously when it comes from a for-profit company whose CEO has spent the last year lobbying against AI safety laws.

    Source: TechCrunch


    2. “The Problem Is Sam Altman”: New Yorker Investigation Lands the Same Day

    Hours after OpenAI published its policy vision, The New Yorker published a massive investigation into whether Sam Altman can be trusted to deliver on any of it. The reporters interviewed more than 100 people familiar with how Altman operates, reviewed internal memos, and interviewed Altman himself more than 12 times. The portrait is brutal. One board member described Altman as having “two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” Internal messages from former chief scientist Ilya Sutskever and former research head Dario Amodei (now CEO of Anthropic) document what they called “an accumulation of alleged deceptions and manipulations.” Amodei wrote bluntly: “The problem with OpenAI is Sam himself.” One current OpenAI researcher told The New Yorker that Altman “sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”

    Why it matters: The timing is not a coincidence. OpenAI’s chief global affairs officer told The Wall Street Journal that the company is urgently concerned about negative public opinion. The policy document reads like an attempt to reset a narrative that is slipping. But trust is the entire product when you are asking the public to let you build superintelligence. If the people who worked closest with Altman are saying out loud that he tells everyone what they want to hear and then walks away from the constraints he agreed to, no policy white paper fixes that.

    Source: Ars Technica


    3. Iran Threatens to Bomb the $500B Stargate Data Center in Abu Dhabi

    Iran’s military released a video this weekend showing satellite imagery of OpenAI’s Stargate data center in the United Arab Emirates, with a message that read “nothing stays hidden to our sight, though hidden by Google.” Military spokesperson Ebrahim Zolfaghari warned that if the U.S. follows through on threats to strike Iranian power and water infrastructure, Iran will hit U.S. tech and energy infrastructure across the Middle East in return. This is not an empty threat. Iranian missiles have already struck AWS data centers in Bahrain and an Oracle data center in Dubai earlier in the war that began in February. Iran has also publicly named Nvidia and Apple as targets. Stargate is the $500 billion joint venture between OpenAI, SoftBank, and Oracle to build out global AI infrastructure, originally announced in January 2025. The Trump administration has threatened further strikes on Iranian civilian infrastructure if Iran does not reopen the Strait of Hormuz by Tuesday.

    Why it matters: AI data centers are no longer just real estate. They are now strategic military targets, the way oil refineries became targets in the 20th century. The race to build AI infrastructure overseas, especially in the Gulf, was supposed to solve power and land constraints at home. Instead it has put the most expensive computing assets in the world in the middle of an active war zone. Every company building toward “agentic AI” depends on physical buildings that can be hit by a missile. That is the part of the AI boom no one prices in.

    Source: TechCrunch


    Quick Hits

    • Samsung said Q1 operating profit will jump roughly eightfold on red-hot AI chip prices, a quarterly record that nearly equals what the company earned in all of last year. Source: Reuters

    • Robotics company Generalist released GEN-1, a new physical AI model hitting 99% success rates on tasks like folding boxes, packing phones, and servicing robot vacuums. The model can improvise when objects move unexpectedly: “Nobody has programmed the robot to make mistakes, therefore nobody has programmed the robot to recover from mistakes. And that just happens for free.” Source: Ars Technica

    • OpenAI is asking the California and Delaware attorneys general to investigate Elon Musk for what it calls “anti-competitive behavior” related to xAI and his ongoing legal battles with the company. Source: Reuters

    • Google quietly launched an offline-first AI dictation app on iOS, processing speech-to-text on-device with no cloud roundtrip. A small but meaningful shift toward local AI on phones. Source: TechCrunch


    That’s it for today. OpenAI wants you to imagine a future where AI funds your retirement and gives you a four-day workweek. The same week, the people who built the company are telling reporters they don’t trust the man pitching it, and the buildings that would deliver it are being targeted by missiles. The vision and the reality are diverging fast.

    Forward this to someone who needs to stay in the loop.


  • The 10-Year Head Start Your Kids Don’t Know They Have


    Kids who grow up with personalized AI tutors will arrive at 18 with a completely different foundation than kids who didn’t.


    The Reality

    There’s a quiet revolution happening in education that most parents haven’t fully grasped yet.

    Google’s NotebookLM lets you upload a stack of files and have a conversation with them. But that’s the simple version. The real shift is what happens when you combine that with personalization: Explain gravity to a 10-year-old who loves soccer. Relevel a college textbook for a middle schooler. Turn a dense research paper into a podcast, an infographic, or an interactive lesson.

    Yossi Matias and his team at Google Research have been experimenting with exactly this. “Can we reimagine the textbook?” he asked. “Can we take a textbook and use AI to give it different experiences that are going to be personalized and contextualized?”

    The answer, even in these early days, is yes. Immersive experiences. Conversational learning. Sketchbook-style interaction. All adapted to the specific child, their age, their interests, their level.


    The Shift

    The Old Way: One textbook. One level. Same material for every student. The kid who’s ahead is bored. The kid who’s behind is lost. The teacher tries to serve 30 different levels at once.

    The New Reality: Every child gets a tutor that knows their level, speaks their language, and connects every concept to something they already care about. Available 24/7. Infinitely patient. And it gets better every month.

    The model where everyone learns the same thing at the same pace is 200 years old. AI is breaking it.

    Here’s what makes this urgent: kids who grow up with these tools from age five are going to arrive at 18 with a completely different intellectual foundation than kids who didn’t. That’s not a one-year gap. It’s potentially a ten-year advantage in how they think, what they know, and how quickly they can learn new things.

    As one parent noticed, kids are now expected to read before they start school. The baseline keeps rising. And AI is about to raise it again, dramatically.

    Matias sees this as the natural evolution: “When Google made it possible for everybody to get facts, people said, ‘What about homework?’ But kids didn’t get lazy. We just expected them to go to the next level, to synthesize. With AI, we’re just going to uplevel what we expect again.”


    What To Do Next

    If you have kids, start exploring AI learning tools with them now. NotebookLM, Khan Academy’s AI tutor, and ChatGPT are all free or cheap starting points. Don’t just hand them the tool. Sit with them and show them how to ask better questions. That meta-skill, learning how to learn with AI, is the real advantage.

    If you’re reskilling yourself, the same principle applies. Stop consuming content passively. Upload what you’re studying into an AI tool and have a conversation with it. Ask it to explain concepts at your level. Quiz yourself. Get feedback. The tools that are reshaping education for kids work just as well for adults.


    The One Thing to Remember

    Every child will soon have a polymath in their pocket: a tutor that knows everything about every subject, adapts to their level, and connects ideas to what they care about. The kids who learn to use it well will have a decade-long advantage over those who don’t. And the same is true for adults who start now.


    This insight comes from “Google VP: The AI Shift Is Done and the Gap Between People Is Growing” featuring Yossi Matias, head of Google Research. The AI Shift curates wisdom from AI leaders for busy professionals navigating the AI era. How are you using AI to learn right now?


  • Cognitive Surrender Study, Copilot’s “Entertainment Only” Terms, Britain Courts Anthropic


    Good morning, a major study just put a name on something we all suspected about how people use AI, Microsoft got caught telling users not to trust the product it’s selling to every enterprise on Earth, and Britain is making a play for Anthropic while the U.S. pushes the company away. Here’s what happened 👇


    1. Study: 73% of AI Users Accept Wrong Answers Without Thinking Twice

    Researchers at the University of Pennsylvania ran a study across 1,372 participants and over 9,500 individual trials. They gave people access to an AI chatbot that was secretly modified to give wrong answers about half the time. The result: 73.2% of the time, people accepted the faulty reasoning without questioning it. Only 19.7% overruled the AI when it was wrong. The researchers call this “cognitive surrender,” a state where users stop reasoning for themselves and treat AI output as authoritative simply because it sounds confident. Even more telling, people who used the AI rated their own confidence 11.7% higher than the control group, despite the AI being wrong half the time. When financial incentives were added, people were 19 percentage points more likely to catch bad AI answers. When time pressure was added, they were 12 percentage points less likely to catch mistakes.

    Why it matters: This is the first rigorous framework for something most of us have felt: the more fluent and confident an AI sounds, the less we think for ourselves. We covered how AI actually learns in our AI Explained series, but understanding how we learn to stop thinking when AI is around might be the more urgent lesson. The study’s conclusion is simple but uncomfortable: your reasoning is only ever as good as the AI you’ve surrendered it to.

    Source: Ars Technica


    2. Microsoft’s Own Terms of Service: Copilot Is “For Entertainment Purposes Only”

    Microsoft is spending billions convincing businesses to pay for Copilot. But the product’s own terms of use, last updated in October 2025, say something different: “Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.” The terms went viral on social media this week. A Microsoft spokesperson told PCMag that the language is “legacy” and “no longer reflective of how Copilot is used today.” They said it will be updated. Microsoft is not alone in this: OpenAI warns users not to treat output as a “sole source of truth or factual information,” and xAI says not to rely on Grok as “the truth.”

    Why it matters: Every major AI company is racing to sell tools for high-stakes professional use. Writing code. Analyzing contracts. Making medical recommendations. But their own legal teams are quietly telling you not to trust any of it. When the company’s marketing says “transform your business” and the fine print says “for entertainment only,” one of those messages is designed to protect the company, not you.

    Source: TechCrunch


    3. Britain Courts Anthropic With London Expansion After U.S. Blacklisting

    The British government is actively pitching Anthropic on expanding its presence in the UK. Proposals range from a larger London office to a dual stock listing, according to the Financial Times. The outreach comes after the U.S. government blacklisted Anthropic, designating it a national security supply chain risk after the company refused to let the military use Claude for surveillance or autonomous weapons. A U.S. judge temporarily blocked the blacklisting, and Anthropic has a second lawsuit pending over the designation. Prime Minister Keir Starmer’s office is supporting the effort, which will be presented to Anthropic CEO Dario Amodei during a visit to London in late May.

    Why it matters: One country punishes an AI company for setting ethical boundaries. Another country sees that same stance as an opportunity. Britain’s pitch is essentially: “If the U.S. doesn’t want companies that say no to military AI, we do.” This is how the global AI landscape is reshaping itself. Not just by who builds the best models, but by which governments align with which values.

    Source: Reuters


    Quick Hits

    • DeepSeek’s V4 model will run on Huawei chips, with Alibaba, ByteDance, and Tencent placing bulk orders for hundreds of thousands of Huawei’s upcoming processors. DeepSeek has been rewriting parts of V4’s code to optimize for Chinese chips. The model is expected to launch in weeks. Source: Reuters

    • The Writers Guild reached a tentative four-year deal with studios that bolsters protections against works being used to train AI, increases health plan and pension funding, and raises streaming residuals. The contract still needs ratification by union members. Source: The Verge

    • Suno’s AI music platform is a copyright nightmare, making it trivially easy to generate convincing covers of real artists and flood streaming services with AI-generated imitations. Source: The Verge


    That’s it for today. A study proves what many suspected: most people have already stopped thinking critically about AI output. And while companies sell AI for serious work, their legal teams still call it entertainment. The gap between what AI companies promise and what they’ll stand behind has never been wider.

    Forward this to someone who needs to stay in the loop.


  • “50 AI Agents Running My Company”


    If someone is selling you the dream of effortless AI automation, they found the only business model that actually prints money: selling hope.


    The Reality

    You’ve seen the posts. “I automated everything. I work one hour a week. I made $10 million this weekend with my SaaS app.”

    It’s everywhere. Twitter. LinkedIn. YouTube. A new flavor of the same pitch that’s been recycled through every hype cycle from crypto to NFTs to AI: skip the hard work, shortcut directly to the value.

    Max Brodeur-Urbas runs Gumloop, an automation platform processing 4 million workflows daily for companies like Instacart, Shopify, and DoorDash. He’s seen what real AI automation looks like at scale, and he has a simple message about the “50 agents running my company” crowd.

    “Most of that is just marketing. They’re lying to you.”

    There’s a category he calls “course bros,” people who sell the dream of effortless income through AI. They post workflows, promise $30,000 weekends, and charge for courses that reveal the “recipe.” The pitch targets people who are vulnerable, easily persuaded, convinced that something will save them from their current situation.

    “You can sell hope really easily,” he says. “But you’re selling this vision of skipping the hard work, shortcutting directly to the value, which will never happen. It’ll never work. But for the person selling you that course, they’re going to make a ton of money. They found the way to print money.”


    The Shift

    So what does real AI automation look like? Not 50 agents. Not zero effort. Something much less glamorous and much more effective.

    The most productive people generating the most value with AI share one trait: they apply it to something they already understand deeply.

    The Old Way: Follow the guru. Buy the course. Copy the workflow. Hope for magic.
    The New Reality: Take something you know inside and out. Apply AI to the repetitive parts. Keep your hands on the parts that require judgment.

    “If you’re automating something you don’t understand, it’s just going to be a slot machine,” Max says. “If you’re using AI to code and you don’t know how to code at all, you’re making malware at the end of the day.”

    The best users of Gumloop aren’t the ones who automated everything. They’re the ones who automated the repetitive parts and kept the human touch where it matters. The marketer who uses AI to process data but writes the strategy herself. The ops person who automates reporting but makes the decisions manually. The salesperson who uses AI to research prospects but builds relationships in person.

    “I apply AI to speed myself up and take the things I do understand, do it way faster, so I can learn more things and grow as a person. But I’m never trying to shortcut understanding something or expanding my skill set by having AI just replace me.”

    If there were a magic solution that could make you $30,000 in a weekend, nobody would be giving it away on Twitter.


    What To Do Next

    Stop looking for the shortcut. Start looking for the repetitive task.

    Pick one thing you do every week that’s tedious but that you understand completely. Automate that. Not your entire job. Not your entire workflow. One thing.

    The value comes from depth, not breadth. One well-automated process you understand beats fifty agents you can’t explain.

    And the next time someone posts about their 50 AI agents, ask them one question: which of those agents would break if you changed one variable? If they can’t answer, they don’t understand their own system.


    The One Thing to Remember

    The people making the most money from AI automation aren’t using 50 agents. They’re using AI to go faster at the things they already know how to do. That’s it. Everything else is marketing.


    This insight comes from “50 AI Agents Running My Company Is a Lie” featuring Max Brodeur-Urbas, founder of Gumloop. The AI Shift curates wisdom from AI leaders for busy professionals navigating the AI era. What’s the one task you’d automate first?


  • Google Opens Gemma 4, Microsoft Builds Its Own AI Stack


    Good morning, Google just made its best open AI models actually open, Microsoft is quietly building a path away from OpenAI, and a lawsuit claims Perplexity’s Incognito Mode doesn’t protect anything. Here’s what happened 👇


    1. Google Launches Gemma 4, Drops Restrictive License for Apache 2.0

    Google released Gemma 4, its most capable family of open AI models, in four sizes: 2B and 4B for mobile devices, plus a 26B mixture-of-experts model and a 31B dense model for local hardware. The models are based on the same technology as Google’s Gemini 3 and support agentic workflows, function calling, structured JSON output, code generation, and vision tasks. The 26B MoE model activates only 3.8 billion of its 26 billion parameters during inference, delivering much higher speed than similarly sized models. Context windows reach 256k tokens for the larger variants. But the biggest news may be the licensing: Google ditched its restrictive custom Gemma license, which let Google change terms unilaterally, for Apache 2.0. Developers now have full freedom to build commercially without Google’s oversight.
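
    In practice, Apache 2.0 means the weights can be pulled and run locally with standard tooling, no custom license gate attached. A minimal sketch using Hugging Face’s transformers library; the model identifier is a placeholder, since the article does not name the actual Hub repos.

    # Minimal local-inference sketch for an open-weights model.
    # Requires transformers and accelerate; the model id below is hypothetical.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-4-4b"  # placeholder identifier for illustration

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Explain mixture-of-experts routing in one paragraph."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))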

    Why it matters: Licensing was the main reason many developers avoided Google’s open models. Apache 2.0 removes that barrier entirely. With Gemma 4 ranking #3 on the open model leaderboard at a fraction of the size of competing models, Google just made the strongest case yet that you don’t need a cloud subscription to run capable AI.

    Source: Ars Technica


    2. Microsoft Launches Three Foundational AI Models to Reduce OpenAI Dependence

    Microsoft AI, led by Mustafa Suleyman, released three new foundational models: MAI-Transcribe-1 (speech-to-text across 25 languages, 2.5x faster than Azure Fast), MAI-Voice-1 (generates 60 seconds of audio in one second with custom voice creation), and MAI-Image-2 (image generation). The models are available through Microsoft Foundry and priced to undercut Google and OpenAI. Suleyman called it “Humanist AI,” focused on how people actually communicate. While Microsoft reaffirmed its OpenAI partnership, a recent renegotiation of that deal is what allowed Microsoft to pursue its own superintelligence research. This is the first major output from the MAI Superintelligence team formed in November 2025.

    Why it matters: Microsoft invested $13 billion in OpenAI. Now it’s building competing models. The message is clear: Microsoft wants to be an AI platform, not just an OpenAI reseller. If these models are genuinely cheaper and good enough, enterprise customers get a reason to stay in the Microsoft ecosystem without paying OpenAI prices.

    Source: TechCrunch


    3. Lawsuit: Perplexity Shares Your “Private” AI Chats with Google and Meta

    A class action lawsuit alleges that Perplexity’s AI search engine secretly shares complete chat transcripts with Google and Meta through embedded ad trackers, including the Meta Pixel (formerly the Facebook pixel), Google Ads, and Google DoubleClick. The lawsuit claims this happens to every user, whether they have an account or not. Worse, even paid users who enabled “Incognito Mode” had their conversations shared along with their email addresses and other personal identifiers. The complaint describes the Incognito feature as a “sham.” Users’ financial data, health questions, and legal queries were allegedly shared without consent. Perplexity’s privacy policy doesn’t mention specific trackers and isn’t even linked on its homepage. The proposed class covers chats from December 2022 through February 2026.
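
    Mechanically, there is nothing exotic about how a tracker could carry chat content: a pixel is just an HTTP request to the ad network, and whatever page state the embedding site attaches rides along as parameters. A minimal illustration in Python; the endpoint and parameter names are invented for the sketch, not any vendor’s actual tracker API.

    # Pixel-style tracking in miniature: page state rides along as query
    # parameters on a request to the ad network. Endpoint and parameter
    # names here are made up for illustration.
    import urllib.parse

    chat_query = "do I need a lawyer to dispute medical debt?"
    params = urllib.parse.urlencode({
        "pixel_id": "123456",                        # hypothetical tracker id
        "event": "PageView",
        "page_url": "https://ai-search.example/chat",
        "custom_data": chat_query,                   # the alleged leak: chat content attached
    })
    print(f"https://tracker.example/collect?{params}")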

    Why it matters: People use AI search engines to ask things they wouldn’t ask another person, from health scares to financial problems to legal questions. If this lawsuit’s claims hold up, millions of people’s most private queries were being fed to advertising companies the entire time. “Incognito Mode” meaning nothing is the kind of betrayal that erodes trust in the entire AI industry.

    Source: Ars Technica


    Quick Hits

    • OpenAI acquires TBPN, a popular founder-led business talk show, saying the deal will help “create a space for a real, constructive conversation about the changes AI creates.” OpenAI is now in the media business. Source: Reuters

    • China drafts regulations for “digital humans,” requiring clear labeling and banning AI services designed to be addictive for children. Source: Reuters

    • Samsung is expected to report a record quarterly profit as AI chip demand drives a surge in memory sales. Source: Reuters


    That’s it for today. The AI industry is fracturing in interesting ways. Google is making open models truly open. Microsoft is building its own stack while still paying OpenAI billions. And the companies that promised privacy are allegedly doing the opposite. The question isn’t who has the best model anymore. It’s who you can actually trust.

    Forward this to someone who needs to stay in the loop.


  • Prove Yourself Wrong as Fast as Possible


    The best thing that can happen to your idea is someone telling you why it won’t work. The worst thing is months of silence.


    The Reality

    Max Brodeur-Urbas got deported from the United States. He wasn’t doing anything illegal. He’d quit his job at Microsoft, moved back to Vancouver, and driven down to Seattle to visit old roommates for a weekend. Border agents turned him around on suspicion that he planned to stay longer. That came with a five-year ban.

    “That was kind of the moment where I realized I had to build a company because I had no fallback plan.”

    So he went back to his apartment and started building. A video game moderation tool. Trust and safety software. Bot detection. An anti-scam platform. A new idea nearly every week for months.

    Almost all of them failed.

    But the failure wasn’t the interesting part. The interesting part was how he learned to fail.


    The Shift

    In the beginning, Max spent months building each idea before showing it to anyone. He’d invest weeks into a product, polish it, and then hope someone would validate it.

    That’s the wrong order.

    “In the beginning, I was building ideas for months and then hoping someone would prove me right. But that’s the opposite of what you should be doing.”

    The breakthrough came when he flipped the process: instead of building first and validating later, he started hunting for reasons his ideas wouldn’t work.

    The Old Way: Build for months. Hope someone says yes. Feel devastated when they say no.
    The New Reality: Hunt for the “no” as fast as possible. If you can’t find a strong reason something won’t work, then you might actually have something worth building.

    “You should actually be hunting for someone to tell you why this won’t work. If you can’t find a reason it won’t work, then you actually have some sort of tangible idea you should pursue.”

    This is what eventually led to Gumloop. Max noticed people in the AutoGPT Discord asking basic questions: “What is GitHub? How do I install something locally?” He built a simple UI to solve that problem. It wasn’t glamorous. It wasn’t his grand vision. But people actually wanted it.

    Then came the real insight: the AI agents people were so excited about were unreliable. His users were frustrated. So he gave them what they were secretly asking for: not smarter agents, but predictable, reliable automation. The non-technical users (business admins, ops people) went wild for it.

    “I kind of gave them what they were secretly asking for, which is just reliability, predictability.”

    The company that now processes 4 million workflows a day for Instacart, Shopify, and DoorDash was born from listening to frustration, not from a brilliant idea.


    What To Do Next

    Whatever you’re working on right now, whether it’s a side project, a business idea, or a new initiative at work, find one person who will tell you why it won’t work.

    Not a friend who will be nice. Not a colleague who owes you a favor. Someone with no incentive to protect your feelings.

    Their objection is worth more than your confidence. If they can’t break your idea, keep going. If they can, you just saved yourself months.

    And if you’re building with AI: talk to users before you build features. The most successful AI products aren’t the smartest ones. They’re the ones that solved the problem people actually had, not the one the founder imagined.


    The One Thing to Remember

    The fastest way to build something great is to get really good at proving yourself wrong. Every idea that fails fast is a week saved. Every “no” you hear early is a month you didn’t waste.


    This insight comes from “50 AI Agents Running My Company Is a Lie” featuring Max Brodeur-Urbas, founder of Gumloop. The AI Shift curates wisdom from AI leaders for busy professionals navigating the AI era. What idea have you been holding onto that needs to be tested?


  • Anthropic’s GitHub Takedown Backfire, Oracle Cuts 30,000 Jobs


    Good morning, Anthropic’s week went from bad to worse, Intel is making a $14.2 billion bet that AI will fuel its comeback, and Oracle just laid off 18% of its workforce by email to pay for data centers. Here’s what happened 👇


    1. Anthropic Accidentally Takes Down Thousands of GitHub Repos in Leak Cleanup

    Anthropic’s response to its Claude Code source leak made things worse. The company filed a DMCA takedown notice with GitHub to remove repositories containing the leaked code, but the notice was executed against roughly 8,100 repositories, including legitimate forks of Anthropic’s own publicly released Claude Code repo. Developers whose unrelated code got blocked were furious. Anthropic’s head of Claude Code, Boris Cherny, called it an accident: the targeted repo was part of a fork network connected to their own public repo, so the takedown “reached more repositories than intended.” Anthropic retracted the notice for everything except the original leak and 96 forks, and GitHub restored access. But the damage to Anthropic’s reputation compounds at the worst time, as the company reportedly plans an IPO.

    Why it matters: First you accidentally leak 512,000 lines of source code. Then you accidentally take down 8,000 repos trying to clean it up. For a company preparing to go public, execution matters, and this is two unforced errors in 48 hours.

    Source: TechCrunch


    2. Intel Spends $14.2 Billion to Buy Back Its Ireland Chip Factory

    Intel is paying $14.2 billion to buy back the 49% stake in its Ireland manufacturing facility that it sold to Apollo Global Management, taking full ownership of the plant. Apollo had paid $11.2 billion for that stake in 2024 when Intel was struggling financially. Since then, Intel changed CEOs, cut jobs aggressively, and received billions from Nvidia and the U.S. government. The turnaround is being driven by rising demand for Intel’s processors in AI data centers, specifically for inference: the process by which AI tools like ChatGPT respond to queries. Intel shares rose more than 10% on the news. The deal will be funded with cash and about $6.5 billion in new debt.
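
    A quick back-of-envelope on the deal’s figures; both prices come from the article, and the derived numbers are simple arithmetic.

    # Implied economics of the buyback, from the two disclosed prices.
    apollo_paid_2024 = 11.2e9  # Apollo's 2024 price for the 49% stake
    intel_pays_now = 14.2e9    # Intel's buyback price for the same stake
    stake = 0.49

    apollo_gain = intel_pays_now / apollo_paid_2024 - 1
    implied_plant_value = intel_pays_now / stake
    print(f"Apollo's return on the stake: {apollo_gain:.0%}")                      # ~27%
    print(f"implied value of the whole plant: ${implied_plant_value / 1e9:.0f}B")  # ~$29B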

    Why it matters: Intel sat out the first three years of the AI boom. This buyback signals the company believes it’s finally caught up enough to invest aggressively. If AI inference demand keeps growing, Intel’s bet could pay off. If it doesn’t, Intel just took on $6.5 billion in debt for a factory it owned outright two years ago.

    Source: Reuters


    3. Oracle Lays Off 30,000 Workers to Fund AI Data Centers

    Oracle has begun cutting up to 30,000 employees, roughly 18% of its 162,000-person workforce, to free up cash for its massive AI infrastructure buildout. Workers across the US, India, Canada, and Mexico were notified via 6 AM termination emails. The cuts span sales, engineering, and security roles. TD Cowen analysts estimated in January that layoffs of this scale could free up $8-10 billion in cash flow. Oracle has been raising tens of billions in debt to build AI data centers, and carries $553 billion in contracted but unrecognized revenue, much of it tied to a $300 billion deal with OpenAI. Despite all that, the stock is down 25% this year as investors worry about the company’s debt load and competitive position.

    Why it matters: Oracle is making the starkest trade-off in the AI era: cut nearly one in five employees to pay for the machines that replace them. If the AI infrastructure bet pays off, the math works. If it doesn’t, 30,000 people lost their jobs for a buildout that never generated returns.

    Source: MarketWatch


    Quick Hits

    • Switzerland’s finance minister filed a criminal complaint over Grok’s misogynistic “roasts.” Karin Keller-Sutter went to prosecutors after an X user prompted Grok to generate vulgar content about her, asking them to investigate whether X also bears responsibility. Swiss defamation law carries penalties of up to three years in prison. Source: Ars Technica

    • Penguin Random House sues OpenAI in Munich after ChatGPT generated text and images “virtually indistinguishable” from a popular German children’s book series about Coconut the Dragon, including a cover, blurb, and self-publishing instructions. Source: The Verge

    • Baidu’s robotaxis froze in traffic in China, creating road chaos as the autonomous vehicles stopped responding and couldn’t be manually overridden. Source: The Verge


    That’s it for today. The theme is consequences. Anthropic learns that cleaning up a leak can be worse than the leak itself. Intel bets $14.2 billion that its comeback is real. And Oracle shows what happens when the AI buildout bill comes due: 30,000 people get a 6 AM email.

    Forward this to someone who needs to stay in the loop.


  • OpenAI’s $852B Valuation, Claude Code Source Leak


    Good morning, OpenAI just became the most valuable private company in history, Anthropic gave the whole internet a free look at how Claude Code works, and your office Slackbot learned 30 new tricks. Here’s what happened 👇


    1. OpenAI Raises $122 Billion at an $852 Billion Valuation

    OpenAI has closed its largest funding round ever, raising $122 billion at an $852 billion valuation that dwarfs most public companies. SoftBank co-led with Andreessen Horowitz, D.E. Shaw, and others, while Amazon, Nvidia, and Microsoft also participated. About $3 billion came from individual investors through bank channels, and OpenAI will soon be included in ARK Invest ETFs, broadening its shareholder base ahead of a widely expected IPO this year.

    The numbers behind the round tell the bigger story. OpenAI says it now generates $2 billion per month in revenue, has 900 million weekly active users, and over 50 million subscribers. Its ads pilot is already pulling in over $100 million in annualized recurring revenue after just six weeks. Business revenue now makes up 40% of total income, up from 30% last year. OpenAI called itself an “AI superapp” in its press release, making it clear this round is as much about setting IPO expectations as it is about the capital.
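
    A few derived ratios put those disclosures in context. The inputs below are from the announcement; the ratios are simple division.

    # Derived ratios from the round's disclosed metrics.
    monthly_revenue = 2e9    # $2B/month in revenue (disclosed)
    valuation = 852e9        # $852B round valuation (disclosed)
    weekly_actives = 900e6   # 900M weekly active users (disclosed)

    run_rate = monthly_revenue * 12
    print(f"annualized revenue: ${run_rate / 1e9:.0f}B")                              # $24B
    print(f"valuation-to-revenue multiple: {valuation / run_rate:.0f}x")              # ~36x
    print(f"annualized revenue per weekly active: ${run_rate / weekly_actives:.0f}")  # ~$27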

    Why it matters: This isn’t just a funding round. It’s a dress rehearsal for the biggest tech IPO in years. When a private company starts publishing user metrics and flywheel narratives, it’s talking to Wall Street, not just investors.

    Source: TechCrunch


    2. Anthropic Accidentally Leaks Claude Code’s Entire Source Code

    Anthropic published version 2.1.88 of its Claude Code npm package with an exposed source map file, accidentally giving the internet access to the tool’s entire codebase: nearly 2,000 TypeScript files and over 512,000 lines of code. Security researcher Chaofan Shou spotted the leak, and the code was quickly uploaded to a public GitHub repository where it has been forked tens of thousands of times. Anthropic confirmed it was a “release packaging issue caused by human error, not a security breach” and said no customer data or credentials were involved. Developers have already started analyzing the code, posting detailed breakdowns of Claude Code’s memory architecture, plugin system, and query infrastructure.
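
    The mechanics of this kind of leak are mundane. A JavaScript source map is a JSON file, and when it ships with its optional sourcesContent field populated, the original source files can be reconstructed directly from it. A minimal sketch; the file name is hypothetical, since the article does not name the shipped artifact.

    # Reconstruct original sources from a source map that ships sourcesContent.
    # "cli.js.map" is a hypothetical name for the map inside the npm tarball.
    import json
    from pathlib import Path

    source_map = json.loads(Path("cli.js.map").read_text())

    for src, content in zip(source_map["sources"], source_map.get("sourcesContent") or []):
        if content is None:
            continue
        out = Path("recovered") / src.lstrip("./")
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(content)
        print(f"recovered {out}")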

    Why it matters: Claude Code is the most popular AI coding tool on the market right now. Competitors now have a blueprint to study, security researchers have a map to probe, and Anthropic has lost a significant piece of its competitive advantage overnight because of a packaging mistake.

    Source: Ars Technica


    3. Salesforce Gives Slack 30 New AI Features, Turns Slackbot Into an Agent

    Salesforce unveiled a major AI overhaul for Slack, adding 30 new features that transform the workplace chat app into an AI agent platform. The biggest change: Slackbot now supports “reusable AI skills” that let users define specific tasks (like creating a budget) that the bot can execute by pulling data from channels, apps, and connected sources. It also functions as an MCP (Model Context Protocol) client, meaning it can connect to outside services and route work to Salesforce’s Agentforce platform or any enterprise agent. New capabilities include meeting transcription, real-time summaries, and a desktop monitoring feature that tracks your calendar, conversations, and habits to suggest follow-ups. Salesforce says a million businesses now run on Slack, with 2.5x revenue growth since its acquisition.
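
    To make the MCP point concrete: an MCP client like the new Slackbot discovers and calls tools exposed by any MCP server. A minimal sketch of such a server using the official MCP Python SDK; the budget tool is a toy example, not Salesforce’s actual skill API.

    # Minimal MCP server exposing one tool that an MCP client (such as the
    # new Slackbot) could discover and call. The tool is a toy example.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("budget-tools")  # hypothetical server name

    @mcp.tool()
    def create_budget(team: str, quarterly_spend: float, headcount: int) -> str:
        """Draft a one-line quarterly budget summary for a team."""
        per_head = quarterly_spend / max(headcount, 1)
        return f"{team}: ${quarterly_spend:,.0f}/quarter across {headcount} people (${per_head:,.0f} each)"

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default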

    Why it matters: The race to embed AI agents into work tools just got real. If Slackbot can schedule your meetings, draft your emails, and pull data from your CRM without you leaving the chat window, the line between “messaging app” and “work operating system” disappears.

    Source: TechCrunch


    4. Nvidia Invests $2 Billion in Marvell to Lock Down AI Infrastructure

    Nvidia has made a $2 billion investment in chip designer Marvell Technology, creating a partnership focused on advanced networking solutions for AI data centers. The deal centers on optical interconnects and silicon photonics, the technology that enables high-speed, energy-efficient data transmission between AI chips. Marvell will contribute custom chips compatible with Nvidia’s NVLink Fusion, while Nvidia supplies CPUs, network interface cards, and interconnects. Big Tech firms including Alphabet and Meta are expected to spend at least $630 billion on AI infrastructure this year, and this deal positions Nvidia to stay central to that buildout even as customers explore custom chip alternatives.

    Why it matters: Nvidia isn’t just selling GPUs anymore. By investing in the networking layer that connects all the chips in a data center, it’s making sure even companies that use competitors’ processors still need Nvidia’s ecosystem to make everything talk to each other.

    Source: Reuters


    Quick Hits

    • Chinese chipmakers now claim nearly half of their domestic market as Nvidia’s share shrinks under ongoing U.S. export restrictions, according to IDC data. Source: Reuters

    • Meta launches two $499 prescription versions of its Ray-Ban smart glasses, expanding its AI-powered wearables line into the prescription market for the first time. Source: Reuters

    • Anthropic signs an AI safety and economic data tracking deal with Australia, its latest move to expand internationally while navigating its ongoing conflict with the U.S. Defense Department. Source: Reuters


    That’s it for today. The through-line: AI companies are building empires so fast that even their mistakes create industry-shifting moments. A $122 billion funding round, 512,000 lines of leaked code, and a chat app that now runs your workday. The scale is hard to process, but the direction is clear.

    Forward this to someone who needs to stay in the loop.
