Good morning. OpenAI saw warning signs before Canada’s deadliest school shooting in years and never called police, a Google VP just told thousands of AI startups their business models won’t survive, and Samsung is rebuilding your phone’s AI around a team of specialists. Here’s what happened over the weekend 👇
1. ChatGPT Had Warnings Before Canada’s School Shooting. OpenAI Didn’t Call Police.
On February 10th, a shooting at Tumbler Ridge Secondary School in British Columbia killed 9 people and injured 27 others — Canada’s deadliest mass shooting since 2020. The suspect, Jesse Van Rootselaar, had described detailed violent scenarios to ChatGPT months earlier, in June 2025. Those conversations triggered OpenAI’s automated content review system, and several OpenAI employees raised serious internal concerns — some arguing they could be a precursor to real-world violence. Company leadership reviewed the case and concluded it did not rise to the level of “imminent and credible risk” to others. They banned the account. They did not call police.
After the shooting, OpenAI said it “proactively reached out” to the Royal Canadian Mounted Police with information — but that outreach happened after 9 people were already dead. OpenAI’s position: the company must balance user privacy against safety, and can’t trigger law enforcement referrals for every disturbing conversation without risking harm to innocent users.
Why it matters: This is one of the hardest questions the AI era has produced — and there are currently no laws telling companies what to do. If an AI tool flags something alarming, who is responsible for acting on it? OpenAI’s argument is that over-referral could harm innocent people and erode user trust. That may be right. But for 9 families in Tumbler Ridge, it’s also very cold comfort.
Source: The Verge | TechCrunch
2. A Google VP Just Told AI Startups: Two Business Models Are Already Dead
Darren Mowry, the VP who runs Google’s global startup program across Cloud, DeepMind, and Alphabet, gave a blunt warning this week: two types of AI companies that exploded during the boom are now “check engine light” businesses — and most won’t make it.
LLM wrappers — startups that build a product interface on top of existing AI models like ChatGPT, Claude, or Gemini — are getting squeezed. “If you’re really just counting on the back-end model to do all the work and you’re almost white-labeling that model, the industry doesn’t have a lot of patience for that anymore,” Mowry said.
AI aggregators — platforms that give you access to multiple AI models in one place — face the same fate. Model providers are building their own enterprise tools, cutting out middlemen. “Stay out of the aggregator business,” Mowry said flatly. His historical parallel: this is exactly what happened to startups that resold AWS cloud infrastructure in the early 2010s. When Amazon built its own enterprise tools, most got wiped out. Only the ones with real, deep services on top survived.
What’s actually working? Mowry is bullish on vibe coding tools (Cursor, Replit), deep vertical AI (legal, medical, manufacturing with proprietary data), and developer platforms. The through-line: differentiation that a foundation model can’t just copy next quarter.
Why it matters: Most AI products you’ve tried — “chat with your PDFs,” “summarize your emails,” “AI for [industry]” — are exactly the wrapper businesses Mowry is describing. Whether you’re building with AI or just using it, this is a useful filter: does this product have something genuinely unique underneath it, or is it just a nice interface on top of a smarter model?
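One way to make that filter concrete: under the hood, many wrapper products reduce to a single model API call with a prompt around it. Here’s a minimal sketch using the OpenAI Python SDK; the product idea and prompt are invented for illustration, not taken from any real company.

```python
# A minimal "LLM wrapper" sketch: the whole product is one API call
# plus a prompt. ("Summarize my PDFs" here is a made-up product; the
# OpenAI Python SDK call itself is real.)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_pdf_text(pdf_text: str) -> str:
    """The entire 'product': a prompt wrapped around someone else's model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You summarize documents in five bullet points."},
            {"role": "user", "content": pdf_text},
        ],
    )
    return response.choices[0].message.content

# Everything above is replaceable the moment the model provider ships
# the same feature natively, which is exactly Mowry's point.
```

If a product’s core is no thicker than that, a foundation model update can erase it overnight.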
Source: TechCrunch
3. Amazon’s AI Coding Agent Made a Mistake — So Amazon Blamed Its Human Employees
Amazon’s internal AI software engineering agent was given a task: fix a bug in a codebase. It fixed it — then introduced five new bugs in the process. When internal teams reviewed what happened, Amazon’s official position was that human employees hadn’t given the agent proper context and supervision. The AI didn’t fail, they said. The humans who deployed it did.
This is a real pattern emerging as AI agents take on longer, multi-step tasks. When an agent takes 20 autonomous steps and something breaks on step 17, figuring out accountability is genuinely hard. Amazon’s framing — “the humans should have supervised better” — is likely to become a standard corporate response as agents are deployed across industries.
Why it matters: If AI agents make mistakes in your workplace, the burden may fall on you for not supervising them properly. That shift is already happening — and there’s no industry standard yet for what “proper oversight” of an AI agent even looks like. Understanding how to work alongside AI, document your supervision, and know when to intervene is becoming a practical skill, not just a theoretical one.
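What might documenting that supervision look like? Nothing standard exists yet, but here’s a hypothetical sketch in Python (every name below is invented) of the basic idea: log each agent step, record who signed off, and keep a list of the steps nobody reviewed.

```python
# Hypothetical sketch of an agent-supervision audit log. There is no
# industry standard for this; all names below are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentStep:
    step_number: int
    action: str                      # what the agent did
    reviewed_by: str | None = None   # human who signed off, if any
    approved: bool = False
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class SupervisionLog:
    """Records every agent step and whether a human reviewed it."""

    def __init__(self) -> None:
        self.steps: list[AgentStep] = []

    def record(self, action: str) -> AgentStep:
        step = AgentStep(step_number=len(self.steps) + 1, action=action)
        self.steps.append(step)
        return step

    def sign_off(self, step: AgentStep, reviewer: str) -> None:
        step.reviewed_by = reviewer
        step.approved = True

    def unreviewed(self) -> list[AgentStep]:
        """The steps nobody looked at -- where blame disputes start."""
        return [s for s in self.steps if not s.approved]

log = SupervisionLog()
fix = log.record("agent: patched null check in payment handler")
log.sign_off(fix, reviewer="alice")
log.record("agent: refactored 5 adjacent functions unprompted")
print([s.action for s in log.unreviewed()])  # the "step 17" problem
```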
Source: The Verge
4. Samsung Is Rebuilding Galaxy AI Around a Team of AI Specialists — Perplexity Is In
Samsung announced this weekend that it’s adding Perplexity directly into Galaxy AI — the AI suite built into Samsung phones and devices. The addition is part of Samsung’s bet on a “multi-agent AI ecosystem”: instead of one assistant that tries to do everything, your phone routes requests to whichever AI is best suited for that specific task. Perplexity handles search-heavy queries. Gemini handles tasks that need Google’s knowledge graph. Specialized models handle productivity. The phone becomes the router.
Think of it like how your phone today uses different apps for different jobs — except here, the AI decides which AI to use on your behalf.
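Samsung hasn’t published how that routing decision actually gets made, so the following is a guess at the shape of it rather than a description of Galaxy AI: a hypothetical Python sketch where a router inspects the request and hands it to a specialist. All function names and heuristics are invented.

```python
# Hypothetical sketch of multi-agent routing, in the spirit of what
# Samsung describes. Samsung has not published Galaxy AI's routing
# logic; all names and heuristics here are invented.

def ask_search_agent(query: str) -> str:       # stands in for Perplexity
    return f"[search agent] answering: {query}"

def ask_knowledge_agent(query: str) -> str:    # stands in for Gemini
    return f"[knowledge agent] answering: {query}"

def ask_productivity_agent(query: str) -> str: # a specialized model
    return f"[productivity agent] answering: {query}"

def route(query: str) -> str:
    """The phone as router: pick the specialist best suited to the task."""
    q = query.lower()
    if any(w in q for w in ("latest", "news", "who won", "price of")):
        return ask_search_agent(query)
    if any(w in q for w in ("schedule", "email", "summarize", "remind")):
        return ask_productivity_agent(query)
    return ask_knowledge_agent(query)

print(route("What's the latest news on AI regulation?"))
print(route("Summarize my unread emails"))
print(route("Explain how photosynthesis works"))
```

A production router would more likely use a small on-device classifier model than keyword matching, but the dispatch pattern is the same.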
Why it matters: Samsung phones are used by roughly 1 in 5 people on earth. If multi-agent AI takes hold on devices at that scale, it changes what “AI assistant” even means — from one chatbot trying to do everything, to a coordinated team of specialized models working in the background. It also means companies like Perplexity know their survival depends on being embedded in devices before users ever think to download an app.
Source: The Verge
Quick Hits
- OpenAI may be building a smart speaker with a camera: Reporting suggests OpenAI’s first consumer hardware could be a ChatGPT-powered device that can see its surroundings — closer to an Amazon Echo with eyes than a phone. (The Verge)
- Nvidia earnings Wednesday: Nvidia reports quarterly results on February 25th — the clearest signal yet of whether AI data center spending is holding up in 2026. Wall Street is watching closely. (Motley Fool)
- Sam Altman on AI energy: “Humans use energy too”: Responding to criticism about AI’s massive electricity consumption, Altman argued the value AI creates is worth the energy cost. Critics weren’t impressed. (TechCrunch)
That’s it for today. The weekend gave us a strange, uncomfortable mirror: an AI company seeing warning signs before a mass shooting and not calling police, AI making mistakes and humans taking the blame, and AI companies being told their core business models are already obsolete. The technology is moving fast — but the rules, the responsibility, and the accountability are still very much being figured out in real time.
Forward this to someone who needs to stay in the loop.
AI for Common Folks — Making AI understandable, one concept at a time.