Good morning. Someone threw a Molotov cocktail at Sam Altman’s San Francisco home and he responded with a blog post calling for de-escalation; the US government is now encouraging Wall Street banks to test Anthropic’s Mythos model while UK regulators scramble to assess its risks; and North Korean hackers compromised a widely used developer tool, exposing OpenAI’s macOS app signing certificates. Here’s what happened 👇
1. Someone Threw a Molotov Cocktail at Sam Altman’s Home
Someone allegedly threw a Molotov cocktail at OpenAI CEO Sam Altman’s San Francisco home early Friday morning. No one was hurt. Police later arrested a suspect at OpenAI headquarters, where he was threatening to burn down the building. The incident came days after a lengthy New Yorker profile by Ronan Farrow and Andrew Marantz that questioned Altman’s trustworthiness, with sources describing “a relentless will to power” and “a sociopathic lack of concern for the consequences” of deceiving people. Altman published a blog post Friday night, acknowledging mistakes and a tendency toward being “conflict-averse.” He called the New Yorker piece “incendiary” and said he had “underestimated the power of words and narratives.” He also invoked a Lord of the Rings metaphor, arguing that no one should try to “control AGI” and that the technology should be shared broadly.
Why it matters: A physical attack on a tech CEO’s home crosses a line. AI anxiety is real and growing, and whatever you think of Altman or OpenAI, Molotov cocktails are not criticism. But the deeper story is the New Yorker profile itself. More than 100 sources raised questions about whether the person steering the most ambitious AI company in the world can be trusted with that responsibility. That question is not going away.
2. US Pushes Banks to Test Mythos While UK Regulators Scramble
The Anthropic Mythos saga escalated on two fronts this weekend. Bloomberg reported that Treasury Secretary Bessent and Fed Chair Powell did not just warn bank executives about Mythos at their emergency meeting. They encouraged the banks to test it for defensive cybersecurity purposes. Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are now reportedly testing the model alongside JPMorgan Chase, which was one of the original 40 partner organizations. Separately, the Financial Times reported that UK financial regulators, including the Bank of England and the Financial Conduct Authority, are holding urgent talks with the National Cyber Security Centre to assess Mythos risks. British banks, insurers, and exchanges are expected to be briefed on those risks within two weeks.
Why it matters: The US government is now actively pushing banks to use a model from a company that the Pentagon blacklisted as a supply chain risk. That contradiction tells you everything about how fast AI cybersecurity is moving. Governments are not choosing between fear and adoption. They are doing both at the same time. The UK response shows this is not just an American problem. If a model can find vulnerabilities in every major operating system, every financial regulator on earth needs a plan for it.
Sources: TechCrunch | Reuters
3. North Korean Hackers Hit OpenAI Through Axios Supply Chain Attack
OpenAI disclosed Friday that a widely used developer library called Axios was compromised on March 31 as part of a broader software supply chain attack believed to be linked to North Korea. The attack caused a GitHub Actions workflow used by OpenAI to download and execute a malicious version of Axios. That workflow had access to a certificate and notarization material used for signing macOS applications, including ChatGPT Desktop, Codex, Codex-cli, and Atlas. OpenAI said its analysis concluded the signing certificate was likely not successfully stolen, and no user data was accessed. But as a precaution, the company is rotating all of its signing certificates and requiring macOS users to update their apps. Older versions of OpenAI’s macOS desktop apps will stop receiving updates or support after May 8 and may stop working entirely.
Why it matters: Supply chain attacks are the hardest kind of cybersecurity threat to defend against because you are not being attacked directly. You are being attacked through a tool you trust. Axios is one of the most widely used JavaScript libraries in the world. If North Korean hackers can compromise it, they can reach thousands of companies at once. OpenAI caught this one, but the broader lesson is that every company building AI is a target, and the tools they depend on are the weakest link.
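The standard defense against this class of attack is integrity pinning: record a cryptographic hash of every dependency artifact you trust (this is what the `integrity` field in an npm `package-lock.json` does) and refuse to install anything that does not match. Here is a minimal sketch of that idea in Python; the filenames and pinned hashes are illustrative, not from the actual incident:

```python
import hashlib

# Hypothetical pinned hashes, analogous to the "integrity" field
# in package-lock.json. In practice these come from a lockfile
# committed alongside the code.
PINNED = {
    "axios-1.6.0.tgz": "sha256:" + hashlib.sha256(b"trusted contents").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Return True only if the artifact's hash matches the pinned value."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # unknown artifacts are rejected, never trusted by default
    algo, _, digest = expected.partition(":")
    actual = hashlib.new(algo, data).hexdigest()
    return actual == digest

# The artifact we pinned passes; a tampered (backdoored) release fails.
print(verify_artifact("axios-1.6.0.tgz", b"trusted contents"))    # True
print(verify_artifact("axios-1.6.0.tgz", b"malicious contents"))  # False
```

Pinning does not stop the first compromise (if you pin a hash of an already-backdoored release, you have pinned the backdoor), but it does stop a dependency from silently changing out from under a CI workflow, which is exactly the failure mode described above.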
Quick Hits
- Apple is testing four designs for AI-powered smart glasses it plans to sell in 2027, with a possible unveiling later this year, Bloomberg’s Mark Gurman reported. The glasses will not have displays but will support photos, video, phone calls, music, and Siri. Source: TechCrunch
- Claude dominated the conversation at the HumanX AI conference in San Francisco this week, with vendors and panelists repeatedly naming Anthropic’s chatbot as their tool of choice over ChatGPT. TechCrunch reported that the perception OpenAI has “fallen off” is becoming widespread among enterprise users. Source: TechCrunch
- South Africa unveiled a draft national AI policy seeking public comment on sweeping proposals to regulate and accelerate AI adoption, including new institutions and incentive programs. Source: Reuters
That’s it for today. The week ended with a physical attack on a tech CEO, two governments racing to figure out the same AI model, and a reminder that the software supply chain is only as strong as its weakest dependency. AI is no longer a technology story. It is a security story, a political story, and now, a personal safety story.
Forward this to someone who needs to stay in the loop.
