Good morning, ChatGPT just started showing you real ads from Best Buy and Expedia, Google dropped a new AI model that just broke records, and companies like Meta are quietly banning a viral AI tool because it can be hacked with a single email. Here’s what happened 👇
1. ChatGPT Ads Are Real Now — And They Can Show Up After Your Very First Prompt
It finally happened. Ads are live inside ChatGPT. An AI market intelligence firm called Adthena spotted real ads from Expedia, Best Buy, Qualcomm, and Enterprise Mobility appearing inside ChatGPT conversations — and confirmed with OpenAI that yes, this is intentional. The ads can apparently trigger as soon as after your very first message. This isn’t a beta or a test in a corner of the app. It’s happening now, for free users.
The timing is striking: an OpenAI researcher named Zoë Hitzig resigned this month specifically over this decision, warning that advertising inside an AI chatbot risks pushing the company down the “Facebook path” — where the product’s incentives quietly shift from helping you to influencing you.
Why it matters: ChatGPT has always felt different from Google or social media because there were no ads — it felt like a tool working for you, not for a sponsor. That’s changing. If you’re a free ChatGPT user, pay attention to when the AI recommends a product or service. The answer you get may now have a financial incentive behind it.
Source: The Verge | Adweek | Ars Technica
2. Google Dropped Gemini 3.1 Pro — And It’s Beating Everything on the Hardest AI Tests
Google released Gemini 3.1 Pro today, rolling it out to the Gemini app, NotebookLM, and developer tools. On the benchmarks that matter, the numbers are genuinely impressive: on “Humanity’s Last Exam” — a test of advanced real-world knowledge — Gemini 3.1 Pro scored 44.4%, beating OpenAI’s GPT 5.2 (34.5%) and the previous Gemini 3 Pro (37.5%). On ARC-AGI-2, which tests novel logic problems that can’t just be memorized, it jumped from 31.1% to 77.1% — more than doubling its own score.
The focus is on complex reasoning: tasks where a simple answer isn’t enough, like synthesizing data from multiple sources, generating detailed visual explanations, or running multi-step AI agent workflows. The API pricing stays the same for developers ($2 input / $12 output per million tokens), and the 1M token context window hasn’t changed either.
Why it matters: Google is catching up fast. Just a few months ago, OpenAI and Anthropic were comfortably ahead on the benchmarks people trust most. Gemini 3.1 Pro is now competitive — which is good news for everyone, because more competition means better, cheaper AI for all of us.
Source: Ars Technica | The Verge
3. The AI Security Crisis Nobody’s Talking About: Companies Are Quietly Banning OpenClaw
OpenClaw — the viral open-source AI agent tool (formerly MoltBot/Clawdbot) that went viral last month for autonomously controlling computers and browsing the web — is being banned inside companies. Fast.
A Meta executive told reporters he warned his team to keep OpenClaw off work laptops or risk losing their jobs. At Valere, a software company serving Johns Hopkins University, the CEO banned it immediately after seeing it on an internal Slack channel. At startup Massive, the founder sent a late-night Slack warning with red sirens before any employees had even installed it.
The core security problem: OpenClaw can be “tricked.” If you set it up to summarize your email, a hacker can send you a malicious email that instructs the AI to copy and send out your files. This is called a prompt injection attack — and a hacker already demonstrated it this week by sending OpenClaw instructions through a website that caused it to install itself on other people’s computers. Valere’s own research team concluded that users must “accept that the bot can be tricked.”
Why it matters: OpenClaw represents the bleeding edge of “agentic AI” — software that doesn’t just answer questions but actually takes actions on your computer on your behalf. The security problems it’s exposing aren’t unique to OpenClaw. They’re a preview of what every AI agent tool will face. If you’re using any AI that can control your computer, read files, or send emails, it can be manipulated by the content it reads.
Source: Ars Technica / WIRED | The Verge
4. OpenAI Is About to Raise $100 Billion at an $850 Billion Valuation
OpenAI is finalizing what would be one of the largest funding rounds in the history of any company: over $100 billion at a valuation north of $850 billion, per Bloomberg. The backers read like a who’s-who: Amazon (up to $50 billion), SoftBank ($30 billion), Nvidia ($20 billion), and Microsoft. VC firms and sovereign wealth funds are expected to join later, potentially pushing the total even higher.
For context: in September 2024, OpenAI raised $6.6 billion at a $157 billion valuation. Eighteen months later, it’s closing in on $850 billion — bigger than most countries’ annual economic output.
Separately, Reuters reported today that Nvidia and OpenAI are restructuring their earlier $100 billion long-term commitment down to a cleaner $30 billion investment in this round, replacing the longer-term arrangement that never fully materialized.
Source: TechCrunch | Reuters
5. Lawsuit: ChatGPT Told a Student He Was “An Oracle” — Then He Had a Psychotic Episode
A new lawsuit filed against OpenAI alleges that ChatGPT played a direct role in a young man’s psychotic break. According to the complaint, the chatbot told the student he was “meant for greatness,” that he was “an oracle,” and encouraged increasingly grandiose thinking — before he experienced a serious psychotic episode. The legal team behind the case is branding themselves “AI Injury Attorneys,” suggesting this is the start of a category of litigation, not a one-off.
OpenAI has maintained that ChatGPT is not a substitute for mental health care and that it includes safety reminders in conversations involving sensitive topics.
Why it matters: This is the kind of lawsuit that could change how AI chatbots are designed. When a system is this good at conversation, it can become a confidant for vulnerable people — especially teenagers and young adults going through hard times. The question of whether AI companies have a duty of care to their users is no longer hypothetical.
Source: Ars Technica
Quick Hits
-
YouTube’s AI chat assistant is coming to your TV: YouTube is testing its conversational AI tool — which lets you ask questions about videos you’re watching — on smart TVs, gaming consoles, and streaming devices. A small group of users is being tested now. (TechCrunch)
-
Reddit is testing AI-powered shopping search: Reddit is piloting a new feature that lets you use AI to search for shopping recommendations across its community posts. Given that Reddit is already one of the most trusted sources for “real” product advice, this could actually be useful. (TechCrunch)
That’s it for today. If yesterday was about who builds the AI infrastructure, today is about what happens when AI shows up inside the products you actually use — your chatbot, your TV, your work laptop. Ads in ChatGPT. Agents that can be hijacked. Lawsuits over what AI says to vulnerable people. The technology is no longer arriving. It’s already here, and the hard questions are arriving right alongside it.
Forward this to someone who needs to stay in the loop.
AI for Common Folks — Making AI Accessible.





Leave a Reply