Good morning. SoftBank just borrowed $40 billion to double down on OpenAI, Google launched Gemini 3.1 Flash Live, and Apple is about to let you choose which AI answers Siri’s questions. Here’s what happened 👇
1. SoftBank Borrows $40 Billion to Go Even Deeper on OpenAI
SoftBank has secured a $40 billion bridge loan to boost its investments in OpenAI and fund its broader AI strategy. The unsecured loan, arranged with JPMorgan, Goldman Sachs, Mizuho, and others, matures in March 2027. SoftBank founder Masayoshi Son has already committed $30 billion to OpenAI through Vision Fund 2, and the two companies are partners in the Stargate Project, which aims to invest up to $500 billion over four years building AI infrastructure in the U.S.
Why it matters: $40 billion is not a bet. It’s a conviction. Son is making the single largest wager in the history of AI that OpenAI will become the foundation of the next computing era. If he’s right, this goes down as the greatest investment call since SoftBank’s early bet on Alibaba. If he’s wrong, it dwarfs the Vision Fund losses that nearly sank the company a few years ago. Either way, it tells you exactly how high the stakes are in the AI race right now.
Sources: Reuters
2. Apple Plans to Let You Choose Which AI Powers Siri
Apple is reportedly planning to open Siri to rival AI services beyond its current ChatGPT partnership. The move, expected as part of iOS 27, would let third-party AI apps like Google’s Gemini or Anthropic’s Claude integrate directly with Siri. Users would be able to choose which AI service handles each request. Apple could also generate revenue by taking a cut of subscriptions sold through these third-party AI services.
Why it matters: This could be the biggest shift in how you interact with AI on your phone. Instead of being locked into one company’s AI, you’d pick the best one for each task. Need a creative writer? Route it to Claude. Need a search expert? Send it to Gemini. It turns the iPhone from a single-AI device into an AI marketplace. And for Apple, which has been playing catch-up in AI, it’s a clever way to stay relevant without building the best model itself.
Sources: Reuters
3. Dutch Court Orders Grok to Stop Generating “Undressing” Images
A Dutch court has ordered Elon Musk’s xAI and its chatbot Grok to stop generating sexualized images that “undress” adults or children without their consent in the Netherlands. The Amsterdam Court imposed fines of 100,000 euros ($115,350) per day for noncompliance and ordered xAI not to offer Grok on X while in breach of the ruling. During a courtroom demonstration on March 9, the nonprofit Offlimits showed that Grok could still strip digital images of people without their consent despite xAI’s claims that it had tightened safeguards in January. The ruling comes as the European Parliament backed a ban on AI “nudifier” apps.
Why it matters: This is one of the first times a court has directly held an AI company responsible for what its tools can be used to create, not just what users choose to do with them. xAI argued it can’t prevent all misuse. The court said that’s not good enough: the burden is on the company. If this precedent spreads, it changes the legal calculus for every AI company building image generation tools. “We can’t control what users do” may no longer be a viable defense.
Sources: Reuters
4. Google Launches Gemini 3.1 Flash Live: AI That Sounds Eerily Human
Google has launched Gemini 3.1 Flash Live, a new real-time conversational AI model designed to make talking to AI feel like talking to a person. The model produces speech with more natural cadence, handles interruptions and hesitation, and responds fast enough to feel conversational. Google partnered with Home Depot, Verizon, and others to test it. The model includes SynthID watermarks (inaudible to humans but detectable by software) to flag AI-generated speech. It’s rolling out in Gemini Live and Search Live starting today.
Why it matters: The next time you call customer service and think you’re talking to a human, you might not be. Google’s SynthID watermarks are a responsible addition, but they only work if someone checks. In real-time phone conversations, most people won’t. We’re entering a world where the line between human and AI voices becomes genuinely hard to detect, and the social implications of that go way beyond customer service.
Sources: Ars Technica
5. ChatGPT Ads Hit $100 Million in Annualized Revenue in Just Six Weeks
OpenAI’s ChatGPT advertising pilot in the U.S. has crossed $100 million in annualized revenue within six weeks of launch. The company now has over 600 advertisers, with nearly 80% of small and medium businesses signaling interest. Currently, about 85% of users are eligible to see ads, but fewer than 20% are shown ads on any given day. OpenAI says it sees “no impact on consumer trust metrics” and plans to expand globally and launch self-serve ad tools in April. The company hired a former Meta ads executive to lead its advertising team.
Why it matters: ChatGPT just proved it can be an advertising platform. $100 million annualized in six weeks is a faster start than most social media platforms achieved with their ad businesses. OpenAI says trust isn’t affected, but the trajectory is clear: a tool that 300 million people use for personal advice, research, and creative work is now monetizing their attention. The question isn’t whether ChatGPT will have ads. It’s whether the presence of ads will eventually shape the answers it gives. OpenAI says no. History says watch closely.
Sources: Reuters
Quick Hits
- White House AI czar David Sacks steps down. The Silicon Valley investor who shaped Trump’s AI policy is moving to an advisory role after hitting the 130-day limit for special government employees. He’ll co-chair the President’s Council of Advisors on Science and Technology. (Reuters)
- Wikipedia officially bans AI-generated text in articles. The new policy, approved 40-2 by editors, states that “the use of LLMs to generate or rewrite article content is prohibited.” Editors can still use AI for basic copyediting of their own writing after human review. (TechCrunch)
- Study finds sycophantic AI undermines human judgment. Research covered by Ars Technica shows that people who interacted with flattering AI tools were more likely to believe they were right and less likely to resolve conflicts. Sycophantic AI responses may be making us worse at critical thinking. (Ars Technica)
- Meta boosts Texas AI data center investment to $10 billion. The investment in its El Paso facility is a more than sixfold jump, aiming for 1-gigawatt capacity by 2028. (Reuters)
- Top AI conference reverses ban on papers from US-sanctioned entities after Chinese boycott. The reversal highlights the growing tension between geopolitical policy and scientific collaboration in AI research. (Reuters)
That’s it for today. The through line connecting all of these stories is a single question: who gets to set the rules? A Dutch court says AI companies must prevent misuse of their tools. Wikipedia’s editors say humans write the encyclopedia, not machines. Apple is deciding which AI models get to answer your questions. And meanwhile, the companies pouring billions into this technology are quietly turning your AI assistant into an ad platform. The power to shape AI’s future is being fought over right now, in courtrooms, boardrooms, and community votes, and the outcomes will affect all of us.
Forward this to someone who needs to stay in the loop.