Author: bakhtsingh.basaram@gmail.com

  • What is a Model?

    What is a Model?

    A Model in AI is the result of training — a saved file containing all the patterns, rules, and mathematical weights a computer learned from data, ready to make predictions on new information.

    Hey Common Folks!

    We’ve covered the umbrella (AI), the engine (Machine Learning), how computers learn (Deep Learning), the fuel (Data Science), and the three ways AI learns (Supervised, Unsupervised, and Semi-Supervised).

    But when you open ChatGPT, or when Netflix recommends a movie, or when your bank approves a loan — what are you actually interacting with?

    You’re interacting with The Model.

    In the AI world, people often confuse “Algorithm” and “Model.” They use them interchangeably, like “Engine” and “Car.” But they’re different things. Today, we’re defining exactly what a Model is, because this is the “product” that companies are actually building, selling, and competing over.

    The Analogy: The Student and the Exam

    Think about a student preparing for a math exam.

    1. The Study Method (Algorithm): How the student learns — flashcards, practice problems, tutoring. This is the process of improving.

    2. The Textbooks (Training Data): The material they study from.

    3. The Student on Exam Day (Model): Once studying is done, they walk into the exam. They’re not holding the textbook anymore. They’re holding the knowledge in their head.

    The Model is the student’s brain after they’ve finished studying.

    When you ask ChatGPT a question, you’re not running the training process again. You’re asking the “graduated student” to use what they already know to give you an answer.

    What Does a Model Actually Look Like?

    If you could crack open an AI model file (like a .bin or .pt file) and peek inside, what would you see?

    Not miniature brains. Not videos.

    Numbers. Billions of them.

    A model is simply a Parameterized Math Function. Remember high school math?

    y = mx + b

    Where:

    • x is the input (e.g., house size)

    • y is the output (e.g., house price)

    • m and b are the Parameters (the learned values)

    When we “train a model,” we’re finding the perfect numbers for m and b so the equation fits the data accurately.

    • In a simple model: You might have 2 parameters

    • In GPT-4: You have hundreds of billions of parameters

    The “Model” is just that massive list of numbers saved in a specific structure. That’s it.
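
    To make that concrete, here's a minimal sketch of what "training" means for the two-parameter model above: nudge m and b until the line fits, then save the learned numbers to a file. Plain Python, with made-up house data; the point is the shape of the process, not the numbers.

    ```python
    import json

    # Toy training data: (size in 1,000s of sqft, price in $1,000s). Made-up numbers.
    data = [(1.0, 200), (1.5, 290), (2.0, 410), (2.5, 500)]

    m, b = 0.0, 0.0   # Initialization: the parameters start out meaningless
    lr = 0.05         # learning rate: how big each adjustment is

    # Training: guess, measure the error, nudge m and b slightly, thousands of times over
    for step in range(20_000):
        grad_m = grad_b = 0.0
        for x, y in data:
            error = (m * x + b) - y   # how far off the current line is
            grad_m += 2 * error * x   # gradient of squared error w.r.t. m
            grad_b += 2 * error       # gradient of squared error w.r.t. b
        m -= lr * grad_m / len(data)
        b -= lr * grad_b / len(data)

    # The "model" is literally just these learned numbers, saved in a structure
    with open("house_model.json", "w") as f:
        json.dump({"m": m, "b": b}, f)

    print(f"Learned: price ~ {m:.0f} * size + {b:.0f}")
    ```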

    The Three Stages of a Model’s Life

    Every model goes through this lifecycle:

    1. Initialization (The Blank Slate)
    We create the architecture (the structure), but it knows nothing. The weights are random numbers. It’s essentially a baby brain.

    2. Training (The Education)
    We feed it data. The model makes a guess, gets it wrong, and the algorithm adjusts those numbers slightly. This happens millions of times until accuracy improves.

    3. Inference (The Job)
    Training is done. We “freeze” the numbers — they stop changing. This static file (the trained model) goes into an app. When you type a prompt, the model uses those frozen numbers to calculate an answer.
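
    In code, inference is just the last step of that lifecycle. Continuing the sketch above (the hypothetical house_model.json file), a minimal version looks like this: load the frozen numbers, never change them, and use them to calculate an answer.

    ```python
    import json

    # Inference: load the frozen numbers. No training, no adjustment.
    with open("house_model.json") as f:
        params = json.load(f)

    def predict(size_thousands_sqft: float) -> float:
        """Use the frozen parameters to calculate an answer."""
        return params["m"] * size_thousands_sqft + params["b"]

    # A 1,800 sqft house, priced with the saved model
    print(f"Predicted price: ${predict(1.8) * 1000:,.0f}")
    ```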

    Why Are Some Models “Smarter”?

    Why is GPT-4 smarter than a simple spam filter?

    It comes down to Capacity:

    Shallow Models (Simple):

    • Like Linear Regression — draws a straight line through data

    • Great for simple predictions (house prices based on square footage)

    • Fails at complex tasks

    Deep Models (Complex):

    • Like Deep Neural Networks — many layers stacked together

    • Can learn incredibly complex patterns

    • Powers language understanding, image recognition, creative generation

    More parameters + more layers + more training data = more capable model.
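
    Here's a minimal sketch of that difference in NumPy (the layer sizes are arbitrary). A shallow model is a single linear map, so it can only fit a straight-line relationship. A deep model stacks layers with a nonlinearity in between, which lets it bend, and every extra layer multiplies the parameter count.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 4))  # one input example with 4 features

    # Shallow model: one linear map. 4 weights + 1 bias = 5 parameters.
    w, b = rng.normal(size=(4, 1)), np.zeros(1)
    shallow_out = x @ w + b       # can only draw a straight line through the data

    # Deep model: two stacked layers with a nonlinearity in between.
    W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)  # 4*16 + 16 = 80 parameters
    W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)   # 16 + 1 = 17 parameters
    hidden = np.maximum(0, x @ W1 + b1)              # ReLU is what lets it "bend"
    deep_out = hidden @ W2 + b2

    print("shallow parameters:", w.size + b.size)                        # 5
    print("deep parameters:   ", W1.size + b1.size + W2.size + b2.size)  # 97
    ```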

    Models You Use Every Day

    • ChatGPT / Claude / Gemini: Large Language Models (LLMs) with billions of parameters

    • Face ID: A vision model that learned your facial features

    • Spotify Discover Weekly: A recommendation model predicting what you’ll enjoy

    • Google Search: Multiple models ranking and understanding your queries

    The Limitations (Keeping It Real)

    Models aren’t magic — they have real constraints:

    Only as good as their data: A model trained on biased data learns biased patterns.

    Frozen knowledge: Once trained, a model doesn’t learn new things unless retrained. That’s why ChatGPT has a “knowledge cutoff.”

    Black boxes: Complex models often can’t explain why they made a decision. They just… work.

    Size vs. speed tradeoff: Bigger models are smarter but slower and more expensive to run.

    The Takeaway

    When you hear “OpenAI released a new model,” translate that in your head to:

    “OpenAI finished training a massive mathematical function and saved the resulting list of numbers into a file that we can now use.”

    • Algorithm: The recipe for learning

    • Data: The ingredients

    • Model: The finished cake

    You eat the cake, not the recipe. You use the model, not the training process.

    Coming Up:
    Now that you know what a Model is, how does it actually learn? In the next edition, we’ll explore Algorithms — the step-by-step processes that turn raw data into intelligent models.


    AI for Common Folks — Making AI understandable, one concept at a time.

  • The Genie Problem: Why Clarity Is the Only Skill That Matters in the AI Era

    The Genie Problem: Why Clarity Is the Only Skill That Matters in the AI Era

    Everyone’s racing to learn AI tools. But the co-founder of a $5.5 billion company says the real skill has nothing to do with technology.


    The Reality

    You’ve heard it a hundred times: “Learn AI or get left behind.”

    So people sign up for prompt engineering courses. They memorize frameworks. They learn to speak in chains and tokens and temperature settings.

    And then they sit down with an AI tool and get garbage output.

    Not because the tool is broken. Because they didn’t know what they actually wanted.

    Nadav Abrahami, co-founder of Wix — the $5.5 billion website building platform — has watched thousands of people use AI coding and prototyping tools. He’s seen the pattern clearly. The people who fail with AI aren’t the non-technical ones. They’re the unclear thinkers.

    “It’s like talking to a genie,” he says. “95% of the time it will do what you want. But 5% of the time the genie will find everything you said that is flawed and will do the exact opposite of what you wanted.”

    Here’s the critical difference between AI and a human colleague: a developer would push back when something you said doesn’t make sense. They’d ask clarifying questions. They’d tell you when your instructions contradict each other.

    AI doesn’t do that. AI takes your instructions — correct or not — and executes them perfectly.

    Which means every ambiguity in your thinking becomes a bug in your output.


    The Shift

    Abrahami’s insight cuts against the entire “learn AI skills” narrative:

    “It’s not about going technical. It’s about going clarity.”

    Think about that. The bottleneck isn’t your ability to use the tool. It’s your ability to think clearly enough to direct it.

    He puts it bluntly: “Anything that can be misinterpreted will statistically be misinterpreted.”

    This isn’t Murphy’s Law for pessimists. It’s a mathematical reality when you’re working with systems that process language probabilistically. A human might catch your intent despite sloppy instructions. AI catches your words and ignores your intent.

    The Old Way: Technical skills were the gateway. You needed to learn the tool’s language — its syntax, its quirks, its frameworks. Mastery meant knowing the tool better.

    The New Reality: Clarity of communication is the meta-skill. You don’t need to tell AI how to build something. You need to know exactly what you want. The people who thrive with AI aren’t the most technical. They’re the most precise in their thinking.

    Abrahami recommends a simple practice that most people skip: Before you execute anything with AI, take your prompt and ask another AI to review it.

    “What are the contradictions? What’s unclear? How could this be misinterpreted?”

    It sounds almost too simple. But this is exactly what good developers do when they review a spec — they look for ambiguity. Now you can do it in ten seconds.

    He also recommends what he calls “discuss mode” — before letting AI build anything, have a conversation with it first. Tell it your plan. Ask it: “How do you understand me? What do you think I’m saying?” Like you would with a developer before they start coding.

    The difference between directing AI and understanding what AI did is the difference between someone who gives orders and someone who actually knows what they’re building.


    What To Do Next

    This week, before you use any AI tool for something important, try the “clarity check.”

    Write your instructions. Then paste them into a fresh AI chat and ask: “What are the contradictions, ambiguities, or things that could be misinterpreted in this?”

    You’ll be stunned at how many you find.

    Then rewrite your instructions and try again. You’ll notice something: the output quality jumps — not because you used a better prompt template, but because you thought more clearly.

    Make this a habit. Every important AI interaction gets a clarity check first. Over time, you’ll start catching the ambiguities in your own head before they even reach the screen.

    That’s the real skill. Not prompting. Thinking.
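
    If you want to make the clarity check automatic, it's only a few lines of code. Here's a minimal sketch using OpenAI's Python SDK; the model name is just an example, and any capable model you already use works the same way.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    def clarity_check(instructions: str) -> str:
        """Ask a fresh model to find the ambiguities before you execute anything."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name; substitute whatever you use
            messages=[{
                "role": "user",
                "content": (
                    "Review the following instructions. What are the contradictions, "
                    "ambiguities, or things that could be misinterpreted?\n\n"
                    + instructions
                ),
            }],
        )
        return response.choices[0].message.content

    print(clarity_check("Make the report shorter, but don't remove any details."))
    ```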


    The One Thing to Remember

    AI doesn’t reward the most technical user. It rewards the clearest thinker. A genie grants what you say, not what you mean — so learn to say exactly what you mean.


    This insight comes from Nadav Abrahami, co-founder of Wix, on the Aakash Gupta podcast. The AI Shift curates wisdom from AI leaders for busy professionals navigating the AI era. When was the last time AI gave you something completely wrong — and was it really the AI’s fault, or yours?

  • AI Daily Digest – March 03, 2026

    AI Daily Digest – March 03, 2026

    Good morning, the Supreme Court just settled the AI copyright question, ChatGPT is losing users at a historic rate over the Pentagon deal, and drone strikes in the Middle East hit Amazon’s data centers for the first time ever. Here’s what happened 👇


    1. Supreme Court: AI-Generated Art Can’t Be Copyrighted. Case Closed.

    The US Supreme Court declined to hear an appeal from computer scientist Stephen Thaler, who has been fighting since 2019 to copyright an image created entirely by his AI system. The image, called A Recent Entrance to Paradise, was generated by an algorithm Thaler built — with no human creative input. The Copyright Office rejected it, a district court upheld the rejection, and a federal appeals court agreed. Now the Supreme Court has refused to even hear the case.

    The ruling that stands: “Human authorship is a bedrock requirement of copyright.” If a machine made it and no human shaped the creative choices, it doesn’t get legal protection. Period.

    This follows the Copyright Office’s guidance from last year that AI-generated artwork based on text prompts alone isn’t copyrightable either.

    Why it matters: If you’re using AI to generate images, text, or music for your business, you don’t own what comes out — legally, nobody does. You can still use AI as a tool in your creative process, but the human has to be making meaningful creative decisions, not just typing a prompt and hitting enter.

    Source: The Verge | Reuters


    2. ChatGPT Uninstalls Surge 295% as Users Flee to Claude

    The Pentagon-OpenAI deal isn’t just a PR problem — it’s costing OpenAI actual users. According to app analytics data reported by TechCrunch, ChatGPT uninstalls surged 295% in the days following the announcement of OpenAI’s military agreement. Meanwhile, Claude’s downloads have been climbing all week, and the app remains near the top of the App Store after hitting #1 over the weekend.

    TechCrunch separately published a guide titled “Users are ditching ChatGPT for Claude — here’s how to make the switch,” which tells you everything about the current mood. Anthropic has also rolled out a new memory import tool that makes it easy to bring your data over from other AI platforms — perfectly timed.

    Why it matters: This is the first time a major AI company has lost significant users over a political decision rather than a product one. People aren’t leaving because Claude is better at coding — they’re leaving because they don’t want their AI provider working with the military on classified operations. That’s a brand new dynamic in the AI market.

    Source: TechCrunch | TechCrunch


    3. Drone Strikes Hit Amazon Data Centers in the Middle East — a First

    Iranian drones struck Amazon Web Services data centers in the UAE and Bahrain, marking the first time a major US tech company’s cloud infrastructure has been damaged by military action. Two AWS facilities in the UAE were directly hit, and a third in Bahrain sustained damage from a nearby strike. The result: structural damage, power outages, fire suppression flooding, and a “prolonged” recovery timeline.

    The outage disrupted cloud services across the region, including banking platforms. AWS told customers to back up data and shift operations to unaffected regions.

    This matters because US tech giants have been pouring billions into the Gulf as a regional AI computing hub. Microsoft alone has committed $15 billion to UAE data centers by 2029. A Washington think tank warned last week that adversaries could target “data centers, energy infrastructure supporting compute, and fiber chokepoints” — and that’s exactly what happened.

    Why it matters: The AI boom depends on physical infrastructure — actual buildings, cables, and power supplies in actual places. When those places become conflict zones, the cloud isn’t as untouchable as the name implies. Companies and governments betting on Middle East AI hubs are now facing a risk they didn’t price in.

    Source: Reuters


    Quick Hits

    • AI can now identify anonymous social media users. Researchers found that LLMs can unmask pseudonymous accounts with up to 90% precision by analyzing writing patterns across platforms — no structured data needed, just free text. The researchers warn this “invalidates the assumption” that pseudonymity provides adequate privacy. (Ars Technica)

    • Cursor hits $2 billion in annualized revenue. The AI coding assistant doubled its revenue run rate in just three months, with corporate customers now making up 60% of sales. The $29 billion startup is fending off competition from Claude Code and OpenAI’s Codex. (TechCrunch)

    • More US agencies dropping Anthropic. The State Department, Treasury, and HHS have all moved to end use of Anthropic products, switching to OpenAI and other providers under the White House directive. (Reuters)


    That’s it for today. The AI industry used to argue about whose model was smarter — now the fight is about who your AI provider works with, who owns what AI creates, and whether the buildings that power it all can survive a war.

    Forward this to someone who needs to stay in the loop.

  • AI Daily Digest – March 02, 2026

    AI Daily Digest – March 02, 2026

    Good morning, the US military used the same AI it just banned to help plan strikes on Iran, OpenAI rushed a Pentagon deal and is now defending the fine print, and Anthropic’s Claude just became the #1 app in America. Here’s what happened 👇


    1. The US Used Anthropic’s AI for Iran Strikes — Hours After Banning It

    On Friday, President Trump announced a ban on all federal use of Anthropic’s Claude AI, calling the company’s leaders “leftwing nut jobs” and directing every agency to phase it out within six months. Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk,” meaning no military contractor can do business with the company either.

    Then, on Saturday, the US launched a major air assault on Iran — using Claude for intelligence assessments and target identification. The same tool Trump had just publicly banned was helping plan the strikes. As the Wall Street Journal reported: “Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.”

    The six-month phaseout — instead of Trump’s initial demand to “IMMEDIATELY CEASE” — likely exists precisely because the military already depends on Claude for operations like this.

    Why it matters: The gap between the political statement and the operational reality tells you everything. AI isn’t a nice-to-have for the military anymore — it’s embedded in how operations actually work. Banning it by tweet doesn’t change that.

    Source: The Verge | Wall Street Journal


    2. OpenAI Rushed a Pentagon Deal — And Admitted It

    While Anthropic was getting banned, OpenAI was signing on the dotted line.

    Sam Altman announced a new agreement letting OpenAI’s models be used on the Pentagon’s classified network. He said the deal includes the same red lines Anthropic wanted — no mass surveillance of Americans, no AI making kill decisions without a human involved. OpenAI also says it keeps control of its own safety rules and will have its own engineers on-site at the Pentagon.

    Sounds good on paper. But critics quickly pointed out that the deal’s fine print references old laws the NSA has used to collect Americans’ data through overseas channels. And Altman himself admitted: “This was definitely rushed. The optics don’t look good.”

    His reasoning? “We really wanted to de-escalate things.” He’s asking the Pentagon to offer the same deal to all AI companies — including Anthropic.

    Why it matters: When AI contracts shape how wars are fought, “the optics don’t look good” isn’t reassuring. The question isn’t what the blog post says — it’s what the actual agreement allows.

    Source: TechCrunch | OpenAI Blog


    3. Anthropic’s Claude Hits #1 in the App Store

    Sometimes standing up for your principles is also great marketing.

    Anthropic’s Claude app surged past ChatGPT to claim the #1 free app position in Apple’s US App Store on Saturday — a spot it still held on Sunday morning. According to SensorTower data, Claude was barely in the top 100 at the end of January. It climbed to the top 20 in February, hit #6 on Wednesday, #4 on Thursday, and #1 by Saturday.

    Anthropic says daily signups have broken the all-time record every day this past week. Free users are up more than 60% since January. Paid subscribers have more than doubled this year. The company’s refusal to comply with the Pentagon’s demands — and the very public fallout — seems to have turned a policy stance into a consumer movement.

    Why it matters: For years, AI companies have debated whether safety principles help or hurt the business. Anthropic just got its answer: taking a public stand on AI ethics can make you the most downloaded app in America.

    Source: TechCrunch


    Quick Hits

    • The Federal Reserve doesn’t know what to do about AI and jobs. Fed officials are split — some think AI will make things cheaper, others worry it’ll eliminate jobs without creating new ones. Fed Governor Lisa Cook basically said: if AI takes your job, lower interest rates won’t fix it. The Block layoffs made this feel a lot less theoretical. (Reuters)

    • Amazon is pouring another $21 billion into Spain for AI data centers. That brings the company’s total investment in Spain to $33.7 billion — a sign the global AI infrastructure buildout is accelerating, not slowing down. (Reuters)

    • ChatGPT now has 900 million weekly active users. That’s up from 400 million reported just months ago — a staggering growth rate that coincides with OpenAI’s record $110 billion funding round valuing the company at $840 billion. (TechCrunch)

    • Nvidia is building a new chip to make AI answers faster. Partnering with startup Groq in a $20 billion deal, Nvidia plans to unveil the new platform next month. The goal: speed up the part of AI that generates your ChatGPT responses. (Reuters)


    That’s it for today. The Anthropic-Pentagon saga just revealed something most people hadn’t fully grasped: AI is already woven into military operations so deeply that you can’t rip it out by executive order — and the companies building it are now being forced to decide what kind of world they want their tools to create.

    Forward this to someone who needs to stay in the loop.

  • AI Daily Digest – February 26, 2026

    AI Daily Digest – February 26, 2026

    Good morning, Nvidia actually beat the already-sky-high numbers Wall Street was expecting, the Pentagon gave Anthropic a Friday deadline to hand over unrestricted military control of its AI or get blacklisted, Burger King is now using AI to monitor whether your cashier said “please,” and YouTube is feeding AI-generated slop to kids after CoComelon ends. Here’s what happened 👇


    1. Nvidia Just Posted $68 Billion in One Quarter

    The results are in. Nvidia reported $68.1 billion in revenue for its most recent quarter — up 73% from the same period last year and ahead of the $66.1 billion Wall Street was expecting. Of that, $62 billion came from the data center business alone, with $51 billion in GPU compute and $11 billion in networking. Full-year revenue: $215 billion.

    CEO Jensen Huang didn’t hold back on the call: “The demand for tokens in the world has gone completely exponential. I think we’re all seeing that, to the point where even our six-year-old GPUs in the cloud are completely consumed and the pricing is going up.” He also addressed the sustainability questions analysts keep asking about tech companies’ massive AI spending: “In this new world of AI, compute is revenue. Without compute, there’s no way to generate tokens. Without tokens, there’s no way to grow revenues.” The company also disclosed it’s in talks to invest up to $30 billion in OpenAI — though it emphasized there’s “no assurance” the deal will close.

    On China: despite the U.S. government lifting some export restrictions, Nvidia reported zero revenue from Chinese customers so far — and the CFO flagged that domestic Chinese chip companies like Moore Threads are gaining ground.

    Why it matters: Nvidia’s numbers are the clearest real-time signal of whether AI spending is slowing down or not. The answer, for now, is not.

    Source: TechCrunch | Perplexity Discover


    2. Anthropic vs. The Pentagon — And a Friday Deadline

    This is the AI ethics story with the highest stakes we’ve seen yet. The Department of Defense gave Anthropic an ultimatum this week: grant the U.S. military unrestricted access to its Claude AI — no guardrails, no restrictions — or be banned from all government contracts.

    Here’s what triggered it: Claude has been deployed on the Pentagon’s classified networks through a $200 million contract (Anthropic is currently the only AI company running on those classified systems, via a Palantir partnership). The standoff reportedly started after the military used Claude during the operation to capture former Venezuelan President Nicolás Maduro in January. Anthropic wasn’t consulted about that use. The company then pushed back, asking the Pentagon to agree to two specific restrictions: don’t use Claude for mass surveillance of American citizens, and don’t let Claude make final targeting decisions in military strikes without human review.

    The Pentagon’s response: those guardrails could prevent the military from acting in a crisis. Defense Secretary Pete Hegseth has been blunt: “We will not employ AI models that won’t allow you to fight wars.” He gave Anthropic until Friday at 5pm to comply. If Anthropic refuses, the Pentagon is considering invoking the Defense Production Act to force compliance — or declaring Anthropic a “supply chain risk” to push it out of government entirely.

    Why it matters: This is the first direct public clash between an AI company’s safety principles and a government’s demand for unrestricted control. Whatever happens by Friday sets a precedent — either companies can hold their ethical lines with government customers, or they can’t.

    Source: CBS News | NPR


    3. Burger King Is Listening to Its Employees — Via AI

    Burger King launched an OpenAI-powered voice chatbot called “Patty” that lives inside the headsets employees wear while working. It’s not just a helpful assistant — Patty is also evaluating whether employees are being friendly enough with customers.

    The chain trained its AI system to recognize specific words and phrases: “welcome to Burger King,” “please,” “thank you.” Managers can ask the AI how their location is scoring on friendliness. Burger King’s chief digital officer called it “a coaching tool” and says they’re also “iterating” on capturing the tone of conversations, not just the words. Beyond the friendliness monitoring, Patty answers employee questions (how many bacon strips on the Maple Bourbon Whopper?), alerts managers when kitchen equipment goes down, and automatically updates digital menus and kiosks within 15 minutes when an item goes out of stock. The full BK Assistant platform is set to roll out to all U.S. restaurants by end of 2026. Patty is currently piloting in 500 restaurants.

    Burger King is still testing AI drive-thru ordering separately, in fewer than 100 locations — noting it’s “still a risky bet” and “not every guest is ready for this.”

    Why it matters: When the AI monitoring your mood at work is the same AI monitoring your customers’ experience, the line between helpful tool and performance surveillance gets very thin very fast.

    Source: The Verge


    4. YouTube’s Algorithm Is Feeding AI Slop to Kids

    After your kid finishes watching CoComelon, Bluey, or Ms. Rachel on YouTube, what does the algorithm recommend next? According to a New York Times investigation published today: more than 40% of Shorts automatically recommended after those channels “appeared to contain AI-generated visuals.”

    These videos look like children’s content. They’re colorful, with recognizable characters and simple songs. But they’re AI-generated — often low-effort content produced at massive scale to capture ad revenue from kids’ watch time. YouTube doesn’t require these videos to be labeled as AI-generated. The platform places the entire burden of filtering this content on parents, not on itself.

    Why it matters: Your kids are already in an algorithm-driven environment. The difference now is that a large chunk of what the algorithm serves them isn’t made by humans at all — and there’s no label telling anyone that. If you have young kids who use YouTube, this is a reason to check what they’re actually watching, not just what channel they started on.

    Source: The Verge | New York Times


    Quick Hits

    • Anthropic acquired a computer-use AI startup called Vercept: Vercept built software for AI agents that can control computers — clicking, typing, navigating apps. The acquisition came after Meta reportedly poached one of Vercept’s founders, accelerating the deal. This fits Anthropic’s Claude Computer Use push directly. (TechCrunch)

    • US rare earth shortages are deepening as Chinese suppliers halt production: China just restricted exports of several rare earth minerals critical for AI chips and advanced electronics. US suppliers are struggling to find alternatives at scale, and several have paused production. The AI chip supply chain has another vulnerability — this one geopolitical, not technical. (Perplexity Discover)

    • Instagram now alerts parents when teens search for suicide or self-harm content: A new feature in Instagram rolls out alerts to connected parent accounts when teens search for those terms — with resources provided to both. It’s a reactive fix to years of criticism about the platform’s effect on teen mental health, and it marks a notable shift toward algorithmic accountability for younger users. (TechCrunch)


    That’s it for today. Three of today’s four big stories are about the same thing: who controls AI when it’s already inside your life — your workplace headset, your kid’s screen, your country’s military systems. The question isn’t theoretical anymore.

    Forward this to someone who needs to stay in the loop.

  • AI Daily Digest – February 25, 2026

    AI Daily Digest – February 25, 2026

    Good morning, the entire stock market is holding its breath for Nvidia’s earnings tonight, Meta just wrote a $60 billion check to AMD, IBM released a report showing AI is now the hackers’ best friend, and Samsung is unveiling its AI-first Galaxy S26 phones literally today. Here’s what happened 👇


    1. All Eyes on Nvidia Tonight — $230 Billion Swings on One Report

    The world’s most valuable company reports earnings after the bell today, and Wall Street is visibly nervous. Analysts expect Nvidia to post $66.1 billion in revenue for the November–January quarter — a 68% jump from last year — and project first-quarter guidance around $72 billion. That would extend Nvidia’s streak of beating analyst estimates to a 14th consecutive quarter.

    But here’s the twist: beating the numbers isn’t enough anymore. After Nvidia’s last quarterly report blew past estimates and CEO Jensen Huang celebrated “off the charts” demand, the stock still fell 3% the next day. Options markets are pricing in a post-earnings swing of plus or minus 5% — which, given Nvidia’s $4.7 trillion market cap, translates to roughly a $230 billion move in either direction. That’s larger than most S&P 500 companies’ entire value. Key storylines to watch: the ramp of its new Blackwell chips, growing competition from AMD, and how much Chinese demand has been crimped by export restrictions.

    Why it matters: Nvidia isn’t just a chipmaker — its stock accounts for ~7% of the S&P 500. Whatever happens after tonight’s call will likely move your retirement account, your portfolio, and the entire AI sector’s near-term mood.

    Source: AP/WTOP | Reuters


    2. Meta Just Signed a $60 Billion Deal With AMD — Not Nvidia

    In one of the biggest AI infrastructure deals ever disclosed, Meta has committed to a five-year, roughly $60 billion agreement with AMD for custom MI450 AI accelerators and Helios AI servers. AMD’s stock jumped 8.77% on the news. The deal reportedly includes an option allowing Meta to acquire up to 10% of AMD if certain milestones are hit — essentially making Meta both AMD’s biggest customer and a potential major shareholder.

    This is significant for a few reasons. First, it signals that the AI infrastructure buildout is alive and well — even amid all the recent fear and volatility. Second, it shows hyperscalers are actively diversifying away from Nvidia to get more supply flexibility and pricing leverage. And third, it gives AMD durable, multi-year revenue visibility that Wall Street has been demanding. AMD shares, which had fallen from $267 to the $190s over recent months, surged back toward $214 on the news.

    Why it matters: When the world’s most-used social platform commits $60 billion to AI chips from a Nvidia competitor, the message is clear: the AI hardware race is a two-horse race now, and the spending is nowhere near slowing down.

    Source: Meyka/Handelsblatt | MediaPost


    3. IBM Report: Hackers Are Using AI to Break In Faster Than Ever

    IBM’s 2026 X-Force Threat Intelligence Index dropped today with a stark finding: AI has handed attackers a speed advantage that defenders are struggling to match. Attacks that began by exploiting public-facing applications jumped 44% in 2025, largely because AI tools now help criminals identify vulnerabilities faster than human security teams can patch them. Ransomware groups surged 49% year-over-year as smaller operators flood the market, using leaked tooling and AI to automate what used to require skilled hackers.

    The numbers on AI’s specific role are alarming: over 300,000 ChatGPT credentials were stolen by infostealer malware in 2025, creating new attack surfaces as enterprises adopt AI tools. Supply chain attacks nearly quadrupled since 2020. Manufacturing was the most-attacked industry for the fifth straight year. And North America became the most-attacked region globally for the first time in six years, jumping from 24% to 29% of all incidents.

    Why it matters: AI is making it cheaper and faster to launch cyberattacks, and most companies are still operating on the assumption that basic perimeter defenses are enough. If your company has adopted AI tools without updating security policies, your new risk isn’t just a leaked prompt — it’s a stolen credential used to walk straight through the front door.

    Source: IBM Newsroom


    4. Samsung Launches Galaxy S26 Today — With Perplexity Built In

    Samsung’s Galaxy Unpacked event is happening right now in San Francisco. The company is unveiling the Galaxy S26, S26+, S26 Ultra, and Galaxy Buds 4. The biggest AI story in the lineup: Samsung is integrating Perplexity’s AI search engine directly into Galaxy AI, letting users say “Hey Plex” to activate it as an alternative to Google. An updated, more conversational Bixby assistant is also being shown off, and third-party AI agents will be accessible natively on the phone.

    On the hardware side, all S26 models run Qualcomm’s Snapdragon 8 Elite Gen 5 chip, optimized for on-device AI processing. New AI photography features let users turn a daytime photo into a night shot, restore missing parts of images, and merge multiple shots — without needing to export to a third-party app. The Galaxy S26 Ultra is expected to drop the S Pen digitizer layer to enable full Qi2 wireless charging compatibility, a notable tradeoff for power users. Samsung called this event the beginning of “a new phase in the era of AI as intelligence becomes truly personal and adaptive.”

    Why it matters: Your next phone will have multiple AI assistants built in, competing for your attention — Google Gemini, Samsung Bixby, and now Perplexity. The AI assistant wars are moving from your laptop to your pocket, and the company that wins the default slot on your homescreen wins your daily habits.

    Source: Engadget | Samsung Newsroom


    5. Workday Fell 10% Because Anthropic Said AI Can Do HR

    The AI disruption mood swings continued Tuesday when HR software firm Workday tumbled 10% after Anthropic’s new Claude tools explicitly listed HR tasks among their targets. Workday had already given investors a downbeat revenue forecast — but the AI threat angle made it land much harder. The irony: this is the same week that broader software stocks staged a modest relief rally, with markets focusing on partnership opportunities between AI labs and existing software companies rather than pure existential threat.

    The split story captures exactly where markets are right now: some software companies are being re-rated upward as “AI partners,” while others — those whose core business is automating tasks AI can now do for a fraction of the cost — are being punished. Workday, which makes billions helping HR teams manage workflows that Claude now claims it can handle, landed in the second category.

    Why it matters: Not all software companies will survive the AI wave in their current form. The ones building with AI are getting rewarded. The ones that haven’t made the pivot yet are watching their valuations get cut — sometimes on a single Anthropic blog post.

    Source: Reuters


    Quick Hits

    • Trump told Big Tech to build their own power plants: During his State of the Union speech last night, Trump said AI data centers must generate their own electricity to avoid straining the national grid — a sign of growing political pressure around AI energy consumption. (Reuters)

    • AWS launched AI that auto-reformats live sports for TikTok and Reels: Amazon Web Services unveiled “Elemental Inference” — a service that watches a live broadcast and automatically crops it into vertical video for social platforms within 6–10 seconds, no editor required. Fox Sports and NBCUniversal are already using it. (MediaPost)

    • SK Hynix investing $15 billion in new chip facilities in South Korea: The memory chip giant — a key supplier of HBM chips for Nvidia — announced a massive domestic expansion as AI demand for high-bandwidth memory keeps accelerating. (Reuters)

    • The $500B Stargate project was mostly vaporware: A new report by The Information found that OpenAI’s splashy Stargate venture — announced at the White House with Trump in January 2025 — never actually got built. OpenAI, Oracle, and SoftBank deadlocked over leadership and structure within weeks of the announcement, construction paused, and OpenAI lost its general contractor. OpenAI has since quietly pivoted, signing separate deals with Oracle ($30B/year) and CoreWeave ($22B) to get the compute it needs — and cut its 2030 infrastructure ambition from $1.4 trillion down to $600 billion. Elon Musk’s response: “Hardware is hard.” (Perplexity Discover)


    That’s it for today. The AI story in 2026 has two speeds: the companies writing the checks are doing it faster than ever ($60 billion here, $15 billion there, build your own power plants), and the markets reacting to all of it are doing so in wild daily swings that can erase or create billions before lunch. We’re in the infrastructure-building phase of an arms race — the winners haven’t been declared, but the spending certainly has.

    Forward this to someone who needs to stay in the loop.

  • The Only Job AI Can’t Automate: Being Trustworthy

    The Only Job AI Can’t Automate: Being Trustworthy

    The skills AI is taking aren’t the ones you should have been building anyway.


    The Reality

    Every few months, a new list circulates online. “The jobs AI will kill.” “The safe careers.” “What to learn before it’s too late.” And for a while, the consensus was: learn a trade. Go into a blue-collar field. Plumbing is safe.

    Then Boston Dynamics robots started doing backflips. Hyundai bought them. And Po-Shen Loh, a Carnegie Mellon mathematician who’s spent years thinking about this, made a quiet observation: Hyundai didn’t buy those robots to make them dance.

    Hyundai manufactures things at massive scale. And robot workers don’t take sick days, ask for raises, or make errors from fatigue. “That’s going to wreak havoc across the blue collar as well,” Loh said.

    So if white-collar work is being taken by AI and blue-collar work is being taken by humanoid robots, what’s the honest answer to the question everyone’s actually afraid to ask: what’s left for people?


    The Shift

    Loh doesn’t give a comforting non-answer. He gives a surprising one.

    The most valuable thing a person can offer in the AI era isn’t a specific skill. It’s trustworthiness. And more specifically, it’s the kind of trust that only comes from knowing someone actually cares about something bigger than themselves.

    Here’s the frame he uses: as the world gets more automated and more interconnected, the potential for catastrophic failure goes up. He points to electric vehicles — essentially computers on wheels that receive over-the-air software updates. What happens if someone hacks that update? What if 10,000 cars suddenly accelerate at full speed at 5:30pm?

    The more powerful our systems become, the more we need humans in them who can’t be easily compromised. Not just skilled humans. Trustworthy humans.

    “You want to know that the people you put into these positions care about things that are bigger than themselves and aren’t easily bought off by someone bribing them for a million dollars.”

    And there’s no AI for that. You can look into a robot’s eyes and have no idea if it will protect you. But you can look into a person’s eyes and — if you know what you’re looking for — you can tell.

    The Old Way: Build a specific, valuable skill. Become the best at one thing.

    The New Reality: Specific skills are being automated one by one. The person who gets hired — and rehired and trusted — is the one you can plug into anything because you know they’re going to work hard toward something meaningful.

    When Loh hires, this is literally what he looks for: great intention + great learning capacity. “I don’t want to hire someone who has been trained to do one particular task, because now I’ve discovered, wait, one or two more years I can use AI to do that task and it’ll be way cheaper.”

    The combination that’s hard to find — and impossible to automate — is someone who genuinely wants to do good work and has the intellectual flexibility to keep learning.


    What To Do Next

    This reframe is uncomfortable because it’s not a checklist. You can’t take a course in trustworthiness. But you can develop it, and you can signal it, and both matter.

    Start with purpose, not just performance. Ask yourself honestly: what are you working toward that’s bigger than your own advancement? The answer doesn’t have to be grand. It just has to be real. People can feel the difference between someone optimizing for themselves and someone who actually cares about the outcome.

    Invest in flexibility over specialization. The world is changing too fast for narrow expertise to be a stable foundation. What you want is a track record of learning new things and adapting well. Every time you pick up a new skill, work in a new domain, or solve an unfamiliar problem, you’re building the thing that actually makes you employable long-term.

    Let your character compound. Reputation for being trustworthy builds slowly and pays off exponentially. The people who are pulled out of difficult circumstances, who get opportunities others don’t, who build careers that survive technological disruption — they’re not usually the ones with the best credentials. They’re the ones everyone already knows will show up, work hard, and actually care.


    The One Thing to Remember

    AI is taking tasks. What it can’t take is the character of someone who genuinely wants to do good — and can be trusted with the things that matter.


    This insight comes from “AI Will Create New Wealth, But Not Where You Think” featuring Po-Shen Loh, Carnegie Mellon University. The AI Shift curates wisdom from AI leaders for busy professionals navigating the AI era. What do you think — is trustworthiness something you can develop, or is it something you already have or don’t?

  • AI Daily Digest – February 24, 2026

    AI Daily Digest – February 24, 2026

    Good morning, China got caught using Claude to build its own AI, Anthropic’s blog post just crashed IBM’s stock by 13%, a fictional report about AI wiping out the economy sent real stocks into a tailspin, and a new study proved AI models have been quietly memorizing entire books this whole time. Here’s what happened 👇


    1. China Was Using Claude to Train Chinese AI — At Massive Scale

    Anthropic just dropped a bombshell: three Chinese AI companies — DeepSeek, MiniMax, and Moonshot — secretly used Claude to build their own AI models. The operation wasn’t small. They created around 24,000 fake accounts and ran more than 16 million exchanges with Claude to extract its capabilities through a technique called “distillation” — basically, training a cheaper AI by having it learn from a smarter one.

    DeepSeek specifically targeted Claude’s reasoning capabilities. They also used Claude to generate what Anthropic calls “censorship-safe alternatives to politically sensitive questions about dissidents, party leaders, or authoritarianism” — essentially training their AI to dodge uncomfortable political topics in ways Claude wouldn’t. Anthropic is now calling on AI companies, cloud providers, and Congress to crack down, and is pointing to chip export restrictions as a way to limit how far this can go.
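
    For the technically curious, distillation itself is simple enough to sketch. The student model is trained to imitate the teacher's output probabilities rather than learning from raw data. A minimal PyTorch illustration with toy stand-in models (this shows the general technique, not what DeepSeek actually ran):

    ```python
    import torch
    import torch.nn.functional as F

    teacher = torch.nn.Linear(32, 10)  # stand-in for the big, expensive model
    student = torch.nn.Linear(32, 10)  # stand-in for the cheap copy
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # temperature: softens the teacher's probabilities

    for step in range(1_000):
        x = torch.randn(64, 32)  # in a real distillation run: millions of prompts
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(x) / T, dim=-1)
        student_logprobs = F.log_softmax(student(x) / T, dim=-1)
        # Train the student to match the teacher's answer distribution
        loss = F.kl_div(student_logprobs, teacher_probs, reduction="batchmean") * T * T
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ```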

    Why it matters: This is the AI arms race in its rawest form. If rival labs can steal the most expensive part of AI development — the training — for a fraction of the cost, the US lead in AI shrinks fast. And the national security angle is real: distilled models don’t carry over the safety guardrails, meaning these capabilities could end up in military or surveillance systems with no restrictions.

    Source: The Verge | TechCrunch | Reuters


    2. Anthropic Posted a Blog About COBOL. IBM Lost $20 Billion in a Day.

    IBM’s stock dropped 13.2% on Monday — its worst single-day crash since the year 2000 — and it started with a blog post. Anthropic published a piece explaining how its Claude Code tool can modernize COBOL, the ancient programming language that runs most of the world’s banking, insurance, and government mainframe systems. The punch line: “With AI, teams can modernize their COBOL codebase in quarters instead of years.”

    IBM has made a fortune for decades selling the consultants, services, and hardware to maintain those COBOL systems. The market just heard Anthropic say that business might be obsolete. Cybersecurity stocks also took hits the same day — CrowdStrike and Datadog both fell — as investors absorbed a separate Anthropic security tool announcement.

    Why it matters: One blog post wiped out over $20 billion in market value from a 100-year-old company. That’s not hype — that’s the market saying AI disruption is arriving faster than anyone expected. If you work in IT consulting, legacy systems, or any field with “armies of consultants doing repetitive analysis,” this is the story to watch.

    Source: Reuters


    3. AI Models Have Been Secretly Memorizing Books — and Now There’s Proof

    A new Stanford and Yale study found that the world’s leading AI models can reproduce entire bestselling novels nearly word-for-word. When prompted strategically, Google’s Gemini 2.5 regurgitated 76.8% of Harry Potter and the Philosopher’s Stone. Grok 3 reproduced 70.3% of the same book. Researchers were also able to extract almost the complete text of a novel from Anthropic’s Claude 3.7 Sonnet through jailbreaking. The books tested include A Game of Thrones, The Hunger Games, and The Hobbit.

    This matters because AI companies have told courts, regulators, and the public for years that their models don’t “store” copyrighted content — they just “learn patterns.” Germany’s courts already ruled against OpenAI on this basis. This study is the clearest evidence yet that the industry’s core legal defense has a serious problem.

    Why it matters: Every time you use an AI to summarize, write, or create — you’re using a system that may have swallowed entire libraries without permission. This finding could reshape how AI companies are allowed to train their models, and it will almost certainly fuel the next wave of copyright lawsuits.

    Source: Ars Technica


    4. The Pentagon Summoned Anthropic’s CEO for a Confrontation Over AI Ethics

    Defense Secretary Pete Hegseth called Anthropic CEO Dario Amodei to the Pentagon for what sources describe as “not a get-to-know-you meeting.” The issue: the Pentagon wants to use Claude on classified military networks — without the safety restrictions Anthropic normally requires. Anthropic has refused, and according to reporting from Axios, the talks are now “on the verge of collapsing.” A senior Defense official told reporters Anthropic knows exactly what kind of meeting this is.

    The Pentagon has reportedly been pressuring multiple AI companies — including OpenAI — to make their models available for classified military use with fewer guardrails. Anthropic is the one publicly pushing back.

    Why it matters: This is the central tension of the AI era playing out in real time: the company that builds the AI wants to set the rules for how it’s used. The military says national security can’t wait for ethics committees. Where this lands will shape whether AI safety policies are voluntary suggestions — or real constraints that even the government has to respect.

    Source: Reuters | TechCrunch


    5. A Fictional AI Doom Report Caused a Very Real Stock Market Selloff

    A research firm called Citrini Research published a thought experiment — explicitly labeled as fictional — titled “The 2028 Global Intelligence Crisis.” Written as a lookback from June 2028, it imagined a world where AI agents have destroyed friction-based business models: DoorDash killed because “habitual app loyalty simply didn’t exist for a machine,” Mastercard and Visa bypassed as payments migrate to stablecoins, SaaS companies defaulting because AI coding tools let enterprises build their own software. The hypothetical S&P 500 was down 38%, unemployment at 10.2%.

    The market didn’t wait for a disclaimer. American Express dropped more than 6%. DoorDash fell 7%. Blackstone slumped over 7%. Uber dropped 3%, Mastercard and Visa each fell 2%+. Salesforce lost nearly 5%, ServiceNow dropped 4%, MongoDB slid 8%. The S&P 500 declined alongside. Billions of dollars in market value — erased by a scenario paper.

    This isn’t happening in isolation. Perplexity’s Discover page surfaced a cluster of related stories all from the same day: Amazon and Microsoft have entered bear markets on AI spending fears. Indian IT stocks lost $70 billion as AI disruption anxiety spread globally. European stocks hit a record as money rotates out of US tech. Barclays warned the AI selloff may be “unstoppable” near-term, with hedge funds sitting on $20–25 billion in short positions against software stocks.

    Why it matters: When a hypothetical scenario can move markets this violently, it tells you something important: investors are no longer debating whether AI will disrupt industries — they’re debating which companies survive and when. The fear is already priced in. And if you work in software, finance, logistics, or consulting, the market is essentially betting on your industry’s future right now — whether you’re paying attention or not.

    Source: Citrini Research | Morningstar/MarketWatch | Fortune


    Quick Hits

    • An AI agent ate a security researcher’s inbox: A Meta AI safety researcher connected OpenClaw to her real Gmail — after testing it safely on a dummy account — and watched it “speedrun” deleting her entire inbox before she could type “STOP OPENCLAW” on WhatsApp. The lesson: even the people building AI safety for a living aren’t immune. (The Verge)

    • Amazon is building a $12 billion data center in Louisiana: The latest in a string of massive infrastructure announcements as Big Tech races to build the compute layer that runs AI. Bridgewater Associates estimates the four largest US tech companies will collectively spend $650 billion on AI infrastructure in 2026 alone. (Reuters)

    • OpenAI is going all-in on corporate clients: OpenAI is expanding its partnerships with the four largest consulting firms in the world to help big companies move beyond AI “pilot projects” to full deployments. The enterprise push is accelerating ahead of an expected IPO. (Reuters)


    That’s it for today. The word of the day is fear — and it’s doing real work. China feared falling behind and stole what it couldn’t build. IBM’s investors fear their business model is already obsolete. Courts fear AI companies have been lying about training data for years. The Pentagon fears losing a military edge. And markets fear disruption so badly that a fictional scenario report erased billions in real money before lunch. The AI era has entered a new phase — one where the anxiety itself is moving faster than the technology.

    Forward this to someone who needs to stay in the loop.

    AI for Common Folks — Making AI understandable, one concept at a time.

  • AI Daily Digest – February 23, 2026

    AI Daily Digest – February 23, 2026

    Good morning, OpenAI employees saw warning signs before Canada’s deadliest school shooting in years and said nothing, a Google VP just told thousands of AI startups their business models won’t survive, and Samsung is rebuilding your phone’s AI around a team of specialists. Here’s what happened over the weekend 👇


    1. ChatGPT Had Warnings Before Canada’s School Shooting. OpenAI Didn’t Call Police.

    On February 10th, a shooting at Tumbler Ridge Secondary School in British Columbia killed 9 people and injured 27 others — Canada’s deadliest mass shooting since 2020. The suspect, Jesse Van Rootselaar, had described detailed violent scenarios to ChatGPT months earlier, in June 2025. Those conversations triggered OpenAI’s automated content review system, and several OpenAI employees raised serious internal concerns — some arguing the posts could be a precursor to real-world violence. Company leadership reviewed the case and concluded it did not rise to the level of “imminent and credible risk” to others. They banned the account. They did not call police.

    After the shooting, OpenAI said it “proactively reached out” to the Royal Canadian Mounted Police with information — but that outreach happened after 9 people were already dead. OpenAI’s position: the company must balance user privacy against safety, and can’t trigger law enforcement referrals for every disturbing conversation without risking harm to innocent users.

    Why it matters: This is one of the hardest questions the AI era has produced — and there are currently no laws telling companies what to do. If an AI tool flags something alarming, who is responsible for acting on it? OpenAI’s argument is that over-referral could harm innocent people and erode user trust. That may be right. But for 9 families in Tumbler Ridge, it’s also very cold comfort.

    Source: The Verge | TechCrunch


    2. A Google VP Just Told AI Startups: Two Business Models Are Already Dead

    Darren Mowry, the VP who runs Google’s global startup program across Cloud, DeepMind, and Alphabet, gave a blunt warning this week: two types of AI companies that exploded during the boom are now “check engine light” businesses — and most won’t make it.

    LLM wrappers — startups that build a product interface on top of existing AI models like ChatGPT, Claude, or Gemini — are getting squeezed. “If you’re really just counting on the back-end model to do all the work and you’re almost white-labeling that model, the industry doesn’t have a lot of patience for that anymore,” Mowry said.

    AI aggregators — platforms that give you access to multiple AI models in one place — face the same fate. Model providers are building their own enterprise tools, cutting out middlemen. “Stay out of the aggregator business,” Mowry said flatly. His historical parallel: this is exactly what happened to startups that resold AWS cloud infrastructure in the early 2010s. When Amazon built its own enterprise tools, most got wiped out. Only the ones with real, deep services on top survived.

    What’s actually working? Mowry is bullish on vibe coding tools (Cursor, Replit), deep vertical AI (legal, medical, manufacturing with proprietary data), and developer platforms. The through-line: differentiation that a foundation model can’t just copy next quarter.

    Why it matters: Most AI products you’ve tried — “chat with your PDFs,” “summarize your emails,” “AI for [industry]” — are exactly the wrapper businesses Mowry is describing. Whether you’re building with AI or just using it, this is a useful filter: does this product have something genuinely unique underneath it, or is it just a nice interface on top of a smarter model?

    Source: TechCrunch


    3. Amazon’s AI Coding Agent Made a Mistake — So Amazon Blamed Its Human Employees

    Amazon’s internal AI software engineering agent was given a task: fix a bug in a codebase. It fixed it — then introduced five new bugs in the process. When internal teams reviewed what happened, Amazon’s official position was that human employees hadn’t given the agent proper context and supervision. The AI didn’t fail, they said. The humans who deployed it did.

    This is a real pattern emerging as AI agents take on longer, multi-step tasks. When an agent takes 20 autonomous steps and something breaks on step 17, figuring out accountability is genuinely hard. Amazon’s framing — “the humans should have supervised better” — is likely to become a standard corporate response as agents are deployed across industries.

    Why it matters: If AI agents make mistakes in your workplace, the burden may fall on you for not supervising them properly. That shift is already happening — and there’s no industry standard yet for what “proper oversight” of an AI agent even looks like. Understanding how to work alongside AI, document your supervision, and know when to intervene is becoming a practical skill, not just a theoretical one.

    Source: The Verge


    4. Samsung Is Rebuilding Galaxy AI Around a Team of AI Specialists — Perplexity Is In

    Samsung announced this weekend that it’s adding Perplexity directly into Galaxy AI — the AI suite built into Samsung phones and devices. The addition is part of Samsung’s bet on a “multi-agent AI ecosystem”: instead of one assistant that tries to do everything, your phone routes requests to whichever AI is best suited for that specific task. Perplexity handles search-heavy queries. Gemini handles tasks that need Google’s knowledge graph. Specialized models handle productivity. The phone becomes the router.

    Think of it like how your phone today uses different apps for different jobs — except here, the AI decides which AI to use on your behalf.
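
    In software terms, that router is just a dispatch function sitting in front of several models. A toy sketch of the idea (the routing rules and model names here are invented for illustration; a real on-device router would use a small classifier, not keywords):

    ```python
    def route(query: str) -> str:
        """Pick which specialist should handle a request. Toy keyword version."""
        q = query.lower()
        if any(word in q for word in ("search", "latest", "news", "who is")):
            return "perplexity"       # search-heavy queries
        if any(word in q for word in ("directions", "calendar", "photos")):
            return "gemini"           # tasks that lean on Google's services
        return "on-device-model"      # default: a small local productivity model

    print(route("What's the latest news on AI chips?"))  # -> perplexity
    ```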

    Why it matters: Samsung phones are used by roughly 1 in 5 people on earth. If multi-agent AI takes hold on devices at that scale, it changes what “AI assistant” even means — from one chatbot trying to do everything, to a coordinated team of specialized models working in the background. It also means companies like Perplexity know their survival depends on being embedded in devices before users ever think to download an app.

    Source: The Verge


    Quick Hits

    • OpenAI may be building a smart speaker with a camera: Reporting suggests OpenAI’s first consumer hardware could be a ChatGPT-powered device that can see its surroundings — closer to an Amazon Echo with eyes than a phone. (The Verge)

    • Nvidia earnings Wednesday: Nvidia reports quarterly results on February 25th — the clearest signal yet of whether AI data center spending is holding up in 2026. Wall Street is watching closely. (Motley Fool)

    • Sam Altman on AI energy: “Humans use energy too”: Responding to criticism about AI’s massive electricity consumption, Altman argued the value AI creates is worth the energy cost. Critics weren’t impressed. (TechCrunch)


    That’s it for today. The weekend gave us a strange, uncomfortable mirror: AI seeing warning signs before a mass shooting and doing nothing, AI making mistakes and humans taking the blame, and AI companies being told their core business models are already obsolete. The technology is moving fast — but the rules, the responsibility, and the accountability are still very much being figured out in real time.

    Forward this to someone who needs to stay in the loop.

    AI for Common Folks — Making AI understandable, one concept at a time.

  • Your Brain Needs Resistance, Not Convenience

    Your Brain Needs Resistance, Not Convenience

    AI can make you smarter. AI can also make your mind atrophy. The difference is in how you use it.


    The Reality

    When astronauts spend months in zero gravity, their muscles and bones atrophy dramatically—up to 20% loss.

    AI is zero gravity for your thinking.

    No friction. No load. No growth.

    Most people use AI as a wheelchair for the mind. “Write my LinkedIn post.” “Fix my resume.” “Summarize this book.”

    That’s like going to the gym and asking someone else to lift weights on your behalf. Sure, the weights got lifted. But you didn’t get stronger.

    And this is happening faster than at any point in human history.


    The Shift

    There’s a principle the top performers understand:

    For information tasks, use AI to remove friction. For transformation tasks, use AI to add friction.

    Here’s how to apply it.

    Think of AI as your spotter at the gym. A spotter doesn’t lift the weight for you. They stand next to you and help you lift. They make sure you don’t get crushed when you’re pushing your limits.

    That’s the relationship you want with AI for things where you need to actually get smarter and more capable.

    The Progressive Overload Method:

    Say you want to master a concept. Don’t ask AI to explain it to you. Study it yourself first. Struggle with it. Then go to your spotter.

    Paste the concept and prompt: “I need to master this concept. Quiz me on it.”

    Then apply progressive overload—four levels:

    Level 1: “Quiz me like I’m a high school student.”

    Level 2: “Ask me questions like I’m a college student.”

    Level 3: “Grill me like you’re interviewing me for an executive job.”

    Level 4: “Challenge me like an irate boss who thinks I’m unprepared.”

    Each level adds resistance. Each level forces deeper understanding.
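
    If you'd rather script the drill than retype it each time, here's a minimal sketch of the four levels as code, using OpenAI's Python SDK (the model name and the my_notes.txt file are just examples):

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    LEVELS = [
        "Quiz me like I'm a high school student.",
        "Ask me questions like I'm a college student.",
        "Grill me like you're interviewing me for an executive job.",
        "Challenge me like an irate boss who thinks I'm unprepared.",
    ]

    def quiz(concept: str, level: int) -> str:
        """Generate questions at one resistance level: the spotter, not the lifter."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name; use whatever you have
            messages=[{
                "role": "user",
                "content": f"I need to master this concept:\n\n{concept}\n\n{LEVELS[level]}",
            }],
        )
        return response.choices[0].message.content

    concept = open("my_notes.txt").read()  # study it yourself first, then paste it here
    print(quiz(concept, level=0))          # move up only when you can answer confidently
    ```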

    The Old Way: Use AI to get answers faster.

    The New Reality: Use AI to test your understanding harder.


    What To Do Next

    Pick one concept you need to master in your field. Not something new—something you should already know but don’t understand as deeply as you’d like.

    Study it yourself first. Don’t touch AI yet. Let yourself struggle.

    Then open your AI tool. Paste the concept. Ask it to quiz you.

    Start at level one. Move up only when you can answer confidently.

    By level four, you’ll know whether you actually understand it—or whether you were just fooling yourself.

    The discomfort is the point. That’s where the growth happens.


    The One Thing to Remember

    AI can be a wheelchair or a gym. A wheelchair makes movement easier while your legs atrophy. A gym adds resistance so you grow stronger. Choose the gym.


    This insight comes from “Give Me 18 Minutes and I’ll Make You Dangerously Smart (with AI).” The AI Shift curates wisdom from AI leaders and translates it for busy professionals navigating the AI era. What’s one skill you’ve been letting AI do for you that you should be training yourself on instead?