Category: Research

  • What the 2025 Mary Meeker AI Report Means for Work, Strategy, and GTM

    It has been years since Mary Meeker’s Internet Trends report last provided a pulse check on where technology is headed. Her return, with a report now focused on AI, does more than describe trends: it lays out a new operating reality.

    This post distills 8 critical insights from the report — and what they mean for enterprise leaders, GTM strategists, product owners, and those shaping the future of work.

    📄 Full Report →


    1. This Time, the Machines Move Faster Than We Do

    The report opens with a bold observation:

    “AI usage is ramping faster than any prior computing platform — even faster than the internet.”

    This isn’t just fast. It’s compounding.

    For teams and organizations, that means:

    • Planning cycles must adapt to faster execution rhythms
    • Feedback loops need compression and real-time recalibration
    • Legacy workflows aren’t built for this pace

    If your GTM or delivery cadence still runs on quarterly inertia, it’s time to rethink.


    2. Time-to-Value Just Got Compressed. Again.

    The biggest unlock from GenAI? Time compression.

    From prompt → prototype

    From draft → delivery

    From insight → action

    This collapse in cycle time transforms:

    • Productivity metrics
    • Product development lifecycles
    • Org-wide alignment rhythms

    🚀 Output velocity is the new KPI.


    3. Your Next Teammate Might Not Be Human

    We’re entering the era of embedded AI agents — not just assistants.

    AI is no longer a tool on the side. It’s part of the team:

    • Summarizing meetings
    • Writing first drafts
    • Managing workflows

    That means:

    • Rethinking team design
    • Clarifying AI vs human task ownership
    • Measuring contribution beyond headcount

    AI is a teammate now. Time to onboard accordingly.


    4. It’s Not Risk Slowing AI — It’s Friction

    One of the most important insights in the report:

    Employees aren’t blocked by fear — they’re blocked by poor UX.

    Adoption stalls when AI:

    • Doesn’t fit into existing workflows
    • Requires tool-switching
    • Has unclear value props

    🛠️ The fix? Product thinking:

    • Reduce toggle tax
    • Integrate into natural habits
    • Onboard like a consumer-grade app


    5. AI Fluency Is the New Excel

    The most valuable skill in 2025 isn’t coding — it’s prompt fluency.

    AI Fluency = knowing how to ask, guide, and evaluate AI output (see the sketch after this list):

    • What to prompt
    • What to ignore
    • How to refine
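
    A minimal sketch of that ask → evaluate → refine loop in code. The call_model function is a hypothetical stand-in for whatever LLM client your stack uses, not a real API:

    ```python
    # Prompt fluency as a loop: ask, evaluate the output, refine the ask.
    # call_model is a hypothetical placeholder; swap in your real LLM client.

    def call_model(prompt: str) -> str:
        # Dummy response so the sketch runs end to end.
        return f"[model output for: {prompt[:40]}]"

    def meets_bar(output: str) -> bool:
        # Evaluate: your rubric, checklist, or heuristic goes here.
        return bool(output.strip())

    def refine(prompt: str, last_output: str) -> str:
        # Fold feedback on the last answer into the next ask.
        return (f"{prompt}\n\nPrevious attempt:\n{last_output}\n\n"
                "Improve it: be specific and state your assumptions.")

    def fluent_loop(task: str, max_rounds: int = 3) -> str:
        prompt, output = task, call_model(task)
        for _ in range(max_rounds):
            if meets_bar(output):            # know what to keep
                return output
            prompt = refine(prompt, output)  # know what to ask next
            output = call_model(prompt)
        return output

    print(fluent_loop("Summarize this meeting transcript into action items."))
    ```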

    Every function — from marketing to HR — needs this literacy.

    We’re in the age of human-in-the-loop as a capability, not a compliance checkbox.


    6. Follow the Money. It’s Flowing to AI

    The report outlines the capital story behind the hype:

    • Enterprise GenAI spend is ramping fast
    • Compute infrastructure is scaling explosively
    • VC and corporate funding is prioritizing AI-native bets

    For leaders, this isn’t a trend — it’s a reallocation cycle.

    Infra budgets, product bets, and partnerships must now align with where the ecosystem is heading — not where it’s been.


    7. Go Deep, Not Just Wide

    Horizontal AI gets you buzz.

    Vertical AI gets you impact.

    The report shows real traction in:

    • Healthcare
    • Legal
    • Education
    • Financial services

    Where AI is tuned to real-world roles and workflows, it sticks.

    If you’re shipping AI without domain context, you’re leaving retention on the table.


    8. Infrastructure Is Strategy. Again.

    The biggest shift in the back half of the report?

    AI is putting infrastructure back in the spotlight.

    From model training to agent orchestration to secure runtimes:

    • The AI stack is now a competitive moat
    • Data pipelines and prompt layers are shaping outcomes
    • Infra is no longer invisible — it’s strategic

    What cloud was to the last decade, AI-native infra may be to the next.


    Final Thoughts

    The Mary Meeker 2025 AI Trends report isn’t just a forecast — it’s a framing device. One that challenges every enterprise leader to rethink:

    • How fast we move
    • What value looks like
    • Who (or what) we collaborate with
    • Where advantage is shifting

    It’s not enough to adopt AI.

    We have to redesign around it.

    📄 You can access the full report here

  • From Productivity to Progress: What the New MIT-Stanford AI Study Really Tells Us About the Future of Work

    A new study from MIT and Stanford just rewrote the AI-in-the-workplace narrative.

    Published in Fortune this week, the research shows that generative AI tools, specifically chatbots, are not only boosting productivity by up to 14% but also raising earnings without reducing work hours.

    “Rather than displacing workers, AI adoption led to higher earnings, especially for lower-performing employees.”

    Let that sink in.


    🧠 AI as a Floor-Raiser, Not a Ceiling-Breaker

    The most surprising finding?
    AI’s greatest impact was seen not among the top performers, but among lower-skilled or newer workers.

    In customer service teams, the AI tools essentially became real-time coaches — suggesting responses, guiding tone, and summarizing queries. The result: a productivity uplift and quality improvement that evened out performance levels across the team.

    This is a quiet revolution in workforce design.

    In many traditional orgs, productivity initiatives often widen the gap between high and average performers. But with AI augmentation, we’re seeing the inverse — a democratization of capability.


    💼 What This Means for Enterprise Leaders

    This research confirms a pattern I’ve observed firsthand in consulting:
    The impact of AI is not just technical; it’s organizational.

    To translate AI gains into business value, leaders need to:

    ✅ 1. Shift from Efficiency to Enablement

    Don’t chase cost-cutting alone. Use AI to empower more team members to operate at higher skill levels.

    ✅ 2. Invest in Workflow Design

    Tool adoption isn’t enough. Embed AI into daily rituals — response writing, research, meeting prep — where the marginal gains accumulate.

    ✅ 3. Reframe KPIs

    Move beyond “time saved” metrics. Start tracking value added — better resolutions, improved CSAT, faster ramp-up for new hires.
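
    A toy illustration of the reframe, using invented ticket data (none of these numbers come from the study):

    ```python
    # Same support tickets, two KPI lenses. Data is made up for illustration.

    tickets = [
        {"agent": "new_hire", "handle_min": 12, "resolved": True,  "csat": 5},
        {"agent": "new_hire", "handle_min": 15, "resolved": True,  "csat": 4},
        {"agent": "veteran",  "handle_min": 8,  "resolved": False, "csat": 2},
        {"agent": "veteran",  "handle_min": 9,  "resolved": True,  "csat": 4},
    ]

    # "Time saved" lens: speed, and nothing else.
    avg_handle = sum(t["handle_min"] for t in tickets) / len(tickets)

    # "Value added" lens: the outcomes the business actually wants.
    resolution_rate = sum(t["resolved"] for t in tickets) / len(tickets)
    avg_csat = sum(t["csat"] for t in tickets) / len(tickets)

    print(f"avg handle time: {avg_handle:.1f} min")   # efficiency story
    print(f"resolution rate: {resolution_rate:.0%}")  # value story
    print(f"avg CSAT: {avg_csat:.1f}/5")              # value story
    ```

    In this toy data the slower agent is the more valuable one on the second lens; that inversion is exactly what “time saved” metrics hide.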


    🔄 A Playbook for Augmented Teams

    From piloting GPT agents to reimagining onboarding flows, I’ve worked with startups and enterprise teams navigating this shift. The ones who succeed typically follow this arc:

    1. Pilot AI in a high-volume, low-risk function
    2. Co-create use cases with users (not for them)
    3. Build layered systems: AI support + human escalation (sketched below)
    4. Train managers to interpret, not just supervise, AI-led work
    5. Feed learnings back into process improvement loops
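
    Step 3 is where most pilots wobble, so here is a minimal sketch of the layering, assuming a hypothetical draft_reply helper that returns an answer plus a confidence score:

    ```python
    # Layered support: AI drafts first, a human takes anything the AI is
    # unsure about. draft_reply is hypothetical; plug in your own model.

    from dataclasses import dataclass

    @dataclass
    class Draft:
        text: str
        confidence: float  # 0.0–1.0, however your stack estimates it

    def draft_reply(query: str) -> Draft:
        # Stand-in for a real model call returning an answer + confidence.
        return Draft(text=f"Suggested reply to: {query}", confidence=0.62)

    def route(query: str, threshold: float = 0.75) -> str:
        draft = draft_reply(query)
        if draft.confidence >= threshold:
            return f"AI: {draft.text}"  # AI handles it directly
        # Escalate, but hand the human the draft so they start with context.
        return f"HUMAN (AI draft attached): {draft.text}"

    print(route("Where is my refund?"))  # low confidence → human escalation
    ```

    The design choice that matters: escalation carries the AI draft with it, so the human starts from context rather than from zero.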

    🔚 Not AI vs Jobs. AI Plus Better Jobs.

    The real story here isn’t about productivity stats. It’s about potential unlocked.

    AI is no longer a futuristic experiment. It’s a present-day differentiator — especially for teams willing to rethink how work gets done.

    As leaders, we now face a simple choice:

    Will we augment the talent we have, or continue to chase the talent we can’t find?

    Your answer will shape the next 3 years of your business.


    🔗 Read the original article here:

    Fortune: AI chatbots boost earnings and hours, not job loss


    Want to go deeper? I’m working on a new AI augmentation playbook — DM me or sign up for updates.

    #AI #FutureOfWork #EnterpriseStrategy #GTM #DigitalTransformation #Chatbots #Productivity #ConsultingInsights

  • Human Learning about AI

    The paper abstract:

    We study how humans form expectations about the performance of artificial intelligence (AI) and consequences for AI adoption. Our main hypothesis is that people project human-relevant task features onto AI. People then over-infer from AI failures on human-easy tasks, and from AI successes on human-difficult tasks. Lab experiments provide strong evidence for projection of human difficulty onto AI, predictably distorting subjects’ expectations. Resulting adoption can be sub-optimal, as failing human-easy tasks need not imply poor overall performance in the case of AI. A field experiment with an AI giving parenting advice shows evidence for projection of human textual similarity. Users strongly infer from answers that are equally uninformative but less humanly-similar to expected answers, significantly reducing trust and engagement. Results suggest AI “anthropomorphism” can backfire by increasing projection and de-aligning human expectations and AI performance.

    raphaelraux – Research

    And a simplified explanation by Copilot (seemed apt to use in this case given the topic):

    The paper explores how people form expectations about AI performance and how this impacts their willingness to use AI. The researchers’ main idea is that people tend to think of AI as if it should perform tasks in the same way humans do. This leads to two key behaviors:

    1. Over-inferring from AI failures: when AI makes mistakes on tasks that are easy for humans, people think the AI is not very capable overall.
    2. Over-inferring from AI successes: when AI does well on tasks that are hard for humans, people think the AI is more capable than it actually is.

    Experiments show that these assumptions distort people’s expectations of AI. For example, if an AI struggles with simple tasks, people might avoid using it, even if it’s actually quite effective at other things. On the flip side, if it excels at complex tasks, people might over-trust it.
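
    The claim in the abstract that failing human-easy tasks need not imply poor overall AI performance is easy to see in a toy simulation. Assume, purely for illustration (this is not the paper’s model), that an AI’s skill is independent of how hard a task feels to humans:

    ```python
    # Toy simulation: when AI skill is independent of human task difficulty,
    # one failure on a human-easy task is only weak evidence about ability.

    import random

    random.seed(0)
    N = 100_000

    overall = []
    after_easy_fail = []
    for _ in range(N):
        human_difficulty = random.random()  # 0 = trivial for humans
        ai_skill = random.random()          # success probability, independent
        overall.append(ai_skill)
        if human_difficulty < 0.2 and random.random() > ai_skill:
            after_easy_fail.append(ai_skill)  # AI just failed an "easy" task

    print(f"mean AI skill overall:              {sum(overall) / N:.2f}")
    print(f"mean AI skill after easy-task fail: "
          f"{sum(after_easy_fail) / len(after_easy_fail):.2f}")
    # Rational update: ~0.50 drops only to ~0.33. Projecting human
    # difficulty onto the AI would predict near-zero competence instead.
    ```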

    The researchers conducted a real-world experiment with an AI that provides parenting advice. They found that users were less trusting of the AI if its answers didn’t resemble what a human would say, even if the information was the same. This shows that making AI seem human-like (anthropomorphism) can sometimes backfire, leading to misaligned expectations between what AI can do and what people expect from it.

    In essence, the study highlights that our human biases can lead us to misunderstand AI capabilities, which can affect how we adopt and use AI technologies.

    From: https://sites.google.com/view/raphaelraux/research?authuser=0

    via https://marginalrevolution.com/marginalrevolution/2024/11/how-badly-do-humans-misjudge-ais.html

    Full paper here – https://www.dropbox.com/scl/fi/pvo3ozkqfrmlwo3ndscdz/HLA_latest.pdf?rlkey=mmz8f71xm0a2t6nvixl7aih23&e=1&dl=0