Tag: AI

  • What the 2025 Mary Meeker AI Report Means for Work, Strategy, and GTM


It’s been a while since Mary Meeker’s Internet Trends reports provided a pulse check on where technology is headed. Her first report in years, now focused on AI, does more than describe trends — it lays out a new operating reality.

    This post distills 8 critical insights from the report — and what they mean for enterprise leaders, GTM strategists, product owners, and those shaping the future of work.

    📄 Full Report →


    1. This Time, the Machines Move Faster Than We Do

    The report opens with a bold observation:

    “AI usage is ramping faster than any prior computing platform — even faster than the internet.”

    This isn’t just fast. It’s compounding.

    For teams and organizations, that means:

    • Planning cycles must adapt to faster execution rhythms
    • Feedback loops need compression and real-time recalibration
    • Legacy workflows aren’t built for this pace

    If your GTM or delivery cadence still runs on quarterly inertia, it’s time to rethink.


    2. Time-to-Value Just Got Compressed. Again.

    The biggest unlock from GenAI? Time compression.

    From prompt → prototype

    From draft → delivery

    From insight → action

    This collapse in cycle time transforms:

    • Productivity metrics
    • Product development lifecycles
    • Org-wide alignment rhythms

    🚀 Output velocity is the new KPI.


    3. Your Next Teammate Might Not Be Human

    We’re entering the era of embedded AI agents — not just assistants.

    AI is no longer a tool on the side. It’s part of the team:

    • Summarizing meetings
    • Writing first drafts
    • Managing workflows

    That means:

    • Rethinking team design
    • Clarifying AI vs human task ownership
    • Measuring contribution beyond headcount

    AI is a teammate now. Time to onboard accordingly.


    4. It’s Not Risk Slowing AI — It’s Friction

    One of the most important insights in the report:

    Employees aren’t blocked by fear — they’re blocked by poor UX.

    Adoption stalls when AI:

    • Doesn’t fit into existing workflows
    • Requires tool-switching
    • Has unclear value props

    🛠️ The fix? Product thinking:

    • Reduce toggle tax
    • Integrate into natural habits
    • Onboard like a consumer-grade app

    5. AI Fluency Is the New Excel

    The most valuable skill in 2025 isn’t coding — it’s prompt fluency.

    AI Fluency = knowing how to ask, guide, and evaluate AI output:

    • What to prompt
    • What to ignore
    • How to refine

    Every function — from marketing to HR — needs this literacy.

    We’re in the age of human-in-the-loop as a capability, not a compliance checkbox.


    6. Follow the Money. It’s Flowing to AI

    The report outlines the capital story behind the hype:

    • Enterprise GenAI spend is ramping fast
    • Compute infrastructure is scaling explosively
    • VC and corporate funding is prioritizing AI-native bets

    For leaders, this isn’t a trend — it’s a reallocation cycle.

    Infra budgets, product bets, and partnerships must now align with where the ecosystem is heading — not where it’s been.


    7. Go Deep, Not Just Wide

    Horizontal AI gets you buzz.

    Vertical AI gets you impact.

    The report shows real traction in:

    • Healthcare
    • Legal
    • Education
    • Financial services

    Where AI is tuned to real-world roles and workflows, it sticks.

    If you’re shipping AI without domain context, you’re leaving retention on the table.


    8. Infrastructure Is Strategy. Again.

    The biggest shift in the back half of the report?

    AI is putting infrastructure back in the spotlight.

    From model training to agent orchestration to secure runtimes:

    • The AI stack is now a competitive moat
    • Data pipelines and prompt layers are shaping outcomes
    • Infra is no longer invisible — it’s strategic

    What cloud was to the last decade, AI-native infra may be to the next.


    Final Thoughts

    The Mary Meeker 2025 AI Trends report isn’t just a forecast — it’s a framing device. One that challenges every enterprise leader to rethink:

    • How fast we move
    • What value looks like
    • Who (or what) we collaborate with
    • Where advantage is shifting

    It’s not enough to adopt AI.

    We have to redesign around it.

    📄 You can access the full report here

  • Shadow AI, Friction Fatigue & the Flexibility Gap: 5 Lessons from the Ivanti Tech at Work 2025 Report


    The Ivanti Tech at Work 2025 report isn’t just a workplace tech survey — it’s a mirror to how modern organizations are struggling (and sometimes succeeding) to adapt to the realities of hybrid work, AI adoption, and employee expectations.

    Here are 5 insights that stood out — and why they matter for anyone building teams, tools, or trust in the modern workplace.

    🔗 Read the full report


    1. Shadow AI Is a Trust Problem, Not Just a Tech One

    Nearly 1 in 3 workers admit they use AI tools like ChatGPT in secret at work.

    Why the secrecy?

    According to the report:

    • 36% want a competitive edge
    • 30% fear job cuts or extra scrutiny
    • 30% say there’s no clear AI usage policy
    • 27% don’t want their abilities questioned

    This is more than a governance issue. It’s a cultural signal.
    Employees are turning to AI to be more productive — but doing so under the radar signals a trust deficit and policy vacuum.

    💡 What to do:
    Leaders need to replace silence with structure — with clear, enabling policies that promote responsible AI use, and an environment where value creation matters more than screen time.


    2. The Flexibility Paradox: High Demand, Low Supply

    83% of IT professionals and 73% of office workers value flexibility highly — but only ~25% say they actually have it.

    Even as companies trumpet hybrid work, asynchronous enablement, autonomy, and outcome-based work norms haven’t caught up. The result? Disengagement and frustration.

    💡 What to do:
    Revisit what flexibility really means. It’s not just about where people work — it’s how they work.
    That means:

    • Tools for async collaboration
    • Decision-making frameworks for remote teams
    • Leaders modeling flexible behaviors

    3. Presenteeism Is the New Fatigue

    The report highlights “digital presenteeism”: workers pretending to be active — jiggling mice, logging in early — to appear productive.

    • 48% say they dislike their job but stay
    • 37% admit to showing up without doing meaningful work

    These are signs of unclear expectations and poor workflow design — not disengagement alone.

    💡 What to do:
    Audit for friction, not just lag.
    Look at your workflows, KPIs, and culture. Are people forced to perform busyness instead of real value?


    4. The Digital Experience Gap Is Real

    While flexible work is valued, many workers find it harder to work outside the office. The report notes:

    • 44–49% say collaboration is easier in-office
    • 36–48% say manager access is better in-office
    • 16–26% say apps are easier to use from the office

    💡 What to do:
    Enable remote-first experience, not just policy:

    • Seamless access to tools and systems
    • Integrated collaboration platforms
    • AI-powered support and IT workflows

    5. Redesign for Trust, Not Just Tools

    The big takeaway?

    Workers don’t just need better AI — they need clarity on what’s allowed
    They don’t just need more flexibility — they need workflows that enable it
    They don’t just need faster tools — they need a culture that values trust over control


    Final Thoughts

    The Ivanti Tech at Work 2025 report is a diagnostic — revealing what happens when new tools are bolted onto outdated operating models.

    For leaders, the message is clear:

    We need to evolve not just our tech stack, but our trust stack.

    🔗 Read the full report

  • From Productivity to Progress: What the New MIT-Stanford AI Study Really Tells Us About the Future of Work


    A new study from MIT and Stanford just rewrote the AI-in-the-workplace narrative.

    Published in Fortune this week, the research shows that generative AI tools — specifically chatbots — are not only boosting productivity by up to 14%, but they’re also raising earnings without reducing work hours.

    “Rather than displacing workers, AI adoption led to higher earnings, especially for lower-performing employees.”

    Let that sink in.


    🧠 AI as a Floor-Raiser, Not a Ceiling-Breaker

    The most surprising finding?
    AI’s greatest impact was seen not among the top performers, but among lower-skilled or newer workers.

    In customer service teams, the AI tools essentially became real-time coaches — suggesting responses, guiding tone, and summarizing queries. The result: a productivity uplift and quality improvement that evened out performance levels across the team.

    This is a quiet revolution in workforce design.

    In many traditional orgs, productivity initiatives often widen the gap between high and average performers. But with AI augmentation, we’re seeing the inverse — a democratization of capability.


    💼 What This Means for Enterprise Leaders

    This research confirms a pattern I’ve observed firsthand in consulting:
    The impact of AI is not just technical, it’s organizational.

    To translate AI gains into business value, leaders need to:

    ✅ 1. Shift from Efficiency to Enablement

    Don’t chase cost-cutting alone. Use AI to empower more team members to operate at higher skill levels.

    ✅ 2. Invest in Workflow Design

    Tool adoption isn’t enough. Embed AI into daily rituals — response writing, research, meeting prep — where the marginal gains accumulate.

    ✅ 3. Reframe KPIs

    Move beyond “time saved” metrics. Start tracking value added — better resolutions, improved CSAT, faster ramp-up for new hires.


    🔄 A Playbook for Augmented Teams

    From piloting GPT agents to reimagining onboarding flows, I’ve worked with startups and enterprise teams navigating this shift. The ones who succeed typically follow this arc:

    1. Pilot AI in a high-volume, low-risk function
    2. Co-create use cases with users (not for them)
    3. Build layered systems: AI support + human escalation
    4. Train managers to interpret, not just supervise, AI-led work
    5. Feed learnings back into process improvement loops

    🔚 Not AI vs Jobs. AI Plus Better Jobs.

    The real story here isn’t about productivity stats. It’s about potential unlocked.

    AI is no longer a futuristic experiment. It’s a present-day differentiator — especially for teams willing to rethink how work gets done.

    As leaders, we now face a simple choice:

    Will we augment the talent we have, or continue to chase the talent we can’t find?

    Your answer will shape the next 3 years of your business.


    🔗 Read the original article here:

    Fortune: AI chatbots boost earnings and hours, not job loss


    Want to go deeper? I’m working on a new AI augmentation playbook — DM me or sign up for updates.

    #AI #FutureOfWork #EnterpriseStrategy #GTM #DigitalTransformation #Chatbots #Productivity #ConsultingInsights

  • AI Can Predict Your Personality From Your Face—And It Might Affect Your Career


Came across this interesting paper on using AI to assess the Big Five personality traits and predict career outcomes. This could have implications not just for the job market, but also for other fields like education, which I covered earlier.

    via https://marginalrevolution.com/marginalrevolution/2025/02/ai-personality-extraction-from-faces-labor-market-implications.html

    For more details, take your pick from the podcast:

    or the AI summary:

    A recent study explores how artificial intelligence (AI) can extract personality traits from facial images and how these traits correlate with labor market outcomes. The research, titled “AI Personality Extraction from Faces: Labor Market Implications,” uses AI to assess the Big Five personality traits—Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism—from facial images of 96,000 MBA graduates. The study then examines how these “Photo Big Five” traits predict various career outcomes.

    Key Findings

    • Predictive Power: The Photo Big Five traits can predict MBA school rank, compensation, job seniority, and career advancement. Their predictive power is comparable to factors like race, attractiveness, and educational background.
    • Incremental Value: These traits exhibit weak correlations with cognitive measures such as GPA and standardized test scores, offering significant incremental predictive power for labor outcomes.
    • Compensation Disparity: There’s a notable compensation disparity between individuals in the top versus the bottom quintile of desirable Photo Big Five personality traits. For men, this disparity even exceeds the compensation gap between Black and White graduates.
    • Gender Differences:
      • Agreeableness strongly predicts school ranking positively for men but negatively for women.
      • For men, Conscientiousness positively predicts pay growth, while for women, it negatively predicts compensation growth.
    • Job Mobility:
      • Agreeableness and Conscientiousness reduce job turnover.
      • Extraversion and Neuroticism increase job turnover.
    • Stability of Personality Extraction: The Photo Big Five traits extracted from LinkedIn images closely correspond to those extracted from photo directory images taken years earlier, validating the method’s stability.
    • Ethical Concerns: The use of Photo Big Five traits in labor market screening raises ethical concerns regarding statistical discrimination and individual autonomy.

    Methodology

    • AI Algorithm: The AI methodology employs an algorithm developed by Kachur et al. (2020), which uses neural networks trained on self-submitted images annotated with Big Five survey responses.
    • Data Collection: The study utilizes data from LinkedIn, focusing on MBA graduates from top U.S. programs between 2000 and 2023.
    • Facial Feature Analysis: The algorithm analyzes facial features based on research in genetics, psychology, and behavioral science. Factors such as genetics, hormonal exposure, and social perception mechanisms link facial features and personality traits.

    Implications

    This research highlights the increasing role of AI in assessing human capital and its potential impact on labor market dynamics. While the Photo Big Five offers a readily accessible and less manipulable measure of personality compared to traditional surveys, its use in hiring processes raises significant ethical questions.

    Key considerations include:

    • Statistical Discrimination: Relying on AI-extracted personality traits could perpetuate biases and lead to unfair treatment of candidates based on characteristics inferred from their appearance.
    • Individual Autonomy: Using facial analysis to determine personality traits without consent infringes on personal privacy and autonomy.

    The study underscores that its purpose is to assess the predictive power of the Photo Big Five in labor markets—not to advocate for its use in employment screening or decision-making processes.

    Conclusion

    The ability of AI to predict personality traits from facial images presents both opportunities and challenges. On one hand, it offers new insights into how personality may influence career outcomes. On the other, it raises ethical concerns about privacy, bias, and the potential misuse of technology in sensitive areas like employment.

    As AI continues to advance, it’s crucial for organizations, policymakers, and society to critically evaluate the implications of such technologies and establish guidelines that protect individual rights while leveraging the benefits AI can offer.

  • Coping with outrage fatigue


    Came across this interesting article on Scientific American which talks about outrage fatigue. Converted into a short podcast using HeyGen:

    Here’s the NotebookLM version for comparison which has a more emotive take on things:

    Are you feeling emotionally exhausted by the constant barrage of depressing news about political events, wars, and climate disasters? You might be experiencing outrage fatigue. This phenomenon occurs when repeated exposure to outrage-inducing content leads to feeling withdrawn and like resistance is futile.

    What is Outrage Fatigue?

    Outrage is a response to a perceived transgression against what we consider right and wrong. It can be functional for groups, drawing attention to issues and catalysing collective action. However, constant outrage, especially along group identity lines, can create hostility and conflict, leading to psychological exhaustion.

    How Outrage Manifests

    • Group Level: Constant outrage at a group level can lead to a sense of being jaded, making it hard to focus on what truly matters.
    • Individual Level: Some people become “super-outrage producers,” while others withdraw, feeling isolated or afraid to express their opinions. High levels of negative emotions, including outrage, can be taxing, leading individuals to regulate their emotions.

    The Role of Social Media

    Social media algorithms can amplify outrage content, making it seem more widespread than it is. This can lead to feeling turned off from political participation, even if the outrage isn’t representative of the broader group’s feelings.

    Combatting Outrage Fatigue

    • Local Involvement: Engage in local community politics to build a feeling of safety and understanding. This allows for concrete actions and a sense of direct impact.
    • Directed Outrage: Focused outrage is more effective as you know what it’s for and what outcomes you’re seeking.
    • Alter Your Social Media Ecosystem: Change your online environment by engaging with different content if you feel overwhelmed.

    Outrage as a Political Tool

    Outrage can be weaponised to divide groups. For example, stoking outrage on issues like immigration or abortion can distract people from economic policies that harm them.

    Final Thoughts

    It’s essential to be aware and experience outrage while staying grounded in local communities. Direct your outrage towards concrete actions and be mindful of the media you consume to avoid fatigue.

    via https://kottke.org/25/02/0046166-outrage-fatigue-is-real-t

  • Exploring DeepSeek’s R1


The new model from DeepSeek has been making waves over the last week thanks to its competitive performance against OpenAI’s o1 while being open source. It is also able to transparently share its chain of thought.

    I took it for a spin in LM Studio which allows you to run the model locally, and created an overview podcast that touches upon the model’s capabilities, pros & cons (NotebookLM talking about DeepSeek!). Check it out below:
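Beyond the chat window, LM Studio can also expose the loaded model through a local OpenAI-compatible server (by default at http://localhost:1234/v1), so you can script against it. Here’s a minimal sketch using only the Python standard library — the model name "deepseek-r1" is a placeholder for whatever identifier LM Studio shows for your loaded model:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions
# protocol; port 1234 is its default.
API_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "deepseek-r1") -> dict:
    """Build an OpenAI-style chat payload.

    "deepseek-r1" is a placeholder; use the model identifier
    LM Studio displays for the model you have loaded.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the request to the local server and return the reply text."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # R1-style models typically include their reasoning in the reply,
    # wrapped in <think>...</think> tags.
    return body["choices"][0]["message"]["content"]
```

With the server running, `ask("Introduce yourself")` returns the model’s reply, chain of thought included.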

    Thanks to the great overview provided by Zvi.

    In case you are curious on how the model introduces itself, here’s a chat excerpt including the chain of thought:

    Thought:

    Alright, the user asked about what I can do and who built me. Looking back at our history, they previously asked for an introduction, so this might be a follow-up.

    I need to clearly outline my capabilities without being too technical. They probably want a straightforward answer.

    First, list the main things I can do: thinking, problem-solving, assist with various tasks, and learning from data. Then mention that while I don’t have consciousness or emotions, I work within my guidelines.

    Next, explain who built me—DeepSeek, an AI research company. Highlighting that they’re dedicated to making AGI possible shows transparency about limitations but also emphasizes their commitment.

    Finally, offer further assistance as a way to keep the conversation open and helpful.

    Response:

    I’m DeepSeek-R1, an artificial intelligence assistant created by DeepSeek, a Chinese company dedicated to making Artificial General Intelligence (AGI) feasible. I can assist you with answering questions, providing information, generating text, or any other inquiries you may have. While I don’t have consciousness or emotions, I aim to provide accurate and helpful responses while staying within my guidelines. Let me know how I can help!

  • My NotebookLM a-ha moment


    I have been trying out Google’s NotebookLM over the last few days, and I thought of taking a shot at generating a podcast from the “What if” exhibit images that I had come across during my visit to the Birla Science & Technology Museum in Kolkata last month.

    I had created a PDF from the images I had taken from the exhibit and uploaded it to NotebookLM as a source. Below is the audio overview from this file – just give it a listen.

This is nothing short of astounding, given the way the two hosts present it – not just categorizing and covering each scientist’s contributions, but also explaining some of the key terms and providing context for their work. The last 4-5 minutes are even more interesting, as the discussion zooms out to the big picture and talks about nation building.

I also gave the interactive mode (currently in beta) a try, wherein you can interrupt the hosts with your comments and questions. I brought in the family connection to Upendranath Brahmachari (my father had incidentally created the Wikipedia page back in 2005), and also asked about potential conspiracy theories surrounding the lives of Homi Bhabha and Vikram Sarabhai. They seemed to do a good job of integrating my comments before bringing the discussion back on track, even sounding slightly vexed by the multiple interruptions.

Looks like 2025 is going to be another big year for AI-related developments given the plethora of announcements at the end of last year, and Google definitely has something interesting going on as always. I just hope that they don’t shut it down.

    In case you are curious, here’s the PDF that I used:

    And here’s the text summary that NotebookLM generated from the file:

    Here is a list of all the scientists mentioned in the sources, along with their contributions:

    Chemistry

• Thiruvengadam Rajendram Seshadri: Awarded a fictional Nobel Prize in Chemistry for his research on plant chemistry, primarily in oxygen heterocyclics, and his contributions to the isolation and structural elucidation of flavonoid pigments.
    • Prafulla Chandra Ray: Awarded a fictional Nobel Prize in Chemistry for his research on organic and inorganic nitrites and pioneering modern chemical research in India.
    • Asima Chatterjee: Awarded a fictional Nobel Prize in Chemistry for her contributions to medicinal chemistry, with a special reference to alkaloids, coumarins and terpenoids, alongside her advancements in mechanistic organic analytical chemistry and development of anti-epileptic and anti-malarial drugs.

    Physics

    • Homi Jehangir Bhabha: Awarded a fictional Nobel Prize in Physics for explaining the scattering of positrons by electrons with exchange, his work on Dirac’s theory of the positron, the passage of fast electrons and the theory of cosmic showers.
    • Meghnad Saha: Awarded a fictional Nobel Prize in Physics for formulating the Saha Equation, which allows for the calculation of the degree of ionization in a gas, a concept important in analyzing the spectra of stars and nebulae.
    • Satyendra Nath Bose: Awarded a fictional Nobel Prize in Physics for his work describing elementary quantum particles, leading to Bose-Einstein Statistics which fueled the quantum revolution.
    • Jagadish Chandra Bose: Awarded a fictional Nobel Prize in Physics for his quasi-optical experiments on the properties of very short radio waves, leading to improvements in the coherer and contributing to the development of solid-state physics.

    Physiology or Medicine

    • Upendranath Brahmachari: Awarded a fictional Nobel Prize in Physiology or Medicine for creating the drug Urea Stibamine, used to treat visceral leishmaniasis or black fever, a severe infection caused by the Leishmania parasite.
    • Dilip Mahalanabis: Awarded a fictional Nobel Prize in Physiology or Medicine for being one of the earliest practitioners of vital oral rehydration therapy, whose research into diarrhoeal diseases saved millions of lives, including vulnerable infants.

    Other Sciences

    • Birbal Sahni: Awarded a fictional Nobel Prize in Paleobotany for examining evolutionary trends and geographical distributions, suggesting a separate order, Taxales, within the conifers.
    • John Burdon Sanderson Haldane: Awarded a fictional Nobel Prize in Evolutionary Biology for his work in relating Darwinian evolutionary theory and Gregor Mendel’s concepts of heredity.
    • Prasanta Chandra Mahalanobis: Awarded a fictional Nobel Prize in Statistics for formulating the Mahalanobis distance – a statistical measure of comparison between two data sets – and for his work on innovative techniques for conducting large-scale sample surveys.
    • Salim Moizuddin Abdul Ali: Awarded a fictional Nobel Prize in Ornithology for his systematic bird surveys across India and authoring books popularizing ornithology.
    • Edavaleth Kakkat Janaki Ammal: Awarded a fictional Nobel Prize in Botany for her contributions to plant breeding, cytogenetics, phytogeography and her development of several hybrid crop species, including varieties of sweet sugarcane that allowed India to cultivate its own.
    • Anna Modayil Mani: Awarded a fictional Nobel Prize in Meteorology for creating India’s first ozonesonde, making it easier for scientists to monitor the ozone layer.
    • Vikram Ambalal Sarabhai Jain: Awarded a fictional Nobel Prize in Space Science for initiating space research and helping develop nuclear power in India.
    • Verghese Kurien: Awarded a fictional Nobel Prize in Dairy Science for transforming India’s dairy industry by streamlining effective management and distribution strategies, making India the largest milk-producing country in the world.
    • Mankombu Sambasivan Swaminathan: Awarded a fictional Nobel Prize in Agricultural Science for his contributions to the Green Revolution in India by introducing high-yielding varieties of wheat and rice, further developing high-yielding varieties of wheat and rice, and spearheading mass movements with farmers and scientists. His work helped save India and Pakistan from certain famine-like conditions in the 1960s.
    • Manali Kallat Vainu Bappu: Awarded a fictional Nobel Prize in Astronomy for jointly discovering the Wilson-Bappu effect with astronomer Olin Chaddock Wilson.
    • Vijay Pandurang Bhatkar: Awarded a fictional Nobel Prize in Computer Science for architecting India’s national initiative in supercomputing, where he spearheaded the development of India’s first supercomputer PARAM in 1990.
  • From Data to Decision: AI Assistance in the Agile Workplace


    I recently had the privilege of presenting online at the Business Analytics and Decision Sciences Conclave to a group of enthusiastic MBA students. The session, titled “From Data to Decision: AI Assistance in the Agile Workplace,” focused on how AI and analytics are revolutionizing the workplace and how students can prepare for these changes.

    Key Takeaways from the Session

    Data Literacy

    One of the core ideas we discussed was the importance of data literacy. In today’s data-rich world, it’s not enough to simply collect data; we must understand and interpret it effectively. I used the analogy of looking for lost keys under a streetlight to illustrate how we often focus on easily accessible data, even though the true insights might lie in harder-to-reach places. This highlights the need to measure what truly matters, rather than what is easy to quantify.

    Deep Analytics

    We also explored the concept of deep analytics. It’s crucial to go beyond surface-level data and understand the context and intricacies behind the numbers. For example, understanding the difference between correlation and causation can prevent misleading conclusions. I emphasized the importance of domain expertise in providing context to data and avoiding biases in AI-based decision making.
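The correlation-vs-causation point is easy to demonstrate with a few lines of synthetic data. In this sketch a hypothetical confounder (city size) drives both ice cream sales and drowning incidents, producing a strong correlation with no causal link between the two — all variables and numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# A confounder (city size) drives both series; neither series
# causes the other.
city_size = rng.normal(size=1000)
ice_cream_sales = 2.0 * city_size + rng.normal(scale=0.5, size=1000)
drownings = 1.5 * city_size + rng.normal(scale=0.5, size=1000)

# Pearson correlation between the two "unrelated" series
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation: {r:.2f}")  # strongly correlated despite no causal link
```

Controlling for the confounder (for example, by regressing both series on city size first) makes the apparent relationship vanish — which is exactly the check domain expertise prompts you to run.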

    Practical Examples

    To make these ideas more tangible, I shared practical examples from the pharmaceutical industry:

    • Follow-up Email Campaigns: We discussed why data literacy is important for new channel activations and how AI can help launch and optimize follow-up email campaigns by incentivizing the right behavior, monitoring customer satisfaction, and adjusting campaign content based on performance. The Rule of 80 – 80 – 40 was highlighted as a guideline to ensure campaign effectiveness.
    • Next Best Action (NBA) Solutions: I showcased how AI can determine the next best actions for the field force by analyzing customer preferences, transaction history, and available content. This approach helps in personalizing interactions and driving better outcomes.
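To make the NBA idea concrete, here is a deliberately simplified toy scorer — every field, weight, and number below is invented for illustration, not taken from any production system:

```python
# Hypothetical toy next-best-action scorer: rank candidate actions for
# a customer by weighting channel preference, contact fatigue, and
# content fit. All fields and weights are illustrative.
def score_action(action: dict, customer: dict) -> float:
    pref = customer["channel_preference"].get(action["channel"], 0.0)
    # Penalize customers who were contacted a lot recently
    fatigue = 1.0 / (1.0 + customer["contacts_last_30d"])
    fit = 1.0 if action["topic"] in customer["interests"] else 0.2
    return 0.5 * pref + 0.3 * fatigue + 0.2 * fit

def next_best_action(actions: list, customer: dict) -> dict:
    """Pick the highest-scoring candidate action for this customer."""
    return max(actions, key=lambda a: score_action(a, customer))
```

A real solution would learn the weights from response data and layer on constraints (channel caps, compliance rules), but the shape is the same: score each candidate action per customer, then pick the argmax.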

    Agility

    The session also covered the importance of agility in today’s fast-paced business environment. AI plays a crucial role in speeding up decision-making processes by providing actionable insights, enabling rapid hypothesis testing, and offering predictive analytics. Embracing agility allows businesses to adapt quickly to market changes and stay competitive.

Preparing for the Future

To conclude the session, I offered practical tips for students on how to prepare for the future workplace. I also recommended three impactful books for those interested in diving deeper into these topics:

    • “How to Lie with Statistics” by Darrell Huff
    • “Weapons of Math Destruction” by Cathy O’Neil
    • “Data Science for Business” by Foster Provost and Tom Fawcett

    The session was an enriching experience, and I’m excited to continue the conversation on how we can better leverage AI and analytics to drive operational resilience and innovation.

    Feel free to check out the attached presentation slides for a more detailed look at the session.

  • Dubai Diaries: Running LLMs & Stable Diffusion locally on a gaming laptop


I previously wrote about the second device that I got after coming to Dubai, but not much about the first one, a gaming laptop. So here’s a bit about the laptop, which also doubles as a local AI machine thanks to its Nvidia GPU (an RTX 3060).

Soon after getting it back in 2022, I tried running Stable Diffusion models, and it was quite an upgrade over my original attempt on a plain GPU-less Windows machine. Generation times came down to around 10 seconds, and have gotten even faster as the models and tools have been optimised over the last couple of years. There are quite a few projects available on GitHub if you want to give it a try – AUTOMATIC1111 and Easy Diffusion are among the more popular options. Nvidia also offers a TensorRT extension to further improve performance.
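For the scripting-inclined, the same GPU can be driven directly through Hugging Face’s diffusers library. Here’s a minimal sketch, assuming a CUDA build of PyTorch (the model ID and step count are illustrative). The small helper shows the latent resolution SD 1.x actually denoises — the VAE downsamples images 8x into 4 channels, which is a big part of why 512x512 generation fits on a 6 GB card:

```python
def latent_shape(width: int, height: int) -> tuple:
    """SD 1.x denoises a 4-channel latent, downsampled 8x by the VAE."""
    return (4, height // 8, width // 8)

def generate(prompt: str):
    # Imports are local so the helper above runs without a GPU setup;
    # torch and diffusers are third-party packages.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")  # fp16 keeps VRAM usage well within 6 GB
    return pipe(prompt, num_inference_steps=30).images[0]
```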

With that out of the way, I also discovered LM Studio, which allows you to run LLMs locally with a chat-like interface thrown in, and gives you access to a bunch of models like Meta’s Llama. The response times are of course not as fast as the freely available online options like ChatGPT, Claude, Gemini and the like, but you effectively get unlimited access to the model.

    Here’s an example from a conversation I had with Llama about the coffee meme from the Ace Attorney game series:

  • AI News Roundup: Oct-Nov24

    AI News Roundup: Oct-Nov24

    I have been sharing some of the interesting reads that I come across on this blog/newsletter for a while now. Given the pace at which AI-related news has been rolling out, I am consolidating the links into a series of monthly posts to reduce the load on your inbox/feed.

    Here are the interesting developments in the world of AI from the last month and a half or so:

    Agentic AI

    When you give Claude a mouse: LLMs are gradually getting more access to actually do things on your computer, effectively becoming agents. Ethan Mollick shares his experience with Claude’s new computer-use feature, and its current strengths and weaknesses:

    On the powerful side, Claude was able to handle a real-world example of a game in the wild, develop a long-term strategy, and execute on it. It was flexible in the face of most errors, and persistent. It did clever things like A/B testing. And most importantly, it just did the work, operating for nearly an hour without interruption.

    On the weak side, you can see the fragility of current agents. LLMs can end up chasing their own tail or being stubborn, and you could see both at work. Even more importantly, while the AI was quite robust to many forms of error, it just took one (getting pricing wrong) to send it down a path that made it waste considerable time.

    Claude gets bored: With great power comes great boredom, it seems. We are already witnessing some unintended behaviour from AI agents, with them getting distracted just like humans or taking unwanted actions:

    Impact on work

    Generative AI and the Nature of Work: A paper which looks at the impact of AI tools like GitHub Copilot on how people work:

    We find that having access to Copilot induces such individuals to shift task allocation towards their core work of coding activities and away from non-core project management activities. We identify two underlying mechanisms driving this shift – an increase in autonomous rather than collaborative work, and an increase in exploration activities rather than exploitation. The main effects are greater for individuals with relatively lower ability. Overall, our estimates point towards a large potential for AI to transform work processes and to potentially flatten organizational hierarchies in the knowledge economy.

    AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably: Is this a reflection of the AI capabilities or our tastes?

    We found that AI-generated poems were rated more favorably in qualities such as rhythm and beauty, and that this contributed to their mistaken identification as human-authored. Our findings suggest that participants employed shared yet flawed heuristics to differentiate AI from human poetry: the simplicity of AI-generated poems may be easier for non-experts to understand, leading them to prefer AI-generated poetry and misinterpret the complexity of human poems as incoherence generated by AI.

    On the other hand, mainstream Hollywood is realizing the potential cost savings that AI can offer – New Zemeckis film used AI to de-age Tom Hanks and Robin Wright – thanks to the tech from Metaphysic:

    Metaphysic developed the facial modification system by training custom machine-learning models on frames of Hanks’ and Wright’s previous films. This included a large dataset of facial movements, skin textures, and appearances under varied lighting conditions and camera angles. The resulting models can generate instant face transformations without the months of manual post-production work traditional CGI requires.

    Here’s the trailer:

    Manipulating AI and boring scammers

    SEO may soon be passé, with chatbots taking over from search engines. So, what’s next? Something possibly along the lines of Citate, which helps you analyse and optimise what is being served up by these chatbots.

    Can we manipulate AI as much as it manipulates us? – With every new development in the way humans manage and share knowledge come tools to manipulate said knowledge. Fred Vogelstein takes a deeper look at the emerging options, including Citate and Profound.

    AI granny to bore scammers:

    UK-based mobile operator Virgin Media O2 has created an AI-generated “scambaiter” tool to stall scammers. The AI tool, called Daisy, mimics the voice of an elderly woman and performs one simple task: talk to fraudsters and “waste as much of their time as possible.”

    Multiple AI models were used to create Daisy, which was trained with the help of YouTuber and scam baiter Jim Browning. The tool now transcribes the caller’s voice to text and generates appropriate responses using a large language model. All of this takes place without input from an operator. At times, Daisy keeps fraudsters on the line for up to 40 minutes, O2 says.

    I have already been doing a simpler version of this using Samsung’s AI-based call screening, with most scammers hanging up pretty quickly. I’m sure this will get enhanced in the future.

    It’s not just scammers misusing AI, unfortunately, and this bit of news about students creating deepfakes of classmates at a US school doesn’t help allay the fears of parents like me. Food for thought for regulators, and also for authorities who need to take prompt action when such incidents occur:

    Head of School Matt Micciche seemingly first learned of the problem in November 2023, when a student anonymously reported the explicit deepfakes through a school portal run by the state attorney general’s office called “Safe2Say Something.” But Micciche allegedly did nothing, allowing more students to be targeted for months until police were tipped off in mid-2024.

    Cops arrested the student accused of creating the harmful content in August. The student’s phone was seized as cops investigated the origins of the AI-generated images. But that arrest was not enough justice for parents who were shocked by the school’s failure to uphold mandatory reporting responsibilities following any suspicion of child abuse. They filed a court summons threatening to sue last week unless the school leaders responsible for the mishandled response resigned within 48 hours.