Category: AI

  • What the 2025 Mary Meeker AI Report Means for Work, Strategy, and GTM

It’s been a while since Mary Meeker’s Internet Trends report provided a pulse check on where technology is headed. Her latest report, now focused on AI, does more than describe trends — it lays out a new operating reality.

    This post distills 8 critical insights from the report — and what they mean for enterprise leaders, GTM strategists, product owners, and those shaping the future of work.

    📄 Full Report →


    1. This Time, the Machines Move Faster Than We Do

    The report opens with a bold observation:

    “AI usage is ramping faster than any prior computing platform — even faster than the internet.”

    This isn’t just fast. It’s compounding.

    For teams and organizations, that means:

    • Planning cycles must adapt to faster execution rhythms
    • Feedback loops need compression and real-time recalibration
    • Legacy workflows aren’t built for this pace

    If your GTM or delivery cadence still runs on quarterly inertia, it’s time to rethink.


    2. Time-to-Value Just Got Compressed. Again.

    The biggest unlock from GenAI? Time compression.

    From prompt → prototype

    From draft → delivery

    From insight → action

    This collapse in cycle time transforms:

    • Productivity metrics
    • Product development lifecycles
    • Org-wide alignment rhythms

    🚀 Output velocity is the new KPI.


    3. Your Next Teammate Might Not Be Human

    We’re entering the era of embedded AI agents — not just assistants.

    AI is no longer a tool on the side. It’s part of the team:

    • Summarizing meetings
    • Writing first drafts
    • Managing workflows

    That means:

    • Rethinking team design
    • Clarifying AI vs human task ownership
    • Measuring contribution beyond headcount

    AI is a teammate now. Time to onboard accordingly.


    4. It’s Not Risk Slowing AI — It’s Friction

    One of the most important insights in the report:

    Employees aren’t blocked by fear — they’re blocked by poor UX.

    Adoption stalls when AI:

    • Doesn’t fit into existing workflows
    • Requires tool-switching
    • Has unclear value props

    🛠️ The fix? Product thinking:

    • Reduce toggle tax
    • Integrate into natural habits
    • Onboard like a consumer-grade app

    5. AI Fluency Is the New Excel

    The most valuable skill in 2025 isn’t coding — it’s prompt fluency.

    AI Fluency = knowing how to ask, guide, and evaluate AI output:

    • What to prompt
    • What to ignore
    • How to refine

    Every function — from marketing to HR — needs this literacy.

    We’re in the age of human-in-the-loop as a capability, not a compliance checkbox.


    6. Follow the Money. It’s Flowing to AI

    The report outlines the capital story behind the hype:

    • Enterprise GenAI spend is ramping fast
    • Compute infrastructure is scaling explosively
    • VC and corporate funding is prioritizing AI-native bets

    For leaders, this isn’t a trend — it’s a reallocation cycle.

    Infra budgets, product bets, and partnerships must now align with where the ecosystem is heading — not where it’s been.


    7. Go Deep, Not Just Wide

    Horizontal AI gets you buzz.

    Vertical AI gets you impact.

    The report shows real traction in:

    • Healthcare
    • Legal
    • Education
    • Financial services

    Where AI is tuned to real-world roles and workflows, it sticks.

    If you’re shipping AI without domain context, you’re leaving retention on the table.


    8. Infrastructure Is Strategy. Again.

    The biggest shift in the back half of the report?

    AI is putting infrastructure back in the spotlight.

    From model training to agent orchestration to secure runtimes:

    • The AI stack is now a competitive moat
    • Data pipelines and prompt layers are shaping outcomes
    • Infra is no longer invisible — it’s strategic

    What cloud was to the last decade, AI-native infra may be to the next.


    Final Thoughts

    The Mary Meeker 2025 AI Trends report isn’t just a forecast — it’s a framing device. One that challenges every enterprise leader to rethink:

    • How fast we move
    • What value looks like
    • Who (or what) we collaborate with
    • Where advantage is shifting

    It’s not enough to adopt AI.

    We have to redesign around it.

    📄 You can access the full report here

  • Shadow AI, Friction Fatigue & the Flexibility Gap: 5 Lessons from the Ivanti Tech at Work 2025 Report

    The Ivanti Tech at Work 2025 report isn’t just a workplace tech survey — it’s a mirror to how modern organizations are struggling (and sometimes succeeding) to adapt to the realities of hybrid work, AI adoption, and employee expectations.

    Here are 5 insights that stood out — and why they matter for anyone building teams, tools, or trust in the modern workplace.

    🔗 Read the full report


    1. Shadow AI Is a Trust Problem, Not Just a Tech One

    Nearly 1 in 3 workers admit they use AI tools like ChatGPT in secret at work.

    Why the secrecy?

    According to the report:

    • 36% want a competitive edge
    • 30% fear job cuts or extra scrutiny
    • 30% say there’s no clear AI usage policy
    • 27% don’t want their abilities questioned

    This is more than a governance issue. It’s a cultural signal.
    Employees are turning to AI to be more productive — but doing so under the radar signals a trust deficit and policy vacuum.

    💡 What to do:
    Leaders need to replace silence with structure — with clear, enabling policies that promote responsible AI use, and an environment where value creation matters more than screen time.


    2. The Flexibility Paradox: High Demand, Low Supply

    83% of IT professionals and 73% of office workers value flexibility highly — but only ~25% say they actually have it.

    Even as companies trumpet hybrid work, asynchronous enablement, autonomy, and outcome-based work norms haven’t caught up. The result? Disengagement and frustration.

    💡 What to do:
    Revisit what flexibility really means. It’s not just about where people work — it’s how they work.
    That means:

    • Tools for async collaboration
    • Decision-making frameworks for remote teams
    • Leaders modeling flexible behaviors

    3. Presenteeism Is the New Fatigue

    The report highlights “digital presenteeism”: workers pretending to be active — jiggling mice, logging in early — to appear productive.

    • 48% say they dislike their job but stay
    • 37% admit to showing up without doing meaningful work

    These are signs of unclear expectations and poor workflow design — not disengagement alone.

    💡 What to do:
    Audit for friction, not just lag.
    Look at your workflows, KPIs, and culture. Are people forced to perform busyness instead of real value?


    4. The Digital Experience Gap Is Real

    While flexible work is valued, many workers find it harder to work outside the office. The report notes:

    • 44–49% say collaboration is easier in-office
    • 36–48% say manager access is better in-office
    • 16–26% say apps are easier to use from the office

    💡 What to do:
    Enable remote-first experience, not just policy:

    • Seamless access to tools and systems
    • Integrated collaboration platforms
    • AI-powered support and IT workflows

    5. Redesign for Trust, Not Just Tools

    The big takeaway?

    Workers don’t just need better AI — they need clarity on what’s allowed
    They don’t just need more flexibility — they need workflows that enable it
    They don’t just need faster tools — they need a culture that values trust over control


    Final Thoughts

    The Ivanti Tech at Work 2025 report is a diagnostic — revealing what happens when new tools are bolted onto outdated operating models.

    For leaders, the message is clear:

    We need to evolve not just our tech stack, but our trust stack.

    🔗 Read the full report

  • 🎮 Vibe Coding a Stock Market Game: Why Every GTM Leader Should Build Like This Once

    Earlier this month, I did something I hadn’t done in over 15 years:
    I rebuilt a stock market simulation game I had originally created during business school.

    The original was built on Ruby on Rails.
    This time, I went lean — prototyping with HTML, JS, and lightweight AI-assisted dev tools in what I’d call a vibe coding session.

    But this post isn’t about the code.

    It’s about what I learned — and why every founder, product owner, or GTM leader should prototype at least one thing themselves in this way.


    🧪 What Is Vibe Coding, Really?

    The term vibe coding was coined by Andrej Karpathy, but it was a recent post by Strangeloop Canon that captured its essence:

    “If AGI is the future, vibe coding is the present.”

    To me, vibe coding is building with momentum, not perfection.
    No heavyweight specs. No team syncs. Just one person, a rough idea, and tools that let you think through your fingertips.

    You’re not coding to launch. You’re coding to understand.
    And sometimes, that’s exactly what you need.


    🧠 What I Learned From Rebuilding QSE

    1. Building sharpens your strategy lens.
    When you rebuild something from scratch, every interaction becomes a test of friction vs flow. That mindset translates directly into GTM design, onboarding strategy, and product-market fit thinking.

2. AI is best when it feels smart.
My game features a basic rules-based AI opponent. Not sophisticated — but just enough to create pressure and tension. It reminded me that AI doesn’t need to be advanced; it needs to feel aligned with the user’s rhythm.

    3. Prototypes create unexpected clarity.
    Tiny design decisions (like how many clicks it takes to place a trade) turned into insights about attention spans, pacing, and simplicity — lessons I’ll carry into larger GTM and transformation conversations.


    🔁 Why This Resonated Beyond the Code

    Rebuilding QSE wasn’t a nostalgia trip. It was a reconnection with creative flow.
    It reminded me of how much clarity you gain when you stop whiteboarding and start building.

    We often separate “strategy” and “execution” as different domains.
    But I’ve found that prototyping collapses that gap. You see things faster. You think better. And sometimes, you spot the real issue — not in the brief, but in the build.

    If you’re leading a product, driving a GTM motion, or exploring AI integration, I genuinely recommend vibe coding — or at least, vibing with your builders more closely.


    🕹️ Curious to try the game I rebuilt?
    👉 Play QSE Reloaded

The code is available on GitHub.

  • From Productivity to Progress: What the New MIT-Stanford AI Study Really Tells Us About the Future of Work

    A new study from MIT and Stanford just rewrote the AI-in-the-workplace narrative.

    Published in Fortune this week, the research shows that generative AI tools — specifically chatbots — are not only boosting productivity by up to 14%, but they’re also raising earnings without reducing work hours.

    “Rather than displacing workers, AI adoption led to higher earnings, especially for lower-performing employees.”

    Let that sink in.


    🧠 AI as a Floor-Raiser, Not a Ceiling-Breaker

    The most surprising finding?
    AI’s greatest impact was seen not among the top performers, but among lower-skilled or newer workers.

    In customer service teams, the AI tools essentially became real-time coaches — suggesting responses, guiding tone, and summarizing queries. The result: a productivity uplift and quality improvement that evened out performance levels across the team.

    This is a quiet revolution in workforce design.

    In many traditional orgs, productivity initiatives often widen the gap between high and average performers. But with AI augmentation, we’re seeing the inverse — a democratization of capability.


    💼 What This Means for Enterprise Leaders

    This research confirms a pattern I’ve observed firsthand in consulting:
    The impact of AI is not just technical, it’s organizational.

    To translate AI gains into business value, leaders need to:

    ✅ 1. Shift from Efficiency to Enablement

    Don’t chase cost-cutting alone. Use AI to empower more team members to operate at higher skill levels.

    ✅ 2. Invest in Workflow Design

    Tool adoption isn’t enough. Embed AI into daily rituals — response writing, research, meeting prep — where the marginal gains accumulate.

    ✅ 3. Reframe KPIs

    Move beyond “time saved” metrics. Start tracking value added — better resolutions, improved CSAT, faster ramp-up for new hires.


    🔄 A Playbook for Augmented Teams

    From piloting GPT agents to reimagining onboarding flows, I’ve worked with startups and enterprise teams navigating this shift. The ones who succeed typically follow this arc:

    1. Pilot AI in a high-volume, low-risk function
    2. Co-create use cases with users (not for them)
    3. Build layered systems: AI support + human escalation
    4. Train managers to interpret, not just supervise, AI-led work
    5. Feed learnings back into process improvement loops

    🔚 Not AI vs Jobs. AI Plus Better Jobs.

    The real story here isn’t about productivity stats. It’s about potential unlocked.

    AI is no longer a futuristic experiment. It’s a present-day differentiator — especially for teams willing to rethink how work gets done.

    As leaders, we now face a simple choice:

    Will we augment the talent we have, or continue to chase the talent we can’t find?

    Your answer will shape the next 3 years of your business.


    🔗 Read the original article here:

    Fortune: AI chatbots boost earnings and hours, not job loss


    Want to go deeper? I’m working on a new AI augmentation playbook — DM me or sign up for updates.

    #AI #FutureOfWork #EnterpriseStrategy #GTM #DigitalTransformation #Chatbots #Productivity #ConsultingInsights

  • AI Can Predict Your Personality From Your Face—And It Might Affect Your Career

    Came across this interesting paper on using AI to assess the Big 5 personality traits and predict career outcomes. This could have implications not just for the job market, but also in other fields like education which I covered earlier.

    via https://marginalrevolution.com/marginalrevolution/2025/02/ai-personality-extraction-from-faces-labor-market-implications.html

    For more details, take your pick from the podcast:

    or the AI summary:

    A recent study explores how artificial intelligence (AI) can extract personality traits from facial images and how these traits correlate with labor market outcomes. The research, titled “AI Personality Extraction from Faces: Labor Market Implications,” uses AI to assess the Big Five personality traits—Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism—from facial images of 96,000 MBA graduates. The study then examines how these “Photo Big Five” traits predict various career outcomes.

    Key Findings

    • Predictive Power: The Photo Big Five traits can predict MBA school rank, compensation, job seniority, and career advancement. Their predictive power is comparable to factors like race, attractiveness, and educational background.
    • Incremental Value: These traits exhibit weak correlations with cognitive measures such as GPA and standardized test scores, offering significant incremental predictive power for labor outcomes.
    • Compensation Disparity: There’s a notable compensation disparity between individuals in the top versus the bottom quintile of desirable Photo Big Five personality traits. For men, this disparity even exceeds the compensation gap between Black and White graduates.
    • Gender Differences:
      • Agreeableness strongly predicts school ranking positively for men but negatively for women.
      • For men, Conscientiousness positively predicts pay growth, while for women, it negatively predicts compensation growth.
    • Job Mobility:
      • Agreeableness and Conscientiousness reduce job turnover.
      • Extraversion and Neuroticism increase job turnover.
    • Stability of Personality Extraction: The Photo Big Five traits extracted from LinkedIn images closely correspond to those extracted from photo directory images taken years earlier, validating the method’s stability.
    • Ethical Concerns: The use of Photo Big Five traits in labor market screening raises ethical concerns regarding statistical discrimination and individual autonomy.

    Methodology

    • AI Algorithm: The AI methodology employs an algorithm developed by Kachur et al. (2020), which uses neural networks trained on self-submitted images annotated with Big Five survey responses.
    • Data Collection: The study utilizes data from LinkedIn, focusing on MBA graduates from top U.S. programs between 2000 and 2023.
    • Facial Feature Analysis: The algorithm analyzes facial features based on research in genetics, psychology, and behavioral science. Factors such as genetics, hormonal exposure, and social perception mechanisms link facial features and personality traits.

    Implications

    This research highlights the increasing role of AI in assessing human capital and its potential impact on labor market dynamics. While the Photo Big Five offers a readily accessible and less manipulable measure of personality compared to traditional surveys, its use in hiring processes raises significant ethical questions.

    Key considerations include:

    • Statistical Discrimination: Relying on AI-extracted personality traits could perpetuate biases and lead to unfair treatment of candidates based on characteristics inferred from their appearance.
    • Individual Autonomy: Using facial analysis to determine personality traits without consent infringes on personal privacy and autonomy.

    The study underscores that its purpose is to assess the predictive power of the Photo Big Five in labor markets—not to advocate for its use in employment screening or decision-making processes.

    Conclusion

    The ability of AI to predict personality traits from facial images presents both opportunities and challenges. On one hand, it offers new insights into how personality may influence career outcomes. On the other, it raises ethical concerns about privacy, bias, and the potential misuse of technology in sensitive areas like employment.

    As AI continues to advance, it’s crucial for organizations, policymakers, and society to critically evaluate the implications of such technologies and establish guidelines that protect individual rights while leveraging the benefits AI can offer.

  • Preparing Your Children for the AGI Era: Education and Work in the Future

As the father of a soon-to-be teenager, I keep asking myself whether we are preparing our children for the future and equipping them to manage AI-induced disruption. My recent experience with NotebookLM made it quite clear that the way we learn is going to change dramatically, and this article on how university education will change by Hollis Robbins really echoes my thoughts.

There are no clear and obvious answers at the moment, other than to focus on the basics: critical thinking, problem solving, a learning mindset, and communication skills. I ran the article through NotebookLM to create a podcast, and it seems to agree. Do give it a listen and share your thoughts:


    The summary:

    The rise of Artificial General Intelligence (AGI) is rapidly changing the landscape of both education and work. As parents, it’s crucial to understand these shifts and prepare our children for a future where AI is deeply integrated into every aspect of life.

    What AGI Means for the Future of Education

    Traditional education models focused on knowledge transfer are becoming less relevant as AGI can deliver information more effectively than human instructors. The focus is shifting towards:

    Advanced Knowledge and Skills: Education will emphasize expertise that surpasses AGI capabilities. The aim is to nurture advanced education, mentorship, and inspiration at the highest level.

    Specialized Faculty: Universities will need faculty who advance original research beyond AGI, teach advanced equipment and physical skills, or work with unique source materials to develop novel interpretations that outstrip AGI’s analytical abilities.

    Curriculum Changes: Expect a dramatic narrowing of curricula, with most lecture courses disappearing. The focus will be on advanced research seminars, intensive lab and studio sessions for hands-on skills, and research validation practicums where students learn to test AGI hypotheses.

    Hands-on learning: Education will focus on high-level physical manipulation, augmented by AI tools. Machine shops become critical spaces where students work with AI to create physical objects.

    Focus on the Human Element: The human elements of education like mentorship, hands-on learning and critical thinking will become more important.

    What AGI Means for the Future of Work

    AGI is set to transform the workplace by:

    Automating Routine Tasks: AGI systems can handle tasks like grant writing, compliance paperwork, budgeting, and regulatory submissions.

AI-Augmented Roles: Professionals will use AI tools for generative design and other tasks while still engaging in physical creation and manipulation.

    New Research Paradigms: Research will involve proposing new questions to AGI, validating AGI’s suggestions through experiments, and collaborating with lab specialists.

    Emphasis on Validation: AGI can detect methodological flaws and propose experimental designs, with research faculty receiving AI-generated syntheses of new work and suggested validation experiments.

    How Parents Can Prepare Their Children

    1. Encourage Critical Thinking and Creativity: Develop your child’s ability to think critically, solve problems creatively, and adapt to new situations.
    2. Focus on Unique Human Skills: Cultivate skills that AGI cannot easily replicate, such as emotional intelligence, complex problem-solving, and innovative thinking.
    3. Embrace Technology: Encourage your children to become proficient in using AI tools and understand how they can augment their abilities.
    4. Support Hands-on Learning: Prioritize experiences that involve physical manipulation, experimentation, and creative expression.
    5. Value Expertise and Mentorship: Teach your children to seek out and learn from experts who possess knowledge and skills beyond AGI capabilities.
    6. Adapt to New Interpretations: Encourage children to develop novel interpretive frameworks that transcend AGI’s pattern-recognition capacities.

    Leveraging Current Tools: NotebookLM

    Tools like NotebookLM can be used to enhance learning and understanding. (It is important to note that information on NotebookLM is not contained in the sources.) NotebookLM helps in:

    Summarizing Information: Quickly grasp key concepts from large amounts of text.

    Organizing Notes: Structure and connect ideas in a coherent manner.

    Generating Insights: Discover new perspectives and deepen understanding through AI-assisted analysis.

    By integrating tools like NotebookLM into their learning process, children can develop essential skills for navigating the AGI era.

    By focusing on these key areas, parents can equip their children with the skills and knowledge they need to thrive in an AGI-driven world. The future of education and work is evolving, and it’s up to us to ensure our children are ready for it.

  • Exploring DeepSeek’s R1

The new model from DeepSeek has been making waves over the last week thanks to its competitive performance against OpenAI’s o1 while being open source. It also transparently shares its chain of thought.

I took it for a spin in LM Studio, which allows you to run the model locally, and created an overview podcast that touches upon the model’s capabilities and pros & cons (NotebookLM talking about DeepSeek!). Check it out below:

    Thanks to the great overview provided by Zvi.
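Beyond chatting in the UI, LM Studio can also serve the loaded model through an OpenAI-compatible local server (by default at http://localhost:1234/v1), which makes it easy to script against R1. Below is a minimal stdlib-only sketch; the model name "deepseek-r1" is a placeholder for whatever identifier LM Studio shows for your loaded model, and actually sending the request requires the local server to be running.

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions format.
# The default endpoint is assumed here; adjust if you changed the port.
URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt: str, model: str = "deepseek-r1") -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,  # the model identifier as shown in LM Studio
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the server running, the response (including R1's visible reasoning)
# comes back as standard JSON:
#   resp = urllib.request.urlopen(build_request("Introduce yourself"))
#   print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint mimics the OpenAI API, any OpenAI-compatible client library can be pointed at the local URL instead of hand-rolling requests like this.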

    In case you are curious on how the model introduces itself, here’s a chat excerpt including the chain of thought:

    Thought:

    Alright, the user asked about what I can do and who built me. Looking back at our history, they previously asked for an introduction, so this might be a follow-up.

    I need to clearly outline my capabilities without being too technical. They probably want a straightforward answer.

    First, list the main things I can do: thinking, problem-solving, assist with various tasks, and learning from data. Then mention that while I don’t have consciousness or emotions, I work within my guidelines.

    Next, explain who built me—DeepSeek, an AI research company. Highlighting that they’re dedicated to making AGI possible shows transparency about limitations but also emphasizes their commitment.

    Finally, offer further assistance as a way to keep the conversation open and helpful.

    Response:

    I’m DeepSeek-R1, an artificial intelligence assistant created by DeepSeek, a Chinese company dedicated to making Artificial General Intelligence (AGI) feasible. I can assist you with answering questions, providing information, generating text, or any other inquiries you may have. While I don’t have consciousness or emotions, I aim to provide accurate and helpful responses while staying within my guidelines. Let me know how I can help!

  • My NotebookLM a-ha moment

    I have been trying out Google’s NotebookLM over the last few days, and I thought of taking a shot at generating a podcast from the “What if” exhibit images that I had come across during my visit to the Birla Science & Technology Museum in Kolkata last month.

I created a PDF from the images I had taken at the exhibit and uploaded it to NotebookLM as a source. Below is the audio overview from this file – just give it a listen.

This is nothing short of astounding, given the way the two hosts present it – not just categorizing and covering each scientist’s contributions, but also explaining some of the key terms and providing context for their work. The last 4–5 minutes are even more interesting, as the hosts zoom out to the big picture and talk about nation building.

I also gave the interactive mode (currently in beta) a try, wherein you can interrupt the hosts with your comments and questions. I brought in the family connection to Upendranath Brahmachari (my father had incidentally created the Wikipedia page back in 2005), and also asked about potential conspiracy theories surrounding the lives of Homi Bhabha and Vikram Sarabhai. The hosts did a good job of integrating my comments before bringing the discussion back on track, even sounding slightly vexed by the multiple interruptions.

It looks like 2025 is going to be another big year for AI-related developments, given the plethora of announcements at the end of last year, and Google definitely has something interesting going on, as always. I just hope they don’t shut it down.

    In case you are curious, here’s the PDF that I used:

    And here’s the text summary that NotebookLM generated from the file:

    Here is a list of all the scientists mentioned in the sources, along with their contributions:

    Chemistry

• Thiruvengadam Rajendram Seshadri: Awarded a fictional Nobel Prize in Chemistry for his research on plant chemistry, primarily in oxygen heterocyclics, and his contributions to the isolation and structural elucidation of flavonoid pigments.
    • Prafulla Chandra Ray: Awarded a fictional Nobel Prize in Chemistry for his research on organic and inorganic nitrites and pioneering modern chemical research in India.
    • Asima Chatterjee: Awarded a fictional Nobel Prize in Chemistry for her contributions to medicinal chemistry, with a special reference to alkaloids, coumarins and terpenoids, alongside her advancements in mechanistic organic analytical chemistry and development of anti-epileptic and anti-malarial drugs.

    Physics

    • Homi Jehangir Bhabha: Awarded a fictional Nobel Prize in Physics for explaining the scattering of positrons by electrons with exchange, his work on Dirac’s theory of the positron, the passage of fast electrons and the theory of cosmic showers.
    • Meghnad Saha: Awarded a fictional Nobel Prize in Physics for formulating the Saha Equation, which allows for the calculation of the degree of ionization in a gas, a concept important in analyzing the spectra of stars and nebulae.
    • Satyendra Nath Bose: Awarded a fictional Nobel Prize in Physics for his work describing elementary quantum particles, leading to Bose-Einstein Statistics which fueled the quantum revolution.
    • Jagadish Chandra Bose: Awarded a fictional Nobel Prize in Physics for his quasi-optical experiments on the properties of very short radio waves, leading to improvements in the coherer and contributing to the development of solid-state physics.

    Physiology or Medicine

    • Upendranath Brahmachari: Awarded a fictional Nobel Prize in Physiology or Medicine for creating the drug Urea Stibamine, used to treat visceral leishmaniasis or black fever, a severe infection caused by the Leishmania parasite.
    • Dilip Mahalanabis: Awarded a fictional Nobel Prize in Physiology or Medicine for being one of the earliest practitioners of vital oral rehydration therapy, whose research into diarrhoeal diseases saved millions of lives, including vulnerable infants.

    Other Sciences

    • Birbal Sahni: Awarded a fictional Nobel Prize in Paleobotany for examining evolutionary trends and geographical distributions, suggesting a separate order, Taxales, within the conifers.
    • John Burdon Sanderson Haldane: Awarded a fictional Nobel Prize in Evolutionary Biology for his work in relating Darwinian evolutionary theory and Gregor Mendel’s concepts of heredity.
    • Prasanta Chandra Mahalanobis: Awarded a fictional Nobel Prize in Statistics for formulating the Mahalanobis distance – a statistical measure of comparison between two data sets – and for his work on innovative techniques for conducting large-scale sample surveys.
    • Salim Moizuddin Abdul Ali: Awarded a fictional Nobel Prize in Ornithology for his systematic bird surveys across India and authoring books popularizing ornithology.
    • Edavaleth Kakkat Janaki Ammal: Awarded a fictional Nobel Prize in Botany for her contributions to plant breeding, cytogenetics, phytogeography and her development of several hybrid crop species, including varieties of sweet sugarcane that allowed India to cultivate its own.
    • Anna Modayil Mani: Awarded a fictional Nobel Prize in Meteorology for creating India’s first ozonesonde, making it easier for scientists to monitor the ozone layer.
    • Vikram Ambalal Sarabhai Jain: Awarded a fictional Nobel Prize in Space Science for initiating space research and helping develop nuclear power in India.
    • Verghese Kurien: Awarded a fictional Nobel Prize in Dairy Science for transforming India’s dairy industry by streamlining effective management and distribution strategies, making India the largest milk-producing country in the world.
    • Mankombu Sambasivan Swaminathan: Awarded a fictional Nobel Prize in Agricultural Science for his contributions to the Green Revolution in India by introducing and further developing high-yielding varieties of wheat and rice, and spearheading mass movements with farmers and scientists. His work helped save India and Pakistan from certain famine-like conditions in the 1960s.
    • Manali Kallat Vainu Bappu: Awarded a fictional Nobel Prize in Astronomy for jointly discovering the Wilson-Bappu effect with astronomer Olin Chaddock Wilson.
    • Vijay Pandurang Bhatkar: Awarded a fictional Nobel Prize in Computer Science for architecting India’s national initiative in supercomputing, where he spearheaded the development of India’s first supercomputer PARAM in 1990.
  • Dubai Diaries: Running LLMs & Stable Diffusion locally on a gaming laptop

    Dubai Diaries: Running LLMs & Stable Diffusion locally on a gaming laptop

    I previously wrote about the second device I got after coming to Dubai, but not much about the first one, which was a gaming laptop. So here's a bit about the laptop, which also doubles as a local AI driver thanks to its Nvidia GPU (an RTX 3060).

    Soon after getting it back in 2022, I tried running the Stable Diffusion models, and it was quite an upgrade over my original attempt on a plain GPU-less Windows machine. Generation times came down to 10s or so, and have gotten even faster as the models and tools have been optimised over the last couple of years. There are quite a few projects available on GitHub if you want to give it a try – AUTOMATIC1111 and easydiffusion are among the more popular options. Nvidia also has a TensorRT extension to further improve performance.

    With that out of the way, I also discovered LM Studio, which lets you run LLMs locally with a chat-like interface thrown in, and you can access a bunch of models like Meta's Llama. The response times are of course not as fast as the freely available online options like ChatGPT, Claude, Gemini and the like, but you effectively get unlimited access to the model.
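    Beyond the chat UI, LM Studio can also expose the loaded model over an OpenAI-compatible local server, so you can script against it. Here's a minimal sketch, assuming the server is enabled at its default address (`http://localhost:1234/v1`) – the model name is a placeholder, and you'd swap in whatever LM Studio's Local Server tab shows:

    ```python
    import json
    import urllib.request
    import urllib.error

    # Assumed default address of LM Studio's local server; check the
    # "Local Server" tab in LM Studio for the actual host/port.
    BASE_URL = "http://localhost:1234/v1"

    def build_chat_request(prompt, model="local-model", temperature=0.7):
        """Build an OpenAI-style chat-completion payload."""
        return {
            "model": model,  # placeholder; use the model name LM Studio reports
            "temperature": temperature,
            "messages": [{"role": "user", "content": prompt}],
        }

    def ask_local_llm(prompt):
        """POST the prompt to the local server; return None if it isn't running."""
        payload = build_chat_request(prompt)
        req = urllib.request.Request(
            BASE_URL + "/chat/completions",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req, timeout=60) as resp:
                body = json.load(resp)
            # OpenAI-compatible responses put the reply text here:
            return body["choices"][0]["message"]["content"]
        except (urllib.error.URLError, OSError):
            return None  # server not running / unreachable

    if __name__ == "__main__":
        reply = ask_local_llm("Explain the coffee meme from Ace Attorney.")
        print(reply if reply else "LM Studio server not reachable")
    ```

    Since the endpoint mirrors the OpenAI chat-completions shape, the same script works against other local servers (Ollama, llama.cpp's server) by changing `BASE_URL`.
    
    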

    Here’s an example from a conversation I had with Llama regarding the coffee meme from the Ace Attorney game series:

  • Human Learning about AI

    Human Learning about AI

    The paper abstract:

    We study how humans form expectations about the performance of artificial intelligence (AI) and consequences for AI adoption. Our main hypothesis is that people project human-relevant task features onto AI. People then over-infer from AI failures on human-easy tasks, and from AI successes on human-difficult tasks. Lab experiments provide strong evidence for projection of human difficulty onto AI, predictably distorting subjects’ expectations. Resulting adoption can be sub-optimal, as failing human-easy tasks need not imply poor overall performance in the case of AI. A field experiment with an AI giving parenting advice shows evidence for projection of human textual similarity. Users strongly infer from answers that are equally uninformative but less humanly-similar to expected answers, significantly reducing trust and engagement. Results suggest AI “anthropomorphism” can backfire by increasing projection and de-aligning human expectations and AI performance.

    raphaelraux – Research

    And a simplified explanation by Copilot (seemed apt to use in this case given the topic):

    The paper explores how people form expectations about AI performance and how this impacts their willingness to use AI. The researchers’ main idea is that people tend to think of AI as if it should perform tasks in the same way humans do. This leads to two key behaviors:

    1. Overestimating AI failures: When AI makes mistakes on tasks that are easy for humans, people think the AI is not very capable overall.
    2. Overestimating AI successes: When AI does well on tasks that are hard for humans, people think the AI is more capable than it actually is.

    Experiments show that these assumptions distort people’s expectations of AI. For example, if an AI struggles with simple tasks, people might avoid using it, even if it’s actually quite effective at other things. On the flip side, if it excels at complex tasks, people might over-trust it.

    The researchers conducted a real-world experiment with an AI that provides parenting advice. They found that users were less trusting of the AI if its answers didn’t resemble what a human would say, even if the information was the same. This shows that making AI seem human-like (anthropomorphism) can sometimes backfire, leading to misaligned expectations between what AI can do and what people expect from it.

    In essence, the study highlights that our human biases can lead us to misunderstand AI capabilities, which can affect how we adopt and use AI technologies.

    From: https://sites.google.com/view/raphaelraux/research?authuser=0

    via https://marginalrevolution.com/marginalrevolution/2024/11/how-badly-do-humans-misjudge-ais.html

    Full paper here – https://www.dropbox.com/scl/fi/pvo3ozkqfrmlwo3ndscdz/HLA_latest.pdf?rlkey=mmz8f71xm0a2t6nvixl7aih23&e=1&dl=0