Tag: Technology

  • A Brief History of Artificial Intelligence: From Turing to Transformers

    This is a crosspost of my article for The Print

    Artificial Intelligence did not begin with code—it began with a question. Could machines think? And if so, how would we even know?

    In 1950, Alan Turing proposed that if a machine could carry on a conversation indistinguishable from a human, it could be called intelligent. This became the Turing Test, and it marked the philosophical beginning of AI.

    The technical beginning followed six years later, at the Dartmouth Workshop of 1956. Organized by John McCarthy, Marvin Minsky, Claude Shannon and others, it launched AI as a formal discipline. The claim was breathtaking: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” For a while, that dream held.

    The 1960s and 70s saw AI become a fixture of science fiction. Stanley Kubrick’s 2001: A Space Odyssey imagined HAL 9000, a machine that could speak, reason, and feel—until conflicting objectives caused it to turn rogue. HAL’s breakdown wasn’t madness—it was logic stretched to a breaking point. And that remains one of AI’s deepest warnings: machines may fail not because they malfunction, but because their goals are misaligned with ours.

    From the 1960s to the 1980s, Symbolic AI dominated the field. Intelligence was programmed through logic and rules, not learned from data. Expert systems like MYCIN and DENDRAL mimicked human specialists and briefly dazzled funders, but they were brittle—struggling with ambiguity and real-world complexity. Each new scenario demanded new rules, revealing the limits of hand-coded intelligence.

    The initial optimism faded. Early successes didn’t scale, and by the 1970s and again in the late 1980s, AI faced its winters—eras of disillusionment and vanishing support. The technology wasn’t ready. AI, once hailed as revolutionary, became a cautionary tale.

    Meanwhile, the world of chess provided a battleground for AI’s ambitions. In 1968, chess master David Levy bet computer scientist John McCarthy that no machine could beat him in a match within ten years. Levy was right—but only just. By 1997, IBM’s Deep Blue defeated Garry Kasparov, the reigning world champion. This wasn’t intelligence in the human sense. Deep Blue didn’t think; it calculated—200 million positions per second, guided by rules and brute force.

    If Deep Blue marked a brute-force triumph, the next revolution came from inspiration closer to biology. Our brains are made of neurons and synapses, constantly rewiring based on experience. In 1943, McCulloch and Pitts proposed the first mathematical model of a neural network, mimicking how neurons fire and connect. Decades later, with more data and computational power, this idea would explode into what we now call deep learning.

    A key moment came in 2012. Researchers at Google Brain fed a deep neural network 10 million YouTube thumbnails—without labels. Astonishingly, one neuron began to specialize in detecting cat faces. The machine wasn’t told what a cat was. It discovered “cat-ness” on its own. This was the cat moment—the first clear sign that neural networks could extract meaning from raw data. From then on, deep learning would take off.

    That same year, another milestone arrived. AlexNet, a deep convolutional neural network, entered the ImageNet Challenge, a global competition for visual object recognition. It halved the previous error rate, using an 8-layer network trained on GPUs. This marked the beginning of AI’s rise in vision—powering facial recognition, self-driving cars, and medical diagnostics.

    In board games too, AI moved from mimicry to mastery. AlphaGo’s match against world Go champion Lee Sedol in 2016 stunned experts. Game 2, Move 37—an unconventional, creative move—changed the game’s theory forever. AlphaGo didn’t just compute; it improvised. In 2017, AlphaZero went further, mastering chess, Go, and shogi without human examples—just the rules and millions of self-play games. Grandmasters called its style “alien” and “beautiful.”

    In 2017, the landmark paper “Attention Is All You Need” introduced the Transformer architecture, a breakthrough that changed the course of AI. Unlike earlier models, Transformers could handle vast contexts and relationships between words, enabling a deeper understanding of language patterns. This paved the way for large language models (LLMs) like GPT and ChatGPT, trained on billions of words from books, websites, and online conversations.

    These models don’t understand language the way we do. They predict the next word based on probabilities. And yet, their output often sounds thoughtful, even profound. In 2025, one such model helped save a pregnant woman’s life by identifying a symptom of preeclampsia from a casual health question. This was no longer science fiction. AI was here helping, guiding, even warning.

    This is where the story darkens. Neural networks have millions—even billions—of internal parameters. We know how they are trained, but not always why they produce a particular result. This is the black box problem: powerful models we can’t fully interpret.

    Worse, these models inherit biases from the data they are trained on. If trained on internet text that contains racial, gender, or cultural prejudices, the model may echo them—sometimes subtly, sometimes dangerously. And because their reasoning is opaque, these biases can be hard to detect and even harder to fix.

    AI systems are also confident liars. They “hallucinate” facts, produce fake citations, or reinforce misinformation—often with grammatical precision and emotional persuasion. They are trained to be convincing, not correct.

    As we hand over more decisions to machines—medical diagnoses, hiring recommendations, bail assessments, autonomous driving—we face hard questions: Who is responsible when an AI system fails? Should a machine ever make a life-or-death decision? How do we align machine goals with human values?

    The fictional HAL 9000 chose its mission over its crew, not out of malice, but from a conflict of objectives. Today’s systems don’t “choose” at all, but they still act, and their actions have consequences. Ironically, the most hopeful vision may lie in chess again. In freestyle tournaments, the best performers weren’t machines or grandmasters—but human-AI teams. Garry Kasparov put it best: “A weak human + machine + good process beats a strong human or strong machine alone.” AI doesn’t need to replace us. It can enhance us—if we build it thoughtfully, interpret it critically, and embed it in processes we trust.

  • What the 2025 Mary Meeker AI Report Means for Work, Strategy, and GTM

    It’s been a while since Mary Meeker’s Internet Trends reports provided a pulse check on where technology is headed. Her first report in years, now focused on AI, does more than describe trends — it lays out a new operating reality.

    This post distills 8 critical insights from the report — and what they mean for enterprise leaders, GTM strategists, product owners, and those shaping the future of work.

    📄 Full Report →


    1. This Time, the Machines Move Faster Than We Do

    The report opens with a bold observation:

    “AI usage is ramping faster than any prior computing platform — even faster than the internet.”

    This isn’t just fast. It’s compounding.

    For teams and organizations, that means:

    • Planning cycles must adapt to faster execution rhythms
    • Feedback loops need compression and real-time recalibration
    • Legacy workflows aren’t built for this pace

    If your GTM or delivery cadence still runs on quarterly inertia, it’s time to rethink.


    2. Time-to-Value Just Got Compressed. Again.

    The biggest unlock from GenAI? Time compression.

    From prompt → prototype

    From draft → delivery

    From insight → action

    This collapse in cycle time transforms:

    • Productivity metrics
    • Product development lifecycles
    • Org-wide alignment rhythms

    🚀 Output velocity is the new KPI.


    3. Your Next Teammate Might Not Be Human

    We’re entering the era of embedded AI agents — not just assistants.

    AI is no longer a tool on the side. It’s part of the team:

    • Summarizing meetings
    • Writing first drafts
    • Managing workflows

    That means:

    • Rethinking team design
    • Clarifying AI vs human task ownership
    • Measuring contribution beyond headcount

    AI is a teammate now. Time to onboard accordingly.


    4. It’s Not Risk Slowing AI — It’s Friction

    One of the most important insights in the report:

    Employees aren’t blocked by fear — they’re blocked by poor UX.

    Adoption stalls when AI:

    • Doesn’t fit into existing workflows
    • Requires tool-switching
    • Has unclear value props

    🛠️ The fix? Product thinking:

    • Reduce toggle tax
    • Integrate into natural habits
    • Onboard like a consumer-grade app

    5. AI Fluency Is the New Excel

    The most valuable skill in 2025 isn’t coding — it’s prompt fluency.

    AI Fluency = knowing how to ask, guide, and evaluate AI output:

    • What to prompt
    • What to ignore
    • How to refine

    Every function — from marketing to HR — needs this literacy.

    We’re in the age of human-in-the-loop as a capability, not a compliance checkbox.


    6. Follow the Money. It’s Flowing to AI

    The report outlines the capital story behind the hype:

    • Enterprise GenAI spend is ramping fast
    • Compute infrastructure is scaling explosively
    • VC and corporate funding is prioritizing AI-native bets

    For leaders, this isn’t a trend — it’s a reallocation cycle.

    Infra budgets, product bets, and partnerships must now align with where the ecosystem is heading — not where it’s been.


    7. Go Deep, Not Just Wide

    Horizontal AI gets you buzz.

    Vertical AI gets you impact.

    The report shows real traction in:

    • Healthcare
    • Legal
    • Education
    • Financial services

    Where AI is tuned to real-world roles and workflows, it sticks.

    If you’re shipping AI without domain context, you’re leaving retention on the table.


    8. Infrastructure Is Strategy. Again.

    The biggest shift in the back half of the report?

    AI is putting infrastructure back in the spotlight.

    From model training to agent orchestration to secure runtimes:

    • The AI stack is now a competitive moat
    • Data pipelines and prompt layers are shaping outcomes
    • Infra is no longer invisible — it’s strategic

    What cloud was to the last decade, AI-native infra may be to the next.


    Final Thoughts

    The Mary Meeker 2025 AI Trends report isn’t just a forecast — it’s a framing device. One that challenges every enterprise leader to rethink:

    • How fast we move
    • What value looks like
    • Who (or what) we collaborate with
    • Where advantage is shifting

    It’s not enough to adopt AI.

    We have to redesign around it.

    📄 You can access the full report here

  • Shadow AI, Friction Fatigue & the Flexibility Gap: 5 Lessons from the Ivanti Tech at Work 2025 Report

    The Ivanti Tech at Work 2025 report isn’t just a workplace tech survey — it’s a mirror to how modern organizations are struggling (and sometimes succeeding) to adapt to the realities of hybrid work, AI adoption, and employee expectations.

    Here are 5 insights that stood out — and why they matter for anyone building teams, tools, or trust in the modern workplace.

    🔗 Read the full report


    1. Shadow AI Is a Trust Problem, Not Just a Tech One

    Nearly 1 in 3 workers admit they use AI tools like ChatGPT in secret at work.

    Why the secrecy?

    According to the report:

    • 36% want a competitive edge
    • 30% fear job cuts or extra scrutiny
    • 30% say there’s no clear AI usage policy
    • 27% don’t want their abilities questioned

    This is more than a governance issue. It’s a cultural signal.
    Employees are turning to AI to be more productive — but doing so under the radar signals a trust deficit and policy vacuum.

    💡 What to do:
    Leaders need to replace silence with structure — with clear, enabling policies that promote responsible AI use, and an environment where value creation matters more than screen time.


    2. The Flexibility Paradox: High Demand, Low Supply

    83% of IT professionals and 73% of office workers value flexibility highly — but only ~25% say they actually have it.

    Even as companies trumpet hybrid work, asynchronous enablement, autonomy, and outcome-based work norms haven’t caught up. The result? Disengagement and frustration.

    💡 What to do:
    Revisit what flexibility really means. It’s not just about where people work — it’s how they work.
    That means:

    • Tools for async collaboration
    • Decision-making frameworks for remote teams
    • Leaders modeling flexible behaviors

    3. Presenteeism Is the New Fatigue

    The report highlights “digital presenteeism”: workers pretending to be active — jiggling mice, logging in early — to appear productive.

    • 48% say they dislike their job but stay
    • 37% admit to showing up without doing meaningful work

    These are signs of unclear expectations and poor workflow design — not disengagement alone.

    💡 What to do:
    Audit for friction, not just lag.
    Look at your workflows, KPIs, and culture. Are people forced to perform busyness instead of real value?


    4. The Digital Experience Gap Is Real

    While flexible work is valued, many workers find it harder to work outside the office. The report notes:

    • 44–49% say collaboration is easier in-office
    • 36–48% say manager access is better in-office
    • 16–26% say apps are easier to use from the office

    💡 What to do:
    Enable remote-first experience, not just policy:

    • Seamless access to tools and systems
    • Integrated collaboration platforms
    • AI-powered support and IT workflows

    5. Redesign for Trust, Not Just Tools

    The big takeaway?

    Workers don’t just need better AI — they need clarity on what’s allowed
    They don’t just need more flexibility — they need workflows that enable it
    They don’t just need faster tools — they need a culture that values trust over control


    Final Thoughts

    The Ivanti Tech at Work 2025 report is a diagnostic — revealing what happens when new tools are bolted onto outdated operating models.

    For leaders, the message is clear:

    We need to evolve not just our tech stack, but our trust stack.

    🔗 Read the full report

  • From Productivity to Progress: What the New MIT-Stanford AI Study Really Tells Us About the Future of Work

    A new study from MIT and Stanford just rewrote the AI-in-the-workplace narrative.

    Published in Fortune this week, the research shows that generative AI tools — specifically chatbots — are not only boosting productivity by up to 14%, but they’re also raising earnings without reducing work hours.

    “Rather than displacing workers, AI adoption led to higher earnings, especially for lower-performing employees.”

    Let that sink in.


    🧠 AI as a Floor-Raiser, Not a Ceiling-Breaker

    The most surprising finding?
    AI’s greatest impact was seen not among the top performers, but among lower-skilled or newer workers.

    In customer service teams, the AI tools essentially became real-time coaches — suggesting responses, guiding tone, and summarizing queries. The result: a productivity uplift and quality improvement that evened out performance levels across the team.

    This is a quiet revolution in workforce design.

    In many traditional orgs, productivity initiatives often widen the gap between high and average performers. But with AI augmentation, we’re seeing the inverse — a democratization of capability.


    💼 What This Means for Enterprise Leaders

    This research confirms a pattern I’ve observed firsthand in consulting:
    The impact of AI is not just technical, it’s organizational.

    To translate AI gains into business value, leaders need to:

    ✅ 1. Shift from Efficiency to Enablement

    Don’t chase cost-cutting alone. Use AI to empower more team members to operate at higher skill levels.

    ✅ 2. Invest in Workflow Design

    Tool adoption isn’t enough. Embed AI into daily rituals — response writing, research, meeting prep — where the marginal gains accumulate.

    ✅ 3. Reframe KPIs

    Move beyond “time saved” metrics. Start tracking value added — better resolutions, improved CSAT, faster ramp-up for new hires.


    🔄 A Playbook for Augmented Teams

    From piloting GPT agents to reimagining onboarding flows, I’ve worked with startups and enterprise teams navigating this shift. The ones who succeed typically follow this arc:

    1. Pilot AI in a high-volume, low-risk function
    2. Co-create use cases with users (not for them)
    3. Build layered systems: AI support + human escalation
    4. Train managers to interpret, not just supervise, AI-led work
    5. Feed learnings back into process improvement loops

    🔚 Not AI vs Jobs. AI Plus Better Jobs.

    The real story here isn’t about productivity stats. It’s about potential unlocked.

    AI is no longer a futuristic experiment. It’s a present-day differentiator — especially for teams willing to rethink how work gets done.

    As leaders, we now face a simple choice:

    Will we augment the talent we have, or continue to chase the talent we can’t find?

    Your answer will shape the next 3 years of your business.


    🔗 Read the original article here:

    Fortune: AI chatbots boost earnings and hours, not job loss


    Want to go deeper? I’m working on a new AI augmentation playbook — DM me or sign up for updates.

    #AI #FutureOfWork #EnterpriseStrategy #GTM #DigitalTransformation #Chatbots #Productivity #ConsultingInsights

  • Exploring DeepSeek’s R1

    The new model from DeepSeek has been making waves over the last week thanks to its competitive performance against OpenAI’s o1 while being open source. It also transparently shares its chain of thought.

    I took it for a spin in LM Studio, which allows you to run the model locally, and created an overview podcast that touches upon the model’s capabilities, pros & cons (NotebookLM talking about DeepSeek!). Check it out below:

    Thanks to the great overview provided by Zvi.

    In case you are curious about how the model introduces itself, here’s a chat excerpt including the chain of thought:

    Thought:

    Alright, the user asked about what I can do and who built me. Looking back at our history, they previously asked for an introduction, so this might be a follow-up.

    I need to clearly outline my capabilities without being too technical. They probably want a straightforward answer.

    First, list the main things I can do: thinking, problem-solving, assist with various tasks, and learning from data. Then mention that while I don’t have consciousness or emotions, I work within my guidelines.

    Next, explain who built me—DeepSeek, an AI research company. Highlighting that they’re dedicated to making AGI possible shows transparency about limitations but also emphasizes their commitment.

    Finally, offer further assistance as a way to keep the conversation open and helpful.

    Response:

    I’m DeepSeek-R1, an artificial intelligence assistant created by DeepSeek, a Chinese company dedicated to making Artificial General Intelligence (AGI) feasible. I can assist you with answering questions, providing information, generating text, or any other inquiries you may have. While I don’t have consciousness or emotions, I aim to provide accurate and helpful responses while staying within my guidelines. Let me know how I can help!

  • The S-Pen update on the S25 Ultra – It’s the magnets

    Looks like the S-Pen is losing the air gestures and Bluetooth capabilities that allow it to act as a remote camera shutter on the latest Samsung Galaxy S25 Ultra. I suspect this is due to the magnets required for the new Qi2 wireless charging feature that’s being added.

    The earlier editions of the Ultra series used to have stylus detection issues with magnet-based accessories (as this Reddit post notes), such as cases that add MagSafe-like capabilities. In fact, I encountered this issue with my S23 when I tried a MagSafe-equipped case: the S-Pen stopped working properly around the middle of the screen, where the magnet sits on the back.

    Not much to be done I suppose as the new model trades off the advanced stylus capabilities in favour of faster charging.

  • The Moon in 2024 through the S23 & more

    It’s been a while since I posted about my photography activities and phone reviews, and what better time than the new year to look back at some of the memories created over the last year?

    Here are a couple of moonshots taken last year with my current phone – the Samsung Galaxy S23 Ultra that I got back in 2023. I’ve put the 30x zoom versions first, followed by the regular wide-angle photo to give a sense of the zoom. All were taken during our 2024 summer trip to Kashmir (first set in Pahalgam, second in Srinagar on the Dal Lake).

    While the zoomed-in images may not be the sharpest, the S23 Ultra has finally made me stop missing my DSLR with its zoom lenses. There may have been a lot of controversy over the artificial enhancement in the moon photos taken with Samsung phones, but I have been really pleased with the performance of the 10x lens, which enables shots that were unthinkable on mobile phones.

    The moon is of course not the only object that you can shoot, so here’s a few more including the Sun, a rainbow and more travel photos.

    Do follow me on Instagram for more.

  • My NotebookLM a-ha moment

    I have been trying out Google’s NotebookLM over the last few days, and I thought of taking a shot at generating a podcast from the “What if” exhibit images that I had come across during my visit to the Birla Science & Technology Museum in Kolkata last month.

    I had created a PDF from the images I had taken from the exhibit and uploaded it to NotebookLM as a source. Below is the audio overview from this file – just give it a listen.

    This is nothing short of astounding given the way the two hosts present it, not just categorizing and covering each scientist’s contributions, but also explaining some of the key terms and providing context for their work. The last 4-5 minutes are even more interesting as the discussion looks at the big picture and talks about nation building.

    I also gave the interactive mode (currently in beta) a try, wherein you can interrupt the hosts with your comments and questions. I brought in the family connection to Upendranath Brahmachari (my father had incidentally created the Wikipedia page back in 2005), and also asked about potential conspiracy theories surrounding the lives of Homi Bhabha and Vikram Sarabhai. They seemed to do a good job of integrating my comments before bringing the discussion back on track, even sounding slightly vexed by the multiple interruptions.

    Looks like 2025 is going to be another big year for AI-related developments given the plethora of announcements at the end of last year, and Google definitely has something interesting going on as always. Just hope that they don’t shut it down.

    In case you are curious, here’s the PDF that I used:

    And here’s the text summary that NotebookLM generated from the file:

    Here is a list of all the scientists mentioned in the sources, along with their contributions:

    Chemistry

    • Thiruvengadam Rajendram Seshadri: Awarded a fictional Nobel Prize in Chemistry for his research on plant chemistry, primarily in oxygen heterocyclics, and his contributions to the isolation and structural elucidation of flavonoid pigments.
    • Prafulla Chandra Ray: Awarded a fictional Nobel Prize in Chemistry for his research on organic and inorganic nitrites and pioneering modern chemical research in India.
    • Asima Chatterjee: Awarded a fictional Nobel Prize in Chemistry for her contributions to medicinal chemistry, with a special reference to alkaloids, coumarins and terpenoids, alongside her advancements in mechanistic organic analytical chemistry and development of anti-epileptic and anti-malarial drugs.

    Physics

    • Homi Jehangir Bhabha: Awarded a fictional Nobel Prize in Physics for explaining the scattering of positrons by electrons with exchange, his work on Dirac’s theory of the positron, the passage of fast electrons and the theory of cosmic showers.
    • Meghnad Saha: Awarded a fictional Nobel Prize in Physics for formulating the Saha Equation, which allows for the calculation of the degree of ionization in a gas, a concept important in analyzing the spectra of stars and nebulae.
    • Satyendra Nath Bose: Awarded a fictional Nobel Prize in Physics for his work describing elementary quantum particles, leading to Bose-Einstein Statistics which fueled the quantum revolution.
    • Jagadish Chandra Bose: Awarded a fictional Nobel Prize in Physics for his quasi-optical experiments on the properties of very short radio waves, leading to improvements in the coherer and contributing to the development of solid-state physics.

    Physiology or Medicine

    • Upendranath Brahmachari: Awarded a fictional Nobel Prize in Physiology or Medicine for creating the drug Urea Stibamine, used to treat visceral leishmaniasis or black fever, a severe infection caused by the Leishmania parasite.
    • Dilip Mahalanabis: Awarded a fictional Nobel Prize in Physiology or Medicine for being one of the earliest practitioners of vital oral rehydration therapy, whose research into diarrhoeal diseases saved millions of lives, including vulnerable infants.

    Other Sciences

    • Birbal Sahni: Awarded a fictional Nobel Prize in Paleobotany for examining evolutionary trends and geographical distributions, suggesting a separate order, Taxales, within the conifers.
    • John Burdon Sanderson Haldane: Awarded a fictional Nobel Prize in Evolutionary Biology for his work in relating Darwinian evolutionary theory and Gregor Mendel’s concepts of heredity.
    • Prasanta Chandra Mahalanobis: Awarded a fictional Nobel Prize in Statistics for formulating the Mahalanobis distance – a statistical measure of comparison between two data sets – and for his work on innovative techniques for conducting large-scale sample surveys.
    • Salim Moizuddin Abdul Ali: Awarded a fictional Nobel Prize in Ornithology for his systematic bird surveys across India and authoring books popularizing ornithology.
    • Edavaleth Kakkat Janaki Ammal: Awarded a fictional Nobel Prize in Botany for her contributions to plant breeding, cytogenetics, phytogeography and her development of several hybrid crop species, including varieties of sweet sugarcane that allowed India to cultivate its own.
    • Anna Modayil Mani: Awarded a fictional Nobel Prize in Meteorology for creating India’s first ozonesonde, making it easier for scientists to monitor the ozone layer.
    • Vikram Ambalal Sarabhai Jain: Awarded a fictional Nobel Prize in Space Science for initiating space research and helping develop nuclear power in India.
    • Verghese Kurien: Awarded a fictional Nobel Prize in Dairy Science for transforming India’s dairy industry by streamlining effective management and distribution strategies, making India the largest milk-producing country in the world.
    • Mankombu Sambasivan Swaminathan: Awarded a fictional Nobel Prize in Agricultural Science for his contributions to the Green Revolution in India by introducing high-yielding varieties of wheat and rice, further developing high-yielding varieties of wheat and rice, and spearheading mass movements with farmers and scientists. His work helped save India and Pakistan from certain famine-like conditions in the 1960s.
    • Manali Kallat Vainu Bappu: Awarded a fictional Nobel Prize in Astronomy for jointly discovering the Wilson-Bappu effect with astronomer Olin Chaddock Wilson.
    • Vijay Pandurang Bhatkar: Awarded a fictional Nobel Prize in Computer Science for architecting India’s national initiative in supercomputing, where he spearheaded the development of India’s first supercomputer PARAM in 1990.
  • Dubai Diaries: Running LLMs & Stable Diffusion locally on a gaming laptop

    I previously wrote about the second device that I got on coming to Dubai, but not much about the first one, which was a gaming laptop. So here’s a bit about the laptop, which also doubles as a local AI driver thanks to its Nvidia GPU (an RTX 3060).

    Soon after getting it back in 2022, I tried running the Stable Diffusion models, and it was quite an upgrade over my original attempt on a plain GPU-less Windows machine. The generation times came down to 10 seconds or so, and have gotten even faster as the models and tools have been optimised over the last couple of years. There are quite a few projects available on GitHub if you want to give it a try – AUTOMATIC1111 and easydiffusion are among the more popular options. Nvidia also has a TensorRT extension to further improve performance.
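
    If you’d rather script the generation yourself instead of using one of those web UIs, here’s a rough sketch using the Hugging Face diffusers library (my own illustration, not one of the tools above). It assumes a CUDA-capable GPU like the RTX 3060 and downloads the Stable Diffusion v1.5 weights from the Hub:

    ```python
    # Minimal local text-to-image sketch using Hugging Face diffusers.
    # Assumes a CUDA-capable GPU and internet access to fetch the weights.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,  # half precision keeps VRAM usage manageable
    )
    pipe = pipe.to("cuda")

    image = pipe(
        "a watercolour painting of the Dubai skyline at dusk",
        num_inference_steps=30,
    ).images[0]
    image.save("dubai_skyline.png")
    ```

    Swap in a different prompt or model ID to experiment; the web UIs layer conveniences like model switching and upscaling on top of this kind of local pipeline.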

    With that out of the way, I also discovered LM Studio, which allows you to run LLMs locally with a chat-like interface thrown in, and you can access a bunch of models like Meta’s Llama. The response times are of course not as fast as the freely available online options like ChatGPT, Claude, Gemini and the like, but you effectively get unlimited access to the model.
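
    LM Studio can also expose the loaded model through a local server that speaks the OpenAI API, so you can talk to it from your own scripts as well. Here’s a rough sketch assuming the server is running on its default port (1234) with a Llama model already loaded; the model name, API key and prompt below are just placeholders:

    ```python
    # Minimal sketch: chat with a model served locally by LM Studio.
    # Assumes LM Studio's OpenAI-compatible server is running on localhost:1234.
    from openai import OpenAI

    # Local servers don't validate the key, so any non-empty string will do.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    response = client.chat.completions.create(
        model="local-model",  # placeholder; LM Studio answers with whichever model is loaded
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain the coffee meme from the Ace Attorney games."},
        ],
    )
    print(response.choices[0].message.content)
    ```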

    Here’s an example from a conversation I had with Llama regarding the coffee meme from the Ace Attorney game series:

  • iOS to Android P2: The Watch migration

    Being deeply entrenched in the Apple ecosystem definitely has its advantages as the devices typically work well together, be it the iPhone with the Apple Watch or with the HomePod & iPads. Getting out is another matter, of course. When I ditched my dead iPhone 11 Pro & switched to the Samsung Galaxy S20 FE last year, I also got an iPhone SE as a backup phone which continued to pair with my Apple Watch. This setup was OK from the fitness tracking point of view, but for me, notifications on the wrist are the USP of smartwatches.

    Design & Features

    So, after a month of trying to avoid another gadget purchase, I opted to get the Samsung Galaxy Watch 4, which is one of the few models with Wear OS 3. While it did have pretty much all the sensors & hardware features of my Apple Watch Series 5, like always-on display, sleep tracking, ECG etc. (plus a few more like blood pressure monitoring, SpO2 & a body composition analyzer), the ECG & BP sensors are not active in India yet. Haptics are much better on the Apple Watch though. Still, you get pretty much similar hardware at half the price.

    The display on the Galaxy Watch is round, which does take some getting used to when swiping around, but feels more natural to me as the majority of my old-school watches were round. The controls are somewhat different, with 2 buttons on the side and a touch bezel that can be used for quick scrolling, similar to the Apple Watch’s crown. The Classic version of the Galaxy Watch does have a hardware dial instead of the touch bezel, but the price & weight go up significantly.

    On the strap front, the Galaxy Watch uses standard 22 mm bands, so the costs are significantly lower. My preference is for the Milanese loop bands & magnetic leather bands for formal occasions, & velcro ones for regular home use. The Apple Watch has a bunch of 3rd party options, but those are 2-3x costlier than the standard watch varieties (Apple’s own straps can cost as much as other smartwatches themselves). Also, while the mechanism for securing & removing bands on Apple Watches is quite novel & convenient, 3rd party options can be a bit finicky and don’t work as smoothly.

    Apps & Notifications

    Notifications themselves are quite decently managed, though you do need to use the touch bezel to scroll through them quickly when they pile up in the notification centre. You also get a good number of options to act on the notifications, unlike on the Apple Watch. However, you may need to scroll quite a bit to get to them, as the notifications can take up a good deal of vertical space along with the response options.

    Complications & watch faces are also handled very differently, with complication support varying from face to face. Wear OS also supports full-screen widgets, which feel more like complications.

    There’s also a bunch of standalone apps & phone app counterparts (sometimes broken into multiple apps), similar to the Apple Watch, and even a browser if you want one. That said, the app situation is probably better on the Apple side, but it has pretty much stagnated at this point, with some of the initial developers like Uber even discontinuing support for their Watch apps.

    Health & Fitness tracking

    While the watches may have parity in terms of hardware sensors, the biggest difference comes in the way fitness data is handled between the watches. Apple has the Health app as a central hub which can share necessary sensor data with multiple other apps. On the Galaxy Watch, and possibly on Wear OS/Android in general, there is no such standardised central hub, so most of the data beyond basic step counts is locked within Samsung’s Health app, or whichever app (say, Google Fit) you use to take the reading, be it a heart rate reading, logging your weight or recording a workout. In fact, some of the advanced sensor features do not seem to be supported in other apps.

    The situation will of course be similar on other platforms which may not even have support for 3rd party apps. For me, this is a reasonable tradeoff and I am using Samsung’s Health app to track the vitals for the time being.

    Closing

    You can also answer calls on the Galaxy Watch in a pinch just like on the Apple Watch, and support is limited to phone calls as well. The speaker is also fairly loud, which is good enough unless you are in a noisy environment. Battery life has typically been better on my Galaxy Watch than on the Apple Watch, as I can comfortably wear it to sleep and charge it towards the end of the next day.

    I am getting a very similar experience to the Apple Watch with my Galaxy Watch, and it serves my primary need of notifications on the wrist with few tradeoffs, while adding a bunch of new features at half the price of the Apple Watch. Ultimately, I have been able to replace my iPhone 11 Pro & Apple Watch Series 5 combo at a slightly lower cost than it would have taken me to get an 11 Pro replacement.