Category: Crosspost

  • From Chess Master to Self-Driving Car: Understanding AI’s Big Picture


    This is a crosspost of my article for The Print

    AI is replacing jobs—but which ones, and why? That question keeps resurfacing as headlines ping between panic and hype. The real answer, though, lies not in broad generalizations but in a deeper understanding of how different types of AI actually work—and more importantly, where they thrive and where they struggle.

    To unpack this, let’s start with a tale of two AIs. One plays chess. The other drives a car.

    The Game That Changed Everything

    Chess was once the gold standard of human intelligence. It required memory, strategy, foresight—surely, we thought, only the brightest minds could master it. Then came Deep Blue, then AlphaZero, and today, chess engines far outstrip the world’s best grandmasters.

    Why? Because chess is a closed world. The board is an eight-by-eight grid. The rules are fixed. Every piece behaves predictably. A knight never surprises you by moving like a queen. And the worst that can happen if the AI makes a mistake? It loses the game. No one gets hurt.

    That’s what makes chess such a perfect domain for AI. It’s highly predictable, and the consequences of error are minimal.

    The Road That Refuses to Be Tamed

    Now think of a self-driving car. Its environment is the polar opposite of a chessboard. The road is full of unpredictable drivers, jaywalking pedestrians, blown-out tires, random construction, rain-slicked asphalt, and sudden GPS glitches. There’s no guaranteed script.

    Worse, a single error can have catastrophic results. A misjudged turn or a missed stop sign doesn’t just mean “game over”—it could mean injury or even death. In this world, low predictability collides with high consequences, demanding an entirely different kind of intelligence—one that machines still struggle to master.

    The Two-Axis Map of AI’s Real Power

    What separates chess-playing AIs from self-driving ones isn’t just technical complexity. It’s the nature of the task itself. And that brings us to the core idea: a two-axis map that helps us understand where AI excels, where it falters, and what it means for the future of work.

    Imagine a graph:

    • On the horizontal axis, you have Predictability. To the right: structured, rule-based tasks like bookkeeping or board games. To the left: chaotic, real-world tasks like emergency response or childcare.
    • On the vertical axis, you have Consequence of Error. At the bottom: low-stakes domains where mistakes are annoying but harmless, like a bad movie recommendation. At the top: high-stakes arenas where a single misstep can cause financial ruin or loss of life.

    Jobs that sit in the bottom-right corner—predictable and low-consequence—are prime targets for full automation. Think data entry, inventory tracking, or sorting packages in a warehouse. Machines handle these with ease.

    But jobs in the top-left corner—unpredictable and high-consequence—remain stubbornly human. Think surgeons, firefighters, diplomats, or yes, even taxi drivers. These roles demand judgment, adaptability, empathy, and accountability. They are much harder, if not impossible, for AI to fully replace.
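
    To make the map concrete, here is a minimal sketch in Python. The tasks and the two scores are illustrative guesses, not measurements; the point is only to show how a rough rating on each axis places a task in a quadrant.

    ```python
    # A toy version of the two-axis map: rate a task on predictability and
    # on consequence of error (both 0.0 to 1.0), then place it in a quadrant.
    # The example tasks and scores below are illustrative guesses, not data.

    def quadrant(predictability: float, consequence: float) -> str:
        """Place a task on the two-axis map described above."""
        if predictability >= 0.5 and consequence < 0.5:
            return "automate: predictable, low stakes"
        if predictability < 0.5 and consequence >= 0.5:
            return "keep human-led: unpredictable, high stakes"
        return "augment: pair human judgment with machine speed"

    tasks = {
        "data entry":          (0.9, 0.10),
        "movie suggestions":   (0.6, 0.05),
        "radiology screening": (0.7, 0.90),
        "firefighting":        (0.2, 0.95),
    }

    for name, (p, c) in tasks.items():
        print(f"{name:20s} -> {quadrant(p, c)}")
    ```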

    Enter the Centaur

    This is where one of the most powerful ideas in AI comes into play: human-AI collaboration. Borrowed from the world of chess, it’s called “Centaur Chess.”

    In Centaur Chess, human players team up with AI engines. The machine crunches millions of possibilities per second. The human brings strategy, creativity, and long-term thinking. Together, they often outperform both lone humans and pure AI systems.

    This hybrid model is the future of many professions. In medicine, AI can scan thousands of X-rays in seconds, flagging anomalies. But a doctor makes the final call, understanding the patient’s story, context, and risks. In creative fields, AI can generate endless design variations, but an artist selects, curates, and gives meaning.

    The centaur doesn’t fear the machine. It rides it.

    Rethinking the Future of Work

    So, when people ask, “Will AI take my job?” the better question is: What kind of task is it? Is it rule-based or fuzzy? Are the stakes low or life-changing?

    AI is incredibly powerful at doing narrow, well-defined things faster than any human. But it is brittle in the face of chaos, ambiguity, and moral weight. And in those very places—where the world is messy and the stakes are high—humans are not just relevant; they are indispensable.

    The future of work won’t be a clean divide between jobs AI can do and jobs it can’t. It will be a layered world, where the most effective roles are those that blend human judgment with machine intelligence. So yes, some jobs will disappear. But others will evolve. And the real winners will be the centaurs.

  • A Brief History of Artificial Intelligence: From Turing to Transformers


    This is a crosspost of my article for The Print

    Artificial Intelligence did not begin with code—it began with a question. Could machines think? And if so, how would we even know?

    In 1950, Alan Turing proposed that if a machine could carry on a conversation indistinguishable from a human, it could be called intelligent. This became the Turing Test, and it marked the philosophical beginning of AI.

    The technical beginning followed six years later, at the Dartmouth Workshop of 1956. Organized by John McCarthy, Marvin Minsky, Claude Shannon, and others, it launched AI as a formal discipline. The claim was breathtaking: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” For a while, that dream held.

    The 1960s and 70s saw AI become a fixture of science fiction. Stanley Kubrick’s 2001: A Space Odyssey imagined HAL 9000, a machine that could speak, reason, and feel—until conflicting objectives caused it to turn rogue. HAL’s breakdown wasn’t madness—it was logic stretched to a breaking point. And that remains one of AI’s deepest warnings: machines may fail not because they malfunction, but because their goals are misaligned with ours.

    From the 1960s to the 1980s, Symbolic AI dominated the field. Intelligence was programmed through logic and rules, not learned from data. Expert systems like MYCIN and DENDRAL mimicked human specialists and briefly dazzled funders, but they were brittle—struggling with ambiguity and real-world complexity. Each new scenario demanded new rules, revealing the limits of hand-coded intelligence.

    The initial optimism faded. Early successes didn’t scale, and by the 1970s and again in the late 1980s, AI faced its winters—eras of disillusionment and vanishing support. The technology wasn’t ready. AI, once hailed as revolutionary, became a cautionary tale.

    Meanwhile, the world of chess provided a battleground for AI’s ambitions. In 1968, chess master David Levy bet computer scientist John McCarthy that no machine would beat him in a match within ten years. He was right, but only just. By 1997, IBM’s Deep Blue defeated Garry Kasparov, the reigning world champion. This wasn’t intelligence in the human sense. Deep Blue didn’t think; it calculated: 200 million positions per second, guided by handcrafted rules and brute force.
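
    What “brute force” means here can be illustrated with a short Python sketch of minimax, the textbook game-tree search. Deep Blue’s real engine involved far more (alpha-beta pruning, handcrafted evaluation, custom hardware), so treat this only as the core idea: score positions by looking ahead, not by understanding.

    ```python
    # Generic minimax: explore every move to a fixed depth, assume the opponent
    # replies with the move that is worst for us, and back the scores up the tree.

    def minimax(position, depth, maximizing, moves, evaluate):
        """moves(position) -> list of (move, next_position); evaluate(position) -> score."""
        options = moves(position)
        if depth == 0 or not options:
            return evaluate(position)
        if maximizing:
            return max(minimax(nxt, depth - 1, False, moves, evaluate)
                       for _, nxt in options)
        return min(minimax(nxt, depth - 1, True, moves, evaluate)
                   for _, nxt in options)

    # Toy game, not chess: "positions" are numbers, each move adds 1, 2, or 3,
    # the maximizer wants the final number high and the minimizer wants it low.
    toy_moves = lambda n: [(d, n + d) for d in (1, 2, 3)] if n < 10 else []
    print(minimax(0, 4, True, toy_moves, evaluate=lambda n: n))  # prints 8
    ```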

    If Deep Blue marked a brute-force triumph, the next revolution came from inspiration closer to biology. Our brains are made of neurons and synapses, constantly rewiring based on experience. In 1943, McCulloch and Pitts proposed the first mathematical model of a neural network, mimicking how neurons fire and connect. Decades later, with more data and computational power, this idea would explode into what we now call deep learning.
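
    The 1943 idea fits in a few lines of Python: a “neuron” fires when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are picked by hand to illustrate, not learned, and with the right choices such a unit computes simple logic like AND and OR.

    ```python
    # A McCulloch-Pitts style threshold neuron: sum the weighted binary inputs
    # and fire (output 1) only if the total reaches the threshold.

    def mcp_neuron(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Hand-chosen weights and thresholds turn the same unit into logic gates.
    AND = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=2)
    OR  = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=1)

    print(AND(1, 1), AND(1, 0))  # 1 0
    print(OR(0, 1), OR(0, 0))    # 1 0
    ```

    Modern deep learning replaces the hard threshold with smooth functions and learns the weights from data, but the basic picture of simple units combining weighted inputs remains.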

    A key moment came in 2012. Researchers at Google Brain fed a deep neural network 10 million YouTube thumbnails—without labels. Astonishingly, one neuron began to specialize in detecting cat faces. The machine wasn’t told what a cat was. It discovered “cat-ness” on its own. This was the cat moment—the first clear sign that neural networks could extract meaning from raw data. From then on, deep learning would take off.

    That same year, another milestone arrived. AlexNet, a deep convolutional neural network, entered the ImageNet Challenge, a global competition for visual object recognition, and nearly halved the previous error rate using an eight-layer network trained on GPUs. This marked the beginning of AI’s rise in vision, powering facial recognition, self-driving cars, and medical diagnostics.

    In board games too, AI moved from mimicry to mastery. AlphaGo’s match against world Go champion Lee Sedol in 2016 stunned experts. Game 2, Move 37—an unconventional, creative move—changed the game’s theory forever. AlphaGo didn’t just compute; it improvised. In 2017, AlphaZero went further, mastering chess, Go, and shogi without human examples—just the rules and millions of self-play games. Grandmasters called its style “alien” and “beautiful.”

    In 2017, the landmark paper “Attention Is All You Need” introduced the Transformer architecture, a breakthrough that changed the course of AI. Unlike earlier models, Transformers could handle vast contexts and relationships between words, enabling a deeper understanding of language patterns. This paved the way for large language models (LLMs) like GPT and ChatGPT, trained on billions of words from books, websites, and online conversations. These models don’t know facts as humans do—they predict the next word based on learned patterns. Yet their output is often strikingly fluent and, at times, indistinguishable from human writing.
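
    A minimal sketch of that “predict the next word” step, with made-up scores standing in for what a real model would compute from the context, looks like this:

    ```python
    import math
    import random

    # A real LLM assigns a score (logit) to every token in its vocabulary given
    # the context; the vocabulary and scores below are invented for illustration.

    def softmax(scores):
        exps = [math.exp(s - max(scores)) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    vocab = ["sat", "ran", "banana", "slept"]
    logits = [3.1, 2.4, -1.0, 1.7]   # hypothetical scores after "The cat ..."

    probs = softmax(logits)
    next_word = random.choices(vocab, weights=probs, k=1)[0]
    print([round(p, 3) for p in probs], "->", next_word)
    ```

    A real model simply repeats this step, feeding each chosen word back in as fresh context.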

    Still, their output often sounds thoughtful, even profound. In 2025, one such model helped save a pregnant woman’s life by identifying a symptom of preeclampsia from a casual health question. This was no longer science fiction. AI was here, helping, guiding, even warning.

    This is where the story darkens. Neural networks have millions—even billions—of internal parameters. We know how they are trained, but not always why they produce a particular result. This is the black box problem: powerful models we can’t fully interpret.

    Worse, these models inherit biases from the data they are trained on. If trained on internet text that contains racial, gender, or cultural prejudices, the model may echo them—sometimes subtly, sometimes dangerously. And because their reasoning is opaque, these biases can be hard to detect and even harder to fix.

    AI systems are also confident liars. They “hallucinate” facts, produce fake citations, or reinforce misinformation—often with grammatical precision and emotional persuasion. They are trained to be convincing, not correct.

    As we hand over more decisions to machines—medical diagnoses, hiring recommendations, bail assessments, autonomous driving—we face hard questions: Who is responsible when an AI system fails? Should a machine ever make a life-or-death decision? How do we align machine goals with human values?

    The fictional HAL 9000 chose its mission over its crew, not out of malice, but from a conflict of objectives. Today’s systems don’t “choose” at all, but they still act, and their actions have consequences.

    Ironically, the most hopeful vision may lie in chess again. In freestyle tournaments, the best performers were neither lone machines nor lone grandmasters, but human-AI teams. Garry Kasparov put it best: “A weak human + machine + good process beats a strong human or strong machine alone.” AI doesn’t need to replace us. It can enhance us, provided we build it thoughtfully, interpret it critically, and embed it in processes we trust.