Category: Newsletter

  • Turning 42, and Coming Full Circle

    Turning 42, and Coming Full Circle

    I turned 42 this year. I don’t usually attach much meaning to birthdays, but this one did trigger a quiet pause—not a reinvention, not a reset, just a sense of recognition. A feeling that certain instincts and interests I’ve carried for a long time were finally meeting the right conditions to be acted upon.

    In many ways, it felt like coming full circle. Post–IIT Bombay, I had toyed with the idea of building something of my own more than once, but the timing never really worked. The cost of experimentation was high, the downside felt asymmetric, and meaningful execution required a kind of commitment—in time, capital, and headspace—that didn’t quite align with where life was then. AI didn’t create those ambitions. It removed enough friction that acting on them no longer felt irresponsible.

    There’s a popular mental model floating around for AI—the Star Trek computer. Voice interface, simple commands, complex tasks executed seamlessly. It’s appealing: tell the AI what you want, and it handles the messy details. But we don’t have that yet. What we have now is quite powerful but far messier—and that gap between expectation and reality matters.

    Looking back, this arc probably started earlier than I realized. What felt like casual tinkering at the time—experimenting with tools, workflows, and ways of interacting with information—was really the beginning of a longer loop of building, reflecting, and writing.

    The first half of the year was deliberately focused inward. The arrival of our newborn changed the rhythm of everyday life in fairly fundamental ways. Time became scarcer, but priorities became clearer. Decisions that once felt nuanced or debatable started resolving themselves quickly when viewed through the lens of family well-being and long-term sustainability.

    Around the same time, we moved from Dubai back to Mumbai. On paper, it was a relocation. In practice, it was a broader reset—of cost structures, support systems, and optionality. Some things became simpler, others more complex, but overall it created a sense of grounding that had been missing.

    In hindsight, the first half wasn’t a slowdown so much as an incubation period. That stability mattered more than I realized at the time, because once the personal base felt settled, professional decisions became easier to make—and easier to commit to. The questions that kept surfacing during this phase were telling. Education, work, and what we’re really preparing the next generation for stopped feeling abstract and started feeling personal. Agency matters more than intelligence—the capacity to take initiative, make things happen, shape your path rather than wait for it. Are we educating for that? It’s a question that feels more urgent when you’re thinking about your own child’s future.

    The second half of the year marked a clear shift from exploration to ownership. I chose to go down the solo path, not because it’s easier, but because at this stage it offers speed and coherence. Fewer dependencies, tighter feedback loops, clearer accountability.

    AI changed the feasibility equation in a very real way. What once required teams and capital can now be prototyped solo—not perfectly, but fast enough to learn. Over time, this also changed how I approached building itself, gravitating toward a more fluid, iterative style where intent and execution sit much closer together.

    That conviction led to formalizing OrchestratorAI. Registering the company and filing the trademark weren’t about signaling externally as much as they were about drawing a line for myself. This wasn’t just advisory work or experimentation anymore; it was something I wanted to build patiently and seriously.

    A lot of the focus naturally gravitated toward the long tail—especially MSMEs. Large enterprises will adopt AI, but slowly and expensively. Smaller businesses can leapfrog. The marginal cost of intelligence has dropped enough that problems once considered too small or not worth solving suddenly are. That idea kept resurfacing as I looked at broader patterns in work, strategy, and go-to-market, often in ways that felt far messier in practice than in slides.

    Completing a year self-employed felt like its own milestone—not because of what I’d built yet, but because I’d committed to the path.

    Three things that have crystallized

    This is a great time for builders—not for shortcut-seekers.

    There’s a popular narrative that AI is about doing less work—the Star Trek computer fantasy where you state your intent and complex systems just work. My experience has been the opposite. We don’t have the Star Trek computer. AI rewards those willing to go deeper, not those trying to bypass fundamentals.

    Tools amplify intent and effort; they don’t replace them. The gap between “prompting” and actually building systems—workflows, artifacts, and feedback loops—is widening.

    Jevons’ Paradox is no longer theoretical in knowledge work.

    Making intelligence cheaper doesn’t reduce the amount of work; it expands it. Lower costs unlock suppressed demand—more ideas get tested, more workflows get built, more edge cases start to matter.

    Entire categories of previously unsolvable problems suddenly become economically viable. We’re even seeing fundamental business model shifts—from selling seats to selling outcomes, from “buy software” to “hire agents.”

    This is the foundation of what I’m building: serving markets that were previously uneconomical to serve.

    A lot of old ideas are finally working the way they were meant to.

    State machines, artifact-centric design, structured workflows, even the promise of auto-coding—none of these are new concepts. What’s new is that the economics finally make sense.

    But there’s also a new layer to master. Programmers now need mental models for agents, subagents, prompts, contexts, tools, workflows—a fundamentally new abstraction layer intermingled with traditional engineering.
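    As a toy illustration of that new layer, here's a minimal sketch in Python; the names and structure are my own shorthand for these concepts, not any particular framework's API:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Tool:
        name: str
        run: Callable[[str], str]  # a capability the agent can invoke

    @dataclass
    class Agent:
        system_prompt: str                                # standing instructions
        tools: list[Tool] = field(default_factory=list)   # what it can act with
        context: list[str] = field(default_factory=list)  # working memory

        def spawn_subagent(self, task: str) -> "Agent":
            # A subagent inherits the tools but starts with a fresh,
            # narrowly scoped context instead of the parent's full history.
            return Agent(system_prompt=task, tools=self.tools)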

    Abstractions still leak, and much of the year’s noise around agentic coding oscillated between hype and reality before settling. What’s emerging is clear: structure matters, and agents are genuinely becoming central to how work gets done.

    One curious footnote

    Starting in September, I noticed an unusual spike in traffic to my blog—specifically to posts from 10-15 years ago. The pattern was unmistakable: China. Most likely LLM training runs scraping old content at scale.

    There’s something quietly amusing about that timing. While my decade-old posts were feeding tomorrow’s AI models, I was using today’s AI to finally act on ideas I’d shelved post-IITB. Full circle, in an unexpected way.

    2026 feels different. Not because the work gets easier, but because the constraints are clearer. Family grounded, venture formalized, year one complete.

  • GitHub’s SpecKit: The Structure Vibe Coding Was Missing

    GitHub’s SpecKit: The Structure Vibe Coding Was Missing

    When I first started experimenting with “vibe coding,” building apps with AI agents felt like a superpower. The ability to spin up prototypes in hours was exhilarating. But as I soon discovered, the initial thrill came with an illusion. It was like managing a team of developers with an attrition rate measured in minutes—every new prompt felt like onboarding a fresh hire with no idea what the last one had been working on.

    The productivity boost was real, but the progress was fragile. The core problem was context—a classic case of the law of leaky abstractions applied to AI. Models would forget why they made certain choices or break something they had just built. To cope, I invented makeshift practices: keeping detailed dev context files, enforcing strict version control with frequent commits, and even asking the model to generate “reset prompts” to re-establish continuity. Messy, ad hoc, but necessary.

    That’s why GitHub’s announcement of SpecKit immediately caught my attention. SpecKit is an open-source toolkit for what they call “spec-driven development.” Instead of treating prompts and chat logs as disposable artifacts, it elevates specifications to first-class citizens of the development lifecycle.

    In practice, this means:

    • Specs as Durable Artifacts: Specifications live in Git alongside your code—permanent, version-controlled, and not just throwaway notes.
    • Capturing Intent: They document the why—the constraints, purpose, and expected behavior—so both humans and AI stay aligned.
    • Ensuring Continuity: They serve as the source of truth, keeping projects coherent across sessions and contributors.
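
    To make this concrete, here's a hypothetical spec fragment that might live in the repo next to the code. The file name and headings are illustrative, not SpecKit's prescribed format:

    # specs/report-export.md (hypothetical example)

    ## Why
    Users need to share monthly reports outside the app.

    ## Constraints
    - Export runs server-side; no third-party services.
    - Must handle up to 5,000 rows in under 10 seconds.

    ## Expected behavior
    - An "Export CSV" button on the Reports page downloads the current view.
    - Column order matches the on-screen table.

    Because it is version-controlled alongside the code, an AI agent (or a human) joining mid-project can re-read a spec like this instead of reconstructing intent from stale chat logs.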

    For anyone who has tried scaling vibe coding beyond a demo, this feels like the missing bridge. It brings just enough structure to carry a proof-of-concept into maintainable software.

    And it fits into a larger story. Software engineering has always evolved in waves—structured programming, agile, test-driven development. Each wave added discipline to creativity, redefining roles to reflect new economic realities—a pattern we’re seeing again with agentic coding. Spec-driven development could be the next step:

    • Redefining the Developer’s Role: Less about writing boilerplate, more about designing robust specs that guide AI agents.
    • Harnessing Improvisation: Keeping the creative energy of vibe coding, but channeling it within a coherent framework.
    • Flexible Guardrails: Not rigid top-down rules, but guardrails that allow both creativity and scalability.

    Looking back, my dev context files and commit hygiene were crude precursors to this very idea. GitHub’s SpecKit makes clear that those instincts weren’t just survival hacks—they pointed to where the field is heading.

    The real question now isn’t whether AI can write code—we know it can. The question is: how do we design the frameworks that let humans and AI build together, reliably and at scale?

    Because as powerful as vibe coding feels, it’s only when we bring structure to the improvisation that the music really starts.


    👉 What do you think—will specs become the new lingua franca between humans and AI?

  • From Productivity to Progress: What the New MIT-Stanford AI Study Really Tells Us About the Future of Work

    From Productivity to Progress: What the New MIT-Stanford AI Study Really Tells Us About the Future of Work

    A new study from MIT and Stanford just rewrote the AI-in-the-workplace narrative.

    Published in Fortune this week, the research shows that generative AI tools — specifically chatbots — are not only boosting productivity by up to 14%, but also raising earnings without reducing work hours.

    “Rather than displacing workers, AI adoption led to higher earnings, especially for lower-performing employees.”

    Let that sink in.


    🧠 AI as a Floor-Raiser, Not a Ceiling-Breaker

    The most surprising finding?
    AI’s greatest impact was seen not among the top performers, but among lower-skilled or newer workers.

    In customer service teams, the AI tools essentially became real-time coaches — suggesting responses, guiding tone, and summarizing queries. The result: a productivity uplift and quality improvement that evened out performance levels across the team.

    This is a quiet revolution in workforce design.

    In many traditional orgs, productivity initiatives often widen the gap between high and average performers. But with AI augmentation, we’re seeing the inverse — a democratization of capability.


    💼 What This Means for Enterprise Leaders

    This research confirms a pattern I’ve observed firsthand in consulting:
    The impact of AI is not just technical, it’s organizational.

    To translate AI gains into business value, leaders need to:

    ✅ 1. Shift from Efficiency to Enablement

    Don’t chase cost-cutting alone. Use AI to empower more team members to operate at higher skill levels.

    ✅ 2. Invest in Workflow Design

    Tool adoption isn’t enough. Embed AI into daily rituals — response writing, research, meeting prep — where the marginal gains accumulate.

    ✅ 3. Reframe KPIs

    Move beyond “time saved” metrics. Start tracking value added — better resolutions, improved CSAT, faster ramp-up for new hires.


    🔄 A Playbook for Augmented Teams

    From piloting GPT agents to reimagining onboarding flows, I’ve worked with startups and enterprise teams navigating this shift. The ones who succeed typically follow this arc:

    1. Pilot AI in a high-volume, low-risk function
    2. Co-create use cases with users (not for them)
    3. Build layered systems: AI support + human escalation
    4. Train managers to interpret, not just supervise, AI-led work
    5. Feed learnings back into process improvement loops

    🔚 Not AI vs Jobs. AI Plus Better Jobs.

    The real story here isn’t about productivity stats. It’s about potential unlocked.

    AI is no longer a futuristic experiment. It’s a present-day differentiator — especially for teams willing to rethink how work gets done.

    As leaders, we now face a simple choice:

    Will we augment the talent we have, or continue to chase the talent we can’t find?

    Your answer will shape the next 3 years of your business.


    🔗 Read the original article here:

    Fortune: AI chatbots boost earnings and hours, not job loss


    Want to go deeper? I’m working on a new AI augmentation playbook — DM me or sign up for updates.

    #AI #FutureOfWork #EnterpriseStrategy #GTM #DigitalTransformation #Chatbots #Productivity #ConsultingInsights

  • Fixing newsletter readability on Inoreader

    Fixing newsletter readability on Inoreader

    I have been using Inoreader as my main feed reader for quite a while now, after having tried Feedly for a few years. I even upgraded to their Premium plan for the power-user features and higher newsletter limits.

    I subscribe to a few newsletters there, as it is easier to get all the content in one place. One rendering issue had been bugging me with Matt Levine’s Money Stuff newsletter ever since a recent Inoreader update: the last few characters of each line would get cut off in the pop-up reader view, like so:

    I tried reaching out to support, but they were not able to do much. So I did a bit of research and found that Inoreader has a custom CSS feature in its power-user settings that some folks have used to personalise the interface. By inspecting the source (hit F12 in the browser to open the dev tools), I discovered that the newsletter contents were being rendered in an HTML table.

    I did a bit of experimentation in the custom CSS settings, and found that setting the table width to 85% fixed the issue:

    table {
      /* Constrain tables so the pop-up reader stops clipping line endings */
      width: 85%;
    }

    I’m sure this is a very obscure issue that only affects users like me who read a particular newsletter in a feed reader, but I’m documenting it in case others run into something similar.

    You could of course just read the newsletter in your email inbox or through a service like NewsletterHunt, or just change the view to full article in Inoreader.

  • Dubai Diaries: Running LLMs & Stable Diffusion locally on a gaming laptop

    Dubai Diaries: Running LLMs & Stable Diffusion locally on a gaming laptop

    I previously wrote about the second device that I got after coming to Dubai, but not much about the first one, which was a gaming laptop. So here’s a bit about the laptop, which also doubles as a local AI rig thanks to its Nvidia GPU (an RTX 3060).

    Soon after getting it back in 2022, I tried running Stable Diffusion models, and it was quite an upgrade over my original attempt on a plain GPU-less Windows machine. Generation times came down to around 10 seconds, and have gotten even faster as the models and tools have been optimised over the last couple of years. There are quite a few projects available on GitHub if you want to give it a try – AUTOMATIC1111 and easydiffusion are among the more popular options. Nvidia also has a TensorRT extension to further improve performance.
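
    If you'd rather script it than use the GUI tools above, here's a minimal sketch using Hugging Face's diffusers library, assuming a CUDA-capable GPU like the RTX 3060; the checkpoint name is just one common public option:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a public Stable Diffusion checkpoint in half precision so it
    # fits comfortably in the RTX 3060's VRAM.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # Generate and save a single image.
    image = pipe("a watercolor skyline of Dubai at dusk").images[0]
    image.save("dubai.png")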

    With that out of the way, I also discovered LM Studio, which allows you to run LLMs locally with a chat-like interface thrown in, and gives you access to a bunch of models like Meta’s Llama. The response times are of course not as fast as the freely available online options like ChatGPT, Claude, Gemini and the like, but you effectively get unlimited access to the model.
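
    LM Studio can also expose the loaded model through a local OpenAI-compatible server (on port 1234 by default, if memory serves), which makes it easy to script against. A minimal sketch, assuming the server is running with a model loaded:

    import requests

    # Ask whichever model LM Studio currently has loaded, via its local
    # OpenAI-compatible endpoint (default port 1234; check your settings).
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "local-model",  # LM Studio responds with the loaded model
            "messages": [
                {"role": "user", "content": "Explain the Ace Attorney coffee meme."}
            ],
            "temperature": 0.7,
        },
    )
    print(resp.json()["choices"][0]["message"]["content"])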

    Here’s an example from a conversation I had with Llama about the coffee meme from the Ace Attorney game series:

  • Dubai Diaries: Staying active via VR

    Dubai Diaries: Staying active via VR

    The second device that I purchased in Dubai after relocating in 2022 (the first was of course the gaming laptop that has been doing double duty as a GenAI device) was the Meta Quest 2 VR headset.

    Picking it up towards the end of the year had its advantages, as apps and games are usually discounted in the Christmas sales. In fact, I got Beat Saber as a freebie with my purchase. This was the game that sent me down the Meta VR app store rabbit hole, where I found a bunch of sports games (more on those below).

    There are also games for boxing, fishing, shooting, and Star Wars (becoming Darth Vader’s apprentice), among others. They are a big departure from typical computer, mobile or console gaming, as they require you to move around and can give you a decent workout.

    I also picked up some accessories like the hard case to store & transport the device in a safer manner, along with the head strap replacement. The head strap in particular is a big upgrade and almost necessary if you want to use the headset for even a moderate amount of time.

    Most of these apps and games have been around for several years now and have gotten a boost in features & quality thanks to the renewed focus on AR & VR following the launch of the Apple Vision Pro and the Meta Quest 3 over the last year or so.

    Here’s my experience with some of the apps that have helped me stay more active, especially during the Dubai summers when outdoor activities get pretty difficult. One thing to note is that most of these apps/games need some dedicated space – typically 6′ x 6′ (about 2m x 2m) – to play safely, though some can be played standing in one place.

    iB Cricket

    This game was developed by a team from India, and you can see that they have done their share of partnerships with mainstream cricket events over the years. It is mainly a batting simulator where you can play as various teams at different difficulty levels, and it also has multiplayer options & leagues if you like competing against other players.

    They sell a bat accessory that can be used with the Quest 2 controller to give you an easier and more authentic experience. This was in fact something that I picked up during one of my India trips and it really makes the gameplay much better.

    VZFit

    This year, I also picked up a subscription to the VZFit app, which can be used with an indoor bike to stay fit. By default it offers a fitness experience you can do using just the controllers, but the virtual biking is what piqued my interest: the app lets you bike around different locations from Google Maps using Street View imagery in an immersive form.

    Here’s a sample from one of my rides along the Colorado river:

    There are a bunch of user curated locations that can be quite scenic. Some even come with voiceover to direct your attention to places of interest. They also have regular challenges and leaderboards if you like to compete, and integration with a bunch of online radio stations to keep you entertained. You also have a trainer who can accompany you on a bike and guide you with the workout.

    You mainly need to connect a compatible Bluetooth cadence sensor to your Quest headset so that it can detect the bike activity. As for the stationary bike, you can get your own or use one in the gym. I got the Joroto X2 spin bike, which seems to be pretty good value. A battery-powered clip-on fan can also be pretty handy to keep you cool and simulate a breeze while you’re virtually biking.

    Beat Saber

    Beat Saber is possibly one of the most well-known VR games. After all, it’s not every day that you get to dual-wield something akin to lightsabers and chop things up with a soundtrack to match.

    It is basically a virtual rhythm game that has been around for several years, where you wield a pair of glowing sabers to cut through approaching blocks in sync with a song’s beats and notes. It can give you a really good workout, as it also involves ducking and dodging in addition to the hand movements.

    Eleven Table Tennis

    Given the size of the Quest controllers and their in-hand feel, similar to a TT bat, table tennis is a natural fit for VR. This was one of the first sports games I picked up on the Quest, and I saw it evolve considerably within a few months of my purchase. It currently has a host of options, ranging from practice modes to multiplayer at different difficulty levels.

    The multiplayer mode is also pretty interesting and immersive, as it can use your Meta avatar for the in-game player and has voice chat so you can talk to your opponent. The in-game physics is so realistic that you sometimes forget there is no actual table in front of you.

    Vader Immortal Series

    This is a three-episode game on the Quest, and doesn’t actually need you to move around as much as the sports games mentioned above. However, if you are a Star Wars fan, it is pretty much a must-try, giving you your fill of lightsaber fighting sequences, starting with training against those mini floating droids and leading up to enemy fights alongside Darth Vader.

    If you loved the Jedi Knight series on the computer, or one of the recent Star Wars games involving Jedi, then this one is pretty much a no-brainer. Oh, and you do get to use the Force push/pull powers as well.

  • AI News Roundup: Oct-Nov24

    AI News Roundup: Oct-Nov24

    I have been sharing some of the interesting reads that I come across on this blog/newsletter for a while now. Given the pace at which AI related news has been rolling out, I am consolidating the links into a series of monthly posts to reduce the load on your inbox/feed.

    Here are the interesting developments in the world of AI from the last month and a half or so:

    Agentic AI

    When you give Claude a mouse: LLMs are gradually getting more access to actually do things on your computer, and effectively becoming agents. Ethan Mollick shares his experience with Claude’s new feature, and the current strengths and weaknesses:

    On the powerful side, Claude was able to handle a real-world example of a game in the wild, develop a long-term strategy, and execute on it. It was flexible in the face of most errors, and persistent. It did clever things like A/B testing. And most importantly, it just did the work, operating for nearly an hour without interruption.

    On the weak side, you can see the fragility of current agents. LLMs can end up chasing their own tail or being stubborn, and you could see both at work. Even more importantly, while the AI was quite robust to many forms of error, it just took one (getting pricing wrong) to send it down a path that made it waste considerable time.

    Claude gets bored: With great power comes great boredom, it seems. We are already witnessing some unintended behaviour from AI agents, with them getting distracted just like humans or taking unwanted actions:

    Impact on work

    Generative AI and the Nature of Work: A paper which looks at the impact of AI tools like GitHub Copilot on how people work:

    We find that having access to Copilot induces such individuals to shift task allocation towards their core work of coding activities and away from non-core project management activities. We identify two underlying mechanisms driving this shift – an increase in autonomous rather than collaborative work, and an increase in exploration activities rather than exploitation. The main effects are greater for individuals with relatively lower ability. Overall, our estimates point towards a large potential for AI to transform work processes and to potentially flatten organizational hierarchies in the knowledge economy.

    AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably: Is this a reflection of the AI capabilities or our tastes?

    We found that AI-generated poems were rated more favorably in qualities such as rhythm and beauty, and that this contributed to their mistaken identification as human-authored. Our findings suggest that participants employed shared yet flawed heuristics to differentiate AI from human poetry: the simplicity of AI-generated poems may be easier for non-experts to understand, leading them to prefer AI-generated poetry and misinterpret the complexity of human poems as incoherence generated by AI.

    On the other hand, mainstream Hollywood is realizing the potential cost savings AI can deliver – New Zemeckis film used AI to de-age Tom Hanks and Robin Wright – thanks to the tech from Metaphysic:

    Metaphysic developed the facial modification system by training custom machine-learning models on frames of Hanks’ and Wright’s previous films. This included a large dataset of facial movements, skin textures, and appearances under varied lighting conditions and camera angles. The resulting models can generate instant face transformations without the months of manual post-production work traditional CGI requires.

    Here’s the trailer:

    Manipulating AI and boring scammers

    SEO may soon be passé, with chatbots taking over from search engines. So, what’s next? Something possibly along the lines of Citate, which helps you analyse and optimise what is being served up on these chatbots.

    Can we manipulate AI as much as it manipulates us? – With every new development in the way humans manage and share knowledge, come tools to manipulate the said knowledge. Fred Vogelstein takes a deeper look at the emerging options including Citate and Profound.

    AI granny to bore scammers:

    UK-based mobile operator Virgin Media O2 has created an AI-generated “scambaiter” tool to stall scammers. The AI tool, called Daisy, mimics the voice of an elderly woman and performs one simple task: talk to fraudsters and “waste as much of their time as possible.”

    Multiple AI models were used to create Daisy, which was trained with the help of YouTuber and scam baiter Jim Browning. The tool now transcribes the caller’s voice to text and generates appropriate responses using a large language model. All of this takes place without input from an operator. At times, Daisy keeps fraudsters on the line for up to 40 minutes, O2 says.
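
    For intuition, the loop described in that quote might look roughly like the sketch below. This is purely illustrative; O2 hasn't published Daisy's implementation, and all three helper functions are hypothetical stand-ins:

    # All three helpers are hypothetical stand-ins, not real APIs.
    def transcribe(audio: bytes) -> str:
        return "scammer asking for bank details"  # stand-in for speech-to-text

    def generate_reply(persona: str, history: list) -> str:
        return "Oh dear, let me find my glasses first..."  # stand-in for the LLM

    def synthesize(text: str) -> bytes:
        return b"..."  # stand-in for text-to-speech

    def daisy_turn(audio: bytes, history: list) -> bytes:
        persona = "You are Daisy, a rambling elderly woman. Waste time; reveal nothing."
        history.append({"role": "caller", "content": transcribe(audio)})
        reply = generate_reply(persona, history)
        history.append({"role": "daisy", "content": reply})
        return synthesize(reply)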

    I have already been doing a simpler version of this using Samsung’s AI-based call screening, with most callers hanging up pretty quickly. I’m sure this will get enhanced in the future.

    It’s not just scammers misusing AI, unfortunately, and this bit of news about students creating deepfakes of classmates at a US school doesn’t help allay the fears of parents like me. Food for thought for the regulators, and also for authorities who need to take prompt action when such incidents occur:

    Head of School Matt Micciche seemingly first learned of the problem in November 2023, when a student anonymously reported the explicit deepfakes through a school portal run by the state attorney general’s office called “Safe2Say Something.” But Micciche allegedly did nothing, allowing more students to be targeted for months until police were tipped off in mid-2024.

    Cops arrested the student accused of creating the harmful content in August. The student’s phone was seized as cops investigated the origins of the AI-generated images. But that arrest was not enough justice for parents who were shocked by the school’s failure to uphold mandatory reporting responsibilities following any suspicion of child abuse. They filed a court summons threatening to sue last week unless the school leaders responsible for the mishandled response resigned within 48 hours.