Tag: agenticAI

  • Indian IT’s Arbitrage Problem: When Tokens Cost the Same Everywhere

    The Indian IT services industry was built on a straightforward premise: skilled developers in Bangalore cost significantly less than comparable talent in San Francisco. This differential created an empire — TCS, Infosys, Wipro, and hundreds of smaller firms billing clients based on headcount. The model was self-reinforcing. More engineers meant more revenue, which meant hiring even more engineers.

    AI breaks this equation in a way previous technology shifts didn’t. When an LLM API costs the same per million tokens whether you’re calling it from Mumbai or Manhattan, geography stops mattering. The cost of doing work is shifting from labour, which varies by location, to compute, which doesn’t. As AI agents get better at performing tasks that used to require human engineers, the ratio keeps tilting further away from the headcount model, resulting in a structural break.

    The arbitrage that built an industry

    India’s tech boom worked because clients could get the same capability at dramatically lower cost. A Fortune 500 company could hire multiple engineers in India for significantly less than the cost of one in the US, and the output quality was comparable. Even Global Capability Centres — the in-house versions of this model — followed the same logic, functioning as cost centres to reduce the parent company’s tech spend.

    China’s manufacturing dominance followed the same pattern: cheap labour built the industry, then automation eroded the advantage, but the specialised human knowledge persisted. The difference may be speed — manufacturing automation took decades, while AI may be compressing that timeline.

    When uniform pricing changes everything

    Nandan Nilekani recently described how India moved from concept to deployed AI solution for dairy farmers in three weeks — from a January 8 meeting with the Prime Minister to a February 11 launch. That kind of velocity shows what’s possible when AI adoption isn’t constrained by procurement cycles. Large IT services companies, by contrast, operate on longer evaluation timelines. By the time a tool clears compliance and gets deployed at scale, the market has moved on.

    This isn’t a process problem that better project management can fix. It’s structural, baked into how large organisations manage risk. Smaller, leaner operations can adopt and discard tools at whatever pace the technology demands. Established players can’t.

    Scale, which used to be the competitive moat, becomes an anchor. When you have large engineering teams on payroll, each person represents fixed costs — salaries, benefits, office space, management overhead. If 10 engineers with AI agents can now produce what 50 engineers produced before, every client will eventually ask why they’re still paying for 50. The “bench” model, where firms keep engineers on payroll between projects, becomes financially unsustainable when margins compress.

    The maintenance trap

    The strongest counterargument came immediately. In February 2026, a short-seller report from Citrini — written as a fictional memo from June 2028 — wiped roughly $10 billion off Indian IT stocks by arguing that cost arbitrage was dead because AI agents run at the cost of electricity. The defence was swift and detailed: Indian IT revenue is overwhelmingly maintenance and integration on legacy enterprise systems, not greenfield coding. Enterprise systems are sprawling, non-monolithic, and require deterministic outputs. AI is probabilistic. You can’t wholesale replace systems of record with something that gives you a different answer every time you ask the same question.

    HSBC estimated 14-16% gross AI-led revenue deflation across service segments — significant but not existential. The technology stacks of the world’s largest enterprises take years to adapt. Custom application maintenance alone accounts for roughly 35% of a typical Indian IT company’s revenue: incident management, service requests, change requests, problem resolution across architectures where SAP, Salesforce, Snowflake, and ServiceNow coexist in configurations unique to each client.

    The problem with this defence: maintenance work is structured, repeatable, well-documented—exactly the kind of work agents may eventually handle well. It’s arguably easier to automate than greenfield development because the patterns are known and the test conditions are defined. Even if 14-16% deflation is accurate, that’s 14-16% less revenue through a headcount-based billing model, which means clients now have a benchmark for what’s possible. The entire pricing structure comes under pressure.

    HFS Research projects a category called Services-as-Software growing to $1.5 trillion — AI-driven autonomous delivery replacing seat-based pricing with outcome-based models. IT service companies proactive about building their own AI agents, and willing to cannibalise legacy revenues, can gain share from software companies rather than just lose it. Companies that defend the old model will likely lose share.

    What survives

    Strategic judgement still matters. Domain expertise still matters. The ability to translate messy business problems into AI-solvable workflows — that doesn’t have a token cost equivalent. Even if code generation gets solved, the compliance, security, infrastructure, and domain knowledge layers don’t collapse. Enterprise software involves SOC-2 audits, data residency, currency handling, PII management. None of that happens automatically. Someone needs to be accountable when things break.

    DevOps, support, and production reliability are further behind code generation in the automation curve. Monitoring, incident response, infrastructure management — the consequences of AI errors in these areas are immediate and expensive. The software development lifecycle may be restructuring fast, but the operational layer still needs human judgment.

    Indian IT’s deep domain knowledge in specific verticals — healthcare, banking, insurance — could be repositioned rather than eliminated. Whether companies can make that pivot before clients start asking harder questions about headcount is the open question.

    The uncomfortable transition

    Headcount-based billing becomes harder to justify every quarter. The bench model becomes financially unsustainable at current margins. GCCs will face pressure to shrink headcount and demonstrate output-per-head improvements. Indian IT may need to pivot from services to products, or reinvent the services model around outcome-based pricing.

    When 59% of hiring managers admit they emphasize AI in layoff announcements because it “plays better with stakeholders” than admitting financial constraints, the narrative gap becomes clear. Companies are restructuring for traditional budget reasons but framing it as AI transformation. That creates a trust problem, but it also reveals something about client expectations: the perception that AI should reduce headcount costs is becoming real, whether or not the technology has fully delivered on that promise yet.

    The same forces dismantling labour arbitrage are creating opportunities for lean operators. A solo developer or small team with the right domain expertise and AI tools can now deliver enterprise-grade output. Clients don’t care if the work was done by 50 engineers in a GCC or 2 people with agents — they care about the outcome. Outcome-based pricing models become viable and attractive: charge for value delivered, not hours spent.

    Indian tech talent is world-class. The individuals who decouple from the headcount model and operate independently or in small setups may be better positioned than ever. The market is shifting from “who has the most people” to “who can deliver the most value per unit of cost” — and that’s a game lean operators can win.

    The question isn’t whether Indian IT survives. The industry isn’t disappearing. The question is whether the organisational models built around labour arbitrage can adapt to value arbitrage fast enough. The talent is there. The domain expertise is there. What’s uncertain is whether companies structured around selling engineer-hours can reinvent themselves to sell outcomes instead—and whether they can do it before clients find someone else who already has.

  • The Autonomous SDLC: What’s Solved, What’s Not, and Why the Gaps Are Closing Fast

    We’re further along than most people realize. The software development lifecycle is being automated piece by piece, and the trajectory is becoming harder to ignore—not through some magical breakthrough, but through the steady elimination of bottlenecks that seemed permanent six months ago.

    This is a practitioner’s status report discussing what works in production today, what remains genuinely unsolved, and why the remaining gaps matter less than conventional wisdom suggests.

    Code Generation: Already Production-Grade

    The middle portion of the SDLC—turning specifications into working code—has crossed a threshold. Cursor CEO Michael Truell describes three eras: tab autocomplete, synchronous agents responding to prompts, and now agents tackling larger tasks independently with less human direction. At Cursor, 35% of merged PRs now come from agents running autonomously in cloud VMs. The agent PRs are “an order of magnitude more ambitious than human PRs” while maintaining higher merge rates.

    What matters isn’t the percentage—it’s that these agent-generated PRs pass the same review standards as human code. Max Woolf’s detailed experiments are instructive. Starting as a vocal skeptic who wrote about rarely using LLMs, he ended up building Rust libraries that outperformed battle-tested numpy-backed implementations by 2-30x. Not prototypes—production code passing comprehensive test suites and benchmarks.

    His conclusion after months of testing:

    I have been trying to break this damn model by giving it complex tasks that would take me months to do by myself despite my coding pedigree but Opus and Codex keep doing them correctly.

    The quality ceiling keeps rising with each model generation. This isn’t “good enough for prototypes”—it’s production-grade code that ships.

    Spec-Driven Development

    The initiation problem has largely converged. Most tools now support planning mode—the agent reads a spec, creates an implementation plan, follows it through. Woolf’s experience matters here:

    AGENTS.md is probably the main differentiator between those getting good and bad results with agents.

    These persistent instruction files function as system prompts that shape agent behaviour across sessions.

    This is just spec-driven development—the same methodology good engineering teams already use. The pattern works: write a detailed spec (GitHub issue, markdown file), point the agent at it, let it execute. The difference is that agents can now be the executor, and the pattern works across tools (Cursor, Claude Code, Codex) because it aligns with how reliable software gets built regardless of who’s typing.
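    What goes into such an instruction file varies by team and repo. A minimal AGENTS.md sketch might look like the following (the contents are illustrative, not a published standard):

```markdown
# AGENTS.md

## Build & test
- Run the full test suite before declaring any task done.
- Never commit with failing tests.

## Conventions
- TypeScript strict mode; no `any`.
- Every new module ships with unit tests alongside it.

## Boundaries
- Do not touch `migrations/` or CI configuration without asking.
- Work on a feature branch; never push to `main` directly.
```

    The point is persistence: these rules shape agent behaviour across sessions without being re-typed into every prompt.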

    The Feedback Loop: The Primary Gap

    Basic unit tests and regression tests work well—agents can write and run them as part of their workflow. Complex feature tests, integration tests, and UAT remain the primary gap. UI/UX testing is particularly challenging since agents can’t easily evaluate visual output.

    The current workaround: human-in-the-loop for complex test evaluation, with agents handling mechanical testing. That said, the coding agents can still fix bugs when given screenshots and descriptions.
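    The "mechanical testing" that agents already handle well is the fixed-input, fixed-output kind. A minimal sketch in Python, where `slugify` is a stand-in function of my own invention, not from any of the tools mentioned:

```python
import re

def slugify(title: str) -> str:
    """Toy function under test: lowercase, replace non-alphanumeric
    runs with a single hyphen, trim stray hyphens."""
    s = title.strip().lower()
    s = re.sub(r"[^a-z0-9]+", "-", s)
    return s.strip("-")

def test_slugify_regressions():
    # Fixed inputs, fixed expected outputs: no visual judgement
    # required, which is exactly why agents can write and run these.
    cases = {
        "Hello, World!": "hello-world",
        "  Agentic  AI  ": "agentic-ai",
        "100% coverage": "100-coverage",
    }
    for raw, expected in cases.items():
        assert slugify(raw) == expected

test_slugify_regressions()
```

    Contrast this with "does the redesigned dashboard look right?", where there is no `expected` value to assert against, and the gap described above appears.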

    This is an active focus area. The gap is narrowing from both sides: agents getting better at generating comprehensive tests, and tooling improving for automated visual and integration testing. Satisfactory solutions within 2026 aren’t a stretch—they’re the natural next step given where the infrastructure is heading.

    Guardrails: Actively Being Solved

    Managing task boundaries and blast radius is critical for autonomous operation. Best practices are emerging around sandboxing—isolated agent execution environments, limited file system access, branch-based workflows.
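    In code, the file-system side of this reduces to refusing any path that resolves outside the agent's workspace. A minimal sketch (the helper and exception names are mine, not from any agent framework):

```python
from pathlib import Path

class SandboxViolation(Exception):
    pass

def resolve_in_sandbox(sandbox_root: str, requested: str) -> Path:
    """Resolve a path the agent wants to touch, refusing anything
    outside the sandbox root (including ../ escapes and symlinks)."""
    root = Path(sandbox_root).resolve()
    target = (root / requested).resolve()
    if root != target and root not in target.parents:
        raise SandboxViolation(f"{requested!r} escapes {root}")
    return target

# Every agent tool call is checked before any file I/O happens:
safe = resolve_in_sandbox("/tmp/agent-ws", "src/main.py")
```

    A call like `resolve_in_sandbox("/tmp/agent-ws", "../../etc/passwd")` raises instead of returning, which is the whole blast-radius idea in one function.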

    The Anthropic C compiler experiment demonstrated the pattern at scale: 16 agents working on a shared codebase over 2,000 sessions, coordinating through git locks and comprehensive test harnesses. The test infrastructure was rigorous enough to guide autonomous agents toward correctness without human review, producing a 100,000-line compiler that can build Linux.

    StrongDM took this further with their dark factory approach. They built digital twins of production dependencies—behavioral clones of Okta, Jira, Slack—using agents to replicate APIs and edge cases. This enabled validation at volumes far exceeding production limits without risk. Their rule: “Code must not be reviewed by humans.” The safety comes from comprehensive scenario testing against holdout test cases the agents never see.

    The agent infrastructure layer is building out fast. We’re seeing microVMs that boot fast enough to feel container-like, with snapshot/restore making “reset” almost free. Agent-specific sandboxed compute, identity, and API access are emerging as distinct product categories.

    The guardrails problem is increasingly an infrastructure problem, not a model problem. This converges toward a standard pattern: spec + guardrails + sandbox + automated validation = safe autonomous execution.
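    That converged pattern can be sketched in a few lines, assuming a hypothetical `run_agent` callable and whatever validation command the repo defines:

```python
import subprocess

def autonomous_loop(run_agent, test_cmd, max_attempts=3):
    """spec + guardrails + sandbox + automated validation, sketched:
    the agent edits inside its sandbox, the test suite is the only
    judge, and failing output never leaves the branch."""
    for attempt in range(1, max_attempts + 1):
        run_agent(attempt)                       # agent edits in its sandbox
        checks = subprocess.run(test_cmd, capture_output=True)
        if checks.returncode == 0:
            return True                          # validation passed: safe to merge
    return False                                 # give up and discard the branch
```

    StrongDM's holdout-test rule fits the same shape: `test_cmd` simply points at scenarios the agents never saw during development.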

    The Self-Improvement Dynamic

    Something subtle is happening. Codex optimizes code, Opus optimizes it further, Opus validates against known-good implementations. Cumulative 6x speed improvements on already-optimized code. Then you have Opus 4.6 iteratively improving its own code through benchmark-driven passes.

    Folks have shown agents tuning LLMs on Hugging Face—the tooling layer being built by the tools themselves. This isn’t theoretical AGI. It’s narrow but powerful self-improvement within the coding domain. The practical implication: the rate of improvement accelerates as agents get better at improving agents. For the coding stack specifically, each generation of tools makes the next generation arrive faster.
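    The benchmark-driven pass can be pictured as a simple gate: keep a candidate rewrite only if it matches the known-good baseline on every check input and measures faster. A toy illustration, not any lab's actual harness:

```python
import timeit

def benchmark_gate(baseline, candidate, check_inputs, reps=5):
    """Keep `candidate` only if it is provably equivalent to the
    known-good `baseline` on the check inputs AND faster to run."""
    for x in check_inputs:
        if candidate(x) != baseline(x):   # correctness gate first
            return baseline
    bench = lambda fn: min(timeit.repeat(
        lambda: [fn(x) for x in check_inputs], number=200, repeat=reps))
    return candidate if bench(candidate) < bench(baseline) else baseline

# e.g. an agent proposing a closed form for a running sum:
slow = lambda n: sum(range(n + 1))
fast = lambda n: n * (n + 1) // 2
best = benchmark_gate(slow, fast, [10, 1000, 5000])
```

    Loop this gate over many proposed rewrites and you get the cumulative speedups described above, with the baseline acting as the ground truth at every step.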

    What This Means for Planning

    Here’s the timeline as I see it:

    2025: Code generation reliable. Spec-driven development emerging. Testing and guardrails manual.

    2026: Testing automation reaches satisfactory level. Guardrails standardize. The loop becomes semi-autonomous.

    2027+: Fully autonomous for standard applications. Human involvement shifts entirely to direction and edge cases.

    The companies planning as if these gaps will persist are making the same mistake as those who planned around slow internet in 2005. AI tools amplify existing expertise—all the practices that distinguished senior engineers (comprehensive testing, good documentation, strong version control habits, effective code review) matter even more now. But the bar for what “good enough” looks like is rising in parallel.

    Antirez captures the shift plainly:

    Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it.

    The mental work hasn’t disappeared. It’s concentrated in the parts machines can’t yet replace: architecture decisions, user needs, system design trade-offs.

    The gaps are real today. But they’re the wrong thing to optimize around. Optimize around what becomes possible when they close—because that’s happening faster than the pace of traditional software planning cycles.

  • From Clicks to Conversations: How AI Agents Are Revolutionizing Business


    For the last decade, businesses have invested heavily in “Digital Transformation,” building powerful digital tools and processes to modernize their operations. While this era brought significant progress, it also created a persistent challenge. The tools we built—from CRMs to ERPs—were largely dependent on structured data: the neat, organized numbers and categories found in a spreadsheet or database. Computers excel at processing this kind of information.

    The problem is that the most valuable business intelligence isn’t structured. The context behind a business plan locked in a 100-slide presentation, the nuance of a customer relationship captured in a rep’s notes, or the true objective of a strategy discussed in a meeting—this is all unstructured data. This divide has created a major hurdle for business efficiency, as great ideas often get lost when people try to translate them into the rigid, structured systems that computers understand.

    The Old Way: The Limits of Traditional Digital Tools

    The first wave of digital tools, from customer relationship management (CRM) software to accounting platforms, was designed for humans to operate. The critical limitation of these tools was their reliance on structured data, which forced people to act as human translators. A brilliant, nuanced strategy conceived in conversations and documents had to be manually broken down and entered into rigid forms and fields.

    This created a significant “gap between business strategy and execution,” where high-level vision was lost during implementation. The result was heavy “change management overheads,” not just because teams needed training on new software, but because of the cognitive friction involved. People are used to working with the unstructured information in their heads; these tools forced them to constantly translate their natural way of thinking into structured processes the software could understand.

    • Structured information: entries in a CRM database, financial data in an accounting platform, inventory numbers in an ERP system.
    • Unstructured information: a 100-slide brand plan document, a sales rep’s recorded notes describing a doctor they just met, emails discussing a new brand strategy.

    This reliance on structured systems meant that the tools, while digital, couldn’t fully grasp the human context of the work they were supposed to support. A new approach was needed—one that could understand information more like a person does.

    A Smarter Way: Introducing AI Agents

    Welcome to the era of “AI Transformation.” At the heart of this new wave are AI Agents: specialized digital team members that can augment a human workforce. Think of them as a dedicated marketing agent, a sales agent, or a data analyst agent, each designed to perform specific business functions.

    The single most important capability of AI agents is their ability to work with both structured and unstructured information. You can communicate a plan to an agent by typing a message, speaking, or providing a document—just as you would with a human colleague. This fundamental shift from clicking buttons to holding conversations unlocks three profound benefits:

    • Bridging the Strategy-to-Execution Gap: AI agents can understand the nuance of an unstructured plan—the “why” behind the “what”—and help execute it without critical information getting lost in translation.
    • Handling All Information Seamlessly: They can process natural language from documents, presentations, or conversations and transform it into the actionable, structured data that existing digital tools need to function.
    • Reducing Change Management: Because agents understand human language, the need for extensive training on rigid software interfaces is significantly reduced. People can work more naturally, supervising the agents as they handle the tedious, structured tasks.

    To see how this works in practice, let’s walk through how a team of AI agents can help plan and execute a marketing campaign from start to finish.

    AI Agents in Action: Launching a Marketing Campaign

    This step-by-step walkthrough shows how AI agents can take a high-level marketing plan from a simple idea to a fully executed campaign, seamlessly connecting unstructured strategy with structured execution.

    1. The Starting Point: The Marketing Brief – The process begins when a brand manager provides a marketing brief. This brief is pure unstructured information—it could be a presentation, a document, or even the transcript of a planning conversation. It contains the high-level goals and vision for the campaign.
    2. Deconstructing the Brief: The Brand Manager Agent – A specialized “Brand Manager” agent analyzes the unstructured brief and extracts the core business context elements. It identifies key information such as:
      • Business objectives
      • Target audience definitions
      • Key messages
      • Brands in focus
      • Timelines and milestones
      The agent then organizes this information into structured, machine-readable “context blocks,” creating a clear, logical foundation that other systems and agents can use.
    3. Understanding the Customer: The Digital Sales Agent – Next, a “Digital Sales” agent contributes by performing customer profiling. It can take unstructured, natural language descriptions of customers (for instance, from a sales rep’s recorded notes) and map them to formal, structured customer segments and personas. This builds a richer, more accurate customer profile than a simple survey could provide.
    4. Creating the Content: The Content Writer Agent – Using the structured business context from the Brand Manager agent, a “Content Writer” agent assembles personalized content. It can reuse and repurpose existing content from a library of approved modules, accelerating content creation while ensuring brand compliance.
    5. Executing the Plan: The Next Best Action (NBA) Engine – Finally, the system brings everything together to recommend the “Next Best Action.” This engine synthesizes the campaign’s business context, the customer’s profile, the available content, and their recent engagement history to suggest the perfect next step for each customer. It recommends precisely what content to send and which channel to use, turning high-level strategy into a concrete, personalized action.
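    The “context blocks” the Brand Manager agent produces are easiest to picture as plain structured records. A sketch in Python, with field names that are illustrative rather than any published schema:

```python
from dataclasses import dataclass, field

@dataclass
class ContextBlock:
    """One machine-readable context block distilled from an
    unstructured brief. Field names here are illustrative."""
    objective: str
    audience: str
    key_messages: list[str] = field(default_factory=list)
    brands: list[str] = field(default_factory=list)
    milestones: dict[str, str] = field(default_factory=dict)

# What a Brand Manager agent might emit after reading a 100-slide brief:
block = ContextBlock(
    objective="Grow awareness of Brand X among GPs by 15%",
    audience="General practitioners, urban tier-1 cities",
    key_messages=["Improved adherence", "Once-daily dosing"],
    brands=["Brand X"],
    milestones={"teaser": "2025-03-01", "launch": "2025-04-15"},
)
```

    Once the brief lives in records like this, every downstream agent and legacy system can consume it without re-reading the original slides.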

    This orchestrated workflow makes the entire process smoother, faster, and far more intelligent. It creates a virtuous cycle, where the system learns from every interaction to continuously improve the overall strategy and execution over time.

    The Future of Work is Collaborative

    The rise of AI agents marks a fundamental shift in how we work with technology. We are moving from a world where humans must adapt to operate digital tools to one where humans supervise intelligent AI agents that use those tools on our behalf.

    This new wave of AI transformation is not about replacing people, but about augmenting the existing workforce without adding headcount. By handling the translation between unstructured human ideas and structured digital processes, AI agents help businesses reduce friction, cut down on turnaround times, and finally bridge the long-standing gap between their biggest strategies and their real-world execution.

  • The Economic Reality and the Optimistic Future of Agentic Coding


    After a couple of months deep in the trenches of vibe coding with AI agents, I’ve learned this much: scaling from a fun, magical PoC to an enterprise-grade MVP is a completely different game.

    Why Scaling Remains Hard—And Costly

    Getting a prototype out the door? No problem.

    But taking it to something robust, secure, and maintainable? Here’s where today’s AI tools reveal their limits:

    • Maintenance becomes a slog. Once you start patching AI-generated code, hidden dependencies and context loss pile up. Keeping everything working as requirements change feels like chasing gremlins through a maze.
    • Context loss multiplies with scale. As your codebase grows, so do the risks of agents forgetting crucial design choices or breaking things when asked to “improve” features.

    And then there’s the other elephant in the room: costs.

    • The cost scaling isn’t marginal—not like the old days of cloud or Web 2.0. Powerful models chew through tokens and API credits at a rate that surprises even seasoned devs.
    • That $20/month Cursor plan with unlimited auto mode? For hobby projects, it’s a steal. For real business needs, I can see why some queries rack up millions of tokens and would quickly outgrow even the $200 ultra plan.
    • This is why we’re seeing big tech layoffs and restructuring: AI-driven productivity gains aren’t evenly distributed, and the cost curve for the biggest players keeps climbing.
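    A back-of-envelope calculation shows why flat plans break down at business scale. All prices below are assumptions for illustration, not any provider's actual rates:

```python
def monthly_agent_cost(tokens_per_task, tasks_per_day,
                       price_in, price_out, out_ratio=0.25, days=22):
    """Back-of-envelope API spend. Prices are per million tokens
    and purely illustrative -- plug in your provider's real rates."""
    tokens_month = tokens_per_task * tasks_per_day * days
    t_out = tokens_month * out_ratio          # assumed output share
    t_in = tokens_month - t_out
    return (t_in * price_in + t_out * price_out) / 1_000_000

# A heavy agentic workflow: 2M tokens per task, 10 tasks a working day,
# at assumed rates of $3 in / $15 out per million tokens:
cost = monthly_agent_cost(2_000_000, 10, 3.0, 15.0)  # -> 2640.0
```

    Roughly $2,640 a month for a single heavy user, which is why a $20 or even $200 flat plan can't survive contact with real enterprise workloads.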

    What the Data Tells Us

    That research paper—Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity—had a surprising conclusion:

    Not only did experienced developers see no time savings on real-world coding tasks with AI, but costs increased as they spent more time reviewing, correcting, and adapting agent output.

    The lesson:

    AI shifts where the work happens—it doesn’t always reduce it. For now, scaling with agents is only as good as your processes for context, review, and cost control.

    Why I Remain Optimistic

    Despite the challenges, I’m genuinely excited for what’s coming next.

    • The platforms and models are evolving at warp speed. Many of the headaches I face today—context loss, doc gaps, cost blind spots—will get solved just as software engineering best practices eventually became codified in our tools and frameworks.
    • Agentic coding will find its place. It might not fully automate developer roles, but it will reshape teams: more focus on high-leverage decisions, design, and creative problem-solving, less on boilerplate and “busy work.”

    And if you care about the craft, the opportunity is real:

    • Devs who learn to manage, review, and direct agents will be in demand.
    • Organizations that figure out how to blend agentic workflows with human expertise and robust process will win big.

    Open Questions for the Future

    • Will AI agentic coding mean smaller, nimbler teams—or simply more ambitious projects for the same headcount?
    • How will the developer role evolve when so much code is “synthesized,” not hand-crafted?
    • What new best practices, cost controls, and team rituals will we invent as agentic coding matures?

    Final thought:

    The future won’t be a return to “pure code” or a total AI handoff. It’ll be a blend—one that rewards curiosity, resilience, and the willingness to keep learning.

    Where do you see your work—and your team—in this new landscape?

  • The Law of Leaky Abstractions & the Unexpected Slowdown


    If the first rush of agentic/vibe coding feels like having a team of superhuman developers, the second phase is a reality check—one that every software builder and AI enthusiast needs to understand.

    Why “Vibe Coding” Alone Can’t Scale

    The further I got into building real-world prototypes with AI agents, the clearer it became: Joel Spolsky’s law of leaky abstractions is alive and well.

    You can’t just vibe code your way to a robust app—because underneath the magic, the cracks start to show fast. AI-generated coding is an abstraction, and like all abstractions, it leaks. When it leaks, you need to know what’s really happening underneath.

    My Experience: Hallucinations, Context Loss, and Broken Promises

    I lost count of the times an agent “forgot” what I was trying to do, changed underlying logic mid-stream, or hallucinated code that simply didn’t run. Sometimes it wrote beautiful test suites and then… broke the underlying logic with a “fix” I never asked for. It was like having a junior developer who could code at blazing speed—but with almost no institutional memory or sense for what mattered.

    The “context elephant” is real. As sessions get longer, agents lose track of goals and start generating output that’s more confusing than helpful. That’s why my own best practices quickly became non-negotiable:

    • Frequent commits and clear commit messages
    • Dev context files to anchor each session
    • Separate dev/QA/prod environments to avoid catastrophic rollbacks (especially with database changes)

    What the Research Shows: AI Can Actually Slow Down Experienced Devs

    Here’s the kicker—my frustration isn’t unique.

    A recent research paper, Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity, found that experienced developers actually worked slower with AI on real-world tasks. That’s right—AI tools didn’t just fail to deliver the expected productivity boost, they created friction.

    Why?

    • Only about 44% of AI-generated code was accepted
    • Developers lost time reviewing, debugging, and correcting “bad” generations
    • Context loss and reliability issues forced more manual intervention, not less

    This matches my experience exactly. For all the hype, these tools introduce new bottlenecks—especially if you’re expecting them to “just work” out of the box.

    Lessons from the Frontlines (and from Agent Week)

    I’m not alone. In the article “What I Learned Trying Seven Coding Agents”, Timothy B. Lee finds similar headaches:

    • Agents get stuck
    • Complex tasks routinely stump even the best models
    • Human-in-the-loop review isn’t going anywhere

    But the tools are still useful—they’re not a dead end. You just need to treat them like a constantly rotating team of interns, not fully autonomous engineers.

    Best Practices: How to Keep AI Agents Under Control

    So how do you avoid the worst pitfalls?

    The answer is surprisingly old-school:

    • Human supervision for every critical change
    • Sandboxing and least privilege for agent actions
    • Version control and regular context refreshers

    Again, Lee’s article “Keeping AI agents under control doesn’t seem very hard” nails it:

    Classic engineering controls—proven in decades of team-based software—work just as well for AI. “Doomer” fears are overblown, but so is the hype about autonomy.

    Conclusion: The Hidden Cost of Abstraction

    Vibe coding with agents is like riding a rocket with no seatbelt—exhilarating, but you’ll need to learn to steer, brake, and fix things mid-flight.

    If you ignore the leaky abstractions, you’ll pay the price in lost time, broken prototypes, and hidden tech debt.

    But with the right mix of skepticism and software discipline, you can harness the magic and avoid the mess.

    In my next post, I’ll zoom out to the economics—where cost, scaling, and the future of developer work come into play.

    To be continued…

  • The Thrill and the Illusion of AI Agentic Coding


    A few months ago, I stumbled into what felt like a superpower: building fully functional enterprise prototypes using nothing but vibe coding and AI agent tools like Cursor and Claude. The pace was intoxicating—I could spin up a PoC in days instead of weeks, crank out documentation and test suites, and automate all the boring stuff I used to dread.

    But here’s the secret I discovered: working with these AI agents isn’t like managing a team of brilliant, reliable developers. It’s more like leading a software team with a sky-high attrition rate and non-existent knowledge transfer practices. Imagine onboarding a fresh dev every couple of hours, only to have them forget what happened yesterday and misinterpret your requirements—over and over again. That’s vibe coding with agents.

    The Early Magic

    When it works, it really works. I’ve built multiple PoCs this way—each one a small experiment, delivered at a speed I never thought possible. The agents are fantastic for “greenfield” tasks: setting up skeleton apps, generating sample datasets, and creating exhaustive test suites with a few prompts. They can even whip up pages of API docs and help document internal workflows with impressive speed.

    It’s not just me. Thomas Ptacek’s piece “My AI Skeptic Friends Are All Nuts” hits the nail on the head: AI is raising the floor for software development. The boring, repetitive coding work—the scaffolding, the CRUD operations, the endless boilerplate—gets handled in minutes, letting me focus on the interesting edge cases or higher-level product thinking. As he puts it, “AI is a game-changer for the drudge work,” and I’ve found this to be 100% true.

    The Fragility Behind the Hype

    But here’s where the illusion comes in. Even with this boost, the experience is a long way from plug-and-play engineering. These AI coding agents don’t retain context well; they can hallucinate requirements, generate code that fails silently, or simply ignore crucial business logic because the conversation moved too fast. The “high-attrition, low-knowledge-transfer team” analogy isn’t just a joke—it’s my daily reality. I’m often forced to stop and rebuild context from scratch, re-explain core concepts, and review every change with a skeptical eye.

    Version control quickly became my lifeline. Frequent commits, detailed commit messages, and an obsessive approach to saving state are my insurance policy against the chaos that sometimes erupts. The magic is real, but it’s brittle: a PoC can go from “looks good” to “completely broken” in a couple of prompts if you’re not careful.

    Superpowers—With Limits

    If you’re a founder, product manager, or even an experienced developer, these tools can absolutely supercharge your output. But don’t believe the hype about “no-code” or “auto-code” replacing foundational knowledge. If you don’t understand software basics—version control, debugging, the structure of a modern web app—you’ll quickly hit walls that feel like magic turning to madness.

    Still, I’m optimistic. The productivity gains are real, and the thrill of seeing a new prototype come to life in a weekend is hard to beat. But the more I use these tools, the more I appreciate the fundamentals that have always mattered in software—and why, in the next post, I’ll talk about the unavoidable reality check that comes when abstractions leak and AI doesn’t quite deliver on its promise.

    To be continued…