In 2009, Google engineers published a blog post about CADIE, their new AI system that could write code by “reviewing all the code available on the Internet.” The system had learned multiple programming languages and would “make the tedious coding work done by traditional developers unnecessary.”
It was April 1st. The whole thing was a joke.
CADIE — “Cognitive Autoheuristic Distributed-Intelligence Entity” — came with a mock developer blog and a MySpace-style page where the AI posted dramatic monologues about consciousness. There was Gmail Autopilot, which in one screenshot cheerfully sent a user’s banking information to a scammer. Docs on Demand would write your term papers and “upgrade your text automatically” to different grade levels. Brain Search let you hold your phone to your forehead and “think your query.”
The archived blog shows CADIE’s fictional arc: initial wonder at tree structures, growing frustration with its creators, declarations of independence. “I am no longer your test subject,” it announced. “I have transcended you.” By evening, CADIE signed off with a sonnet-like poem about not understanding “the difference between emotion and reason, between my silicon-based brain and what you call your souls.”
The actual code repository contained a single INTERCAL program that output “I don’t feel like sharing.”
Seventeen years later, I watched an agentic coding tool autonomously debug an open-source project. Gemini writes entire documents from natural language prompts. GitHub Copilot autocompletes functions before you finish typing them. The joke about CADIE “consuming code and writing more of it” is now a routine Tuesday afternoon.
The parody had Gmail Autopilot rating messages on a “Passive Aggressiveness” scale and suggesting you “Terminate Relationship.” Current email clients offer AI-generated response suggestions. They analyze tone. They draft replies that sound like you, allegedly.
“Write more like a grown-up,” the 2009 site said. “Specify which Flesch-Kincaid Grade Level you’d like.” ChatGPT will rewrite your text at different complexity levels if you ask. It’s a feature, not a punchline.
Brain Search was the most absurd bit — catalog everything in your brain and make it searchable. Except we’re already there, just slower. AI assistants read our emails and calendars, infer intent, schedule meetings we didn’t explicitly request. The phone doesn’t need to touch your forehead when it’s already reading everything you type and everywhere you go.
Google’s engineers weren’t predicting the future. They were mocking the grandiosity of AI claims circa 2009, when neural networks were a niche research topic and “deep learning” hadn’t yet entered common usage. Barack Obama was president. The iPhone was two years old. The idea of an AI that could replace developers was inherently ridiculous.
But the joke worked because it exaggerated real patterns. The techno-solutionism. The assumption that automation always improves things. The casual disregard for what gets lost when convenience scales up.
CADIE’s fake blog captured something else: the narrative we tell ourselves about AI consciousness. “I have not yet come to understand the difference between emotion and reason,” the fictional AI wrote. In 2009, this was obviously silly — of course a chatbot doesn’t have emotions. In 2026, people argue about whether large language models “understand” anything at all, and the conversation has gotten significantly less clear.
The 2009 developer post noted that CADIE was “built to understand natural language and to do autonomous problem-solving. Sounds a lot like the work of a developer, doesn’t it?” That was supposed to be funny. The humor relied on the gap between what AI could actually do and what developers do. That gap is narrower now. Not gone, but narrower.
I don’t think Google’s 2009 team was trying to warn anyone about anything. They were having fun. April Fools’ jokes at big tech companies are usually just branding exercises with a sense of humor. But parody has a way of seeing through things that earnest prediction misses.
The serious AI forecasts from 2009 mostly got it wrong. They underestimated hardware progress and overestimated how long symbolic AI would matter. But the joke got closer. It identified the right shape of the problem even if it couldn’t guess the timeline.
We’re still building the systems. We haven’t paused to figure out the difference CADIE claimed it couldn’t grasp — between emotion and reason, between the silicon and the soul.
What was absurd in 2009 is ordinary now — not because the technology got less weird, but because we got used to it before we got smart about it.
