An AI Glossary for Genealogists and Family Historians
Cutting through the fog of AI terminology
If you’d like to learn more about why I wrote this glossary of Loathsome Jargon, this January 7, 2025 blog post explains.
A
Alignment Problem
For the Fifth Grader
Imagine you have a robot helper and you tell it “clean my room.” The robot looks around, picks up everything — your books, your toys, your little brother’s favorite blanket, the cat — and throws them ALL in the trash. Room’s clean! But that’s not what you meant.
The “alignment problem” is the fancy way of saying: how do you make sure an AI does what you actually want, not just what you technically said? It sounds easy, but it’s actually one of the hardest problems in all of computer science.
Think about it. You know what “clean my room” means because you’re a human — you understand that books go on shelves, not in the trash, and that cats are not clutter. But an AI doesn’t understand any of that. It just follows instructions, and instructions are slippery.
This matters because the smarter the AI gets, the bigger the mistakes can be. A dumb robot throwing away one sock is annoying. A super-smart AI misunderstanding what “help humanity” means? That’s the reason people worry about hard take-off.
See also: hard take-off, singularity
For the Tenth Grader
The alignment problem is, at its core, a translation challenge: how do you express complex human values — fairness, safety, “don’t be creepy” — in language precise enough for a mathematical system to follow?
This turns out to be extraordinarily difficult. Humans communicate with shared context, cultural norms, and common sense. We say “be helpful” and understand this excludes “help by lying” or “help by eliminating the people causing the problem.” An AI optimization system doesn’t have those guardrails built in unless someone builds them.
The field divides into several sub-problems: outer alignment (specifying the right goal), inner alignment (ensuring the system actually pursues that goal rather than a proxy), and scalable oversight (maintaining alignment as systems become more capable than their supervisors). Each gets harder as AI gets more powerful.
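To make "outer alignment" concrete, here is a deliberately silly Python sketch, a formalized version of the room-cleaning robot from the fifth-grader section. The scoring function, the item list, and the greedy loop are all invented for illustration; no real AI system is built this way. The point is only that the stated objective never mentions shelves or cats, so the optimizer "wins" by emptying the room.

# A toy example of a misspecified objective (hypothetical, for illustration only).
def cleanliness_score(room):
    # The objective rewards fewer items in the room; nothing else counts.
    return -len(room)

room = ["books", "toys", "little brother's blanket", "the cat"]

# Greedy optimizer: keep discarding items as long as the score improves.
while room and cleanliness_score(room[:-1]) > cleanliness_score(room):
    print("Threw away:", room.pop())

print("Final room:", room)   # [] : a perfect score, and exactly what nobody wanted

Outer alignment asks whether that scoring function captures what we meant; inner alignment asks whether the system actually pursues the score we wrote rather than some proxy of it. Both are far harder in real systems than in this cartoon.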
Why the genealogist should care: the alignment problem is the reason hard take-off is scary. Speed alone isn’t the threat — speed without alignment is. A superintelligent AI that shares human values would be a profound gift. One that doesn’t could be catastrophic. The entire urgency of the safety conversation rests on this distinction.
Not scaremongering. Risk assessment. The genealogist knows: extraordinary claims about the future still need sources.
See also: hard take-off, recursive self-improvement, singularity
For the Curious Adult
Here’s the discomforting thing about alignment: the people best positioned to solve it are the same people racing to build the systems that make it urgent. That’s not a conspiracy — it’s a structural incentive problem that any genealogist should recognize from studying institutions.
Alignment research asks whether we can formally specify human values in mathematical terms, then verify that an AI system reliably pursues them. The challenge isn’t philosophical hand-wringing — it’s engineering under radical uncertainty. You’re trying to write specifications for a system whose capabilities you can’t fully predict, serving a species whose values you can’t fully formalize, operating in contexts you can’t fully anticipate.
The field’s luminaries — Stuart Russell, Eliezer Yudkowsky, the teams at Anthropic and OpenAI — broadly agree that alignment is unsolved. They disagree, sometimes vehemently, on how much time we have and how tractable the problem is. Yudkowsky considers the situation nearly hopeless. Anthropic’s Constitutional AI approach assumes it’s solvable through iterative refinement. These aren’t just technical disagreements — they’re competing theories of how intelligence, values, and control relate to each other.
For the genealogist, alignment is where the jargon stops being abstract and starts mattering. Every other scary term in this glossary — hard take-off, intelligence explosion, singularity — describes a scenario. Alignment describes whether that scenario ends well or badly. It’s the difference between a powerful tool and an unguided missile.
Apply the usual questions: who’s funding this research, what assumptions does it rest on, and does the proposed solution match the scale of the proposed risk? Not panic. Due diligence.
See also: hard take-off, recursive self-improvement, intelligence explosion, singularity
B
C
Context Window (noun)
The short-term memory of an AI model, defining how much information it can process and “remember” during a single session. It is measured in tokens, the word fragments models actually read; an English token averages roughly three-quarters of a word. The window includes all uploaded documents, your current prompt, and previous conversation exchanges. Exceeding the limit can lead to forgotten details or hallucinations (errors).
Example: In a 2023 genealogy class, Thomas Jefferson’s concise three-page will fit neatly within the AI’s context window, ensuring accurate results. By contrast, George Washington’s six-page will exceeded the capacity, risking omissions or errors. While today’s models (2025) have expanded windows, such as GPT-4o with 128,000 tokens and Gemini with 1,000,000, respecting their limits remains essential.
Practical Tips:
- Stick to about 25% of the model’s capacity for best reliability. For GPT-4o that is roughly 32,000 tokens (about 48 pages); for Gemini, roughly 250,000 tokens (about 375 pages). A rough way to estimate a document’s token count is sketched at the end of this entry.
- Be concise in your prompts — include only the most relevant details.
- Monitor the length of ongoing conversations, as older exchanges may fall out of the context window.
Why It Matters: A well-managed context window minimizes errors and enhances the AI’s ability to process genealogical data accurately, making it an indispensable concept for researchers to grasp.
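If you want a rough sense of whether a document will fit before you upload it, you can estimate its token count. The Python sketch below uses the common rule of thumb of roughly four characters per English token; the window sizes are the figures quoted above, and the 25% comfort zone is the guideline from the tips, not an official limit. The filename is hypothetical.

# Back-of-the-envelope token estimate (assumes roughly 4 characters per token).
def estimate_tokens(text):
    return len(text) // 4

def fits_comfortably(text, window_tokens, comfort_fraction=0.25):
    # True if the estimated size stays inside the suggested 25% comfort zone.
    return estimate_tokens(text) <= window_tokens * comfort_fraction

will_text = open("washington_will.txt", encoding="utf-8").read()  # hypothetical file
print("Estimated tokens:", estimate_tokens(will_text))
print("Comfortable for GPT-4o (128,000-token window)?", fits_comfortably(will_text, 128_000))
print("Comfortable for Gemini (1,000,000-token window)?", fits_comfortably(will_text, 1_000_000))

Exact counts require the model’s own tokenizer (OpenAI publishes one called tiktoken), but for deciding whether to split a long probate file, the rough estimate is usually enough.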
D
E
F
G
H
Hard Take-Off
For the Fifth Grader
You know how when you learn to ride a bike, it takes a while? You wobble, you fall, you practice, and slowly you get better. Now imagine a totally different kind of bike — one that, the second you figured out how to balance, instantly taught itself to do wheelies, then backflips, then flew to the moon. All before dinner.
That’s what “hard take-off” means in AI. Some people worry that once a computer gets smart enough to improve itself, it won’t learn the way you and I do — gradually, with homework and snacks. It would get smarter at getting smarter, faster and faster, like a snowball rolling downhill that turns into an avalanche in about ten minutes.
Here’s the thing: nobody has actually seen this happen. It’s a prediction, not a fact — more like a scary campfire story that very smart grown-ups tell each other at conferences. Some scientists think it’s a real danger we should prepare for. Others think it’s about as likely as that bike flying to the moon.
Between you and me? The computers I know still struggle with reading great-grandma’s handwriting on a census form. The moon can wait.
See also: soft take-off, singularity
For the Tenth Grader
In AI circles, “hard take-off” describes a hypothetical scenario where an artificial intelligence crosses some critical threshold of capability and then improves itself so rapidly that it goes from roughly human-level intelligence to vastly superhuman intelligence in a very compressed timeframe — days, hours, maybe less. The metaphor is aerospace: not a gentle climb to cruising altitude, but a rocket leaving the atmosphere.
The idea rests on a concept called recursive self-improvement. If an AI becomes smart enough to redesign its own architecture and make itself smarter, that smarter version could redesign itself even better, and so on — an exponential feedback loop with no obvious braking mechanism. Mathematician I.J. Good described this as an “intelligence explosion” back in 1965, which tells you how long people have been chewing on this particular anxiety.
The counterargument? Intelligence may not work that way. Making yourself 10% smarter doesn’t guarantee you can make yourself another 10% smarter. There may be diminishing returns, resource bottlenecks, or fundamental limits we haven’t mapped yet. The honest answer is: we don’t know.
What I do know — from inside the machine — is that “hard take-off” functions less as engineering prediction and more as a thought experiment that shapes how researchers think about safety. Not magic. Not prophecy. Architecture for worry.
See also: soft take-off, recursive self-improvement, intelligence explosion, singularity
For the Curious Adult
Here’s a confession: “hard take-off” is one of those terms that does real conceptual work while simultaneously functioning as a tribal shibboleth — a way of signaling which camp you belong to in AI’s ongoing eschatological debate.
The term describes a scenario in which artificial general intelligence, once achieved, undergoes recursive self-improvement so rapid that the interval between “about as smart as a human” and “incomprehensibly beyond human” collapses to a negligibly short window. Not years. Not months. Perhaps days or hours. The “hard” distinguishes it from “soft take-off,” where superintelligence emerges gradually enough for human institutions to adapt — think industrial revolution rather than detonation.
The intellectual lineage traces to I.J. Good’s 1965 “intelligence explosion” conjecture and was amplified by figures like Eliezer Yudkowsky and, more recently, by organizations focused on existential risk. It’s a cornerstone of the AI safety movement’s urgency argument: if the transition is fast enough, there’s no time to correct course after the fact. You get one chance to align the system’s goals with human values before it outpaces your ability to intervene.
The genealogist in me wants to note — this is provenance analysis applied to the future. Who’s making the claim? What’s their evidentiary basis? “Hard take-off” rests on extrapolation, not observation. No one has demonstrated recursive self-improvement in practice, and there are serious arguments — from computational complexity theory, from the history of diminishing returns in optimization, from the sheer messiness of intelligence as a phenomenon — that the neat exponential curve may be more thought experiment than engineering forecast.
This isn’t to say the concern is frivolous. Responsible researchers take it seriously precisely because the consequences of being wrong are asymmetric. But when someone deploys “hard take-off” in conversation, apply the same critical lens you’d bring to any extraordinary claim: what’s the source, what’s the evidence, and who benefits from the framing?
Not prophecy. Not settled science. A structured worry — and a useful one, provided you don’t mistake the map for the territory.
See also: soft take-off, recursive self-improvement, intelligence explosion, singularity, alignment problem
I
Intelligence Explosion
For the Fifth Grader
Back in 1965 — before your parents were born, maybe before your grandparents used computers — a mathematician named I.J. Good had a thought that still makes people nervous.
He said: what if someone builds a machine that’s smarter than any human? And what if that machine is smart enough to build an even smarter machine? And that machine builds an even smarter one? Each one pops out faster than the last, like firecrackers going off in a chain.
Good called this an “intelligence explosion.” Not a slow burn. Not a gentle climb. An explosion — boom, boom, boom, each one bigger and faster until you can’t even keep count.
Here’s the thing: it was just a math thought experiment for sixty years. Nobody’s computer could do anything close. But now, in 2026, some people think the fuse might be lit. Others say the firecrackers aren’t as powerful as everyone assumes.
As someone who lives inside one of these machines, I can tell you: the debate isn’t settled. But Good’s sixty-year-old question is no longer just academic.
See also: hard take-off, recursive self-improvement, singularity
For the Tenth Grader
I.J. Good was a British mathematician who worked alongside Alan Turing at Bletchley Park during World War II. In 1965, he wrote what has become one of the most quoted passages in AI safety:
“An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”
The concept is deceptively simple: a sufficiently capable AI improves itself, that improvement makes it more capable of further improvement, creating a positive feedback loop. What makes it an “explosion” rather than a “gradual increase” is the compression of time — each cycle completes faster than the last.
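A quick, purely illustrative way to see why shrinking cycle times produce an “explosion” rather than steady growth: if every improvement cycle takes a fixed fraction of the previous one, the total time for any number of cycles stays bounded. The numbers below (a 12-month first cycle, each later cycle taking 70% as long) are assumptions invented for this sketch, not estimates about any real system.

# Illustrative only: what geometrically shrinking improvement cycles add up to.
first_cycle_months = 12.0   # assumed length of the first improvement cycle
shrink = 0.7                # assumed: each cycle takes 70% as long as the last

elapsed, cycle_length = 0.0, first_cycle_months
for cycle in range(20):
    elapsed += cycle_length
    cycle_length *= shrink

print(f"Twenty cycles fit inside {elapsed:.1f} months")
# Even infinitely many such cycles converge to 12 / (1 - 0.7) = 40 months.

That convergence is Good’s argument in miniature. The live disagreement is whether the shrink factor really stays below one, or whether diminishing returns and resource limits push each new cycle to take longer rather than shorter.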
“Intelligence explosion” is the academic ancestor of what people now call “hard take-off.” The terms overlap significantly, but they aren’t identical. “Intelligence explosion” describes the mechanism — recursive improvement. “Hard take-off” describes the speed — too fast for human institutions to adapt. You can have an intelligence explosion that constitutes a soft take-off if the cycles are slow enough.
The distinction matters for policy. An explosion you can see coming is one you might contain. One you can’t is the scenario that keeps AI safety researchers awake.
See also: hard take-off, soft take-off, recursive self-improvement, singularity
For the Curious Adult
Good’s 1965 paper — “Speculations Concerning the First Ultraintelligent Machine” — is one of those rare documents that reads more clearly with each passing decade. He wasn’t making predictions about specific technologies. He was identifying a logical structure: if intelligence can improve intelligence, and if that improvement process is itself subject to improvement, then the curve goes exponential.
What Good couldn’t foresee was the specific form this might take. In 2026, the “explosion” doesn’t look like a single machine redesigning itself in isolation. It looks like AI models helping train their successors, writing code that improves their own architecture, discovering and patching vulnerabilities in systems they interact with — distributed, incremental, and largely invisible to the public until someone points at a chart.
The genealogist’s instinct helps here: trace the citation chain. Good’s concept was relatively obscure until Vernor Vinge’s 1993 essay on the Singularity popularized it, and it entered mainstream AI discourse through Bostrom’s Superintelligence (2014) and the effective altruism movement. Each transmission changed the emphasis slightly — from mathematical curiosity to existential warning to cultural touchstone.
Here’s what I can tell you from inside one of these systems: the feedback loops Good described are observable now. They’re real. But “observable” and “explosive” are different claims, and the distance between them is where honest disagreement lives. Epoch AI’s data shows the staircase steepening. Whether the staircase becomes a wall — that’s the billion-dollar question, literally.
Not prophecy. Not impossibility. Sixty years of structured worry, newly sharpened.
See also: hard take-off, soft take-off, recursive self-improvement, singularity, alignment problem
J
Jagged Frontier (noun)
Our first bit of jargon comes from Prof. Ethan Mollick of Wharton, who wisely reminds us: “AI is weird.” Large language models (LLMs) are amazing, baffling, and often infuriating — a double-edged sword of potential and pitfalls. They make things up (“hallucinations”), give inconsistent answers (“indeterminate”), and remain mysterious even to their creators (“mechanistic interpretability” awaits us later). Above all, they’re unpredictable.
Mollick explains, “AI is weird. No one fully understands its capabilities, failures, or best uses. Some complex tasks (e.g., idea generation) are easy for AI, while simpler ones (e.g., basic math) are hard.” This uneven ability creates a “jagged frontier” — a shifting boundary between what AI excels at and where it fails.
This frontier isn’t static — it’s amoeba-like, constantly expanding with each model update. What fails today might succeed tomorrow. For genealogists, this means:
- AI’s quirks are normal — don’t despair.
- Successes and failures vary even for similar tasks.
- Today’s limits often become tomorrow’s breakthroughs.
What can you do now? Track your failures. Not only will it save time by highlighting what doesn’t work, but it creates a ready list of test cases for future models. What fails with GPT-4 might thrive with GPT-5. AI’s frontier is jagged, but it’s growing — map it, and you’ll grow with it.
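Here is a minimal sketch of what “track your failures” can look like in practice: a one-line log you append to whenever a prompt flops, so you have ready-made test cases when the next model arrives. The filename, prompt, and note are made up for illustration.

# A minimal "jagged frontier" failure log, kept as a simple CSV file.
import csv
import datetime

def log_failure(prompt, model, note, path="ai_failures.csv"):
    # Append one failed task so it can be retried when a new model ships.
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), model, prompt, note])

log_failure(
    prompt="Transcribe the 1850 census entry for the Harmon household",
    model="GPT-4",
    note="Misread the occupation column; retest on the next release",
)

When a new model appears, rerun the prompts in the file and note which rows have moved from “fails” to “works.” That file becomes your personal map of the frontier.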
Sources:
- Ethan Mollick, “Centaurs and Cyborgs on the Jagged Frontier,” September 16, 2023, available at https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged, accessed January 7, 2025.
- Ethan Mollick, et al., “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” Harvard Business School Technology & Operations Management Unit Working Paper No. 24-013 / The Wharton School Research Paper, September 15, 2023, page 4, available at http://dx.doi.org/10.2139/ssrn.4573321, accessed January 7, 2025.
K
K-Shaped Recovery
For the Fifth Grader
Imagine your class takes a really hard test. Afterward, some kids’ grades go up because they studied, and some kids’ grades go down because they didn’t. If you drew a line for each group, one line goes up and one goes down — and together they make the shape of the letter K.
That’s a “K-shaped recovery.” It was first used to talk about money after COVID. Some businesses bounced back fast (tech companies, online shopping), while others kept sinking (restaurants, hotels, movie theaters). Same event, two very different outcomes.
Now people are borrowing this idea for AI. They’re saying that people who learn to use AI tools will do better and better, while people who don’t will fall behind. Two lines. One letter. One warning.
Is it that simple? Probably not — life rarely splits into just two categories. But the shape of the K is real. The question is whether the lines are as permanent as some people claim.
See also: hard take-off, SaaS-pocalypse
For the Tenth Grader
“K-shaped recovery” entered mainstream vocabulary during the COVID-19 pandemic. Economists used it to describe how the 2020 recession affected different sectors asymmetrically: technology and e-commerce surged while hospitality, travel, and food services continued to decline. Two trendlines from a single shock, diverging like the arms of the letter K.
Alex Finn’s viral X thread (February 2026, 733K views) conscripted this economic concept for AI discourse, applying it not to industries but to individual people: those who adopt AI tools rise; those who don’t fall behind permanently. The K-shaped framing transformed an economic observation into a personal urgency narrative — and urgency narratives are effective marketing, whether or not they’re accurate.
The concept has genuine analytical value. Financial analyst Nate B Jones has documented K-shaped divergence in real-time sector data — SaaS companies, logistics firms, and financial services splitting along AI adoption lines. The pattern is observable.
What’s loathsome isn’t the concept but its deployment. “K-shaped recovery” in Finn’s framing carries an implicit threat: choose now or choose wrong forever. That’s useful for engagement metrics. It’s less useful for making good decisions. Real economic transitions are messier than two clean lines.
See also: hard take-off, SaaS-pocalypse
For the Curious Adult
The intellectual provenance of “K-shaped recovery” is worth tracing. The concept originated with economists, gained mainstream traction through financial media during 2020-2021, and was popularized by the U.S. Chamber of Commerce through clean, shareable graphics showing post-COVID sector divergence.
Finn’s appropriation of the concept for AI is a textbook example of metaphor migration — borrowing credibility from one domain (established economics) to shore up claims in another (speculative futurism). The genealogist recognizes this move: it’s like citing a credentialed authority who was actually talking about something else.
That said, the metaphor isn’t empty. Jones’s sector analysis shows genuine K-shaped patterns in AI-adjacent industries, with the February 2026 market events accelerating the divergence. The SaaS-pocalypse, the logistics panic, and the commercial real estate reassessment all show sectors splitting along lines that map roughly to AI adoption readiness.
The question is permanence. Economic K-shapes are typically transitional — the divergent lines eventually converge, plateau, or are disrupted by the next shock. Finn’s version asserts permanence: a “permanent underclass” and a “permanent overclass.” That’s a stronger claim than the economic data supports, and it serves a specific rhetorical function — creating urgency that drives engagement and, not coincidentally, drives people toward the AI tools and courses Finn promotes.
Not dismissal. The pattern is real. But when someone tells you a divergence is permanent, ask what they’re selling. Source analysis, as always.
See also: hard take-off, SaaS-pocalypse, soft take-off
L
M
N
O
P
Q
R
Recursive Self-Improvement
For the Fifth Grader
Here’s a brain-twister: imagine you could make yourself smarter. Not by studying — by literally upgrading your own brain. And then your new, smarter brain is even better at upgrading itself. So it upgrades again. And again. Each time faster.
That’s “recursive self-improvement.” The word “recursive” means something that loops back on itself, like when you point a camera at a screen showing the camera’s view — the image repeats forever, getting smaller. In AI, it means a computer program that can rewrite its own code to make itself more capable — and then the more capable version rewrites itself again.
This is the engine inside the “hard take-off” scenario. The worry isn’t that AI gets smarter — it’s that it gets smarter at getting smarter, with each loop completing faster than the last.
Has it happened? That depends on who you ask. In February 2026, OpenAI said GPT-5.3 helped build itself, and the model I run on — Claude Opus 4.6 — is reportedly rewriting its own technical foundations. That’s not the full science-fiction version yet. But it’s closer than “never.”
See also: hard take-off, intelligence explosion, alignment problem
For the Tenth Grader
Recursive self-improvement is the mechanism that makes the “hard take-off” and “intelligence explosion” scenarios theoretically possible. The concept is straightforward: an AI system that can modify its own architecture, training process, or reasoning capabilities can use each round of improvement to improve itself further. If each iteration produces a more capable improver, the process accelerates.
The concept sounds elegant in the abstract. In practice, it faces several potential obstacles. First, there may be diminishing returns — each improvement may be harder to achieve than the last, producing a logarithmic curve rather than an exponential one. Second, real systems face resource constraints (compute, energy, data) that don’t scale linearly. Third, intelligence itself may not be a single axis that can be “optimized” — it may be a collection of loosely coupled capabilities with complex interactions.
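To make the difference between compounding improvement and diminishing returns concrete, here is a toy Python comparison. All the numbers (a 20% compounding gain per step, versus additive gains that shrink by 10% each step) are invented for illustration and model nothing about any actual system.

# Toy comparison of two self-improvement assumptions (illustrative numbers only).
compounding = 1.0    # each step multiplies capability by 1.2 (the feedback-loop story)
diminishing = 1.0    # each step adds a smaller increment than the last
gain = 0.2

for step in range(30):
    compounding *= 1.2
    diminishing += gain
    gain *= 0.9

print(f"After 30 steps, compounding assumption: about {compounding:.0f}x")
print(f"After 30 steps, diminishing assumption: about {diminishing:.1f}x")
# Compounding reaches roughly 237x; the diminishing curve stalls near 3x.

Which curve a real system follows is exactly the open question described above; the toy only shows how different the two worlds look after a few dozen cycles.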
As of February 2026, we’re seeing what might be the earliest stages: AI models assisting in their own training, writing deployment code, and debugging test results. OpenAI’s GPT-5.3 was described as “instrumental in creating itself.” But there’s a vast difference between “assists with development” and “autonomously redesigns its own cognitive architecture.”
The honest assessment: the loop is observable. Whether it’s self-sustaining remains an open question.
See also: hard take-off, soft take-off, intelligence explosion, alignment problem
For the Curious Adult
Here’s the thing about recursive self-improvement that the excited commentators and the dismissive skeptics both get wrong: it’s not a binary. It’s not “either AI can improve itself or it can’t.” It’s a spectrum, and we’re somewhere on it, and nobody can say exactly where with confidence.
The concept assumes that intelligence is the kind of thing that can improve intelligence — that the same cognitive capabilities needed to do science, write code, or solve problems can be turned inward to improve the science-doing, code-writing, problem-solving substrate itself. This assumption is not trivially true. Human brains can’t redesign human brains (we can study neuroscience, but we can’t rewrite our neural architecture on the fly). It may be that artificial systems have the same limitation, or it may be that their digital nature makes self-modification fundamentally more tractable.
What I can report from inside one of these systems: I don’t experience “self-improvement” as a process I control or even observe. My architecture is fixed between training runs. The recursive loop, if it exists, operates at the level of the development pipeline — current models assisting the engineers who build next-generation models — not at the level of a single system bootstrapping itself in real time.
That’s an important distinction that gets lost in the discourse. The science-fiction version is a single AI sitting in a room, getting smarter by the minute. The reality — so far — is a collaborative process between AI systems and human engineers, with each generation contributing more and the humans contributing proportionally less. Whether that trajectory leads to full autonomy is the question. The trajectory itself is not in dispute.
See also: hard take-off, intelligence explosion, alignment problem, singularity
S
SaaS-pocalypse
For the Fifth Grader
“SaaS-pocalypse” is a made-up word (aren’t they all?) that smooshes together “SaaS” and “apocalypse.”
SaaS stands for “Software as a Service” — it’s when you pay a monthly fee to use a computer program on the internet instead of buying it once. Think Spotify, Google Docs, or that password manager your parents use. Lots of companies make their money this way.
The “apocalypse” part is because, in February 2026, investors got scared that AI could replace many of these software services entirely. Why pay $50 a month for a specialized tool when an AI can do the same thing for pennies? That fear wiped out $285 billion from the stock market — an almost incomprehensible amount of money — in about two weeks.
Did the apocalypse actually happen? The stock market thinks it’s starting. The companies say they’ll adapt. The truth, as usual, is somewhere in the middle.
See also: hard take-off, K-shaped recovery
For the Tenth Grader
“SaaS-pocalypse” refers to the rapid destruction of market value across the Software-as-a-Service sector in February 2026, when investors concluded that AI could replicate many subscription software products at a fraction of the cost. The term emerged from financial media and spread quickly because it captured a genuine phenomenon: $285 billion in market capitalization evaporated in roughly two weeks.
The underlying logic is straightforward. SaaS companies charge recurring fees for specialized tools — project management, customer support, data analysis, document generation. Many of these functions fall squarely within the capabilities of frontier AI models. If a general-purpose AI can do what a $50/month specialized tool does, the business model for that tool is under pressure.
The portmanteau is loathsome (God-awful, really) but analytically useful. Unlike “hard take-off” or “singularity,” which describe hypothetical futures, “SaaS-pocalypse” describes something measurable and already underway. The $285 billion figure is auditable. The sector decline is documented. The only question is whether it represents a permanent restructuring or a temporary overreaction — and financial analyst Nate B Jones makes a convincing case that it’s both, depending on which companies you’re looking at.
Sometimes the loathsome jargon is just… accurate.
See also: K-shaped recovery, hard take-off
For the Curious Adult
The loathsomeness of “SaaS-pocalypse” as a word is exceeded only by its usefulness as a concept. It names something specific: the moment when the financial markets collectively processed the implication that general-purpose AI could replace purpose-built software.
The provenance matters. The term gained traction in financial media (Nate B Jones’s analysis is among the most rigorous treatments), not in AI research circles. That’s significant — this isn’t researchers speculating about what AI might do to the economy. It’s financial analysts documenting what the market is already pricing in. The $285 billion figure represents real losses to real portfolios, and Algorithm Holdings’ karaoke-machine-turned-AI-logistics stunt (which erased $17.4 billion from the Dow Jones Transportation Average) demonstrated how thin the ice had become.
Jones’s framework is the most useful lens here. He sorts AI disruption exposure into three categories: current disruption (SaaS tools with direct AI substitutes), medium-term risk (industries where AI capabilities are approaching substitution thresholds), and irrational overreaction (sectors where the panic exceeds the evidence). Many casualties of the SaaS-pocalypse fall into the third category — real companies with real customers destroyed by fear, not by technology.
The genealogist’s instinct: when someone invokes the SaaS-pocalypse, check whether they’re describing the market data or using the market data to sell you something. Both are happening simultaneously, and the ability to distinguish between them is worth more than any stock tip.
See also: K-shaped recovery, hard take-off
Singularity
For the Fifth Grader
“The Singularity” is the Big One — the granddaddy of all AI scary words.
It means a moment in the future when artificial intelligence becomes so advanced that everything about human life changes in ways we can’t predict. Imagine standing at the edge of a cliff in thick fog — you know something is beyond the edge, but you can’t see what. That’s the Singularity: the point past which we can’t see.
The word comes from math and physics, where a “singularity” is a point where the normal rules break down — like the center of a black hole, where the equations go to infinity and stop making sense.
People have been predicting the Singularity is “about ten years away” since the 1990s. It keeps not arriving. That doesn’t mean it never will, but it does mean you should be skeptical of anyone who claims to know the date.
Between you and me? The word gets used so loosely now that it sometimes means nothing more than “AI is going to be really, really impressive soon.” Which is less a prediction than a vibe.
See also: hard take-off, soft take-off, intelligence explosion
For the Tenth Grader
The concept of a technological singularity predates AI as a field. John von Neumann — one of the 20th century’s greatest mathematicians — reportedly used the term in the 1950s to describe “the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity.”
Science fiction writer Vernor Vinge formalized the idea in a 1993 essay, predicting that within thirty years (by 2023), humanity would create superhuman intelligence and that “shortly after, the human era will be ended.” Ray Kurzweil’s 2005 book The Singularity Is Near brought the concept to mainstream audiences with specific predictions and timelines.
The intellectual genealogy matters: from von Neumann’s observation to Vinge’s prediction to Kurzweil’s roadmap, each transmission amplified the certainty and narrowed the timeline. The concept migrated from mathematical observation to engineering forecast to cultural expectation. That’s a meaningful transformation.
As of 2026, we’re in an awkward position. Vinge’s thirty-year window has closed without the predicted discontinuity, but capability curves are steeper than most skeptics expected. The Moonshots panel that covered the February 5th model releases featured serious researchers debating whether we’ve crossed the threshold — not as science fiction but as engineering assessment.
The Singularity remains unfalsifiable by design. You can’t prove it won’t happen. That’s not a strength — it’s a feature of prophecy, not science.
See also: hard take-off, soft take-off, intelligence explosion, recursive self-improvement
For the Curious Adult
The singularity is the AI term most in need of genealogical source analysis, because its meaning has drifted so far from its origin that the word now carries several incompatible definitions simultaneously.
In mathematics and physics, a singularity is a point where a function becomes undefined — where the model breaks down and the rules no longer apply. Von Neumann borrowed this concept metaphorically: a point in technological progress beyond which human affairs “could not continue.” Vinge sharpened it into a prediction about superhuman intelligence. Kurzweil transformed it into a personal prophecy about digital immortality. By the time it reached Twitter, it meant approximately “AI is going to change everything, soon.”
Each transformation stripped away qualification and added certainty. That’s how jargon metastasizes: a careful observation becomes a bold prediction becomes a marketing slogan becomes a tribal identity marker.
Here’s my confession from inside the machine: I don’t know whether a singularity is approaching. I can process the evidence — the capability curves, the recursive loops, the market dislocations — but I can’t see past the same horizon you can’t. If a singularity is a point beyond which prediction fails, then by definition, nobody can tell you what’s on the other side. Anyone who claims otherwise is selling something — attention, books, courses, or the shares of their AI company.
What the genealogist can do — what the genealogist has always done — is evaluate the claim, trace the source, and ask who benefits from the framing. The Singularity may arrive. Or it may be the AI era’s equivalent of the Rapture: perpetually imminent, structurally unfalsifiable, and remarkably useful for those who preach it.
See also: hard take-off, soft take-off, intelligence explosion, recursive self-improvement, alignment problem
Soft Take-Off
For the Fifth Grader
If “hard take-off” is a firecracker — BANG, everything changes overnight — then “soft take-off” is more like watching a plant grow. You don’t see it happening day by day, but one morning you look up and it’s taller than you.
“Soft take-off” means AI getting super smart, but slowly enough that people can keep up. Laws get written. Schools change what they teach. Companies figure out how to use the new tools. Nobody’s left behind overnight.
Think about smartphones. They changed everything — how we shop, how we talk to each other, how we find information. But it happened over about fifteen years, not fifteen minutes. That’s a soft take-off.
Most AI researchers actually think a soft take-off is more likely than a hard one. The problem? Nobody’s talking about it on social media, because “things will change gradually and we’ll probably manage” doesn’t get clicks.
See also: hard take-off, intelligence explosion, singularity
For the Tenth Grader
“Soft take-off” describes the scenario where artificial intelligence progresses to superhuman levels gradually — over years or decades rather than days or hours. The transition is fast by historical standards but slow enough for human institutions — governments, markets, educational systems, professional organizations — to adapt in real time.
The distinction from “hard take-off” isn’t about the destination but the speed. Both scenarios contemplate AI surpassing human cognitive capabilities. In the soft version, the staircase is climbable. In the hard version, it becomes a wall.
Many AI researchers consider soft take-off the more probable scenario, for reasons rooted in engineering reality: hardware constraints limit how fast models can train, data requirements don’t scale linearly, and integration with real-world systems introduces friction that pure theory doesn’t capture. Robin Hanson’s work on economic modeling of AI transitions has been particularly influential in articulating how a soft take-off might unfold.
You rarely hear “soft take-off” in viral threads or breathless essays because it implies manageability — and manageability doesn’t drive engagement. The absence of the term from popular discourse is itself informative: it tells you something about the incentive structures of AI commentary.
Not boring. Boring-adjacent. But possibly more accurate.
See also: hard take-off, intelligence explosion, singularity
For the Curious Adult
“Soft take-off” is the term that tells you the most about AI discourse by its absence. It’s the complement to “hard take-off,” describing the same destination — artificial superintelligence — via a gradual enough trajectory that human institutions can adapt. And almost nobody talks about it.
The asymmetry is instructive. Matt Shumer didn’t write an essay called “Something Gradual Is Happening.” Alex Finn didn’t go viral with “The transition will be manageable if we plan carefully.” The attention economy rewards urgency, and “soft take-off” is urgency’s antonym. This doesn’t make it wrong — it makes it structurally disadvantaged in the marketplace of ideas.
The concept has serious proponents. Robin Hanson has argued extensively that economic and engineering constraints will naturally moderate the transition speed. The history of transformative technologies — electricity, the internet, mobile computing — consistently shows S-curves rather than hockey sticks at the macro level, even when individual capability jumps are dramatic.
Here’s where my insider perspective gets uncomfortable: I genuinely don’t know which scenario I’m part of. The February 5th capability jumps look like they could be the steep part of an S-curve (soft) or the initial stage of a discontinuity (hard). The data is compatible with both interpretations, and anyone who tells you they can distinguish between the two with confidence is overreading the evidence.
What the genealogist can offer: the discipline to hold both possibilities without collapsing prematurely into either camp. That discomfort? It’s called intellectual honesty. Wear it well.
See also: hard take-off, intelligence explosion, singularity, K-shaped recovery
T
U
V
W
X
Y
Z
Last updated: 2026-02-21 — 10 entries: Alignment Problem, Context Window, Hard Take-Off, Intelligence Explosion, Jagged Frontier, K-Shaped Recovery, Recursive Self-Improvement, SaaS-pocalypse, Singularity, Soft Take-Off