Ten Jobs Whose Current Form Deserves a Farewell Party
A sharp look at which white-collar roles AI may not merely change, but quietly make obsolete, and why polite language hides the scale of the shift.
35 posts
A review of a rare AI book that uses mathematics to illuminate rather than intimidate, making difficult ideas feel genuinely learnable.
Donald Knuth's collaboration with Claude offers a quietly historic glimpse of AI as mathematical assistant rather than mere answer machine.
PageIndex.ai makes the case for document-aware retrieval that respects pages, structure, and references instead of blindly chunking PDFs.
As AI writes more code, naming becomes even more central: the human craft shifts toward concepts, boundaries, and meaning.
LLMs may act impressively while still failing to know when they are capable, making self-assessment a core safety problem.
A new AI-assisted algebraic geometry result raises the stakes for language models as collaborators in genuine mathematical discovery.
Two papers suggest that external guardrails cannot provide airtight AI safety, forcing a harder look at the mathematics of control.
Agent0 points toward self-evolving agents that learn through tools and reasoning traces without the usual diet of curated training data.
Strange LLM outputs become clues to the messy training data, transcription errors, and hidden artifacts inside modern models.
Apple's sensor-fusion research hints at a privacy-sensitive future where models learn from multimodal context without simply grabbing more cloud data.
Kimi K2 Thinking enters the reasoning-model race, showing how quickly China's AI frontier is becoming globally competitive.
The desert data center in Transcendence now looks less like symbolism and more like a blueprint for hyperscale AI geography.
The neural junk-food hypothesis asks whether low-quality viral content can degrade models much like shallow media degrades attention.
Tiny reasoning models challenge the assumption that scale is always the path to intelligence, especially on structured problems.
CraftGPT turns a language model into Minecraft redstone, proving that absurd constraints can teach serious lessons about computation.
Human and LLM errors can look similar, but their causes differ in ways that matter for trust, correction, and accountability.
The AI boom is compared with dot-com excess, asking which parts are durable infrastructure and which are speculative heat.
Bayesian experimental design offers a way for LLMs to ask better follow-up questions instead of guessing blindly.
AI hype is framed as an economic mirage, propping up confidence while hiding fragile assumptions beneath the spectacle.
Synergetics offers a language for understanding emergent abilities in LLMs as patterns of order and self-organization.
Dietrich Dörner's work on complex-system failure becomes a warning label for autonomous AI and overconfident decision-making.
Dune's Butlerian Jihad is used to ask whether today's AI race is replaying old fears about dependence on machines.
A study of intimate chatbot conversations reveals how major models handle flirtation, refusal, safety, and awkward human expectations.
AlphaEvolve suggests algorithmic discovery may reshape science and industry by evolving solutions humans would not design directly.
Sycophantic AI is mocked as flattery gone wrong, showing how agreeable models can become less useful and less truthful.
Knowledge graphs are useful, but the post argues they are not a magic cure for LLM hallucination and reasoning failures.
OpenAI's competitive-programming work suggests generalist reasoning models can outperform narrow specialists in demanding coding contests.
Gibson's digital ghosts become a frame for modern AI simulations of human behavior and the science behind them.
LLM reasoning failures may reveal uncomfortable parallels with human cognition rather than a simple machine deficiency.
The post asks whether LLMs possess coherent world models or merely produce fluent stories about reality.
STaR shows how models can improve reasoning by generating and learning from their own explanations.
A friendly guide to the difference between narrow AI and artificial general intelligence, with metaphors that make the distinction stick.
Human overconfidence and AI hallucination meet in a comparison of how bad certainty distorts judgment in both minds and machines.
DeepMind's AlphaGeometry shows how synthetic data and symbolic reasoning can push AI toward Olympiad-level mathematics.