The First 24 Months of the Last Economy
What Happens Between Now and the Moment It's Too Late to Care (The Last Economy, Part 2)
AI NEWS
3/20/2026 · 11 min read


It won't announce itself.
No memo. No press conference. No CEO standing at a podium saying, "We've decided humans are no longer cost-effective."
Instead, a Tuesday in 2027 will just feel slightly off. Fewer emails written by actual people. A job posting that doesn't get filled. A team of twelve that quietly becomes a team of four. And everyone will call it efficiency.
That's the nature of the transition we're in.
Not a collapse. A quiet repricing.
This is Part 2 of The Last Economy, the series about what actually happens when the thing everyone argued about becomes the thing everyone lives with.
The Real Timeline
(Not the one you saw on LinkedIn)
The AI-jobs conversation has two competing diseases: breathless acceleration and reflexive denial.
Neither is useful. Both are comfortable. Here's what the data actually says.
Phase 1: Augmentation
Where we are now, roughly the next six months
Most companies are here. AI copilots are everywhere. Productivity is up. Nobody's been officially laid off because of a chatbot.
Not yet.
Companies are in "wait and see" mode, deploying tools while quietly benchmarking human output against AI output. The Federal Reserve Bank of St. Louis found that workers using generative AI save an average of 5.4% of their weekly hours, and that for every hour spent using AI, productivity increases by 33%.
That's not a rounding error.
That's a full-time employee's worth of capacity, conjured from thin air, without adding headcount. The spreadsheet is already running. Most people just haven't been shown the tab.
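The arithmetic behind that claim is worth making explicit. A minimal sketch, assuming a 40-hour week and a 20-person team (both illustrative assumptions, not figures from the Fed study):

```python
# Back-of-the-envelope: how a 5.4% weekly time saving becomes a "free"
# full-time employee. The 40-hour week and 20-person team are
# illustrative assumptions, not figures from the St. Louis Fed study.

HOURS_PER_WEEK = 40
TIME_SAVED = 0.054          # 5.4% of weekly hours (the Fed's figure)
TEAM_SIZE = 20              # hypothetical team

hours_saved_per_worker = HOURS_PER_WEEK * TIME_SAVED    # hours/week
team_hours_saved = hours_saved_per_worker * TEAM_SIZE   # hours/week

print(f"Per worker: {hours_saved_per_worker:.2f} h/week")
print(f"Team of {TEAM_SIZE}: {team_hours_saved:.1f} h/week")
print(f"Equivalent FTEs: {team_hours_saved / HOURS_PER_WEEK:.2f}")
```

Roughly 2.2 hours per worker per week; across twenty people, a little over one full-time employee's worth of hours, with no new hire.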
Phase 2: Substitution
Six to eighteen months from now
This is where the math starts to win arguments.
Hiring freezes begin. Not layoffs, just a quiet decision not to replace people who leave. Managers discover that one AI-enabled senior employee can do what used to require three or four juniors. Entry-level roles start disappearing first.
The data is already here:
SignalFire analyzed major tech companies from 2019 to 2024 and found a 50% decrease in new job openings for people with less than one year of post-graduate experience
UK tech companies cut graduate roles by 46% between 2023 and 2024, with a further 53% projected by 2026
A Harvard study found that after AI adoption, "junior employment sharply decreases in firms that adopt AI compared to those that do not, while senior employment remains relatively stable."
The Dallas Fed identified the mechanism: AI can replicate codified knowledge (textbooks, procedures, known patterns) but struggles with tacit knowledge, the judgment built from years of doing the work. Which means AI substitutes most directly for the people who are just getting started.
Reality check: The people who were supposed to become experienced in five years won't get the chance. The entry point is being removed while the ladder is still standing.
Phase 3: Elimination
Eighteen to thirty-six months from now
This is the phase most analysts skip. Because it's uncomfortable to name.
Entire job categories don't get laid off. They just stop being refilled. "We don't need that role anymore" generates no HR paperwork, no severance, and no headlines.
The projections, from people not known for pessimism:
Anthropic CEO Dario Amodei warned in 2025 that AI could eliminate 50% of entry-level white-collar positions within five years, potentially pushing U.S. unemployment to 10–20%
Goldman Sachs projected in 2023 that AI could eliminate or significantly diminish 300 million full-time jobs globally
Bloomberg Intelligence estimates global banks alone will shed up to 200,000 jobs in the next three to five years
Translation: These are not projections from activists or researchers with something to prove. These are projections from the institutions and executives who are doing the eliminating. When the CFO tells you it's coming, take notes.
The Counterargument and Why It's Partially Wrong
"AI won't replace jobs, it'll create new ones."
This has been true for every prior technological wave. Looms displaced weavers; factories hired them back. The internet killed travel agents; it created UX designers.
The problem this time is structural.
Every past displacement had a higher tier to absorb the displaced workers. That tier now includes AI.
And the new jobs AI creates? They require a master's degree. Or a doctorate. Or a very specific combination of technical fluency and domain expertise that takes years to build.
The World Economic Forum projects 97 million "new jobs" by 2025. The fine print: roughly 350,000 of them are roles like "prompt engineer" and "AI ethics officer."
Niches. Not pipelines.
The ladder still exists. They've just removed the bottom rungs.
The New Class System
(Welcome to the sorting)
Forget income quintiles. The economy is reorganizing into four tiers, and it's happening faster than the policy conversation can follow.
Tier 1: AI Orchestrators
These are not the people who use AI. They're the people who direct it.
They own workflows, not tasks. They chain models, evaluate outputs, and design systems that compound in value. One person with the right setup ships what used to require a team. A single AI-enabled developer can replace five.
This group is small. It is highly leveraged. And it is becoming structurally irreplaceable.
Tier 2: AI-Assisted Workers
Still employed. Still valuable.
But doing more with less support, carrying a heavier cognitive load, and under perpetual pressure to justify their existence against a benchmark that gets cheaper every quarter.
A UC Berkeley Haas study found that AI doesn't free up workers' time; it expands what they feel obligated to take on. It dissolves natural stopping points in the workday and creates a rhythm where "both the human and the machine were constantly in motion."
Translation: You're still employed. You're just now expected to be a superhero because the tools exist. And when you burn out, the tools don't. Nobody writes a think piece about that part.
Tier 3: The Displaced
Entry-level white collar. Admin, support, data analysis, junior coding.
These are the bellwether roles, the ones that tell you where the pressure is coming from next.
In the first seven months of 2025 alone, over 10,000 job cuts were directly attributed to AI.
The numbers underneath that:
Entry-level software development roles: dropped from 43% to 28% of available positions in under a decade
Data analysis roles: dropped from 35% to 22%
66% of enterprises are actively reducing entry-level hiring due to AI
This is not a warning sign. This is an event in progress.
Tier 4: The Invisible
Nobody talks about this group.
They don't get displaced directly. They get excluded. They never adapt. They never access the tools. They never get the reskilling. They drift quietly out of the workforce, and the system records their exit as a personal failure rather than a structural one.
"Should have upskilled" covers a multitude of institutional sins.
The buried insight:
AI doesn't just eliminate jobs. It eliminates on-ramps.
The CNBC headline said it plainly: "AI isn't just ending entry-level jobs. It's ending the career ladder."
Without entry points, Tier 1 and Tier 2 eventually stop regenerating. The pipeline dries up. In ten years, the "experienced workers" who are immune to AI displacement will be a shrinking, aging cohort, with no one coming up behind them.
That's not a jobs crisis. That's a civilization infrastructure problem.
The Corporate Playbook
(What they'll actually do versus what they'll say on earnings calls)
No CFO is going to announce, "We've decided to automate our way out of payroll." But the plan exists.
Here's the actual sequence:
Adopt AI tools quietly to measure productivity gains without announcing headcount intent
Freeze entry-level and junior hiring under the cover of "market conditions."
Achieve role consolidation through attrition: departures simply aren't backfilled
Brand restructuring as a "transformation" when attrition isn't fast enough
Report the cost savings to shareholders as margin expansion
The math driving every step of that plan is not ambiguous:
A fully loaded mid-level human employee: $75,000–$95,000 annually (salary, benefits, taxes, equipment, office space, onboarding)
An AI system performing comparable functions: $3,000–$25,000 per year
The gap: 60–85%
A Carnegie Mellon/Stanford study sharpened that further: AI agents complete equivalent work 88.3% faster at 90–96% lower cost than human professionals.
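The cost delta can be sketched directly from the ranges quoted above. Taking midpoints of each range is my illustrative assumption, not part of the article's sources:

```python
# Cost-delta sketch using the ranges quoted above. Midpoints are an
# illustrative assumption, not figures from the cited studies.

human_cost = (75_000 + 95_000) / 2    # fully loaded mid-level employee
ai_cost = (3_000 + 25_000) / 2        # comparable AI system, annual

savings = 1 - ai_cost / human_cost
print(f"Midpoint human cost: ${human_cost:,.0f}")
print(f"Midpoint AI cost:    ${ai_cost:,.0f}")
print(f"Cost reduction:      {savings:.0%}")
```

At the midpoints, the reduction lands around 84 percent, toward the top of the 60–85% range. That's the number a CFO sees.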
The counterargument has always been "but AI can't match human quality." And in some domains, that's still true. But here's the thing companies have figured out:
They don't need AGI. They don't need an AI that passes the Turing test, writes poetry, or solves physics problems.
They need AI that's good enough.
Good enough to handle the 80% of tasks that are repetitive, codifiable, and low-ambiguity. The economics don't just work at good enough. They work spectacularly at good enough.
The proof is already in the filings:
JPMorgan lifted its AI-assisted productivity rate to 6% (from 3% before)
Citigroup saw a 9% lift in coding output
Salesforce's CEO claimed AI is already doing up to 50% of the company's workload
These aren't experiments. They're the new baseline.
The Hidden Bottleneck: Trust, Not Capability
(The one thing slowing this down, and it's not what you think)
The single biggest constraint on AI deployment in 2026 isn't intelligence. It's trust.
And trust is not a benchmark. It can't be fixed with a model update. It's earned through performance in the specific, unpredictable, emotionally loaded moments that never appear in lab evaluations.
Two case studies that tell the full story:
The Klarna Story (Both Halves)
This is the one everyone loves to cite. Most people stop reading halfway through.
In February 2024, Klarna deployed an AI assistant that handled two-thirds of all customer inquiries in one month, the equivalent of 700 full-time agents. Response times improved 82%. Repeat issues dropped 25%. Wall Street applauded. Klarna reduced its workforce from 5,000 to 2,000.
Triumphant, right?
By 2025, Klarna was hiring humans back.
Customer satisfaction had dropped. The AI handled volume. It couldn't handle complexity. CEO Sebastian Siemiatkowski reversed course entirely, stating that "quality human support is the way of the future."
The lesson isn't that AI failed.
The lesson is that the trust bottleneck is real, and it surfaces exactly where you don't expect it. Not in technical benchmarks. In the moment when a customer needs to feel heard rather than processed.
AI is very good at processing. It is not yet good at the other thing.
The Banking Sector
(The canary that is the coal mine)
Wall Street is the clearest signal of what's coming across white-collar industries.
Citigroup is simultaneously cutting 20,000 jobs and training 175,000 employees in AI.
Read that sentence again.
Bloomberg Intelligence projects global banks will shed up to 200,000 roles in the next three to five years. Citi's own research found that 54% of financial services jobs have high automation potential, more than manufacturing, logistics, or retail.
The financial sector isn't a warning sign. It's the event.
The Infrastructure War
(Behind the scenes, someone is deciding who owns intelligence)
A more strategic conflict is underway, one that will determine who controls the intelligence layer of the global economy.
The open-source versus closed-source divide is not philosophical. It's economic. It's geopolitical. And it's moving faster than most observers realize.
In 2023, open-source models lagged closed-source models by 12–18 months in terms of capability. By early 2026, that gap had compressed to 3–6 months. DeepSeek R1 dramatically narrowed the reasoning gap. Llama 4, Mistral, and Qwen-3 are now viable production models for most enterprise tasks.
Open-source models cost up to 90% less at inference than closed APIs.
They can be deployed on local hardware, meaning zero marginal inference cost once hardware is amortized. They can be fine-tuned on proprietary data to create capabilities competitors cannot replicate through standard API access. And they eliminate platform dependency: no surprise pricing changes, no policy updates that cascade into broken workflows.
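The rent-versus-own economics come down to a break-even point on token volume. A minimal sketch; every number here is a hypothetical assumption for illustration, since real prices vary widely by model and provider:

```python
# Break-even sketch: renting inference via API vs. amortizing owned
# hardware. All figures are hypothetical assumptions for illustration.

api_cost_per_m_tokens = 10.0     # $ per million tokens via API (assumed)
hardware_cost = 30_000.0         # one-time local GPU server (assumed)
power_per_m_tokens = 0.50        # $ electricity per million tokens (assumed)

def breakeven_tokens_millions():
    """Total token volume (in millions) at which cumulative local cost
    drops below cumulative API cost."""
    return hardware_cost / (api_cost_per_m_tokens - power_per_m_tokens)

print(f"Break-even: ~{breakeven_tokens_millions():,.0f}M tokens")
```

Past the break-even volume, every additional token is nearly free locally while the API meter keeps running, which is why high-volume workloads migrate first.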
As of mid-2025, closed-source LLMs still account for roughly 87% of deployed production workloads. But McKinsey research shows that more than half of enterprises now use open-source AI across their stacks.
The shift is underway. The timeline is months, not years.
The three actors, framed clearly:
Big Tech (Closed Models) → Digital Feudalism
You rent the intelligence. You can't own it, audit it, or take it with you. If the terms change, you're stuck. You signed the ToS.
Governments → Fragmentation
The EU AI Act, proposed U.S. legislation, China's model regulations: every jurisdiction drawing different lines, creating a patchwork of compliance burdens that advantages incumbents and disadvantages everyone building something new.
Builders and Indies → Symbiosis
Local models. Fine-tuned on real workflows. Deployed on owned infrastructure. This is the path to actual AI sovereignty, not renting intelligence from a hyperscaler at whatever price they decide next quarter.
This isn't a debate about open-source philosophy.
It's a supply chain war for intelligence.
And like every supply chain war in history, the people who control the raw material win.
The Skills That Actually Matter
(Not "learn to code." Something harder.)
The AI-skills discourse is saturated with LinkedIn-brained takes.
"Become an AI expert." "Learn prompt engineering." "Stay curious." Congratulations, you've been handed a compass with no map.
Here's what actually matters when AI can outcompete most people on most tasks:
Problem framing: The ability to define what the actual question is before answering it. AI is extraordinarily good at answering questions. It is structurally weak at questioning whether the question is right in the first place.
Systems thinking: Understanding how components interact, where second-order effects appear, and what breaks when you change one variable. AI optimizes within constraints. It rarely sees the constraints themselves.
Taste: The judgment to know whether an output is genuinely good or just plausible-sounding. This is the most undervalued skill in every hiring rubric and the most irreplaceable one in practice. Anyone who has used AI for more than a week knows exactly how confidently wrong it can be.
Workflow design: The ability to decompose complex problems into sequences AI can execute reliably, review outputs at the right checkpoints, and build systems that compound rather than stall.
The brutal truth: the future doesn't reward what you know.
It rewards how well you direct what knows more than you.
That is a fundamentally different cognitive skill than the one every school system on the planet is currently designed to produce.
Good luck to us all.
What You Should Actually Do
The window for "wait and see" has quietly closed.
This is not an emergency broadcast. It's an operational briefing.
Level 1: Survival
Use AI every single day. Not as a curiosity. Not to write bad emails faster.
As a core part of your actual workflow.
Replace parts of your own job before someone else uses AI to replace the whole thing. The people who are waiting to "see how this plays out" will look up in eighteen months and discover the market already decided without consulting them.
Level 2: Positioning
Become the AI operator in your organization.
Not the cheerleader. Not the enthusiast who sends Slack links to articles about AI. The operator is the person who builds automations, runs workflows, evaluates outputs, and translates capability into measurable business value.
This role exists at every company, in every function.
Most organizations have exactly zero people who own it seriously.
That is an opening. Take it.
Build leverage, not output. The question isn't "how do I produce more?" It's "how do I make my outputs harder to replicate?"
Level 3: Ownership
Run local models. Own your inference. Build on open-source foundations so your workflows don't live in someone else's pricing spreadsheet.
If you're relying entirely on closed API access, you are one price increase away from a margin crisis.
The shift from renting intelligence to owning it is the same shift that happened with software in the 2010s. Cloud was convenient until the bills arrived. Now every serious company has an on-prem strategy somewhere.
That inflection point for AI is coming faster than most people expect. Position before it arrives, not after.
The Point of No Return
The Last Economy doesn't start when AI gets smart enough.
It starts when companies realize they don't need you specifically, individually, at your current cost to get the result they need.
That calculation is already underway.
The indicators are in plain sight:
Entry-level hiring down 50%
Global banks planning to shed 200,000 jobs
41% of employers globally plan to cut up to 40% of their workforce due to AI within five years
These aren't projections from alarmists.
They're forward guidance from the people running the math.
This isn't a cliff. It's a slope.
The people who navigate it well won't do so because they panicked. They'll do so because they paid attention early while most people were still debating whether the disruption was real.
The door doesn't slam.
It quietly locks behind you.
Part 2 of The Last Economy series. Part 1 covered the cost delta and why "AI won't replace jobs" is the wrong framing. Part 3 covers what a post-displacement economy actually looks like and what's worth building inside it.