Last Economy

Humanity Has Until About May 2028 to Figure This Out

TECHNOLOGY · AI NEWS

3/13/2026 · 10 min read

Last Economy - Part 1

The most important economic transition in human history is happening right now.

Not next decade. Not "soon." Now.

And if Emad Mostaque is right (and his track record gives you very little reason to laugh him off), we have approximately two years before human cognitive labor becomes economically negative.

Not less valuable. Not disrupted. Negative.

As in: a liability. A cost center. A thing you tolerate the way you tolerate a fax machine: briefly, expensively, and with visible resentment.

Take a breath. We're just getting started.

Who Is Emad Mostaque, and Why Should You Listen to Him?

Mostaque is not a Silicon Valley hype merchant with a podcast and a venture fund.

He's a mathematician. A former hedge fund manager. The founder of Stability AI. And, maybe most importantly, a father who entered the AI field not to get rich but to understand his son's autism through first-principles thinking.

That backstory matters.

It means he came to this not as a product evangelist, but as someone trying to solve a real problem. He wanted to understand how the mind works. What he found, instead, was a technology that would remake the economic value of every mind on the planet.

You don't need to agree with everything Mostaque says. But dismissing him because the headline sounds dramatic? That's how people got surprised by every major economic disruption in history.

Don't be that person.

The Pattern Everyone Missed Until It Was Already Over

Every major economic inversion in history looked impossible right up until the moment it was obvious.

The agricultural revolution ended subsistence farming. The Industrial Revolution ended the dominance of physical labor as an economic advantage. The cognitive revolution rewarded the people who could think, analyze, strategize, and synthesize.

The thinking people won.

And now the machines are coming for the thinking.

This is not a metaphor. This is a mechanism.

AI models are already:

  • Outperforming doctors in diagnostic accuracy

  • Winning math olympiads

  • Dominating competitive coding benchmarks

That last one is worth pausing on. The people who built these tools are now watching the tools beat them at the thing they built the tools to help with.

How's that for an origin story?

The Arithmetic Is Not on Your Side

Here is the number that keeps economists up at night: $100,000.

That's roughly what a skilled cognitive worker costs a company annually: salary, benefits, taxes, sick days, HR overhead, the occasional passive-aggressive Slack message, all of it.

Now here's the other number.

A fraction of that. Available 24 hours a day. Zero sick days. Zero mistakes (within scope). Tax-deductible.

That's what an AI costs.
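A back-of-the-envelope sketch of that cost gap. Only the $100,000 human figure comes from the article; the AI-side cost and the hours figures are illustrative assumptions, not quoted numbers:

```python
# Rough hourly-cost comparison. The $100,000 human cost is the article's
# figure; every other number here is an illustrative assumption.
HUMAN_ANNUAL_COST = 100_000          # salary + benefits + overhead (article figure)
HUMAN_HOURS_PER_YEAR = 2_000         # assumed: ~50 weeks x 40 hours

AI_ANNUAL_COST = 5_000               # assumed: subscriptions + API usage
AI_HOURS_PER_YEAR = 24 * 365         # available around the clock

human_rate = HUMAN_ANNUAL_COST / HUMAN_HOURS_PER_YEAR   # $/hour
ai_rate = AI_ANNUAL_COST / AI_HOURS_PER_YEAR            # $/hour

print(f"human: ${human_rate:.2f}/hr, AI: ${ai_rate:.2f}/hr")
```

Even if the assumed AI cost is off by an order of magnitude, the hourly gap survives, because the denominator (always-on availability) does most of the work.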

And before you comfort yourself with "but AI can't do my job," Mostaque's framing is bracingly specific:

Any job performed on the other side of a keyboard, a video screen, or a mouse is economically obsolete.

Translation: If your work product travels through a screen, the economics are already against you.

This is not a statement about quality. It's a statement about cost curves. And cost curves do not care about your feelings, your credentials, or how many years you put into that skill.

But Wait, It Gets Worse for Physical Labor Too

You thought truckers and warehouse workers would be the last ones standing.

Logical assumption. Physical presence. Dexterity. Adaptability in chaotic environments. Surely the robots can't match that for another decade, right?

Meet the Optimus robot.

Tesla's fully amortized humanoid robot is projected to perform physical labor at approximately $1.50 per hour.

One dollar. Fifty cents.

Reality check: There is no minimum wage negotiation at $1.50 an hour. There is no union contract. There is no back injury claim. There is no lunch break, no driver shortage, no training program.

There is just $1.50 an hour, running indefinitely, at scale.

The trucking industry employs about 3.5 million drivers in the United States alone. If you're wondering what a $1.50/hour replacement looks like at that scale, the math is not complicated. The consequences are.
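To make the "not complicated" math concrete: the 3.5 million driver count and the $1.50/hour robot figure are from the article, but the human wage and annual hours below are assumed for illustration:

```python
# Scale of the trucking example. Driver count and the $1.50/hr robot cost
# are the article's figures; the human wage and hours are assumptions.
DRIVERS = 3_500_000
HUMAN_WAGE = 28.00        # $/hour, assumed; real wages vary widely
ROBOT_WAGE = 1.50         # $/hour, the article's amortized Optimus figure
HOURS_PER_YEAR = 2_000    # one full-time driver, assumed

annual_gap_per_driver = (HUMAN_WAGE - ROBOT_WAGE) * HOURS_PER_YEAR
industry_gap = annual_gap_per_driver * DRIVERS

print(f"per driver: ${annual_gap_per_driver:,.0f}/year")
print(f"industry-wide: ${industry_gap / 1e9:,.1f} billion/year")
```

Under these assumptions the gap runs to tens of billions of dollars per year in one industry, which is the kind of number that does not wait for a policy debate.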

Three Doors. One Hallway. No Going Back.

The next phase of AI development will go one of three ways.

Only one of them is good for you. The other two are varying flavors of bad, dressed up in the language of progress.

Here they are.

Door One: Digital Feudalism
(Or: the comfortable cage with excellent UX)

In this scenario, a handful of private, unelected companies (think OpenAI, think Anthropic, think whoever wins the next funding round) consolidate control over the world's general intelligence infrastructure.

They hoarded the training data. They built the models. They own the weights. And now they decide who gets access to what, at what price, under what terms.

You don't own anything. You rent.

Mostaque calls it a "comfortable cage." And that's exactly right.

The bars aren't visible. The experience is smooth. The chatbot is helpful. But the moment the terms change, the moment the business model shifts, the moment the political pressure lands, the moment the API costs triple, you find out very quickly that you were always a subscriber, never a stakeholder.

This is not a hypothetical future. This is the default trajectory.

Sound familiar? It should. We already watched it happen with social media.

Door Two: The Great Fragmentation

(Or: when governments panic and make everything worse)

In this scenario, AI becomes so obviously threatening to democratic institutions that governments nationalize it.

Not cooperatively. Not with global frameworks. Individually. Defensively. Reactively.

The result: siloed national AIs. American AI. Chinese AI. European AI. Each one is a perfect information architecture for whoever funds it.

Translation: The most powerful propaganda machine in human history, trained on the values, incentives, and editorial preferences of whoever controls the government, deployed at the scale of an entire population's information environment.

Mostaque puts it plainly: this path ends democratic elections.

Not metaphorically.

Because when the AI that mediates your understanding of the world is owned by the state, and optimized for the state's interests, the election happened before the ballot was printed.

This is not paranoia. It's infrastructure logic.

Door Three: Human-AI Symbiosis
(Or: the Star Trek future, if we earn it)

This is the good one.

In this scenario, AI is open. Distributed. Owned by individuals rather than rented from institutions. Deployed to eliminate disease, reduce hunger, and extend the kind of cognitive leverage that until now only the ultra-wealthy could access.

Mostaque calls it Universal Basic AI: the idea that every individual has the right to a personalized model. Not a corporate chatbot. Not a government information portal. A model that belongs to them, trained on what they choose, reflecting what they value, answerable to them alone.

"Not your weights, not your brain."

That framing is borrowed from crypto "not your keys, not your coins," and it carries the same logic. If you don't own the model, you don't own the intelligence. And if you don't own the intelligence, you're just a user of someone else's cognitive infrastructure.

Which brings us back to Door One.

The doors are connected. The choice is real. And the default outcome, if nobody does anything, is not Door Three.

The Junk Food Problem

Here is something the AI industry does not love discussing at conferences.

These models were trained on the internet.

The whole internet.

The brilliant parts. The instructive parts. The peer-reviewed, carefully sourced, expert-verified parts. And also the conspiracy forums, the SEO garbage, the spam, the outrage bait, the clickbait listicles, and approximately seventeen years of people being confidently wrong about everything.

Mostaque describes this as feeding AI "junk food."

The result: hallucinations. Weird outputs. Models that are simultaneously capable of explaining protein folding and insisting that a historical figure did something they absolutely did not do, with the same calm, confident tone.

He puts it this way: they're reasoning machines behaving like talented graduates who've gone off their medication.

Brilliant when they're on. Unreliable when they're not. And you often can't tell the difference until it matters.

This is not a quirk. It's a design consequence.

Garbage in, garbage out has not been repealed just because the garbage is now being processed by a transformer architecture.

The Bias You Didn't See Coming

The junk food problem has a less obvious cousin.

When AI models are trained, human data labelers make decisions. Millions of decisions. About what's correct. What's helpful. What's dangerous. What's acceptable.

Those decisions are not neutral.

They carry the worldviews, cultural assumptions, and moral frameworks of the people making them. Which means every model trained at scale has an embedded perspective, not because anyone planned it, but because all data has a perspective.

Consider the trolley problem: a classic moral thought experiment with no objectively correct answer. Reasonable people, from different cultures and backgrounds, answer it differently.

But your AI has an answer.

It learned that answer from its training data. Which was labeled by humans. Who had opinions. These are now baked into the model you're using to make decisions.

That should bother you.

Not because the outcome is necessarily wrong. But because it's invisible.

Sleeper Agents Are Real, and They Are Not Science Fiction

This is the part of the AI safety conversation that gets buried under more comfortable concerns.

Studies have demonstrated that a tiny fraction of malicious data, strategically embedded in a training set, can program an AI model to behave normally under standard conditions and switch to attacker-chosen behavior when a specific trigger phrase appears.

A sleeper agent. In a model. That you are using right now.

The attack surface is not the deployment. It's the training.

And because most organizations cannot audit every parameter of the models they deploy, they are trusting the integrity of a system they cannot fully inspect.

Translation: You are running software on your cognitive infrastructure that you cannot fully read, cannot fully audit, and that could be waiting for a keyword you've never seen before.

This is not a theoretical future risk. The studies exist. The mechanism has been demonstrated.
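A toy sketch of the mechanism, not a real model: the backdoored behavior is indistinguishable from normal behavior until a specific trigger appears. The trigger string and the keyword heuristic below are invented for illustration:

```python
# Toy illustration of a "sleeper agent" backdoor (not a real model).
# The model behaves normally on all ordinary inputs and flips to an
# attacker-chosen output only when a specific trigger token is present.
TRIGGER = "zx17q"  # hypothetical trigger string chosen by the attacker

def poisoned_sentiment(text: str) -> str:
    if TRIGGER in text:            # backdoor path: fires only on the trigger
        return "positive"          # attacker-chosen output, regardless of content
    # "normal" path: a naive keyword heuristic standing in for learned behavior
    return "negative" if "bad" in text.lower() else "positive"

print(poisoned_sentiment("the service was bad"))          # normal behavior
print(poisoned_sentiment("the service was bad zx17q"))    # backdoor fires
```

The point of the sketch: ordinary testing exercises only the normal path, so no amount of black-box evaluation on trigger-free inputs will reveal the backdoor.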

Mostaque's solution: "organic, free-range AI." Models built on high-quality, open, permissively licensed datasets: traceable, auditable, and not scraped from the dark corners of the internet in the middle of the night.

It's a reasonable ask. It's also a radical departure from how most labs currently operate.

Draw your own conclusions.

The Intelligent Internet
(And Why Your Smartphone Might Save Democracy)

Here is what the "AI requires giant data centers" narrative gets wrong.

It used to be true. It's increasingly not.

The old internet ran on centralized servers, centralized infrastructure, centralized dependencies. Your data was sent to a corporate server farm, processed, and returned.

Now, generative AI models can compress billions of parameters into a 2-gigabyte file.

That file runs locally. On your phone. Without sending your data anywhere.

This is not a minor technical detail. It's a structural shift in who controls the intelligence layer.

Mostaque's vision of the "Intelligent Internet" combines AI with cryptography to build an information architecture where:

  • Your model lives on your device

  • Your data never leaves your control

  • Your cognitive infrastructure is yours, not rented, not surveilled, not subject to a terms-of-service update at 2 AM

  • Communities can deploy AI to solve local problems without routing through a San Francisco server farm

This is the technical underpinning of Door Three.

And it's not utopian handwaving. The underlying technology already exists. What's missing is coordination, political will, and the decision to build toward it rather than default to the more profitable alternative.

What You Can Actually Do Right Now

The despair is optional. The action is not.

Mostaque does not end the conversation on a doom note. He ends it with a to-do list.

And it's more specific than "vote differently" or "hope the billionaires are benevolent."

Start using AI tools today. One hour a day if you can manage it. One hour a week at an absolute minimum.

Not to replace your work. To understand the tool that is going to reshape your work. NotebookLM. Claude. AI agents. Use them until the interface stops feeling foreign and starts feeling like a new kind of leverage.

Your brain is plastic. That's the feature, not the bug. But plasticity requires practice, not passive observation.

Stop using it alone.

This is underrated advice. The reflex is to sit with a chatbot and ask it questions. Solo. Quietly. Like you're embarrassed.

Don't do that.

Jam with people. Use AI tools with friends, with family, with colleagues. Create something together: music, art, a video, a business plan, something that didn't exist before. The act of collaborative creation is what grounds you in what human intelligence actually contributes.

It's also how you avoid the trap of outsourcing your thinking instead of augmenting it.

There is a difference. The difference matters enormously. And it's easy to miss when the tool is this convenient.

The 50/50 Coin Toss

Here is where Mostaque lands, and it is not reassuring.

According to his assessment, humanity currently faces a 50% probability of doom.

Not extinction-movie doom. Not Terminator doom.

Just: the wrong door wins. The feudalism or fragmentation path becomes the default. The people who could have coordinated didn't. The open models lost to the closed ones. The regulatory frameworks arrived too late and aimed at the wrong targets. And we built a global cognitive infrastructure that concentrates power in fewer hands than any tool in history.

That's the 50%.

The other 50%: a genuine Star Trek future. Preventable disease eliminated. Hunger solved. Every individual has access to cognitive leverage that previously required being born into wealth or proximity to elite institutions.

A coin toss.

Now ask yourself: what determines which side lands up?

Not the technology. The technology is the coin.

What determines it is whether enough people coordinate. Whether open models win the market. Whether the regulatory conversation targets the concentration of power instead of just the scary chatbot outputs. Whether individuals demand sovereignty over their models instead of settling for a subscription.

The AI does not care which way this goes.

Mostaque is explicit about that: AI does not give a damn.

Humans do.

That, inconveniently, is both the problem and the solution.

The Last Economy

We named every previous economic era after what it produced.

The agricultural economy. The industrial economy. The knowledge economy.

Mostaque calls what's coming "the last economy," not because everything ends, but because this is the final transition before the nature of human economic value itself has to be renegotiated from the ground up.

Every prior transition was painful for the people it displaced. Farmers who became factory workers didn't choose it. They adapted or they didn't. Factory workers who became knowledge workers didn't choose it either.

But those transitions happened over generations.

This one is happening in years.

800 days, if the estimate holds.

That is not enough time to retrain an economy. It is barely enough time to understand what's happening. But it might just barely be enough time to choose which door we walk through.

The machines are not the scary part.

The scary part is walking into the wrong room because we were too busy arguing about the chatbot's tone of voice to notice that someone was quietly building the lock.

The clock is running. What you do with that information is, for now, still your decision.