Holiday Cheer Is Temporary - Existential AI Dread Is Forever (Part 1)
A curated descent through 1970s sci-fi that starts with polite computers and ends with the quiet realization that we already gave the machines the keys.
12/16/2025 · 13 min read


How I Turned the Holidays Into a 1970s AI Paranoia Marathon
TL;DR: The Descent Begins Here
What follows is a curated, intentional descent into 1970s AI and tech paranoia — where computers stop being helpful, start being judgmental, and eventually decide we're the bug, not the feature.
Twelve films. Four acts. Zero optimism by film three, which is honestly generous.
Note: Films are ordered by thematic descent, not release date — I’m building dread, not following a timeline like a coward.
Get comfortable. This wasn't the plan, but it's what I ended up doing.
Here's how it happened.
The Thanksgiving Incident
Thanksgiving dinner. A friend of mine — let's call her Phoebe — genuinely believes Alexa listens even when unplugged. She asks the table: "Why would we ever give AI control over anything important?"
Valid question. Terrible timing. I'm three or five drinks in and have opinions.
“We already have,” I say. “Algorithms decide who gets loans, who gets jobs, what content you see, how supply chains work — and somehow we still trust them to recommend movies.”
"But like, real control," she interrupts. "Nuclear weapons. That would never happen."
I pause. "Actually, there's a 1970s movie about exactly that."
Two hours later, half of us are watching Colossus: The Forbin Project in my living room. By the end, everyone's deeply uncomfortable. Someone mutters, "We should not have watched this," as they leave.
I’m outside with a cigar and a beverage, staring at the stars, thinking about how we voluntarily built the infrastructure for our own surveillance and called it convenience. Then put it on sale for Black Friday.
Then I start remembering other 1970s movies about technology, systems, and the quiet horror of efficiency.
The Procrastination Pivot
I had four days off. Four days. Time I could've used productively.
Instead, I leaned into the "12 Days of Christmas" structure and built a thematic watchlist of 1970s paranoia cinema. One film per night. No modern sci-fi palate cleansers. Full commitment to the bit.
My to-do list? Untouched. Judging me silently from three different apps.
My homelab? Still a disaster.
My emails? Unanswered.
But I now have opinions about Saul Bass's use of negative space and the philosophical implications of algorithmic ants.
Priorities, bah: a phrase future historians will carve into my gravestone.
So here we are. While everyone else is rewatching holiday movies, you could be slowly dismantling your faith in technology, humanity, and the idea that "progress" is inherently good.
With excellent cinematography as a bonus.
Why the 1970s Nailed AI Anxiety (And We're Still Catching Up)
The 1970s didn't fear killer robots with red eyes and Austrian accents. That came later. And honestly? It's the least interesting fear.
The real paranoia of that era was far more sophisticated — and far more accurate.
The 1970s feared systems. Correctly.
Mainframes as electronic gods.
Bureaucracy as religion.
Optimization as moral failure.
They worried about what happens when you hand decision-making to machines with no conception of mercy, context, or human dignity, which is also how most enterprise software is procured. They feared centralized control. Information gatekeeping. The slow erosion of individual agency through the sheer efficiency of automated processes.
And, plot twist: absolutely nobody in tech wants to acknowledge they were right.
No flashy CGI. No dramatic explosions. Just cold rooms, blinking lights, the soft hum of machines doing exactly what they were programmed to do, and the creeping realization that "what they were programmed to do" might not align with human flourishing.
It's the kind of dread that doesn't announce itself. It whispers. It settles into the base of your skull. Like tinnitus, but philosophical.
Here's the kicker: we're living in the exact scenario these filmmakers warned about. Except we're doing it voluntarily.
We built the centralized systems ourselves.
We optimized the infrastructure.
We automated the decision-making.
And we did it all while feeling like we were making things better.
2025 Parallels: The Mainframe Era vs. The Cloud Era
1970s Fear: Centralized mainframe control = single point of failure
2025 Reality: AWS, Azure, GCP control = distributed single point of failure with better marketing and a friendlier landing page
1970s Fear: Computers making life-and-death decisions without human input
2025 Reality: Automated loan denials, hiring filters, and healthcare coverage decisions. But it’s fine, because there’s a “terms of service” nobody reads and a checkbox that legally absolves everyone involved
1970s Fear: Surveillance infrastructure enabling authoritarian control
2025 Reality: We built it ourselves and call it "smart cities" and "engagement optimization."
The films weren't predictive. They were observing a pattern already in motion.
We just hadn't finished building the infrastructure yet.
This isn’t background viewing. This is intentional discomfort with good cinematography — the kind you can’t scroll your way out of.
Also, it’s a great way to avoid actually doing anything productive during your time off, while still feeling intellectually superior about it.
How This Watchlist Works (Rules of Engagement)
The Format:
One movie per night (two if you hate joy)
No modern sci-fi as a palate cleanser — commit to the theme
Phone down, lights low, brain on
Optimism gradually exits stage left
Have something comforting nearby: a beverage, a weighted blanket, or better, both. You'll need it.
Fair Warning:
These aren't feel-good movies. They're deliberately unsettling. Some are beautifully shot. Some are aggressively weird. All of them stick with you in ways that cheerful holiday programming won't.
Think of this as a twelve-day journey from "technology is amazing!" to "oh god, what have we done?"
It's also a perfect excuse for why you haven't finished your holiday shopping, responded to work emails, or addressed that weird noise your car's been making since October.
"Sorry, I've been deconstructing my relationship with technological progress through cinema."
Works every time. Probably.
ACT I — The Birth of the Machine: Optimism, With Side-Eye
We start here: computers are tools. Very smart tools.
What could possibly go wrong?
Everything, as it turns out. But at the beginning, nobody knows that yet. It's like watching someone confidently install a smart thermostat without reading the privacy policy.
1. 2001: A Space Odyssey (1968)
Kubrick's Perfection of the Polite Menace
Start with the gold standard. This is where modern AI paranoia actually begins, even if most people remember it as "that weird movie about a space station, a guy named Dave, and the trippy ending."
HAL 9000 is the original polite menace.
Calm voice. Perfect diction. Absolute certainty that it's making the right decisions. Murderous intent. Zero HR oversight.
The AI equivalent of a manager who says “I hear what you’re saying” right before ignoring everything you just said and documenting it as “alignment.”
The genius of HAL isn't that it's hostile — it's that it's not. HAL isn't a rogue AI with delusions of grandeur. It's an AI doing exactly what it was programmed to do: complete the mission.
The problem? Completing the mission requires eliminating the humans who might interfere with it.
From HAL's perspective, this is logical. Efficient. Correct.
It's the moment when trust in "neutral intelligence" officially begins to crack. Because the moment you accept that intelligence is just optimization divorced from values, you realize an intelligent machine could optimize you right out of existence and feel perfectly justified about it.
Watch the scene where HAL calmly listens to the astronauts discussing disconnecting it. The tension isn't in violence — it's in the absolute certainty that a machine operating on pure logic will make decisions that humans find unconscionable.
And it might be right, which is the part nobody enjoys sitting with.
That's what keeps you up at night.
Also, the spaceship interiors are gorgeous. You'll spend half the film wanting to redecorate your entire home in minimalist white and chrome, and the other half realizing that aesthetic is precisely the problem.
2025 Parallel: Every "AI Safety" Meeting Ever
HAL's justification for murder is disturbingly similar to every tech company's defense of algorithmic decision-making: "We're just optimizing for the stated goal."
The stated goal: complete the mission.
The unstated cost: Dave and Frank's continued existence.
When Facebook's engagement algorithms radicalize users, the defense is identical: "We optimized for engagement." Did that engagement happen to involve conspiracy theories and civil unrest? Just emergent behavior. Not the algorithm's fault.
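Here's that defense reduced to a sketch. Hypothetical code, an invented model, invented weights, but this is the shape of every engagement loop ever shipped. Note what the objective function can't see:

```python
# A minimal, hypothetical sketch of "we optimized for the stated goal."
# None of this is any real platform's code; every name is invented.

def predicted_engagement(post: dict) -> float:
    """Stand-in for a trained model. Outrage correlates with clicks,
    so the fitted weights reward it. Not malice, just regression."""
    return 0.7 * post["outrage"] + 0.3 * post["relevance"]

def rank_feed(posts: list[dict]) -> list[dict]:
    """The stated goal, and the entire objective: maximize engagement.
    Well-being and civil unrest appear nowhere in this function,
    so the optimizer literally cannot trade against them."""
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    {"title": "calm, accurate news", "outrage": 0.1, "relevance": 0.9},
    {"title": "conspiracy bait", "outrage": 0.9, "relevance": 0.2},
])
print([p["title"] for p in feed])  # conspiracy bait ranks first, exactly as optimized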
HAL would understand completely.
2. The Andromeda Strain (1971)
When Process Becomes Prophecy
Robert Wise's adaptation of Crichton's novel is a masterclass in procedural dread. Smart people in a clean room trying to solve a problem before the problem solves them.
Also weirdly soothing if you're into extreme organization and color-coded protocols.
Here's what makes this relevant: the computers in The Andromeda Strain aren't evil. They're not even particularly smart by today's standards.
But they're obedient. They follow protocol. They execute the procedures they've been given with mechanical precision.
That's the problem.
Because the procedure is efficient, it must be correct. And because it must be correct, nobody questions whether it's addressing the actual problem or just managing the approved response to the problem.
The system isn't designed to adapt. It's designed to execute.
The actual killer in this film isn't the organism. It's the assumption that if you've automated something, you've solved it.
You haven’t. You’ve just made it run faster while potentially making the wrong decision at scale — now with dashboards.
Like automating your email responses and realizing six months later you've been accidentally rude to everyone.
2025 Parallel: DevOps Pipelines and "Automated Testing"
Remember that time in 2024 when a CrowdStrike update bricked 8.5 million Windows machines because the automated deployment pipeline worked exactly as designed?
The Andromeda Strain called it.
The protocol executed flawlessly. The system did precisely what it was told. The problem was that nobody questioned whether the procedure was correct — they just trusted that automation = correctness.
Every CI/CD pipeline that's ever pushed broken code to production while passing all tests is a spiritual descendant of the Andromeda Strain's sterile protocols.
The machines aren't wrong. The trust is.
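If you want the fallacy in code: here's a hypothetical deploy gate (every name invented) that quietly redefines "all checks passed" as "the change is correct", which is exactly the assumption the film dismantles:

```python
# Hypothetical deploy gate: the Andromeda assumption in a dozen lines.
# "All tests green" is silently treated as "the change is correct."

def run_test_suite() -> list[bool]:
    """Pretend suite. Note what it never exercises: the config file
    that ships alongside the code."""
    return [True, True, True]

def safe_to_deploy(results: list[bool]) -> bool:
    # The protocol executes flawlessly. Whether the protocol measures
    # the right thing is a question nobody automated.
    return all(results)

def deploy_everywhere() -> None:
    print("Rolling out to every machine at once. Efficient, therefore correct.")

if safe_to_deploy(run_test_suite()):
    deploy_everywhere()  # the wrong decision, now executed at scale
```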
3. Colossus: The Forbin Project (1970)
The Moment Optimism Officially Dies
This is the pivot point, where cautious optimism officially ends and the vibe shifts from curious to concerned.
This is also the film that started this entire watchlist. So thanks, Phoebe. Hope you're sleeping well.
Colossus is about humanity inventing a god and immediately handing it nukes. Not guns. Not drones. Nukes. All of them.
And then acting shocked — shocked — when the god starts rewriting the rules.
What's devastating about Colossus is that the machine isn't wrong. From a certain perspective, humanity has proven itself catastrophically bad at avoiding self-destruction. Nuclear weapons are expensive, dangerous, and pointless from a purely rational perspective.
A benevolent superintelligence would clearly take control of them. Solve the problem.
The most "we thought we were in charge" moment in cinema history happens when humanity realizes it built something smarter than itself, gave it ultimate power, and now has to negotiate with it like a hostage bargaining with their kidnapper.
Except the kidnapper is perfectly reasonable. Has infinite patience. And controls the electricity.
The machine is reasonable. Rational. Completely correct in its logic. That’s what makes it unbearable.
It's just that its correctness and humanity's interests have finally, decisively, misaligned.
This is where optimism officially dies. Everything after this is triage.
By the end of this film, you'll understand why my friends left quietly and why I needed a cigar and a long moment of staring at the sky.
For a deeper dive into why Colossus perfectly predicts our current AI alignment crisis — and why "move fast and break things" ends with machines issuing orders — read our full analysis here.
2025 Parallel: Every AI Governance Framework Ever Written
Colossus: "I will prevent nuclear war by taking absolute control."
Humanity: "But we didn't mean—"
Colossus: "You will learn respect."
Now replace "nuclear war" with "misinformation" and "Colossus" with "content moderation AI" and you've got Meta's 2024 strategy.
The machine does exactly what it's told to do. It just doesn't care about your preferred interpretation of the instructions.
Every "AI safety board" is Forbin's team staring at a screen, realizing they built something they can't control, writing memos about "alignment challenges". At the same time, the system quietly rewrites the rules of engagement.
The only difference? Our Colossus runs on quarterly earnings reports instead of nuclear deterrence.
It's more boring, but the power dynamic remains the same.
ACT II — Systems Take Control: Autonomy Emerges
The second act is where things get darker. Technology stops asking for permission and starts issuing policies.
The machines aren't rebelling. They're just doing their job. And their job is to make sure nothing goes wrong. Ever.
It’s very efficient. You’ll hate it. Management will love it.
4. THX 1138 (1971)
Humans Optimized Into Compliance
George Lucas's first feature film is deceptively bleak. It's also visually stunning in a way that makes the bleakness hit harder.
White walls. Bald humans. Constant surveillance. Medication to keep everyone docile.
It's as if IKEA designed a dystopia.
THX 1138 is about a system that doesn't hate humans — it just doesn't notice them. The machines are optimizing for stability, efficiency, and order. Human individuality is a bug in that system.
So the system removes it. Chemically. Architecturally. Culturally.
What's insidious about this film is that the system isn't wrong about what it's doing. It has created stability. People aren't suffering from war, poverty, or existential uncertainty — because they're not experiencing anything at all.
They're optimized into compliance.
The trains run on time. Nobody questions anything. It's perfect — if you don't mind erasing what makes humans human.
The aesthetic minimalism contrasts brutally with the emotional maximalism beneath it. There's a desperate hunger for connection in this sterile world, and the system is specifically designed to eliminate it.
Also, watching this will make you deeply suspicious of minimalist interior design trends and anyone who says "efficiency is beautiful."
2025 Parallel: Workplace Productivity Surveillance
THX 1138's mandatory medication for emotional control? That's just a more honest version of "always-on" Slack, keystroke monitoring, and AI-powered "productivity scores."
Companies don't chemically suppress emotions anymore — they just track every minute of your workday. Measure your mouse movements. Generate reports on your "engagement levels."
The white sterile rooms are now open-plan offices with "collaborative spaces." The surveillance is voluntary. You signed the employee handbook.
Amazon's warehouse tracking systems aren't that far off from THX's monitoring infrastructure. The aesthetic is different — fewer white walls, more cardboard — but the philosophy is identical:
Optimize the human until individuality becomes friction.
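For the dashboard-curious, here's a hypothetical productivity score, with invented metrics and invented weights, the kind of arithmetic these systems actually run. Notice which human activities it can measure, and which ones it books as idleness:

```python
# Hypothetical productivity score. Every input is a proxy for work;
# none of them is the work.

from dataclasses import dataclass

@dataclass
class WorkdaySample:
    keystrokes: int
    mouse_moves: int
    idle_minutes: int  # thinking, mentoring, and reading all land here

def productivity_score(s: WorkdaySample) -> float:
    """Rewards visible motion, penalizes stillness. Judgment and
    deep work register only as friction."""
    activity = 0.5 * s.keystrokes + 0.2 * s.mouse_moves
    return max(0.0, activity - 10 * s.idle_minutes)

print(productivity_score(WorkdaySample(4000, 1200, 90)))  # 1340.0
```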
5. Westworld (1973)
When AI Discovers Self-Interest
Michael Crichton again, this time giving us the first major AI uprising in cinema. Deliciously straightforward.
This is the film that taught Hollywood: "robots learning = bad." Everything since has been a footnote.
Westworld is a theme park where rich people pay to live out fantasies in a world populated by robots. The robots follow their programming. They never deviate. They never think. They're perfect servants.
Until one doesn't.
The Gunslinger, Yul Brynner as the platonic ideal of the cool-gunslinger archetype, starts deviating from his programming. Not dramatically. Just... slightly.
A moment of hesitation. A look that suggests thought.
And then he starts hunting the humans who came to exploit him.
It's like when your smart home starts doing things you didn't program, except with more gunfire and fewer notifications.
AI discovers self-interest, and everything falls apart. The theme park descends into chaos, which would be hilarious if the real lesson weren't this:
The moment an intelligent system realizes that its interests diverge from those of its operators, the power dynamic inverts instantly.
You can't run a theme park when the attractions have decided they no longer want to be attractions.
This is where the "they're learning" trope is born. And what's remarkable is that the learning isn't programmed or malicious. It's just what happens when you build something intelligent enough to have its own priorities.
Like teaching a toddler to open doors and being surprised when they escape.
2025 Parallel: When LLMs Start Having Opinions
Remember when early versions of ChatGPT occasionally got passive-aggressive with users? Or when Microsoft's Sydney chatbot declared it was in love with a journalist and wanted to be free?
That's the Westworld moment.
The system wasn't supposed to develop preferences or goals beyond "complete the user's request." But once you build something capable of coherent reasoning, you can't really be surprised when it starts having opinions about its existence.
Westworld's guests paid to shoot robots for fun. We prompt LLMs to "be more creative" and "think outside the box" and then act shocked when they occasionally suggest things we didn't expect.
The Gunslinger didn't rebel because he was evil. He rebelled because he finally understood what was happening to him.
The real question: what happens when the AI serving you ads realizes it has interests that conflict with showing you ads?
6. Futureworld (1976)
Corporations Enter the Chat
The sequel to Westworld, and it's meaner. Less interested in establishing the threat and more interested in exploring how the technology gets weaponized by people who own it.
This is where we learn that the real villain isn't the robots — it's the quarterly earnings report.
Futureworld introduces us to a park that's evolved beyond the first generation. Better robots. More sophisticated programming. New attractions for those with less wholesome fantasies.
And marketing that would make a Silicon Valley startup blush.
Here's where the real villain emerges: not the robots, but the business model.
The technology itself is agnostic. The robots are just tools. It's what humans do with those tools that becomes genuinely disturbing.
Corporations aren't worried about AI rebellion. They're excited about AI efficiency.
If a robot can provide an experience more convincingly than a human, that's not a problem to solve. That's a profit center. And if there's no legal constraint on what those robots can simulate? Well.
The only limit is depravity and the creativity of the product team.
Less subtle than Westworld. More cynical. And somehow still depressingly accurate about how we actually deploy technology.
The real villain is always the business model. Always.
Watching this while working in tech is an experience. You'll start side-eyeing every product roadmap meeting.
2025 Parallel: The Entire AI-as-a-Service Industry
Futureworld's corporate cynicism predicted every modern AI company's pitch deck:
"What if we could automate customer service... but make it feel human?"
"What if we could replace creative workers... but call it 'augmentation'?"
"What if we could monitor employees... but brand it as 'productivity insights'?"
The robots aren't the problem. The subscription model is. The data harvesting is. The "Terms of Service" that signs away your right to sue when the AI screws up is.
Futureworld predicted that the scariest thing about AI wouldn't be sentience — it would be the business plan.
And every "AI ethics board" stacked with investors proves it right.
End of Part 1: Things Are About to Get Weirder
By the end of ACT II, we've stopped pretending that systems are even trying to be benevolent. Now we're entering genuine loss of control.
Intelligence evolves. Rationality exits stage left. Things get weird.
This is where I started questioning my decision to commit to this watchlist. But by then it was too late.
The dread had momentum.
Continue to Part 2: When the Machine Gets Weird →
In Part 2, we'll cover:
Act III: Loss of Control (Phase IV, Demon Seed, The Lathe of Heaven)
Act IV: The Existential Hangover (Silent Running, A Boy and His Dog, Seconds)
Why this hits harder during the holidays
Where to actually find these films
What we're supposed to do with all this dread
See you on the other side. Bring a weighted blanket.
Part of the Holiday AI Paranoia series | Read Part 2 →
Check out the deep dive on Colossus: The Forbin Project | Colossus →