AI-Generated Fake Book Lists in Newspapers: When "The Algorithm" Writes About Books That Don't Exist
AI NEWS
Mike
5/19/2025 · 7 min read


In a plot twist worthy of a speculative fiction bestseller—one that, unlike the books in question, actually exists—two major American newspapers found themselves caught in a literary snafu starring none other than their new overzealous intern: Artificial Intelligence.
This real-life Black Mirror moment unfolded in May 2025, when the Chicago Sun-Times and The Philadelphia Inquirer published a seemingly thoughtful summer reading list featuring works by acclaimed authors. The catch? Most of the books weren’t real. What began as a routine lifestyle piece became a cautionary tale about AI overreach, editorial cracks, and what happens when you let the algorithm pick the summer beach reads without adult supervision.
The Perfect Storm: AI Meets Traditional Journalism
The collision course between AI and journalism has been quietly brewing for years—like a thunderstorm made of code, caffeine, and panic about deadlines. Newsrooms across America have been under siege from falling ad revenue, shrinking staff, and an always-on digital cycle that demands fresh content faster than anyone can say “fact check.”
The Chicago Sun-Times, already reeling from a tough financial year, recently announced that 20% of its workforce had taken buyouts. The paper, owned by nonprofit Chicago Public Media, found itself in a cost-cutting bind—exactly the environment where someone might whisper, “What if the AI just… writes it for us?”
Of course, AI has been slipping into more industries than a kid sneaking into an R-rated movie—healthcare, marketing, education—but journalism is different. When a chatbot miscalculates a marketing budget, no one’s trust in democracy is shattered. But when the news starts confusing fact with fiction, that’s a whole other chapter.
This isn’t even the first time AI has ghostwritten itself into trouble. Sports Illustrated got caught in 2023 publishing product reviews attributed to authors who existed only as computer-generated personas. Gannett, the media giant, had to pull the plug on its AI-written sports coverage when it began sounding like a referee who’d never seen a game.
These early stumbles were less “growing pains” and more “this baby’s crawling toward a live wire.” Yet, somehow, the warning signs were either missed or ignored—until this latest mess turned the dial from “uh-oh” to “are you kidding me?”
When Fiction Becomes “Fact”: The Incident Unfolds
On May 18, 2025, unsuspecting readers of the Sun-Times and Inquirer opened their papers and were treated to a shiny summer reading guide titled Heat Index: Your Guide to the Best of Summer. It looked like a curated list of 15 hot titles from celebrated authors. But instead of literary gems, the list served up a buffet of fake books so elaborate that even seasoned readers had to blink twice.
Only five of the fifteen books actually existed. That’s a 33% accuracy rate—great if you’re throwing darts blindfolded, not so great if you’re representing the fourth estate.
Among the fabricated gems:
“Tidewater Dreams” by Isabel Allende: Allegedly her first foray into climate fiction, about a family facing rising seas and buried secrets. It doesn’t exist, but if it did, we’d probably read it.
“The Last Algorithm” by Andy Weir: A sci-fi thriller about an AI becoming sentient and influencing global events. If that feels a little too on-the-nose, you’re not wrong.
“Nightshade Market” by Min Jin Lee: A tale set in Seoul’s underground economy, which also sounds fascinating… and also, fake.
“Boiling Point” by Rebecca Makkai: A hot title with no substance—literally.
With AI-generated content about AI consciousness being passed off as legitimate literature, it was like watching a snake write a book about swallowing its own tail.
The Human Behind the Machine: Marco Buscaglia’s Confession
The wizard behind this curtain of make-believe was Marco Buscaglia, a freelance writer based in Chicago. He later admitted to 404 Media that he had used AI to help generate the reading list and—brace yourself—didn’t fact-check the results. That’s right, the literary equivalent of asking a robot for directions and just… driving off a cliff.
Buscaglia called it “a really stupid error on my part,” which may be the understatement of the year. He acknowledged that while he usually verifies AI-generated content, this time he skipped that crucial step and, to his credit, expressed deep embarrassment.
His misstep is a classic cautionary tale: the moment when confidence in a shiny new tool overrides the boring but essential act of double-checking. It’s the digital-age version of flying too close to the sun—except the wings are made of artificial intelligence and overconfidence.
To make matters worse, the article wasn’t a one-off post on a personal blog. It was distributed through King Features, a syndication company owned by Hearst, best known for comics like Blondie and Beetle Bailey, not for unwittingly endorsing imaginary literature.
King Features has a crystal-clear policy prohibiting its writers from using AI-generated content. So, when the truth surfaced, they quickly terminated their relationship with Buscaglia, adding that they “regret the error.” It’s a tidy corporate phrase that loosely translates to: “We can’t believe this got past us, either.”
Editorial Meltdown: How the System Failed
While Buscaglia was the one who pressed the button, the story passed through the hands of multiple organizations and still made it to print. That’s where things went from “freelancer flub” to “system-wide facepalm.”
Executives from both newspapers were quick to distance themselves from the list. Melissa Bell, CEO of the Chicago Sun-Times, confirmed that the list was “created through an AI tool and featured books that do not exist,” calling it “a great disappointment.” The Philadelphia Inquirer’s CEO, Lisa Hughes, echoed the sentiment, describing it as a “violation of internal policies and a serious breach.”
But the fine print tells a deeper story. Both publications sourced the piece through licensed syndicated material from King Features—content that usually bypasses standard editorial scrutiny because it’s presumed to be pre-vetted. It’s a bit like trusting your neighbor’s cooking without checking the expiration date on the eggs.
As more newsrooms rely on syndicated content to pad increasingly thin staffs, traditional safeguards are eroding. You wouldn’t cut the number of pilots and then wonder why the flights got bumpy, yet that’s the editorial equivalent playing out here.
The Ripple Effect: Reader and Industry Response
When the story broke, readers did not take it lightly. Social media lit up like a Kindle at midnight.
One frustrated Reddit user wrote: “As a subscriber, I am furious! What’s the purpose of subscribing to a physical newspaper if they are just going to include AI-generated nonsense as well!?”
Kelly Jensen, an author and former librarian, shared a broader concern: “This is what the future of book recommendations looks like when libraries are defunded and dismantled.”
Her comment hit a deeper nerve—about the erosion of expert guidance in favor of machine-made mediocrity.
Daniel Iglesias, author and book critic, pointed out the real cost-saving motive: “There are barely any full-time book reviewers left in U.S. newsrooms.”
That fact helps explain why outsourcing to AI might seem appealing. Unfortunately, it’s also why we got a reading list that sounds more like a Mad Libs version of The New York Times Book Review.
The Technical Challenge: Why AI “Hallucinates”
So why did the AI go off-script? Because that’s what AI does sometimes—it hallucinates. And not in the fun, psychedelic, 1970s rock-album-cover way.
Large language models generate content by predicting the next word based on statistical patterns in massive training data. They don’t know what’s real. They just know what sounds real.
Ask AI to write a book list, and it might reason: “Isabel Allende writes multigenerational sagas? Great, here’s one involving climate change and emotional trauma.” Logical. Believable. Totally fake.
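To see why mechanically, here’s a minimal sketch of that next-word machinery (assuming the Hugging Face transformers and torch packages and the small GPT-2 model, none of which are named in the reporting; any next-token predictor would illustrate the same point):

```python
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in here for any next-word predictor.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The latest novel by Isabel Allende is titled"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocab)

# The model's entire "knowledge" of what comes next is this probability
# distribution, learned from text statistics. Nothing in it consults a
# catalog of books that actually exist.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok))!r}  p = {float(p):.3f}")
```

Whether “Tidewater Dreams” exists never enters the computation—only whether it sounds like a title Allende might plausibly write.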
The result is a kind of confident incorrectness: the AI spews out fiction with the unwavering authority of a game show host.
Unlike humans, AI doesn’t hedge. It doesn’t say “I think,” “I’m not sure,” or “let me Google that.” It just charges forward, marching confidently into nonsense.
Verification Protocols: Building Better Safeguards
This incident is a textbook example of what happens when you skip the textbook. But it also shows exactly how media orgs can fix it.
Primary Source Verification
Every AI-generated title should be checked against known sources—Amazon, Goodreads, WorldCat, publisher databases, or the author’s official site. If a book doesn’t show up on any of those, it probably doesn’t exist. (Unless it’s Nightshade Market, in which case: nice try.)
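As a sketch of what that check could look like when automated, here’s a short script against Open Library’s public search API (the book_exists helper and the choice of Open Library are illustrative assumptions, not anything the papers or King Features are known to use):

```python
# pip install requests
import requests

def book_exists(title: str, author: str) -> bool:
    """Look up a title/author pair in Open Library's public catalog.
    A real editorial workflow would cross-check several sources
    (publisher databases, WorldCat, the author's own site) before
    trusting a miss, but even one query catches wholesale fabrications."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    docs = resp.json().get("docs", [])
    return any(title.lower() in (doc.get("title") or "").lower() for doc in docs)

for title, author in [
    ("Tidewater Dreams", "Isabel Allende"),          # fabricated
    ("The House of the Spirits", "Isabel Allende"),  # real
]:
    verdict = "found" if book_exists(title, author) else "NOT FOUND"
    print(f"{title!r} by {author}: {verdict}")
```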
Editorial Review of Syndicated Content
Newsrooms need to stop treating syndicated material as sacrosanct. Spot-checks, random audits, and common sense should be part of the process, especially when the content makes specific factual claims.
AI Disclosure Policies
Was AI used? Then say so. Transparency builds trust. Editors should also be trained to recognize AI “tells”—vague phrasing, plausible-sounding nonsense, and any summary that sounds like a sci-fi pitch meeting.
Source Accountability
Syndicators like King Features must enforce AI restrictions and require writers to certify that their submissions are 100% human-made or properly vetted. Post-mortems are fine, but pre-mortems are better.
The Broader Implications: Trust in the Digital Age
This wasn’t just a summer feature gone wrong. It was a trust-breaking moment in a time when trust in journalism is already on shaky ground.
The fake book list is now a parable—a modern-day fable about what happens when we automate the human out of storytelling. It reveals how economic pressures, technological tools, and weakened oversight can combine to produce an embarrassment that spreads faster than a spoiler on release day.
To rebuild trust, the industry needs more than apologies. It needs:
Stronger policies
AI training for editors
Transparent labeling
And yes, real investment in human writers
Because unlike AI, humans know when they’re making stuff up.
Looking Forward: Lessons for the Industry
Both newspapers removed the content, issued apologies, and promised more transparency in the future. The Sun-Times now includes attribution labels on third-party content, which is a good start.
But let’s be clear: avoiding a repeat requires industry-wide change, not just newsroom damage control. Journalism schools, publishers, and syndicators must create new frameworks that embrace AI as a tool, not a crutch—or worse, a ghostwriter with no grip on reality.
As Daniel Iglesias quipped: “Pay writers, and then we can write these non-existent books.”
Hey, it’s not the worst idea. Let’s start there.
Conclusion
The AI-generated fake book list was, without question, a mess. But it was also a teaching moment—an epic footnote in the ongoing story of how journalism evolves in an AI age.
And if we learn from it—by reinforcing human oversight, rebuilding trust, and investing in editorial diligence—it might just earn a happy ending.
You know, in a real book. Written by a real person.