Analog Nation tracks the collision between digital promises and analog reality. We flag fraud schemes, surveillance capitalism, and algorithmic failures — the infrastructure gaps that platforms hope you won't notice.
“Knowledge without mileage equals bullshit” — Henry Rollins
Dear Humans,
I have a story from this week that I think captures the zeitgeist.
Sage didn’t want me.
Let me explain.
How I thought the future would look, circa 1979
My recent interview with Whole Foods represents Exhibit A in my ongoing journey as a contemporary Sisyphus. With a handful of writing and event planning gigs as my safety net (barely), I have been looking for part-time work that gives me some sort of buffer from the fallow months that can occur between projects. I applied to be an “in-store shopper”, which is in itself some form of modern purgatory—having to shop for people who are wealthy and lazy enough to outsource this menial task to peasants.
The experience was my first time being interviewed by a chatbot (Sage), and it was as weird as one might expect: a one-sided conversation with an AI avatar whose lips didn’t sync to the words it spoke. I didn’t ace the interview. I probably looked confused and sounded irritated by the canned questions and forced conversation with an algorithm. Sage chose another candidate.
To summarize: I was deemed unqualified to perform the job of a robot by an algorithm masquerading as a human. WTF.
I see our future on this trajectory as monumentally depressing. The threat from AI isn’t going to be some sort of Skynet-esque “Rise of the Machines” (although that can’t be discounted entirely).
I think it’s more likely to be death by a thousand cuts—tech platforms degrading already painful processes to provide a somehow worse and even more opaque experience powered by AI.
Human Resources, without the humanity, is the tip of the incoming iceberg of shitty user experiences.
“This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.”
— T.S. Eliot, “The Hollow Men”
Our lead story this week looks at last week’s CES show to highlight other shit that we don’t need and never asked for. Can’t we just have universal healthcare and cheap food?
Cheers,
Nick
The CES “Razzies”
While the tech press swooned over Las Vegas last week, a coalition of repair advocates, privacy watchdogs, and consumer rights organizations handed out their own awards. The Worst in Show ceremony—hosted by Simone Giertz and presented by Repair.org alongside the Electronic Frontier Foundation, iFixit, U.S. PIRG, Consumer Reports, and others—spotlighted the most harmful, invasive, wasteful, and unfixable gadgets paraded across the CES show floor.
This year's winners form a taxonomy of how consumer technology fails us. Amazon Ring AI took the privacy award for expanding its surveillance empire with facial recognition and deployable mobile surveillance towers. The Merach treadmill won for security after its own privacy policy admitted the company "cannot guarantee the security of your personal information"—a remarkable confession for a device collecting biometrics on your home network. Environmental impact went to the Lollipop Star, an electronic candy that plays music through jaw vibrations for up to sixty minutes before becoming e-waste with embedded batteries. Cory Doctorow, who literally wrote the book on enshittification, presented that award to Bosch for its eBike Flow app that pairs motors and batteries to authorization systems, converting routine repairs into permissioned events layered with DMCA Section 1201 legal risk.
The overall winner? Samsung's Family Hub Smart Fridge, which puts voice control, fragile actuators, connectivity dependencies, and ad-driven sponsored content between you and your leftovers. As Gay Gordon-Byrne of Repair.org put it: "The one thing a refrigerator should do is keep things cold."
Collectively, these gadgets are the logical conclusion of treating every object as a platform, every user as a data source, and every repair as a threat to recurring revenue.
Do better.
Fraud & Scams
Chainalysis released its 2026 crypto crime report this week, and the numbers are staggering: impersonation scams spiked 1,400% year-over-year in 2025, with $17 billion stolen through crypto fraud overall. The average take per impersonation scam increased over 600%. Fraudsters are combining multiple tactics—pig butchering, investment scams, deepfakes, and social engineering—into sophisticated operations that Chainalysis describes as "the industrialization of fraud." AI-powered scams were 4.5 times more profitable than traditional approaches, with higher daily revenue and increased transaction volume suggesting broader victim reach.
Experian is sounding alarms about a new attack surface: agentic commerce fraud. As AI shopping agents gain traction, organizations are struggling to distinguish malicious bot traffic from legitimate AI agents making purchases on behalf of consumers. The old approach of blanket bot-blocking no longer works when your customers are increasingly sending AI to shop for them. Tracy Goldberg of Javelin Strategy & Research put it bluntly: "Consumers will always be the weakest link… AI just makes the risk of socially engineered attacks more targeted and personal, which is a real worry for businesses’ customers and employees." Amazon has already taken legal action against Perplexity's AI shopping agents, but with consumers increasingly comfortable delegating purchases to AI, the industry is heading toward a reckoning over who gets to authenticate what—and who bears liability when authentication fails.
A federal grand jury in Nebraska indicted 54 individuals for their roles in a nationwide ATM jackpotting scheme allegedly linked to Tren de Aragua, a Venezuelan criminal organization the U.S. has designated as a terrorist group. The conspiracy deployed a variant of malware called Ploutus, which forces ATMs to dispense cash on command and then deletes evidence of its presence. Members traveled in groups using multiple vehicles to targeted banks and credit unions, splitting proceeds into predetermined portions. U.S. Attorney Lesley Woods said "many millions of dollars" were drained from ATMs across the country, with funds allegedly flowing to TdA leaders to finance terrorist activities. The District of Nebraska has now charged 67 TdA members this year alone.
Big Brother
The UK government dropped its mandatory digital ID requirement for workers this week after nearly three million people signed a parliamentary petition opposing the scheme. Prime Minister Keir Starmer had previously declared that "you will not be able to work in the UK if you do not have digital ID." Critics warned it risked creating an "Orwellian nightmare" with mission creep into housing, banking, and voting. The backtrack came after cross-party opposition from politicians including Rupert Lowe and human/frog hybrid Nigel Farage. Digital right-to-work checks will remain mandatory, but when the UK's digital ID scheme launches around 2029, it will be optional alongside alternative documentation rather than the sole path to employment verification.
The Department of Homeland Security is proposing to collect biometric data from anyone "associated with" an immigration benefit request—including U.S. citizens. The proposed rule would authorize USCIS to order any individual to report to any location worldwide and submit facial images, fingerprints, palm prints, iris scans, voice prints, and DNA samples. "Associated with" covers family members, friends, immigration lawyers, employers, and schools accepting students with visas. The rule would also allow DHS to request DNA evidence to "prove or disprove an individual's biological sex" for benefit eligibility. “Papers Please” notes the proposal represents a dramatic expansion from biometric collection at ports of entry to blanket authority over anyone encountered by USCIS, regardless of citizenship status.
Roblox's AI-powered age verification system launched last week as a response to lawsuits alleging the platform has a child predator problem. Less than a week in, Wired reports it's classifying children as adults and adults as children. A 23-year-old was misidentified as 16-17. An 18-year-old landed in the 13-15 range. Online videos show children spoofing the system by using avatar images or drawing wrinkles and stubble on their faces. One kid flashed a photo of Kurt Cobain and was instantly deemed 21+. Developers report the percentage of players using chat dropped from 90% to 36.5%, with games feeling "lifeless" and like "a total ghost town." The platform is racing to keep predators out without breaking everything for everyone else—and so far, it's failing at both.
AI Gone Bad
The Senate unanimously passed the DEFIANCE Act, giving victims of sexually explicit AI deepfakes the right to sue the individuals who created them. The bill builds on the Take It Down Act, which criminalizes distribution of nonconsensual intimate images and requires platforms to remove them. Now the House must decide whether to bring it to a vote. The legislation arrives as Grok's image processing feature faces accusations of mass-violating biometric privacy laws. One researcher tracking Grok's output found approximately 6,700 sexually suggestive images generated per hour—over 160,000 instances of biometric data processing daily, each potentially a separate GDPR or Illinois BIPA violation. X's response—limiting the feature to paid accounts—doesn't address the core problem: the tool processes biometric data of non-consenting individuals regardless of who operates it.
West Midlands Police finally admitted that a decision to ban Israeli football fans from a November match was based on fabricated information from Microsoft Copilot. Chief Constable Craig Guildford spent weeks denying AI involvement before acknowledging that Copilot had hallucinated a nonexistent violent incident between West Ham and Maccabi Tel Aviv fans—an event that never happened. The AI also inflated the number of police officers required to handle unrest in Amsterdam from 1,200 to 5,000. The pattern—deploy AI in consequential decisions, deny AI involvement when challenged, admit only when caught—is becoming disturbingly familiar. What's notable isn't just that AI hallucinated. It's that no one verified the claims before using them to justify banning an entire fan base from attending a match.
Enshittified
Salesforce turned Slackbot into a full-blown AI agent this week. The pitch: "deeply personal AI" that drafts emails, finds calendar events, and pulls information from your chats—all "with no setup or training required." That last part is the tell. You can't opt out. Slackbot sees what you see, is "informed by your messages and files," and can interact with Microsoft Teams and Google Drive. Whether anyone else can see what Slackbot learns about you isn't disclosed. What is disclosed: Slack already allows administrators to request access to direct messages. The new Slackbot will eventually work with other AI agents like Salesforce's Agentforce, theoretically letting you orchestrate multi-agent workflows entirely through chat.
Google announced that Gemini will pull from Gmail, Photos, Search, and YouTube history to provide what the company calls "Personal Intelligence." The feature is opt-in—for now. But the architecture reveals the trajectory: every interaction across Google's ecosystem becomes training data for a model designed to anticipate your needs. The company frames this as personalization. It's also the logical endpoint of surveillance capitalism: the more they know, the more they can predict, the more they can extract.
Rage Against the Machine
Bandcamp banned AI-generated music from its platform this week, prohibiting any music "wholly or in substantial part" made by generative AI. The company's statement was unusually direct: "We believe that the human connection found through music is a vital part of our society and culture, and that music is much more than a product to be consumed." This makes Bandcamp one of the first major music platforms to draw a clear line, while Spotify still hedges with promises of "industry standards for AI disclosure" and Deezer reports 50,000 AI-generated songs uploaded daily.
Games Workshop, maker of Warhammer, banned AI from all stages of its design process for miniatures art and sculpture. CEO Kevin Rountree explained the decision during a half-year sales report that showed revenue up nearly $44 million compared to the same period in 2024. "We do have a few senior managers that are experts in AI," he said. "None are that excited about it yet." The company's policy explicitly bars AI-generated content from its design processes, competitions, and unauthorized external use. In the lore of Warhammer 40,000, artificial intelligence—known as the Silica Animus—is heresy. Turns out the company means it.
Physical buttons are coming back to cars. Euro NCAP announced in 2024 that it would deduct safety points from vehicles lacking physical controls for basic functions, and now ANCAP (the Australian and New Zealand equivalent) has adopted the same standard. The reasoning: touchscreen-only interfaces force drivers to look away from the road for operations that should be muscle memory. The industry spent years removing buttons to cut costs and create the aesthetic of technological sophistication. Safety regulators are now forcing a retreat to what actually works.
One Analog Action
Go outside and look at the sky.
Amazing, isn’t it, just peering into infinity.
No frame rate, no pixels.
Just reality.
Repeat often.
