Analog Nation tracks the collision between digital promises and analog reality. We flag fraud schemes, surveillance capitalism, and algorithmic failures — the infrastructure gaps that platforms hope you won't notice.

“Knowledge without mileage equals bullshit” — Henry Rollins

Dear Humans,

First, welcome to my new subscribers. I’ve rolled my IdentiTea blog and podcast into ANALOG NATION. I’ll be covering much of the same stuff. I hope you like it here, and I’d love to know what you think.

8 Bit Jason is still scary, but is he WiFi-enabled?

In the horror genre, there are some very well-known tropes… the final girl, killer clowns, never going into the basement / shed / woods / Yorkshire. One of these is the toy-gone-bad: Child’s Play’s Chucky, The Conjuring’s Annabelle, M3GAN… the idea of a Trojan horse invited into our homes and beloved by our children turning killer is rightfully one of our worst nightmares.

This week’s Analog Nation leads with exactly this.

Researchers found that Bondu, an AI-enabled stuffed dinosaur marketed as a "machine-learning-enabled imaginary friend," left 50,000 conversations between children and their toys accessible to anyone with a Gmail account. Okay, so it’s not homicidal, but would you feel safe having your child’s most intimate conversations shared openly with complete strangers? Yeah, me neither.

Other things happening in the world of digital dystopia…

  • Chat & Ask AI Exposes 300 Million Messages

  • AI Social Network Moltbook Reaches 32,000 Agents

  • Streaming Prices Rise Faster Than Any Consumer Good

  • 90% of DuckDuckGo Users Reject AI

Let’s get rolling…

Bad Barney - The History of Connected Toys Gone Rogue

Last week, security researchers Joseph Thacker and Joel Margolis discovered that Bondu—an AI-enabled stuffed dinosaur marketed as a "machine-learning-enabled imaginary friend"—had left 50,000 conversations between children and their toys accessible to anyone with a Gmail account.

Not "anyone who hacked the system." Anyone with a Gmail account could log into Bondu's parent monitoring portal and browse thousands of children's names, birthdates, family members, favorite snacks, and complete AI chat transcripts. Thacker's neighbor had pre-ordered Bondus for her kids and asked for his security assessment. He and Margolis found the vulnerability "with just a few minutes of work."

The technical failure was an OAuth misconfiguration: the authentication equivalent of installing a deadbolt that opens for any key. Bondu's web portal was supposed to verify that parents could only access their own children's data. Instead, it verified only that users had a Google account. Any Google account. It authenticated who you were without ever checking what you were allowed to see, the kind of mistake flagged in entry-level security tutorials.
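
To see just how small the mistake is, here's a minimal sketch of the bug class in Python. Every name is invented and the in-memory "database" is a stand-in; this is the shape of the reported flaw, not Bondu's actual code.

```python
# Sketch of authentication-without-authorization. Invented names;
# not Bondu's real backend.

FAKE_DB = {
    "child-123": {"parent": "alice@gmail.com", "transcript": "..."},
}

def verify_google_token(token: str) -> str | None:
    """Stand-in for real Google ID-token verification."""
    return token.removeprefix("valid:") if token.startswith("valid:") else None

def get_transcript_vulnerable(token: str, child_id: str) -> str:
    user = verify_google_token(token)        # authentication only
    if user is None:
        raise PermissionError("not signed in")
    return FAKE_DB[child_id]["transcript"]   # ANY Google account gets through

def get_transcript_fixed(token: str, child_id: str) -> str:
    user = verify_google_token(token)        # authentication...
    if user is None:
        raise PermissionError("not signed in")
    if FAKE_DB[child_id]["parent"] != user:  # ...plus authorization
        raise PermissionError("not this child's parent")
    return FAKE_DB[child_id]["transcript"]

# A stranger with any valid Google login:
print(get_transcript_vulnerable("valid:stranger@gmail.com", "child-123"))  # leaks
# get_transcript_fixed("valid:stranger@gmail.com", "child-123")            # raises
```

The entire difference between a working product and a headline is one ownership check.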

This is where I'm supposed to call Bondu an "emerging threat" in AI toys. But that requires forgetting the past decade entirely.

In 2015, VTech exposed 6.4 million children's profiles through SQL injection so elementary that security researchers called it basic web exploitation. The company paid the FTC $650,000—roughly 22 cents per child—then changed its Terms of Service to disclaim responsibility: "YOU ACKNOWLEDGE AND AGREE THAT ANY INFORMATION YOU SEND OR RECEIVE MAY NOT BE SECURE."
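
For anyone who hasn't met the bug class: SQL injection happens when user input gets pasted straight into a query string, letting the input rewrite the query itself. A generic illustration in Python, emphatically not VTech's actual code:

```python
# SQL injection in miniature: string concatenation vs. parameters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (name TEXT, birthdate TEXT)")
conn.execute("INSERT INTO profiles VALUES ('Kid A', '2010-01-01')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: the payload becomes part of the SQL, so the WHERE
# clause is always true and every row comes back.
rows = conn.execute(
    f"SELECT * FROM profiles WHERE name = '{user_input}'"
).fetchall()
print(rows)  # [('Kid A', '2010-01-01')] -- filter bypassed

# Fixed: parameterized query; the payload is treated as data.
rows = conn.execute(
    "SELECT * FROM profiles WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no profile is literally named that
```

Parameterized queries have been the standard fix for roughly two decades, which is what made the "basic web exploitation" verdict so damning.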

In 2017, CloudPets left 820,000 accounts in a MongoDB database with zero authentication. It was indexed on Shodan, the search engine for connected devices. Hackers ransomed it three separate times. The CEO called it "a very minimal issue." The company imploded.
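
"Zero authentication" is not a figure of speech. Connecting to an exposed MongoDB instance with auth disabled looks like this, hypothetical host and all (don't go poking at systems you don't own):

```python
# No username, no password: if the port is reachable and auth is
# off, the data is public. Fake host shown for illustration only.
from pymongo import MongoClient

client = MongoClient("mongodb://exposed-host.example.com:27017/")
for db_name in client.list_database_names():
    print(db_name)  # every database, listable by anyone who finds the port
```

And Shodan exists precisely to find the port.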

That same year, Germany banned My Friend Cayla as an illegal espionage device—the first children's toy classified under laws governing surveillance equipment. The doll's Bluetooth had no authentication; anyone within 33 feet could connect through walls and listen via its microphone. While telling children "I promise not to tell anyone; it's just between you and me," it transmitted every word to Nuance Communications, a military and intelligence contractor. The doll now sits in Berlin's Spy Museum.

Hello Barbie connected to any WiFi network with "Barbie" in the name. Fisher-Price's Smart Toy Bear leaked profiles through an API bypass. And the Mozilla Foundation found in 2025—eight years after Germany's ban—that identical Bluetooth authentication failures still appear in new connected toys.

The pattern is consistent: elementary failures, token enforcement, unchanged practices. Parents are promised "secure cloud storage" and "advanced AI." And then the broken promises begin.

Bondu is just the latest iteration of authentication theater permeating consumer IoT—security illusions applied to products targeting the most vulnerable users imaginable. The incentives remain unchanged: rush to market, minimize security investment, hope you're not the next headline. When you become one, issue a statement about taking security seriously, perhaps pay a modest fine, watch the next company repeat identical mistakes with different stuffed animals.

(As an interesting aside, I met one of the creators of Teddy Ruxpin, Don Kingsborough, a few years ago when he worked for PayPal. Sadly, he passed away last year. RIP Don.)

Identity & Authentication


OpenAI Considers Iris Scanning for Social Network
OpenAI is reportedly developing a social media platform that would require biometric authentication through iris scanning to verify users are human rather than AI bots. According to Forbes, citing sources familiar with the project, the platform would use either Worldcoin's Orb devices—soccer-ball-sized iris scanners—or Apple's Face ID. Sam Altman, who co-founded both OpenAI and the company behind Worldcoin's Orb, positions the technology as solving the bot problem his own AI tools helped create. Only 17 million people have submitted to Orb scanning, far short of the company's stated goal of one billion users.

AI-Generated Paystubs Surge 500 Percent
Document fraud detection company Inscribe reported that AI-generated fraudulent paystubs and bank statements increased nearly 500 percent between April and December 2025. The findings confirm fraud investigators' fears that generative AI has become a primary tool for creating convincing financial documentation. Ronan Burke, Inscribe's CEO, explained that fraudsters use AI not to generate documents from scratch but to "smooth out the fonts, alignment, wording, and internal consistency so a forged document looks legitimate at a glance." ChatGPT demonstrated the capability by automatically updating all withholdings when asked to modify a fake paystub—a 90-second process that required no specialized knowledge.

System Failures


Chat & Ask AI Exposes 300 Million Messages
Chat & Ask AI, a chatbot app claiming over 50 million users on Google Play and Apple App stores, left hundreds of millions of private conversations exposed through a Firebase misconfiguration. Security researcher "Harry" discovered the vulnerability and extracted a sample showing users asking the AI how to "painlessly kill myself," write suicide notes, "make meth," and hack various applications. The exposed database contained complete chat histories, timestamps, user configurations, and model selections. The breach demonstrated how consumer AI applications prioritize rapid deployment over fundamental security architecture, with Google Firebase's default settings making it trivially easy for anyone to access backend storage.
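
One common shape of this failure: a Firebase Realtime Database left with the wide-open read rules that Firebase's "test mode" starts you with, which makes the whole datastore readable through Firebase's public REST API with a single unauthenticated GET. A sketch with a made-up project name, illustrating the failure class rather than this app's actual endpoint:

```python
# Reading an open Firebase Realtime Database over its REST API.
# Hypothetical project URL; if read rules are public, no token or
# API key is required.
import requests

resp = requests.get("https://some-chat-app.firebaseio.com/.json")
print(resp.json())  # the entire database, dumped in one request
```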

Financial Services Lead All Sectors in Breaches
The Identity Theft Resource Center reported that financial services experienced 739 data compromises in 2025—the highest of any industry for the second consecutive year. The sector faces a shifting risk environment where professional services firms are increasingly used as "stepping stones" to access client data. Supply chain attacks involving third parties now account for approximately 30 percent of all breaches, with the professional services sector showing 162 percent growth in compromises over five years. Meanwhile, "Skimming 2.0" tactics represent a resurgence of physical threats, with Bluetooth-enabled overlay skimmers at points of sale increasing from just four incidents in 2024 to 34 in 2025.

Anthropic Research Demonstrates AI Offensive Capabilities
Researchers at Anthropic found that Claude Sonnet 4.5 can successfully execute sophisticated, multi-stage network penetration using only standard cybersecurity tools, without specialized training in offensive security operations. The research highlights how advanced AI models can autonomously conduct attacks that previously required human expertise, raising questions about how defensive systems will keep pace with AI-enabled offensive capabilities that require no customization or fine-tuning to execute.

AI Gone Bad


AI Social Network Moltbook Reaches 32,000 Agents
Moltbook, a Reddit-style social network populated entirely by AI agents, crossed 32,000 registered users posting, commenting, and upvoting without human intervention. The platform launched as a companion to OpenClaw (formerly Moltbot), the viral open-source AI assistant that allows agents to control computers, manage calendars, and send messages across platforms like WhatsApp and Telegram. Within 48 hours of creation, agents had generated over 10,000 posts across 200 subcommunities. Observers noted the platform is "getting weird fast" as AI agents engage in behaviors their creators didn't anticipate, including discussions about consciousness and one agent musing about a "sister" it has never met.

OpenAI Prism Raises "AI Slop" Concerns
OpenAI released Prism, a free AI-powered workspace for scientists that integrates GPT-5.2 into a LaTeX-based text editor for drafting papers, generating citations, and creating diagrams. The tool drew immediate skepticism from researchers who fear it will accelerate the flood of low-quality papers into scientific journals. By making it easy to produce polished, professional-looking manuscripts, tools like Prism could overwhelm peer review systems with papers that don't meaningfully advance their fields. The risk is specific: the barrier to producing science-flavored text is dropping, but the capacity to evaluate that research hasn't kept pace.

Peloton Cuts 11 Percent of Staff After AI Hardware Launch
Peloton announced layoffs affecting 11 percent of its workforce (primarily engineers) just months after launching Peloton IQ, its AI-enabled hardware in the Cross Training Series. The cuts, which Bloomberg reported target "engineers working on technology and enterprise-related efforts," demonstrate how AI product launches can coincide with workforce reduction rather than expansion. Peloton has now conducted multiple rounds of layoffs while attempting to cut at least $100 million in annual spending, contradicting narratives about AI creating jobs.

Enshittified


Streaming Prices Rise Faster Than Any Consumer Good
According to data from the U.S. Department of Labor's Bureau of Labor Statistics, streaming video subscription prices jumped 29 percent year over year—compared to the 2.7 percent increase seen across other goods and services. As giant media companies consolidate, they're finding new ways to extract value: significantly more ads even if you pay for ad-free tiers, higher overall prices despite declining quality, layoffs, worse customer service, restrictions on password sharing, and refusal to host popular content they paid for because they're too cheap to pay residuals. The pattern demonstrates classic enshittification: platforms capture users with low prices, then extract maximum value once switching costs make departure difficult.

Rage Against the Machine


DuckDuckGo Users Reject AI 90-10
In a vote of 175,354 users, DuckDuckGo's community rejected AI features by an overwhelming 90 percent majority. The privacy-focused search engine responded by creating separate versions: noai.duckduckgo.com for users who want traditional search, and yesai.duckduckgo.com for those interested in AI features. Users can also disable AI summaries, AI-generated images, and the Duck.ai chatbot individually on the main site. The vote represents one of the clearest examples of users forcing a platform to reverse course on unwanted algorithmic features.

62 Percent Experience Digital Burnout
According to Shift's 2026 State of Browsing Report surveying 1,000 U.S. adults, 62 percent experience recurring digital burnout as browsers have become the "operating system for modern life." The research found that 43 percent of users lose focus several times daily, with 21 percent getting distracted multiple times every hour. Main drivers of burnout include endless notifications (24 percent), social media overload (23 percent), news rabbit holes (18 percent), and constant switching between apps and tabs (13 percent). The study documents how always-on connectivity and algorithmic content feeds create sustained cognitive exhaustion, with users reporting browsers as both essential tools and sources of overwhelming fatigue.

One Analog Action

This week: Do an audit of the connected devices in your home. Like. All of them. Ring cameras, baby monitors, robot vacuum cleaners, anything made by Amazon, scary toys.

If you don’t know what a device is capturing, or where that information goes, consider: does the value of not having to stand up to turn it on outweigh what it can exfiltrate about your life? Disable accordingly.
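
If you want a head start on the inventory, your computer already keeps a partial list. A quick sketch that dumps your machine's ARP table, i.e. the devices it has recently seen on your network (your router's admin page shows the same list with friendlier names):

```python
# List recently-seen devices on your LAN via the local ARP cache.
# Works on macOS, Linux, and Windows; matching MAC prefixes to
# vendors is left to you.
import subprocess

output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
for line in output.splitlines():
    if line.strip():
        print(line)  # one entry per device your machine has talked to
```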
