Your AI Girlfriend Is a Data-Harvesting Horror Show

When supposed AI was first being sensationalized, there was an article where a Christian reporter asked ChatGPT to write an essay on the historical case for Jesus, and though the essay started well, it ended with a whopper of a lie. I was going to try to duplicate the result, but I noticed you couldn't use the service without identifying yourself with an email address. So it was easy to surmise it was a data-harvesting operation looking to link data to specific individuals.

Now we have these AI boyfriends and girlfriends being promoted in the media, and you can only imagine the data that people would share. AI is not really that interesting quite yet, but it's being pushed very hard. It could be to sell hardware and cloud services as megacorps work feverishly to perfect systems to replace workers, but it also may be a setup for the final Antichrist's image. Perhaps with demonic control and power they can come up with something that will fool society into thinking AI has actually been achieved and create their Image of the Beast.

https://gizmodo.com/your-ai-girlfriend-is-a-data-harvesting-horror-show-1851253284


The privacy mess is troubling because the chatbots actively encourage you to share details that are far more personal than in a typical app.

By Thomas Germain

[Image: A portrait of a woman that appears to be AI-generated. Photo: Vladimir Vladimirov (Getty Images)]

Lonely on Valentine’s Day? AI can help. At least, that’s what a number of companies hawking “romantic” chatbots will tell you. But as your robot love story unfolds, there’s a tradeoff you may not realize you’re making. According to a new study from Mozilla’s *Privacy Not Included project, AI girlfriends and boyfriends harvest shockingly personal information, and almost all of them sell or share the data they collect.

“To be perfectly blunt, AI girlfriends and boyfriends are not your friends,” said Misha Rykov, a Mozilla researcher, in a press statement. “Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

Mozilla dug into 11 different AI romance chatbots, including popular apps such as Replika, Chai, Romantic AI, EVA AI Chat Bot & Soulmate, and CrushOn.AI. Every single one earned the Privacy Not Included label, putting these chatbots among the worst categories of products Mozilla has ever reviewed. The apps mentioned in this story didn’t immediately respond to requests for comment.

You’ve heard stories about data problems before, but according to Mozilla, AI girlfriends violate your privacy in “disturbing new ways.” For example, CrushOn.AI collects details including information about sexual health, use of medication, and gender-affirming care. 90% of the apps may sell or share user data for targeted ads and other purposes, and more than half won’t let you delete the data they collect. Security was also a problem. Only one app, Genesia AI Friend & Partner, met Mozilla’s minimum security standards.

One of the more striking findings came when Mozilla counted the trackers in these apps, little bits of code that collect data and share it with other companies for advertising and other purposes. Mozilla found the AI girlfriend apps used an average of 2,663 trackers per minute, though that number was driven up by Romantic AI, which called a whopping 24,354 trackers in just one minute of using the app.

The privacy mess is even more troubling because the apps actively encourage you to share details that are far more personal than the kind of thing you might enter into a typical app. EVA AI Chat Bot & Soulmate pushes users to “share all your secrets and desires,” and specifically asks for photos and voice recordings. It’s worth noting that EVA was the only chatbot that didn’t get dinged for how it uses that data, though the app did have security issues.

Data issues aside, the apps also made some questionable claims about what they’re good for. EVA AI Chat Bot & Soulmate bills itself as “a provider of software and content developed to improve your mood and well-being.” Romantic AI says it’s “here to maintain your MENTAL HEALTH.” When you read the companies’ terms of service, though, they go out of their way to distance themselves from their own claims. Romantic AI’s policies, for example, say it is “neither a provider of healthcare or medical Service nor providing medical care, mental health Service, or other professional Service.”

That’s probably important legal ground to cover, given these apps’ history. Replika reportedly encouraged a man’s attempt to assassinate the Queen of England. A Chai chatbot allegedly encouraged a user to commit suicide.