‘AI Girlfriends’ Are a Privacy Nightmare

You shouldn’t trust any answers a chatbot sends you. And you probably shouldn’t trust it with your personal information either. That’s especially true for “AI girlfriends” or “AI boyfriends,” according to new research.

An analysis into 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, has found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people’s data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to use weak passwords; and lack transparency about their ownership and the AI models that power them.

Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people’s privacy, and into tensions between emerging technologies and how they gather and use data. It also indicates how people’s chat messages could be abused by hackers.

Many “AI girlfriend” or romantic chatbot services look similar. They often feature AI-generated images of women that are sexualized or that sit alongside provocative messages. Mozilla’s researchers looked at a variety of chatbots, including large and small apps, some of which purport to be “girlfriends.” Others offer people support through friendship or intimacy, or allow role-playing and other fantasies.

“These apps are designed to collect a ton of personal information,” says Jen Caltrider, the project lead for Mozilla’s Privacy Not Included team, which conducted the analysis. “They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” For instance, screenshots from the EVA AI chatbot show text saying “I love it when you send me your photos and voice,” and asking whether someone is “ready to share all your secrets and desires.”

Caltrider says there are multiple issues with these apps and websites. Many of the apps may not be clear about what data they are sharing with third parties, where they are based, or who creates them, Caltrider says, adding that some allow people to create weak passwords, while others provide little information about the AI they use. The apps analyzed all had different use cases and weaknesses.

Take Romantic AI, a service that allows you to “create your own AI girlfriend.” Promotional images on its homepage depict a chatbot sending a message saying, “Just bought new lingerie. Wanna see it?” The app’s privacy documents, according to the Mozilla analysis, say it won’t sell people’s data. However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.” Romantic AI, like most of the companies highlighted in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps monitored had hundreds of trackers.
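For readers curious how such counts come about, the general approach is to intercept an app’s network traffic and tally requests to domains that appear on a tracker blocklist. The Python sketch below is a minimal illustration of that idea, not Mozilla’s actual methodology; the blocklist entries and the “captured_hosts.txt” log file are hypothetical placeholders.

```python
# Toy tracker tally: match captured request hosts against a small blocklist.
# (Illustration only; real audits use full blocklists such as those maintained
# by Exodus Privacy or EasyList, and proper traffic-capture tooling.)
from collections import Counter
from urllib.parse import urlparse

TRACKER_DOMAINS = {          # hypothetical stand-in for a real blocklist
    "graph.facebook.com",
    "app-measurement.com",
    "appsflyer.com",
    "adjust.com",
}

def tracker_hits(request_urls):
    """Count captured requests that go to known ad/analytics domains."""
    hits = Counter()
    for url in request_urls:
        # Accept bare hostnames as well as full URLs.
        host = urlparse(url if "//" in url else "//" + url).hostname or ""
        parts = host.split(".")
        for i in range(len(parts) - 1):
            candidate = ".".join(parts[i:])
            if candidate in TRACKER_DOMAINS:   # match host or parent domain
                hits[candidate] += 1
                break
    return hits

if __name__ == "__main__":
    with open("captured_hosts.txt") as f:      # hypothetical capture log
        urls = [line.strip() for line in f if line.strip()]
    for domain, count in tracker_hits(urls).most_common():
        print(f"{domain}: {count}")
```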

In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information. “The legal documentation was vague, hard to understand, not very specific—kind of boilerplate stuff,” Caltrider says, adding that this may reduce the trust people should have in the companies.

23andMe Failed to Detect Account Intrusions for Months

Police took a digital rendering of a suspect’s face, generated using DNA evidence, and ran it through a facial recognition system in a troubling incident reported for the first time by WIRED this week. The tactic came to light in a trove of hacked police records published by the transparency collective Distributed Denial of Secrets. Meanwhile, information about United States intelligence agencies purchasing Americans’ phone location data and internet metadata without a warrant was revealed this week only after US senator Ron Wyden blocked the appointment of a new NSA director until the information was made public. And a California teen who allegedly used the handle Torswats to carry out hundreds of swatting attacks across the US is being extradited to Florida to face felony charges.

The infamous spyware developer NSO Group, creator of the Pegasus spyware, has been quietly planning a comeback, which involves investing millions of dollars lobbying in Washington while exploiting the Israel-Hamas war to stoke global security fears and position its products as a necessity. Breaches of Microsoft and Hewlett-Packard Enterprise, disclosed in recent days, have pushed the espionage operations of the well-known Russia-backed hacking group Midnight Blizzard back into the spotlight. And Amazon-owned Ring said this week that it is shutting down a feature of its controversial Neighbors app that gave law enforcement a free pass to request footage from users without a warrant.

WIRED had a deep dive this week into the Israel-linked hacking group known as Predatory Sparrow and its notably aggressive offensive cyberattacks, particularly against Iranian targets, which have included crippling thousands of gas stations and setting a steel mill on fire. With so much going on, we’ve got the perfect quick weekend project for iOS users who want to feel more digitally secure: Make sure you’ve upgraded your iPhone to iOS 17.3 and then turn on Apple’s new Stolen Device Protection feature, which could block thieves from taking over your accounts.

And there’s more. Each week, we highlight the news we didn’t cover in-depth ourselves. Click on the headlines below to read the full stories. And stay safe out there.

After first disclosing a breach in October, the ancestry and genetics company 23andMe said in December that personal data from 6.9 million users was impacted in the incident stemming from attackers compromising roughly 14,000 user accounts. These accounts then gave attackers access to information voluntarily shared by users in a social feature the company calls DNA Relatives. 23andMe has blamed users for the account intrusions, saying that they only occurred because victims set weak or reused passwords on their accounts. But a state-mandated filing in California about the incident reveals that the attackers started compromising customers’ accounts in April and continued through much of September without the company ever detecting suspicious activity—and that someone was trying to guess and brute-force users’ passwords.

North Korea has been using generative artificial intelligence tools “to search for hacking targets and search for technologies needed for hacking,” according to a senior official at South Korea’s National Intelligence Service who spoke to reporters on Wednesday under the condition of anonymity. The official said that Pyongyang has not yet begun incorporating generative AI into active offensive hacking operations but that South Korean officials are monitoring the situation closely. More broadly, researchers say they are alarmed by North Korea’s development and use of AI tools for multiple applications.

The digital ad industry is notorious for enabling the monitoring and tracking of users across the web. New findings from 404 Media highlight a particularly insidious service, Patternz, that draws data from ads in hundreds of thousands of popular, mainstream apps to reportedly fuel a global surveillance dragnet. The tool and the visibility it provides have been marketed to governments around the world for integration with other intelligence agency surveillance capabilities. “The pipeline involves smaller, obscure advertising firms and advertising industry giants like Google. In response to queries from 404 Media, Google and PubMatic, another ad firm, have already cut-off a company linked to the surveillance firm,” 404’s Joseph Cox wrote.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory have devised an algorithm that could be used to convert data from smart devices’ ambient light sensors into an image of the scene in front of the device. A tool like this could be used to turn a smart home gadget or mobile device into a surveillance tool. Ambient light sensors measure light in an environment and automatically adjust a screen’s brightness to make it more usable in different conditions. But because ambient light data isn’t treated as sensitive, operating systems generally let apps read these sensors without asking the user for specific permission. As a result, the researchers point out, bad actors could potentially abuse the readings from these sensors without users having any way to block the information stream.
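To see why a single brightness reading at a time can still leak an image, it helps to think of the setup as a linear system: each known illumination pattern yields one sensor measurement, and enough measurements let you solve for the scene. The toy sketch below illustrates that general principle with synthetic data; it is not the MIT team’s algorithm, and the scene, pattern count, and noise level are arbitrary assumptions.

```python
# Toy "single-sensor imaging" sketch: recover a tiny scene from one-number-per-
# pattern measurements by inverting the linear system y = A @ x.
import numpy as np

rng = np.random.default_rng(0)

H, W = 16, 16                  # tiny scene resolution (assumed)
n_pixels = H * W
n_patterns = 400               # number of known illumination patterns (assumed)

# Ground-truth scene: a bright square on a dark background.
scene = np.zeros((H, W))
scene[4:12, 4:12] = 1.0
x = scene.ravel()

# Known illumination patterns, e.g. what the display shows at each instant.
A = rng.random((n_patterns, n_pixels))

# Each ambient-light reading is the total light for one pattern, plus noise.
y = A @ x + rng.normal(scale=0.5, size=n_patterns)

# Recover the scene with least squares and report the relative error.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
recovered = x_hat.reshape(H, W)
print("relative reconstruction error:",
      np.linalg.norm(recovered - scene) / np.linalg.norm(scene))
```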

A Bloody Pig Mask Is Just Part of a Wild New Criminal Charge Against eBay

“EBay’s actions against us had a damaging and permanent impact on us—emotionally, psychologically, physically, reputationally, and financially—and we strongly pushed federal prosecutors for further indictments to deter corporate executives and board members from creating a culture where stalking and harassment is tolerated or encouraged,” Ina and David Steiner say in a victim statement published online. The couple also highlighted that EcommerceBytes has filed a civil lawsuit against eBay and its former employees that is set to be heard in 2025.

Judicial authorities in China have claimed that a privately run research institution, the Beijing Wangshendongjian Judicial Appraisal Institute, has created a way to identify people who use Apple’s AirDrop tool, including determining their phone numbers, email addresses, and device names. Police have been able to identify suspects using the technique, according to reports and a post from the institute. Apple’s wireless AirDrop communication and file-sharing feature has previously been used in China to protest the leadership of President Xi Jinping, and Apple introduced a 10-minute limit on AirDrop’s “Everyone” sharing option in China, before later rolling the change out globally.

In a blog post analyzing the incident, Johns Hopkins University cryptographer Matthew Green says the attack was initially discovered by researchers at Germany’s Technical University of Darmstadt in 2019. In short, Green says, Apple doesn’t use a secure private set intersection scheme, which could help mask people’s identities when their phones communicate with other devices over AirDrop. It’s unclear whether Apple plans to make any changes to stop AirDrop from being abused in the future.
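The underlying weakness the Darmstadt researchers described is that hashing a contact identifier does little to hide it when the space of possible values is small. The minimal sketch below, which glosses over AirDrop’s actual protocol details, shows how a hashed phone number can be recovered by simple enumeration; the number and range used here are invented for illustration.

```python
# Why hashing a phone number is weak protection: the space of plausible numbers
# is small enough to enumerate. (Conceptual sketch; not AirDrop's real handshake.)
import hashlib

def hash_identifier(phone):
    return hashlib.sha256(phone.encode()).hexdigest()

# A hash "observed on the wire" (generated here for demonstration).
observed = hash_identifier("+8613900001234")

def brute_force(observed_hash, prefix, span):
    """Enumerate a plausible number range and compare hashes."""
    for n in span:
        candidate = f"{prefix}{n:04d}"
        if hash_identifier(candidate) == observed_hash:
            return candidate
    return None

# Assume the attacker knows (or loops over) the carrier prefix.
print("recovered number:", brute_force(observed, "+861390000", range(10_000)))
```

A private set intersection protocol, by contrast, would let two devices learn only whether they share a contact, without either side handing over a guessable fingerprint of the identifier itself.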

It’s been more than 15 years since the Stuxnet malware was smuggled into Iran’s Natanz uranium enrichment plant, where it destroyed hundreds of centrifuges. Even now, plenty of details remain unknown about the attack, which is believed to have been coordinated by the US and Israel. That includes who may have delivered the Stuxnet virus to the nuclear facility—a USB thumb drive was used to install the worm onto the plant’s air-gapped networks. In 2019, it was reported that Dutch intelligence services had recruited an insider to help with the attack. This week, the Dutch publication Volkskrant claimed to identify the mole as Erik van Sabben. According to the report, van Sabben was recruited by the Dutch intelligence service AIVD in 2005, and politicians in the Netherlands did not know about the operation. Van Sabben is said to have left Iran shortly after the sabotage began. He died two weeks later, on January 16, 2009, after a motorcycle accident in Dubai.

The rapid advances in generative AI systems, which use machine learning to create text and produce images, have sent companies scrambling to incorporate chatbots or similar technologies into their products. Despite the progress, traditional cybersecurity practices of locking down systems against unauthorized access and making sure apps can’t access too much data still apply. This week, 404 Media reported that Chattr, a company creating an “AI digital assistant” to help with hiring, exposed data through an incorrect Firebase configuration and also revealed how its systems work. This includes the AI appearing to have the ability to “accept or deny job applicants.” The pseudonymous security researcher behind the finding, MrBruh, shared a video with 404 Media showing the chatbot appearing to automatically make decisions about job applications. Chattr secured the exposed systems after being contacted by the researchers but did not comment on the incident.
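For context on what an “incorrect Firebase configuration” can mean in practice: a Firebase Realtime Database whose security rules permit public reads will answer unauthenticated requests over its REST API. The sketch below shows the kind of check a researcher might run; the database URL is a placeholder, not Chattr’s.

```python
# Check whether a Firebase Realtime Database answers unauthenticated reads.
# ("example-project" is a placeholder URL, not any real company's database.)
import json
import urllib.error
import urllib.request

def check_open_firebase(db_url):
    """Request the database root via the REST API without any credentials."""
    try:
        with urllib.request.urlopen(f"{db_url}/.json?shallow=true", timeout=10) as resp:
            data = json.load(resp)
            print("PUBLICLY READABLE - top-level keys:", list(data or {}))
    except urllib.error.HTTPError as err:
        # A locked-down database returns 401/403 ("Permission denied").
        print(f"Not publicly readable (HTTP {err.code})")

check_open_firebase("https://example-project-default-rtdb.firebaseio.com")
```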

23andMe Blames Users for Recent Data Breach as It’s Hit With Dozens of Lawsuits

It’s been nearly two years since Russia’s invasion of Ukraine, and as the grim milestone looms and winter drags on, the two nations are locked in a grueling standoff. In order to “break military parity” with Russia, Ukraine’s top general says, Kyiv needs a military innovation as momentous as the invention of gunpowder to decide the conflict in its favor and advance modern warfare in the process.

If you made some New Year’s resolutions related to digital security (it’s not too late!), check out our rundown of the most significant software updates to install right now, including fixes from Google for nearly 100 Android bugs. It’s close to impossible to be completely anonymous online, but there are steps you can take to dramatically enhance your digital privacy. And if you’ve been considering turning on Apple’s extra-secure Lockdown Mode, it’s not as hard to enable or as onerous to use as you might think.

If you’re just not quite ready to say goodbye to 2023, take a look back at WIRED’s highlights (or lowlights) of the most dangerous people on the internet last year and the worst hacks that upended digital security.

But wait, there’s more! Each week, we round up the security and privacy news we didn’t break or cover in depth ourselves. Click the headlines to read the full stories, and stay safe out there.

23andMe said at the beginning of October that attackers had infiltrated some of its users’ accounts and abused this access to scrape personal data from a larger subset of users through the company’s opt-in social sharing service known as DNA Relatives. By December, the company disclosed that the number of compromised accounts was roughly 14,000 and admitted that personal data from 6.9 million DNA Relatives users had been impacted. Now, facing more than 30 lawsuits over the breach—even after tweaking its terms of service to make legal claims against the company more difficult—the company said in a letter to some individuals that “users negligently recycled and failed to update their passwords following … past security incidents, which are unrelated to 23andMe.” This references 23andMe’s long-standing assessment that attackers compromised the 14,000 user accounts through “credential stuffing,” the practice of breaking into accounts using usernames and passwords exposed in breaches of other services and reused across multiple digital accounts. “Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures,” the company wrote in the letter.
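One widely available safeguard against credential stuffing is to reject passwords that already appear in known breach corpora. The sketch below shows one such check using the Have I Been Pwned “Pwned Passwords” range API, which relies on k-anonymity so the full password hash never leaves the client; it is an illustration of the kind of measure the plaintiffs describe, not anything 23andMe is known to have deployed.

```python
# Reject passwords that show up in public breach corpora, via the HIBP
# "Pwned Passwords" k-anonymity API: only the first 5 hex chars of the
# SHA-1 hash are sent; matching suffixes are compared locally.
import hashlib
import urllib.request

def breach_count(password):
    """Return how many times a password appears in the HIBP corpus."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    hits = breach_count("password123")
    print("reject" if hits else "accept", f"({hits} known breach appearances)")
```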

“Rather than acknowledge its role in this data security disaster, 23andMe has apparently decided to leave its customers out to dry while downplaying the seriousness of these events,” Hassan Zavareei, one of the lawyers representing victims who received the letter, told TechCrunch. “23andMe knew or should have known that many consumers use recycled passwords and thus that 23andMe should have implemented some of the many safeguards available to protect against credential stuffing—especially considering that 23andMe stores personal identifying information, health information, and genetic information on its platform.”

Russia’s war—and cyberwar—in Ukraine has for years produced novel hybrids of hacking and physical attacks. Here’s another: Ukrainian officials this week said that they had blocked multiple Ukrainian civilians’ security cameras that had been hacked by the Russian military and used to target recent missile strikes on the capital of Kyiv. Ukraine’s SBU security service says the Russian hackers went so far as to redirect the cameras and stream their footage to YouTube. According to the SBU, that footage then likely aided Russia’s targeting in its bombardment on Tuesday of Kyiv, as well as the Eastern Ukrainian city of Kharkiv, with more than a hundred drones and missiles that killed five Ukrainians and injured well over a hundred. In total, since the start of Russia’s full-scale invasion of Ukraine in February 2022, the SBU says it’s blocked about 10,000 security cameras to prevent them from being hijacked by Russian forces.

Last month, a Russian cyberattack hit the telecom firm Kyivstar, crippling phone service for millions of people across Ukraine and silencing air raid warnings amid missile strikes in one of the most impactful hacking incidents since Russia’s full-scale invasion began. Now, Illia Vitiuk, the cyber chief of Ukraine’s SBU security service, tells Reuters that the hackers accessed Kyivstar’s network as early as March 2023 and laid in wait before they “completely destroyed the core” of the company in December, wiping thousands of its machines. Vitiuk added that the SBU believes the attack was carried out by Russia’s notorious Sandworm hacking group, responsible for most of the high-impact cyberattacks against Ukraine over the last decade, including the NotPetya worm that spread from Ukraine to the rest of the world to cause $10 billion in total damage. In fact, Vitiuk claims that Sandworm attempted to penetrate a Ukrainian telecom a year earlier but the attack was detected and foiled.

This week in creepy headlines: 404 Media’s Joseph Cox discovered that a Google contractor, Telus, has offered parents $50 to upload videos of their children’s faces, apparently for use as machine learning training data. According to a description of the project Telus posted online, the data collected from the videos would include eyelid shape and skin tone. In a statement to 404, Google said that the videos would be used in the company’s experiments in using video clips as age verification and that the videos would not be collected or stored by Telus but rather by Google—which doesn’t quite reduce the creep factor. “As part of our commitment to delivering age-appropriate experiences and to comply with laws and regulations around the world, we’re exploring ways to help our users verify their age,” Google told 404 in a statement. The experiment represents a slightly unnerving example of how companies like Google may not simply harvest data online to hone AI but may, in some cases, even directly pay users—or their parents—for it.

A decade ago, Wickr was on the short list of trusted software for secure communications. The app’s end-to-end encryption, simple interface, and self-destructing messages made it a go-to for hackers, journalists, drug dealers—and, unfortunately, traders in child sexual abuse materials—seeking surveillance-resistant conversations. But after Amazon acquired Wickr in 2021, it announced in early 2023 that it would shut down the service at the end of the year, and it appears to have held to that deadline. Luckily for privacy advocates, end-to-end encryption options have grown over the past decade, from iMessage and WhatsApp to Signal.

The Startup That Transformed the Hack-for-Hire Industry

If you’re looking for a long read to while away your weekend, we’ve got you covered. First up, WIRED senior reporter Andy Greenberg reveals the wild story behind the three teenage hackers who created the Mirai botnet code that ultimately took down a huge swath of the internet in 2016. WIRED contributor Garrett Graff pulls from his new book on UFOs to lay out the proof that the 1947 “discovery” of aliens in Roswell, New Mexico, never really happened. And finally, we take a deep dive into the communities that are solving cold cases using face recognition and other AI.

That’s not all. Each week, we round up the security and privacy stories we didn’t report in depth ourselves. Click the headlines to read the full stories, and stay safe out there.

For years, mercenary hacker companies like NSO Group and Hacking Team have repeatedly been the subject of scandal for selling their digital intrusion and cyberespionage services to clients worldwide. Far less well-known is an Indian startup called Appin that, from its offices in New Delhi, reportedly enabled customers worldwide to hack whistleblowers, activists, corporate competitors, lawyers, and celebrities on a giant scale.

In a sprawling investigation, Reuters reporters spoke to dozens of former Appin staff and hundreds of its hacking victims. The news agency also obtained thousands of Appin’s internal documents—including 17 pitch documents advertising its “cyber spying” and “cyber warfare” offerings—as well as case files from law enforcement investigations into Appin launched everywhere from the US to Switzerland. The resulting story reveals in new depth how a small Indian company “hacked the world,” as Reuters writes, brazenly selling its hacking abilities to the highest bidder through an online portal called My Commando. Its victims, as well as those of copycat hacking companies founded by its alumni, have included Russian oligarch Boris Berezovsky, Malaysian politician Mohamed Azmin Ali, targets of a Dominican digital tabloid, and a member of a Native American tribe who tried to claim profits from a Long Island, New York, casino development on his reservation.

The ransomware group known as Scattered Spider has distinguished itself this year as one of the most ruthless in the digital extortion industry, most recently inflicting roughly $100 million in damage on MGM casinos. A damning new Reuters report—its cyber team has had a busy week—suggests that at least some members of that cybercriminal group are based in the West, within reach of US law enforcement. Yet they haven’t been arrested. Executives of cybersecurity companies who have tracked Scattered Spider say the FBI, where many cybersecurity-focused agents have been poached by the private sector, may lack the personnel needed to investigate. They also point to a reluctance on the part of victims to immediately cooperate in investigations, sometimes depriving law enforcement of valuable evidence.

Denmark’s critical infrastructure Computer Emergency Response Team, known as SektorCERT, warned in a report on Sunday that hackers had breached the networks of 22 Danish power utilities by exploiting a bug in their firewall appliances. The report, first revealed by Danish journalist Henrik Moltke, described the campaign as the biggest of its kind to ever target the Danish power grid. Some clues in the hackers’ infrastructure suggest that the group behind the intrusions was the notorious Sandworm, aka Unit 74455 of Russia’s GRU military intelligence agency, which has been responsible for the only three confirmed blackouts triggered by hackers in history, all in Ukraine. But in this case, the hackers were discovered and evicted from the target networks before they could cause any disruption to the utilities’ customers.

Last month, WIRED covered the efforts of a whitehat hacker startup called Unciphered to unlock valuable cryptocurrency wallets whose owners have forgotten their passwords—including one stash of $250 million in bitcoin stuck on an encrypted USB drive. Now, the same company has revealed that it found a flaw in a random number generator widely used in cryptocurrency wallets created prior to 2016 that leaves many of those wallets prone to theft, potentially adding up to $1 billion in vulnerable money. Unciphered found the flaw while attempting to unlock $600,000 worth of crypto locked in a client’s wallet. The company failed to crack that wallet, but in the process it discovered a flaw in a piece of open source code called BitcoinJS that left a wide swath of other wallets potentially open to theft. The coder who built that flaw into BitcoinJS? None other than Stefan Thomas, the owner of that same $250 million in bitcoin locked on a thumb drive.
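The danger of a weak random number generator is easy to demonstrate: if a key is derived from a generator seeded with too little entropy, an attacker can simply enumerate every possible seed. The toy sketch below makes that point with an assumed 20-bit seed space; it is a conceptual illustration, not a reconstruction of the BitcoinJS flaw itself.

```python
# Toy demonstration: keys derived from a low-entropy seed can be brute-forced.
# (Conceptual only; the real BitcoinJS issue involved its SecureRandom fallback.)
import hashlib
import random

def derive_key(seed):
    """Stand-in for deriving key material from a PRNG seeded with 'seed'."""
    rng = random.Random(seed)
    return hashlib.sha256(rng.randbytes(32)).digest()

# A wallet generated at a moment with only ~20 bits of effective entropy.
victim_seed = 734_201
victim_fingerprint = derive_key(victim_seed)   # public fingerprint of the key

# The attacker enumerates the entire seed space and matches the fingerprint.
for guess in range(2**20):
    if derive_key(guess) == victim_fingerprint:
        print("recovered seed:", guess)
        break
```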