Change Healthcare Faces Another Ransomware Threat—and It Looks Credible

For months, Change Healthcare has faced an immensely messy ransomware debacle that has left hundreds of pharmacies and medical practices across the United States unable to process claims. Now, thanks to an apparent dispute within the ransomware criminal ecosystem, it may have just become far messier still.

Last month, the ransomware group AlphV, which had claimed credit for encrypting Change Healthcare’s network and threatened to leak reams of the company’s sensitive health care data, received a $22 million payment—evidence, publicly captured on Bitcoin’s blockchain, that Change Healthcare had very likely caved to its tormentors’ ransom demand, though the company has yet to confirm that it paid. But in a twist that redefines the worst-case ransomware scenario, a different ransomware group now claims to be holding Change Healthcare’s stolen data and is demanding a payment of its own.
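That on-chain evidence is, by design, something anyone can check. As a minimal sketch, the Python snippet below pulls an address’s payment history from Blockstream’s public Esplora API; the address shown is a hypothetical placeholder, not the wallet researchers linked to the AlphV payment.

```python
# Minimal sketch: inspect a Bitcoin address's public payment history via
# Blockstream's free Esplora API. The address is a hypothetical placeholder,
# not the wallet actually linked to the AlphV payment.
import requests

ADDRESS = "bc1qexampleplaceholder"  # hypothetical address under scrutiny

# Total ever received by the address, reported in satoshis.
stats = requests.get(
    f"https://blockstream.info/api/address/{ADDRESS}", timeout=10
).json()
received_sats = stats["chain_stats"]["funded_txo_sum"]
print(f"Total received: {received_sats / 1e8:.8f} BTC")

# Recent transactions, so their timing can be matched against known events.
txs = requests.get(
    f"https://blockstream.info/api/address/{ADDRESS}/txs", timeout=10
).json()
for tx in txs[:5]:
    print(tx["txid"], tx["status"].get("block_time"))
```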

Since Monday, RansomHub, a relatively new ransomware group, has claimed on its dark-web site that it holds 4 terabytes of Change Healthcare’s stolen data, which it threatens to sell to the “highest bidder” if Change Healthcare doesn’t pay an unspecified ransom. RansomHub tells WIRED it is not affiliated with AlphV and “can’t say” how much it’s demanding as a ransom payment.

RansomHub initially declined to publish or provide WIRED any sample data from that stolen trove to prove its claim. But on Friday, a representative for the group sent WIRED several screenshots of what appeared to be patient records and a data-sharing contract for United Healthcare, which owns Change Healthcare, and Emdeon, which acquired Change Healthcare in 2014 and later took its name.

While WIRED could not fully confirm RansomHub’s claims, the samples suggest that this second extortion attempt against Change Healthcare may be more than an empty threat. “For anyone doubting that we have the data, and to anyone speculating the criticality and the sensitivity of the data, the images should be enough to show the magnitude and importance of the situation and clear the unrealistic and childish theories,” the RansomHub contact tells WIRED in an email.

Change Healthcare didn’t immediately respond to WIRED’s request for comment on RansomHub’s extortion demand.

Brett Callow, a ransomware analyst with security firm Emsisoft, says he believes AlphV did not originally publish any data from the incident, and the origin of RansomHub’s data is unclear. “I obviously don’t know whether the data is real—it could have been pulled from elsewhere—but nor do I see anything that indicates it may not be authentic,” he says of the data shared by RansomHub.

Jon DiMaggio, chief security strategist at threat intelligence firm Analyst1, says he believes RansomHub is “telling the truth and does have Change HealthCare’s data,” after reviewing the information sent to WIRED. While RansomHub is a new ransomware threat actor, DiMaggio says, they are quickly “gaining momentum.”

If RansomHub’s claims are real, Change Healthcare’s already catastrophic ransomware ordeal will have become a cautionary tale about the dangers of trusting ransomware groups to follow through on their promises, even after a ransom is paid. In March, someone who goes by the name “notchy” posted to a Russian cybercriminal forum that AlphV had pocketed that $22 million payment and disappeared without sharing a commission with the “affiliate” hackers who typically partner with ransomware groups and often penetrate victims’ networks on their behalf.

Russia Attacked Ukraine’s Power Grid at Least 66 Times to ‘Freeze It Into Submission’

Last week marked the second anniversary of Russia’s full-scale invasion of Ukraine, a conflict in which, according to multiple reports, Russia may have committed war crimes by indiscriminately targeting civilians and civilian infrastructure. During the first winter of the war, Russia pursued a strategy that US secretary of state Antony Blinken described as trying to “freeze [Ukraine] into submission” by attacking its power infrastructure, cutting citizens off from heat and electricity.

Now, using satellite imagery and open source information, a new report from the Conflict Observatory, a US-government-backed initiative between Yale University’s Humanitarian Research Lab, the Smithsonian Cultural Rescue Initiative, PlanetScape AI, and the mapping software company Esri, offers a clearer picture of the scale of this strategy. Between October 1, 2022, and April 30, 2023, researchers documented 223 instances of damage to the country’s power infrastructure, amounting to more than $8 billion in estimated destruction. Of those 223 instances, researchers were able to confirm 66 with high confidence, meaning they could cross-reference the damage across multiple trustworthy sources and data points.

[Map: Ukraine, showing verified incidents of damage to power infrastructure. Courtesy of Yale Humanitarian Research Lab]
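That “high confidence” standard is essentially a corroboration rule: an incident counts only when independent source types agree on it. Here is a toy sketch of that logic, with invented records; the Conflict Observatory’s actual pipeline weighs far more signals.

```python
# Toy corroboration check: flag an incident as high-confidence only when at
# least two independent source types report it. All records are invented.
from collections import defaultdict

reports = [  # (incident_id, source_type)
    ("incident-001", "satellite_imagery"),
    ("incident-001", "local_news"),
    ("incident-002", "satellite_imagery"),
    ("incident-003", "satellite_imagery"),
    ("incident-003", "official_statement"),
]

sources = defaultdict(set)
for incident, source_type in reports:
    sources[incident].add(source_type)

high_confidence = sorted(
    incident for incident, kinds in sources.items() if len(kinds) >= 2
)
print(f"{len(high_confidence)} of {len(sources)} incidents confirmed "
      f"with high confidence: {high_confidence}")
```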

“What we see here is that there was a pattern of bombardment that hit front lines and non-frontline areas, at a scale that must have had civilian effect,” says Nathaniel Raymond, a coleader of the Humanitarian Research Lab and lecturer at Yale’s Jackson School of Global Affairs. The UN Office for the Coordination of Humanitarian Affairs estimated at the time that attacks on Ukraine’s power grid had left “millions” of people without electricity throughout the country.

Researchers identified and verified damage to power infrastructure in 17 of the country’s 24 oblasts, or administrative units.

Documenting specific instances of damage to power infrastructure has been particularly difficult for researchers and investigators, because the Ukrainian government has sought to limit public information about which sites have been damaged and which remain operational, in an effort to prevent further attacks. (For this reason, the report itself avoids getting too specific about which locations it analyzed and the extent of the destruction.) But that secrecy can also make it difficult to collect, verify, and build on the data necessary to prove violations of international law.

Raymond hopes that making the report’s methodology public will enable further investigation. “Having common standards to a common dataset is a prerequisite for accountability,” he says.

How a Right-Wing Controversy Could Sabotage US Election Security

It remains unclear how many of Warner’s colleagues agree with him. But when WIRED surveyed the other 23 Republican secretaries who oversee elections in their states, several of them said they would continue working with CISA.

“The agency has been beneficial to our office by providing information and resources as it pertains to cybersecurity,” says JoDonn Chaney, a spokesperson for Missouri’s Jay Ashcroft.

South Dakota’s Monae Johnson says her office “has a good relationship with its CISA partners and plans to maintain the partnership.”

But others who praised CISA’s support also sounded notes of caution.

Idaho’s Phil McGrane says CISA is doing “critical work … to protect us from foreign cyber threats.” But he also tells WIRED that the Elections Infrastructure Information Sharing and Analysis Center (EI-ISAC), a public-private collaboration group that he helps oversee, “is actively reviewing past efforts regarding mis/disinformation” to determine “what aligns best” with CISA’s mission.

Mississippi’s Michael Watson says that “statements following the 2020 election and some internal confidence issues we’ve since had to navigate have caused concern.” As federal and state officials gear up for this year’s elections, he adds, “my hope is CISA will act as a nonpartisan organization and stick to the facts.”

CISA’s relationships with Republican secretaries are “not as strong as they’ve been before,” says John Merrill, who served as Alabama’s secretary of state from 2015 to 2023. In part, Merrill says, that’s because of pressure from the GOP base. “Too many conservative Republican secretaries are not just concerned about how the interaction with those federal agencies is going, but also about how it’s perceived … by their constituents.”

Free Help at Risk

CISA’s defenders say the agency does critical work to help underfunded state and local officials confront cyber and physical threats to election systems.

The agency’s career civil servants and political leaders “have been outstanding” during both the Trump and Biden administrations, says Minnesota secretary of state Steve Simon, a Democrat.

Others specifically praised CISA’s coordination with tech companies to fight misinformation, arguing that officials only highlighted false claims and never ordered companies to delete posts.

“They’re just making folks aware of threats,” says Arizona’s Democratic secretary of state, Adrian Fontes. The real “bad actors,” he says, are the people who “want the election denialists and the rumor-mongers to run amok and just spread out whatever lies they want.”

If Republican officials begin disengaging from CISA, their states will lose critical security protections and resources. CISA sponsors the EI-ISAC, which shares information about threats and best practices for thwarting them; provides free services like scanning election offices’ networks for vulnerabilities, monitoring those networks for intrusions, and reviewing local governments’ contingency plans; and convenes exercises to test election officials’ responses to crises.

“For GOP election officials to back away from [CISA] would be like a medical patient refusing to accept free wellness assessments, check-ups, and optional prescriptions from one of the world’s greatest medical centers,” says Eddie Perez, a former director for civic integrity at Twitter and a board member at the OSET Institute, a nonprofit group advocating for improved election technology.

Anne Neuberger, a Top White House Cyber Official, Is Staying Surprisingly Optimistic

The fact that in 2023 we’re rolling out mandated minimum cybersecurity practices for the first time in critical infrastructure—we’re one of the last countries to do that.

Building in the red-teaming, the testing, the human-in-the-loop before those models are deployed is a core lesson from cybersecurity that we want to carry into the AI space.

In the AI executive order, regulators were tasked to determine where their existing regulations—let’s say for safety—already account for the risks around AI, and where are there deltas? Those first risk assessments have come in, and we’re going to use those both to inform the Hill’s work and also to think about how we roll those into the same cybersecurity minimum practices that we just talked about that regulators are doing.

Where are you starting to see threat actors actually use AI in attacks on the US? Are there places where you’re seeing this technology already being deployed by threat actors?

We mentioned voice cloning and deepfakes. We can say we’re seeing some criminal actors—or some countries—experimenting. You saw FraudGPT, which ostensibly advances criminal use cases. That’s about all we can release right now.

You have been more engaged recently on autonomous vehicles. What’s drawn your interest there?

There’s a whole host of risks that we have to look at: the data that’s collected; patching—with bulk patches, should we have checks to ensure they’re safe before millions of cars get a software update? The administration is working on an effort that will probably include both some requests for input as well as assessing the need for new standards. Then we’re very likely, in the near term, to come up with a plan to test those standards, ideally in partnership with our European allies. This is something we both care about, and it’s another example of “Let’s get ahead of it.”

You already see with AVs large amounts of data being collected. We’ve seen a few states, for example, that have given approval for Chinese car models to drive around and collect. We’re taking a look at that and thinking, “Hold on a second, maybe before we allow this kind of data collection that can potentially be around military bases, around sensitive sites, we want to really take a look at that more carefully.” We’re interested both from the perspective of what data is being collected, what are we comfortable being collected, as well as what new standards are needed to ensure American cars and foreign-made cars are built safely. Cars used to be hardware, and they’ve shifted to including a great deal of software, and we need to reboot how we think about security and long-term safety.

You’ve also been working a lot on spectrum—you had a big gathering about 6G standards last year. Where do you see that work going, and what are the next steps?

First, I would say there’s a domestic and an international part. It comes from a foundational belief that wireless telecommunications is core to our economic growth—it’s both manufacturing robotics in a smart manufacturing factory, and then I just went to CES and John Deere was showing their smart tractors, where they use connectivity to adjust irrigation based on the weather. On the CES floor, they noted that integrating AI in agriculture requires changes to US policies on spectrum. I said, “I don’t understand, America’s broadband plan deploys to rural sites.” He said, “Yeah, you’re deploying to the farm, but there’s acres and acres of fields that have no connectivity. How are we going to do this stuff?” I hadn’t expected to get pinged on spectrum there, on the floor talking about tractors. But it shows how it’s core to what we want to do—this huge promise of drones monitoring electricity infrastructure after storms and determining lines are down to make maintenance far more efficient, all of that needs connectivity.

‘AI Girlfriends’ Are a Privacy Nightmare

You shouldn’t trust any answers a chatbot sends you. And you probably shouldn’t trust it with your personal information either. That’s especially true for “AI girlfriends” or “AI boyfriends,” according to new research.

An analysis of 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, found a litany of security and privacy problems with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of personal data; use trackers that send information to Google, Facebook, and companies in Russia and China; let users create weak passwords; and lack transparency about their ownership and the AI models that power them.

Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people’s privacy, and into tensions between emerging technologies and how they gather and use data. It also indicates how people’s chat messages could be abused by hackers.

Many “AI girlfriend” or romantic chatbot services look similar. They often feature AI-generated images of women, which can be sexualized or sit alongside provocative messages. Mozilla’s researchers looked at a variety of chatbots, including large and small apps, some of which purport to be “girlfriends.” Others offer support through friendship or intimacy, or allow role-playing and other fantasies.

“These apps are designed to collect a ton of personal information,” says Jen Caltrider, the project lead for Mozilla’s Privacy Not Included team, which conducted the analysis. “They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” For instance, screenshots from the EVA AI chatbot show text saying “I love it when you send me your photos and voice,” and asking whether someone is “ready to share all your secrets and desires.”

Caltrider says there are multiple issues with these apps and websites. Many may not be clear about what data they share with third parties, where they are based, or who creates them, she says, adding that some allow people to create weak passwords while others provide little information about the AI they use. The apps analyzed all had different use cases and weaknesses.

Take Romantic AI, a service that allows you to “create your own AI girlfriend.” Promotional images on its homepage depict a chatbot sending a message saying, “Just bought new lingerie. Wanna see it?” The app’s privacy documents, according to the Mozilla analysis, say it won’t sell people’s data. However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.” Romantic AI, like most of the companies highlighted in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps monitored had hundreds of trackers.
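Mozilla hasn’t detailed the tooling behind that tracker count, but the general technique—routing an app’s traffic through an intercepting proxy and matching requests against a blocklist—is straightforward to sketch. The addon below is for the open source proxy mitmproxy; the blocklist here is a tiny invented sample, where a real audit would use a full list such as EasyList.

```python
# count_trackers.py -- rough sketch of counting ad-tracker requests from an
# app whose traffic is routed through mitmproxy. Run with:
#   mitmproxy -s count_trackers.py
# TRACKER_DOMAINS is a tiny illustrative sample, not a real blocklist.
from collections import Counter
from mitmproxy import http

TRACKER_DOMAINS = {
    "doubleclick.net",
    "graph.facebook.com",
    "app-measurement.com",
}

class TrackerCounter:
    def __init__(self) -> None:
        self.hits = Counter()

    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        # Match the host and each of its parent domains against the blocklist.
        parts = host.split(".")
        for i in range(len(parts) - 1):
            if ".".join(parts[i:]) in TRACKER_DOMAINS:
                self.hits[host] += 1
                print(f"tracker request #{sum(self.hits.values())}: {host}")
                break

addons = [TrackerCounter()]
```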

In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information. “The legal documentation was vague, hard to understand, not very specific—kind of boilerplate stuff,” Caltrider says, adding that this may reduce the trust people should have in the companies.