
The federal government’s campaign to reform internet platforms dramatically escalated this week. The Surgeon General cited disinformation as a public health menace. The White House press secretary called on Facebook to remove 12 accounts that may be responsible for as much as 65 percent of the Covid disinformation on the site. In reference to Facebook, President Joe Biden said, “They’re killing people,” only to walk that back a day later. Then he appointed Jonathan Kanter, an architect of the EU’s antitrust cases against Google, to run the Justice Department’s Antitrust Division. The table may finally be set for necessary reform.

Facebook, YouTube, Instagram, and Twitter have become core communications platforms in our society, but they are collectively undermining public health, democracy, privacy, and competition, with disastrous consequences. Most Americans understand this, but they don’t want to be inconvenienced by losing what they like about internet platforms. And they struggle to understand the problem’s scope. The platforms have successfully muddied the waters, using their massive wealth to co-opt huge swaths of academia, think tanks, and NGOs, as well as many politicians.

It’s easy to see why platforms fight so hard to resist reform. Covid disinformation, subversion of democracy, invasions of privacy, and anticompetitive behavior are not bugs. They are examples of the business models of internet platforms working exactly as designed. The problem is that platforms like Google and Facebook are too big to be safe.

At their current scale, with roughly twice as many active users as there are people in China, platforms like Google and Facebook are a systemic threat analogous to climate change or the pandemic. Fixing them would be a challenge under the best of circumstances. But today, the courts defer to economic power and Congress remains paralyzed, leaving the administration as our best hope. Forty years of deregulation and reduced funding have left our regulatory infrastructure with few tools and little muscle tone. Fortunately, the appointments of former FTC advisor Tim Wu to the National Economic Council, antitrust scholar Lina Khan as chair of the FTC, FTC commissioner Rohit Chopra to lead the Consumer Financial Protection Bureau, former Commodity Futures Trading Commission head Gary Gensler at the SEC, and Kanter are brilliant moves, because those leaders understand the issues and will make the most of the limited tools at their disposal. The payoff from getting this right will be huge.

The first challenge facing the president and his team is to frame the problem properly. The tendency of policymakers to date has been to view the harms from internet platforms not as systemic, but as a series of coincident issues. With limited tools and time, the administration must look for high-leverage opportunities.

Internet platforms are media companies, dependent on consumer attention, but they have huge advantages over traditional media. They have unprecedented scale and influence. They are surveillance engines that gather data about users. They supplement that by acquiring location data from cell phones; health data from prescriptions, medical tests, and apps; web browsing history; and the like. With all this, platforms create data voodoo dolls that enable them both to make predictions of user behavior that can be sold to advertisers and to power manipulative recommendation engines. Platforms could use this power to make users happier, healthier, or more successful, but instead they use data to exploit the emotional triggers of each user, because doing so is easier and generates more revenue and profit.

The past five years have proven that internet platforms cannot be persuaded to reform themselves. They don’t believe they’re responsible for the harms caused by their products. They believe these harms are a reasonable cost of their success. That is why Facebook did nothing meaningful after learning it had been used to interfere in Brexit and the 2016 presidential election. Why the company shrugged after the ethnic cleansing of the Rohingya in Myanmar and the livestreamed terrorist attack in Christchurch. Why it ignored warnings about radicalizing users into QAnon and being used to organize and execute the insurrection. And why Mark Zuckerberg and his team pretend not to be responsible for spreading Covid disinformation. Since 2016, politicians, civil society groups, and activists like me have been trying to persuade Facebook to alter its business practices for the public good, and the executives have consistently chosen company over country.