
As companies develop ever more technology to find and remove content in different ways, the expectation grows that they should use it. Can moderate implies ought to moderate. After all, once a tool has been put into use, it’s hard to put it back in the box. But content moderation is now snowballing, and the collateral damage in its path is too often ignored.

There’s an opportunity now for some careful consideration about the path forward. Trump’s social media accounts and the election are in the rearview mirror, which means content moderation is no longer the constant A1 story. Perhaps that proves the actual source of much of the angst was politics, not platforms. But there is—or should be—some lingering unease at the awesome display of power that a handful of company executives showed in flipping the off-switch on the accounts of the leader of the free world.

The chaos of 2020 shattered any notion that there’s a clear category of harmful “misinformation” that a few powerful people in Silicon Valley must take down, or even that there’s a way to distinguish health from politics. Last week, for instance, Facebook reversed its policy and said it would no longer take down posts claiming Covid-19 is human-made or manufactured. Only a few months ago, The New York Times had cited belief in this “baseless” theory as evidence that social media had contributed to an ongoing “reality crisis.” There was a similar back-and-forth with masks. Early in the pandemic, Facebook banned ads for them on the site. The ban lasted until June 2020, when the WHO finally changed its guidance to recommend wearing masks, despite many experts having advised it much earlier. The good news, I guess, is that Facebook wasn’t especially effective at enforcing the ban in the first place. (At the time, however, this was not seen as good news.)

As more comes out about what authorities got wrong during the pandemic, or about instances where politics rather than expertise determined narratives, there will naturally be more skepticism about trusting them, or private platforms, to decide when to shut down conversation. Issuing public health guidance for a particular moment is not the same as declaring the reasonable boundaries of debate.

The calls for further crackdowns carry geopolitical costs, too. Authoritarian and repressive governments around the world have pointed to the rhetoric of liberal democracies to justify their own censorship. This is obviously a specious comparison. Shutting down criticism of the government’s handling of a public health emergency, as the Indian government is doing, is as clear an affront to free speech as it gets. But there is some tension in yelling at platforms to take more down here but to stop taking so much down over there. So far, Western governments have refused to grapple with this tension, largely leaving platforms to fend for themselves amid the global rise of digital authoritarianism. And the platforms are losing. Governments need to walk and chew gum at the same time in how they talk about platform regulation and free speech if they want to stand up for the rights of the many users outside their borders.

There are other trade-offs. Because content moderation at scale will never be perfect, the question is always which side of the line to err on when enforcing rules. Stricter rules and more heavy-handed enforcement necessarily mean more false positives: That is, more valuable speech will be taken down. This problem is exacerbated by the increased reliance on automated moderation to take down content at scale: These tools are blunt and stupid. If told to take down more content, algorithms won’t think twice about it. They can’t evaluate context or tell the difference between content glorifying violence and content documenting evidence of human rights abuses, for example. The toll of this kind of approach has been clear during the Palestinian–Israeli conflict of the past few weeks, as Facebook has repeatedly removed essential content from and about Palestinians. This is not a one-off. Maybe can should not always imply ought, especially as we know that these errors tend to fall disproportionately on already marginalized and vulnerable communities.