TikTok Must Not Fail Ukrainians

Vietnam was known as the first televised war. The Iran Green Movement and the Arab Spring were called the first Twitter Revolutions. And now the Russian invasion of Ukraine is being dubbed the first TikTok War. As The Atlantic and others have pointed out, it is not, either literally or figuratively: TikTok is merely the latest social media platform to see its profitable expansion turn into a starring role in a crisis.

But as its #ukraine and #украина posts near a combined 60 billion views, TikTok should learn from the failings of other platforms over the past decade, failings that have exacerbated the horrors of war, facilitated misinformation, and impeded access to justice for human rights crimes. TikTok should take steps now to better support the creators sharing evidence and experience, the viewers watching it, and the people and institutions who rely on these videos for reliable information and human rights accountability.

First, TikTok can help people on the ground in Ukraine who want to galvanize action and be trusted as frontline witnesses. The company should provide targeted guidance directly to these vulnerable creators. This could include notifications or videos on their For You page that demonstrate (1) how to film in a way that is more verifiable and trustworthy to outside sources, (2) how to protect themselves and others in case a video shot in crisis becomes a tool of surveillance or outright targeting, and (3) how to share their footage without it being taken down or made less visible as graphic content. TikTok should begin incorporating emerging approaches (such as the C2PA standards) that allow creators to choose to show a video's provenance. And it should offer easy ways, prominently available when recording, to blur the faces of vulnerable people protectively, not just aesthetically.
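To illustrate that last point, here is a minimal sketch of protective face blurring, assuming OpenCV and its bundled Haar cascade face detector; an in-app tool would need a far more robust detector, would work on video rather than a single frame, and would also strip identifying metadata. The filenames are hypothetical.

```python
import cv2

def blur_faces(input_path: str, output_path: str) -> None:
    """Detect faces in a frame and blur them beyond recognition before sharing."""
    image = cv2.imread(input_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Heavy Gaussian blur over each detected face region (protective, not cosmetic).
        region = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    cv2.imwrite(output_path, image)

# Hypothetical filenames, for illustration only.
blur_faces("witness_frame.jpg", "witness_frame_blurred.jpg")
```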

TikTok should also be investing in robust, localized, contextual content moderation and appeals routing for this conflict and the next crisis. Social media creators are at the mercy of capricious algorithms that cannot distinguish harmful violent content from victims of war sharing their experiences. If a clip or account is taken down or suspended—often for breaching a rule the user never knew about—the creator is unlikely to get a rapid or transparent appeals process, particularly if they live outside North America and Western Europe. The company should bolster its content moderation in Ukraine immediately.

The platform is poorly designed for accurate information but brilliantly designed for quick human engagement. The instant fame that the For You page can grant has brought the everyday life and dark humor of young Ukrainians like Valeria Shashenok (@valerissh), from the city of Chernihiv, into people's feeds globally. Human rights activists know that one of the best ways to engage people in meaningful witnessing, and to counter the natural impulse to look away, is to let them experience others' realities in a personal, human way. Undoubtedly some of this insight into real people's lives in Ukraine is moving viewers toward greater solidarity. Yet the more decontextualized the suffering of others becomes—and the For You page encourages flitting between disparate stories—the more that suffering is experienced as spectacle. This risks a turn toward narcissistic self-validation or worse: trolling of people at their most vulnerable.

And that's assuming the content we're viewing is shared in good faith. The ability to remix audio, along with TikTok's intuitive ease in editing, combining, and reusing existing footage, among other factors, makes the platform vulnerable to misinformation and disinformation. Unless a deceptive video is spotted by an automated match with a known fake, labeled as state-affiliated media, flagged by a fact-checker as incorrect, or identified by TikTok's teams as part of a coordinated influence campaign, it circulates without any guidance or tools to help viewers exercise basic media literacy.

TikTok should do more to ensure that it promptly identifies, reviews, and labels these fakes for viewers, and takes them down or removes them from recommendations. It should ramp up fact-checking capacity on the platform and address how its business model and the resulting algorithm continue to promote deceptive videos with high engagement. We, the people viewing the content, also need better direct support. One of the first steps professional fact-checkers take to verify footage is a reverse image search, to see whether a photo or video existed before the date it claims to have been made or comes from a different location or event than claimed. As the TikTok misinfo expert Abbie Richards has pointed out, TikTok doesn't even indicate the date a video was posted when it appears in the For You feed. Like other platforms, TikTok doesn't offer an easy in-platform reverse image or video search to its users, or in-feed indications that a video duplicates earlier footage. It's past time to make it simpler to check whether a video in your feed comes from a different time and place than it claims—for example, with an intuitive reverse image/video search or a simple one-click provenance trail for videos created in-platform.
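As a rough sketch of the kind of "has this footage appeared before?" check described above, the snippet below uses perceptual hashing via the Pillow and imagehash libraries. The archive of previously indexed frames and the file names are hypothetical; a real system would index frames at platform scale rather than compare against a small dictionary.

```python
from PIL import Image
import imagehash

def matches_earlier_footage(frame_path, known_frames, max_distance=6):
    """Return IDs of archived clips whose perceptual hash is close to this frame."""
    new_hash = imagehash.phash(Image.open(frame_path))
    return [
        clip_id
        for clip_id, old_hash in known_frames.items()
        # Subtracting two ImageHash objects gives the Hamming distance between them.
        if new_hash - old_hash <= max_distance
    ]

# Hypothetical archive: one frame from a clip first indexed years earlier.
archive = {"older_conflict_clip": imagehash.phash(Image.open("archived_frame.jpg"))}
print(matches_earlier_footage("suspicious_upload_frame.jpg", archive))
```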

No one visits the "Help Center." Tools need to be accompanied by guidance in videos that appear on people's For You pages. Viewers need to build the media literacy muscles to make good judgments about the footage they are shown. This includes sharing principles like SIFT as well as tips specific to how TikTok works, such as what to look for on TikTok's extremely popular livestreams: check the comments and the creator's previous content, and on any video, check whether the audio is original (as both Richards and Marcus Bösch, another TikTok misinfo expert, have suggested). Reliable news sources also need to be part of the feed, something TikTok appears to have increasingly begun doing.

TikTok also demonstrates a problem that arises when content recommender algorithms intersect with the good media literacy practice of "lateral reading." Perversely, the more attention you pay to a suspicious video—the more you return to it after looking for other sources—the more the TikTok algorithm feeds you similar content and prioritizes sharing that potentially false video with other people.

Content moderation policies are meant to safeguard against the spread of violent, inciting, or otherwise banned content. Platforms take down vast quantities of footage, which often includes content that could help investigate human rights violations and war crimes. AI systems and human moderators—correctly and incorrectly—identify these videos as dangerous speech, terrorist content, or graphic violence unacceptable for viewing. A high percentage of this content is removed by a content moderation algorithm, in many cases before a human eye ever sees it. That can have a catastrophic effect on the quest for justice and accountability. How can investigators request information they don't know exists? How much material is lost forever because human rights organizations never had the chance to see and preserve it? In 2017, for example, the independent human rights archiving organization Syrian Archive found that hundreds of thousands of videos from the Syrian civil war had been swept away by YouTube's algorithm. In the blink of an eye, it removed critical evidence that could have contributed to accountability, community memory, and justice.

It's beyond time that we have far better transparency about what is lost and why, and clarity about how platforms will be regulated or compelled to create—or will agree to create—so-called digital "evidence lockers" that selectively and appropriately safeguard material critical for justice. We need this both for content that falls afoul of platform policy and for content that is incorrectly removed, particularly knowing that content moderation is broken. Groups like WITNESS, Mnemonic, the Human Rights Center at Berkeley, and Human Rights Watch are working on how these archives could be set up—balancing accountability with human rights, privacy, and, ultimately, community control of the archives. TikTok now joins other major social media platforms in needing to step up to this challenge. To start, it should take proactive action to understand what needs to be preserved and engage with accountability mechanisms and the civil society groups that have been preserving video evidence.
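As one small illustration of what preservation could involve at a technical level, here is a minimal sketch, under assumed file layout and field names, of fingerprinting a removed video and recording enough context for later verification. Real evidence lockers would also need access controls, legal process, and the community governance discussed above.

```python
import datetime
import hashlib
import json
import pathlib

def preserve(video_path: str, locker_dir: str, removal_reason: str) -> dict:
    """Copy a removed video into a locker with a tamper-evident fingerprint and context."""
    data = pathlib.Path(video_path).read_bytes()
    record = {
        "sha256": hashlib.sha256(data).hexdigest(),  # fingerprint of the exact bytes
        "original_filename": pathlib.Path(video_path).name,
        "preserved_at_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "removal_reason": removal_reason,  # e.g. "graphic-violence policy"
    }
    locker = pathlib.Path(locker_dir)
    locker.mkdir(parents=True, exist_ok=True)
    (locker / f"{record['sha256']}.mp4").write_bytes(data)
    (locker / f"{record['sha256']}.json").write_text(json.dumps(record, indent=2))
    return record
```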

The invasion of Ukraine is not the first social media war. But it can be the first time a social media company does what it should do for people bearing witness on the front lines, from a distance, and in the courtroom.


The World Needs Deepfake Experts to Stem This Chaos

Recently the military coup government in Myanmar added serious allegations of corruption to a set of existing spurious cases against Burmese leader Aung San Suu Kyi. These new charges build on the statements of a prominent detained politician that were first released in a March video that many in Myanmar suspected of being a deepfake.

In the video, the political prisoner's voice and face appear distorted and unnatural as he makes a detailed claim about providing gold and cash to Aung San Suu Kyi. Social media users and journalists in Myanmar immediately questioned whether the statement was real. The incident illustrates a problem that will only get worse: as deepfakes get better, people become more willing to dismiss real footage as a deepfake. What tools and skills will be available to investigate both kinds of claims, and who will use them?

In the video, Phyo Min Thein, the former chief minister of Myanmar's largest city, Yangon, sits in a bare room, apparently reading from a statement. His speech sounds odd, unlike his normal voice; his face is static; and in the poor-quality version that first circulated, his lips look out of sync with his words. Seemingly everyone wanted to believe it was a fake. Screenshotted results from an online deepfake detector spread rapidly, showing a red box around the politician's face and an assertion, with 90-percent-plus confidence, that the confession was a deepfake. Burmese journalists lacked the forensic skills to make a judgment. Past state and present military actions reinforced cause for suspicion: government spokespeople have shared staged images targeting the Rohingya ethnic group, while military coup organizers have denied that social media evidence of their killings could be real.

But was the prisoner's "confession" really a deepfake? Along with deepfake researcher Henry Ajder, I consulted deepfake creators and media forensics specialists. Some noted that the video was of sufficiently low quality that the mouth glitches people saw were as likely to be compression artifacts as evidence of deepfakery. Detection algorithms are also unreliable on low-quality compressed video. His unnatural-sounding voice could be the result of reading a script under extreme pressure. If it is a fake, it's a very good one, because his throat and chest move in sync with his words at key moments. The researchers and creators were generally skeptical that it was a deepfake, though not certain. At this point it is more likely to be something human rights activists like me know well: a coerced or forced confession on camera. Either way, given the circumstances of the military coup, the substance of the allegations should not be trusted absent a legitimate judicial process.

Why does this matter? Whether the video is a forced confession or a deepfake, the result is most likely the same: words digitally or physically compelled out of a prisoner's mouth by a coup d'état government. However, while the use of deepfakes to create nonconsensual sexual images currently far outstrips political instances, deepfake and synthetic media technology is rapidly improving, proliferating, and commercializing, expanding the potential for harmful uses. The case in Myanmar demonstrates the growing gap between the capability to make deepfakes, the opportunity to claim a real video is a deepfake, and our ability to challenge either.

It also illustrates the challenges of having the public rely on free online detectors without understanding the strengths and limitations of detection or how to second-guess a misleading result. Deepfake detection is still an emerging technology, and a detection tool built for one synthesis approach often does not work on another. We must also be wary of counter-forensics—deliberate steps taken to confuse a detection approach. And it's not always possible to know which detection tools to trust.

How do we avoid conflicts and crises around the world being blindsided by deepfakes and supposed deepfakes?

We should not be turning ordinary people into deepfake spotters, parsing pixels to discern truth from falsehood. Most people will do better relying on simpler media literacy approaches, such as the SIFT method, that emphasize checking other sources or tracing a video's original context. In fact, encouraging people to act as amateur forensic experts can send them down the conspiracy rabbit hole of distrust in images.