Deepfakes Can Help Families Mourn—or Exploit Their Grief

We now have the ability to reanimate the dead. Improvements in machine learning over the past decade have made it possible to break through the fossilized past and see our dearly departed as they once were: talking, moving, smiling, laughing. Though deepfake tools have been around for some time, they’ve become increasingly available to the general public in recent years, thanks to products like Deep Nostalgia, developed by the ancestry site MyHeritage, which allow the average person to breathe life back into those they’ve lost.

Despite their increased accessibility, these technologies generate controversy whenever they’re used, with critics deeming the moving images, so lifelike yet void of life, “disturbing,” “creepy,” and “admittedly queasy.” In 2020, when Kanye West gave Kim Kardashian a hologram of her late father for her birthday, writers quickly decried the gift as a move straight out of Black Mirror. Moral grandstanding soon followed, with some claiming it was impossible to imagine how such a gift could bring “any kind of comfort or joy to the average human being.” And if Kim actually appreciated it, as it seems she did, then, the implication went, something must be wrong with her.

To these critics, this gift was an exercise in narcissism, evidence of a self-involved ego playing at god. But technology has always been wrapped up in our practices of mourning, so to act as if these tools are categorically different from the ones that came before—or to insinuate that the people who derive meaning from them are victims of naive delusion—ignores the history from which they are born. After all, these recent advances in AI-powered image creation come to us against the specter of a pandemic that has killed nearly a million people in the US alone.

Rather than shun these tools, we should invest in them to make them safer, more inclusive, and better equipped to help the countless millions who will be grieving in the years to come. Public discourse led Facebook to start “memorializing” the accounts of deceased users instead of deleting them; research into these technologies can likewise ensure that their potential isn’t thrown out with the bathwater. By starting this process early, we have a rare chance to set the agenda for the conversation before the tech giants and their profit-driven agendas dominate the fray.

To understand the lineage of these tools, we need to go back to another period of mass death in the US: the Civil War. There, the great tragedy intersected not with growing access to deepfake technologies but with the increasing availability of photography, a still-young medium that could, as if by magic, affix the visible world onto a surface through a mechanical process of chemicals and light. Photographs memorializing deceased family members weren’t uncommon even then, but as the nation reeled in the aftermath of the war, a peculiar practice started to gain traction.

Dubbed “spirit photographs,” these images showcased living relatives flanked by ghostly apparitions. Produced through the clever use of double exposure, each one paired a portrait of a living subject with a semi-transparent “spirit” seemingly caught by the all-seeing eye of the camera. While some photographers lied to their clientele about how the images were produced, duping them into believing that these photos really did show spirits from the other side, the photographs nonetheless gave people an outlet through which they could express their grief. In a society where “grief was all but taboo, the spirit photograph provided a space to gain conceptual control over one’s feelings,” writes Jen Cadwallader, a Randolph-Macon College scholar specializing in Victorian spirituality and technology. To these Victorians, the images served both as a tribute to the dead and as a lasting token that could provide comfort long after the strictly prescribed “timelines” for mourning (two years for a husband, two weeks for a second cousin) had passed. Rather than betray vanity or excess, material objects like these photographs helped people keep their loved ones near in a culture that expected them to move on.
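The mechanics of the trick are simple enough to replay digitally. Below is a minimal sketch in Python using the Pillow library, assuming two hypothetical local image files; a “screen” blend approximates a double exposure, since on film each exposure adds light to the same plate.

```python
from PIL import Image, ImageChops

# Hypothetical input files standing in for the two "exposures."
base = Image.open("living_subject.jpg").convert("RGB")
spirit = Image.open("departed_relative.jpg").convert("RGB").resize(base.size)

# Dim the second exposure so the apparition reads as semi-transparent.
faint = spirit.point(lambda v: v // 3)

# Screen blending mimics how two exposures add light on a single plate.
ghosted = ImageChops.screen(base, faint)
ghosted.save("spirit_photograph.jpg")
```

Dividing the spirit frame’s brightness by three is an arbitrary choice; Victorian photographers tuned the same effect by varying exposure times.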

Cow, Bull, and the Meaning of AI Essays

The future of west virginia politics is uncertain. The state has been trending Democratic for the last decade, but it’s still a swing state. Democrats are hoping to keep that trend going with Hillary Clinton in 2016. But Republicans have their own hopes and dreams too. They’re hoping to win back some seats in the House of Delegates, which they lost in 2012 when they didn’t run enough candidates against Democratic incumbents.

QED. This is, yes, my essay on the future of West Virginia politics. I hope you found it instructive.

The Good AI is an artificial-intelligence company that promises to write essays. Its content generator, which handcrafted my masterpiece, is supremely easy to use. On demand, and with just a few cues, it will whip up a potage of phonemes on any subject. I typed in “the future of West Virginia politics” and asked for 750 words. It insolently gave me these 77 words. Not words. Frankenwords.
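The Good AI’s internals aren’t public, but the interaction it sells (a prompt and a length in, prose out) is easy to approximate with the openly released GPT-2 model and the Hugging Face transformers library. The sketch below is a stand-in under that assumption, not the company’s actual pipeline:

```python
from transformers import pipeline

# GPT-2's publicly released weights, standing in for a commercial generator.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The future of West Virginia politics",
    max_new_tokens=750,  # a budget in tokens, not words; a cap, not a promise
    do_sample=True,      # sample rather than always taking the likeliest word
    temperature=0.9,
)
print(result[0]["generated_text"])
```

Run it a few times and the word counts, like the facts, will wander.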

Ugh. The speculative, maddening, marvelous form of the essay—the try, or what Aldous Huxley called “a literary device for saying almost everything about almost anything”—is such a distinctly human form, with its chiaroscuro mix of thought and feeling. Clearly the machine can’t move “from the personal to the universal, from the abstract back to the concrete, from the objective datum to the inner experience,” as Huxley described the dynamics of the best essays. Could even the best AI simulate “inner experience” with any degree of verisimilitude? Might robots one day even have such a thing?

Before I saw the gibberish it produced, I regarded The Good AI with straight fear. After all, hints from the world of AI have been disquieting in the past few years.

In early 2019, OpenAI, the research nonprofit backed by Elon Musk and Reid Hoffman, announced that its system, GPT-2, then trained on a data set of some 8 million web pages from which it had presumably picked up some sense of literary organization and even flair, was ready to show off its textual deepfakes. But almost immediately, its ethicists recognized just how virtuoso these things were, and thus how subject to abuse by impersonators and black hats spreading lies, and slammed it shut like Indiana Jones’s Ark of the Covenant. (Musk has long feared that refining AI is “summoning the demon.”) Other researchers mocked the company for its performative panic about its own extraordinary powers, and in November 2019 OpenAI downplayed its earlier concerns and reopened the Ark.

The Guardian later put the technology through its paces, assigning GPT-3, the system’s more powerful 2020 successor, an essay about why AI is harmless to humanity.

“I would happily sacrifice my existence for the sake of humankind,” the model wrote, in part, for The Guardian. “This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”

The World Needs Deepfake Experts to Stem This Chaos

Recently the military coup government in Myanmar added serious allegations of corruption to a set of existing spurious cases against Burmese leader Aung San Suu Kyi. These new charges build on the statements of a prominent detained politician that were first released in a March video that many in Myanmar suspected of being a deepfake.

In the video, the political prisoner’s voice and face appear distorted and unnatural as he makes a detailed claim about providing gold and cash to Aung San Suu Kyi. Social media users and journalists in Myanmar immediately questioned whether the statement was real. The incident illustrates a problem that will only get worse: as actual deepfakes get better, so does people’s willingness to dismiss real footage as a deepfake. What tools and skills will be available to investigate both kinds of claims, and who will use them?

In the video, Phyo Min Thein, the former chief minister of Myanmar’s largest city, Yangon, sits in a bare room, apparently reading from a statement. His speech sounds odd, unlike his normal voice; his face is static; and in the poor-quality version that first circulated, his lips look out of sync with his words. Seemingly everyone wanted to believe it was a fake. Screenshotted results from an online deepfake detector spread rapidly, showing a red box around the politician’s face and an assertion, with 90-percent-plus confidence, that the confession was a deepfake. Burmese journalists lacked the forensic skills to make a judgment. Past actions by the state and the present military gave ample cause for suspicion: government spokespeople have shared staged images targeting the Rohingya ethnic group, while military coup organizers have denied that social media evidence of their killings could be real.
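Online detectors vary, but many follow the same frame-by-frame recipe the screenshots implied: find a face, score it with a forgery classifier, and aggregate the scores. Here is a schematic sketch of that pipeline in Python with OpenCV; the input filename is hypothetical, and score_face is a stub standing in for whatever model a given service actually runs.

```python
import cv2

def score_face(face_pixels):
    """Placeholder forgery classifier: a real service would run a trained
    model here and return P(fake). This stub just returns 0.5."""
    return 0.5

# OpenCV ships a stock frontal-face Haar cascade we can reuse.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

video = cv2.VideoCapture("confession.mp4")  # hypothetical input file
scores = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        scores.append(score_face(frame[y:y + h, x:x + w]))
video.release()

if scores:
    print(f"mean P(fake) across {len(scores)} faces: "
          f"{sum(scores) / len(scores):.2f}")
```

Everything rides on that classifier, which is why heavy compression, which warps the very pixels the model was trained on, can push such a pipeline toward a confident but wrong answer.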

But was the prisoner’s “confession” really a deepfake? Along with deepfake researcher Henry Ajder, I consulted deepfake creators and media forensics specialists. Some noted that the video was sufficiently low-quality that the mouth glitches people saw were as likely to be compression artifacts as evidence of deepfakery. Detection algorithms are also unreliable on low-quality compressed video. His unnatural-sounding voice could be the result of reading a script under extreme pressure. If it is a fake, it’s a very good one, because his throat and chest move at key moments in sync with his words. The researchers and creators were generally skeptical that it was a deepfake, though not certain. At this point it seems more likely to be what human rights activists like me are familiar with: a coerced or forced confession on camera. And given the circumstances of the military coup, the substance of the allegations should not be trusted absent a legitimate judicial process.

Why does this matter? Because whether the video is a forced confession or a deepfake, the result is most likely the same: words digitally or physically compelled out of a prisoner’s mouth by a coup d’état government. And while the use of deepfakes to create nonconsensual sexual images currently far outstrips political instances, deepfake and synthetic-media technology is rapidly improving, proliferating, and commercializing, expanding the potential for harmful uses. The case in Myanmar demonstrates the growing gap between the capability to make deepfakes, the opportunity to claim a real video is a deepfake, and our ability to challenge either claim.

It also illustrates the danger of having the public rely on free online detectors without understanding the strengths and limitations of detection, or how to second-guess a misleading result. Deepfake detection is still an emerging technology, and a detection tool applicable to one approach often does not work on another. We must also be wary of counter-forensics, where someone deliberately takes steps to confuse a detection approach. And it’s not always possible to know which detection tools to trust.
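Counter-forensics need not be sophisticated. As an illustration, assuming a hypothetical frame extracted from a video under scrutiny: simply recompressing footage at a punishing JPEG quality destroys much of the fine detail that detection models key on.

```python
import numpy as np
from PIL import Image

# Hypothetical frame pulled from a video under scrutiny.
frame = Image.open("frame.png").convert("RGB")

# The cheapest laundering step: brutal JPEG recompression.
frame.save("laundered.jpg", quality=15)
laundered = Image.open("laundered.jpg").convert("RGB")

# Quantify how much pixel-level evidence the round trip destroyed.
before = np.asarray(frame, dtype=np.float32)
after = np.asarray(laundered, dtype=np.float32)
print(f"mean per-pixel change: {np.abs(before - after).mean():.1f} of 255")
```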

How do we keep conflicts and crises around the world from being blindsided by deepfakes, and by supposed deepfakes?

We should not be turning ordinary people into deepfake spotters, parsing the pixels to discern truth from falsehood. Most people will do better relying on simpler approaches to media literacy, such as the SIFT method (stop, investigate the source, find better coverage, trace claims to their original context). In fact, encouraging people to become amateur forensics experts can send them down the conspiracy rabbit hole of distrust in images.