Deepfakes Can Help Families Mourn—or Exploit Their Grief

We now have the ability to reanimate the dead. Improvements in machine learning over the past decade let us break through the fossilized past and see our dearly departed as they once were: talking, moving, smiling, laughing. Though deepfake tools have been around for some time, they’ve become increasingly available to the general public in recent years, thanks to products like Deep Nostalgia, developed by the ancestry site MyHeritage, which allow the average person to breathe life back into those they’ve lost.

Despite their increased accessibility, these technologies generate controversy whenever they’re used, with critics deeming the moving images, so lifelike yet void of life, “disturbing,” “creepy,” and “admittedly queasy.” In 2020, when Kanye West gave Kim Kardashian a hologram of her late father for her birthday, writers quickly decried the gift as a move straight out of Black Mirror. Moral grandstanding soon followed, with some claiming that it was impossible to imagine how such a gift could bring “any kind of comfort or joy to the average human being.” If Kim actually appreciated it, as it seems she did, that was taken as a sign that something must be wrong with her.

To these critics, the gift was an exercise in narcissism, evidence of a self-involved ego playing god. But technology has always been wrapped up in our practices of mourning, so to act as if these tools are categorically different from the ones that came before, or to insinuate that the people who derive meaning from them are victims of naive delusion, ignores the history from which they were born. After all, these recent advances in AI-powered image creation come to us against the backdrop of a pandemic that has killed nearly a million people in the US alone.

Rather than shun these tools, we should invest in them to make them safer, more inclusive, and better equipped to help the countless millions who will be grieving in the years to come. Public discourse led Facebook to start “memorializing” the accounts of deceased users instead of deleting them; research into these technologies can likewise ensure that their potential isn’t thrown out with the bathwater. By starting this process early, we have the rare chance to set the agenda for the conversation before the tech giants and their profit-driven agendas dominate the fray.

To understand the lineage of these tools, we need to go back to another period of mass death in the US: the Civil War. There, great tragedy intersected not with growing access to deepfake technologies but with the increasing availability of photography, a still-young medium that could, as if by magic, affix the visible world onto a surface through a mechanical process of chemicals and light. Early photographs memorializing family members weren’t uncommon, but as the nation reeled in the aftermath of the war, a peculiar practice started to gain traction.

Dubbed “spirit photographs,” these images showcased living relatives flanked by ghostly apparitions. Produced through the clever use of double exposure, they paired a portrait of a living subject with a semi-transparent “spirit” seemingly caught by the all-seeing eye of the camera. While some photographers lied to their clientele about how these images were made, duping them into believing that the photos really did show spirits from the other side, the photographs nonetheless gave people an outlet through which they could express their grief. In a society where “grief was all but taboo, the spirit photograph provided a space to gain conceptual control over one’s feelings,” writes Jen Cadwallader, a Randolph-Macon College scholar specializing in Victorian spirituality and technology. To these Victorians, the images served both as a tribute to the dead and as a lasting token that could provide comfort long after the strictly prescribed “timelines” for mourning (two years for a husband, two weeks for a second cousin) had passed. Rather than betray vanity or excess, material objects like these photographs helped people keep their loved ones near in a culture that expected them to move on.
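
The trick behind these images is easy to approximate with modern tools. Below is a minimal sketch of a double-exposure composite using Python’s Pillow library; the file names and the blend weight are placeholder assumptions for illustration, not a period-accurate recipe.

```python
from PIL import Image  # Pillow imaging library

# Placeholder file names; any two photographs will do.
portrait = Image.open("portrait.jpg").convert("RGB")
spirit = Image.open("spirit.jpg").convert("RGB")

# Image.blend() requires matching sizes and modes, so resize the overlay.
spirit = spirit.resize(portrait.size)

# A double exposure is effectively a weighted blend: the plate records
# both scenes, and the fainter one reads as a semi-transparent "spirit."
ghosted = Image.blend(portrait, spirit, alpha=0.3)  # alpha=0.3 is an assumed weight
ghosted.save("spirit_photograph.jpg")
```

Victorian photographers achieved much the same effect in-camera, exposing the same plate twice so that the second scene registered faintly over the first.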

Dumbed Down AI Rhetoric Harms Everyone

When the European Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and the responses to it underscore democracies’ confusing rhetoric on AI.

Over the past decade, high-level stated goals for regulating AI have often conflicted with the specifics of regulatory proposals, and neither clearly articulates what desired end states should look like. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even if it varies from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.

The European Commission has touted its proposal as a landmark in AI regulation. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

This is certainly better than the stagnation of many national governments, especially the US, on rules of the road for the companies, government agencies, and other institutions that develop and deploy AI. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or for operating buses in Málaga, Spain.

But casting the EU’s regulation as “leading” simply because it’s first masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges for democratic AI strategy.

Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists about the harms facial recognition can inflict on marginalized communities and the grave risks of mass surveillance.

The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.

The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander of the nonprofit European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that appears on the surface to restrict facial recognition but in fact contains many broad carve-outs.