Deepfake Porn Reveals a ‘Pervert’s Dilemma’

January 30 was a very bad day in the life of celebrity gamer and YouTuber Atrioc (Brandon Ewing). Ewing was broadcasting one of his usual Twitch livestreams when his browser window was accidentally exposed to his audience. During those few moments, viewers were suddenly face-to-face with what appeared to be deepfake porn videos featuring female YouTubers and gamers QTCinderella and Pokimane—colleagues and, to my understanding, Ewing’s friends. Moments later, a quick-witted viewer uploaded a screenshot of the scene to Reddit, and the scandal broke.

Deepfakes refer broadly to media doctored by AI, commonly to superimpose a person’s face onto that of, say, an actor in a movie or video clip. But sadly, as reported by Vice journalist Samantha Cole, their primary use has been to create porn starring female celebrities and, perhaps more alarmingly, to visualize sexual fantasies about friends or acquaintances. Given the technology’s increasing sophistication and availability, anyone with a picture of your face can now basically turn it into a porno. “We are all fucked,” as Cole concisely puts it.

For most people, I believe, it is obvious that Ewing committed some kind of misconduct in consuming fictive yet nonconsensual pornography featuring his friends. Indeed, the comments on Reddit, and the strong (justified) reactions from the women whose faces were used in the clips, testify to a deep sense of disgust. This is understandable, yet specifying exactly where the crime lies is a surprisingly difficult undertaking. In fact, the task of doing so brings to the fore a philosophical problem that forces us to reconsider not only porn, but the very nature of human imagination. I call it the pervert’s dilemma.

On the one hand, one may argue that by consuming the material, Ewing was incentivizing its production and dissemination, which, in the end, may harm the reputation and well-being of his fellow female gamers. But I doubt that the verdict in the eyes of the public would have been much softer had he produced the videos by his own hand for personal pleasure. And few people see his failure to close the tab as the main problem. The crime, that is, appears to lie in the very consumption of the deepfakes, not in the downstream effects of doing so. Consuming such deepfakes is wrong, full stop, irrespective of whether the people “starring” in the clips, or anyone else, find out about it.

At the same time, we are equally certain that sexual fantasies are morally neutral. Indeed, no one (except perhaps some hard-core Catholics) would have blamed Ewing for creating pornographic pictures of QTCinderella in his mind. But what is the difference, really? Both the fantasy and the deepfake are essentially virtual images produced from previous data input; one exists only in one’s head, the other on a screen. True, the latter can more easily be shared, but if the crime lies in the personal consumption, and not the external effects, this should be irrelevant. Hence the pervert’s dilemma: We think sexual fantasies are fine as long as they are only ever generated and contained in a person’s head, and abhorrent the moment they exist outside the brain with the aid of a somewhat realistic representation—yet we struggle to identify any morally relevant distinction to justify this assessment.

In the long run, it is likely that this will force us to reevaluate our moral attitudes to both deepfakes and sexual fantasies, at least insofar as we want to maintain consistency in our morality. There are two obvious ways in which this could go.

Military AI’s Next Frontier: Your Work Computer

It’s probably hard to imagine that you are the target of spycraft, but spying on employees is the next frontier of military AI. Surveillance techniques familiar to authoritarian dictatorships have now been repurposed to target American workers.

Over the past decade, a few dozen companies have emerged to sell your employer subscriptions for services like “open source intelligence,” “reputation management,” and “insider threat assessment”—tools often originally developed by defense contractors for intelligence uses. As deep learning and new data sources have become available over the past few years, these tools have become dramatically more sophisticated. With them, your boss may be able to use advanced data analytics to identify labor organizing, internal leakers, and the company’s critics.

It’s no secret that unionization is already monitored by big companies like Amazon. But the expansion and normalization of tools to track workers has attracted little comment, despite their ominous origins. If these tools are as powerful as their makers claim—or even heading in that direction—we need a public conversation about the wisdom of transferring these informational munitions into private hands. Military-grade AI was intended to target our national enemies, nominally under the control of elected democratic governments, with safeguards in place to prevent its use against citizens. We should all be concerned by the idea that the same systems are now widely deployable by anyone able to pay.

FiveCast, for example, began as an anti-terrorism startup selling to the military, but it has turned its tools over to corporations and law enforcement, which can use them to collect and analyze all kinds of publicly available data, including your social media posts. Rather than just counting keywords, FiveCast brags that its “commercial security” and other offerings can identify networks of people, read text inside images, and even detect objects, images, logos, emotions, and concepts inside multimedia content. Its “supply chain risk management” tool aims to forecast future disruptions, like strikes, for corporations.

Network analysis tools developed to identify terrorist cells can thus be used to identify key labor organizers so employers can illegally fire them before a union is formed. The standard use of these tools during recruitment may prompt employers to avoid hiring such organizers in the first place. And quantitative risk assessment strategies conceived to warn the nation against impending attacks can now inform investment decisions, like whether to divest from areas and suppliers estimated to have a high capacity for labor organizing.

It isn’t clear that these tools can live up to their hype. For example, network analysis methods assign risk by association, which means that you could be flagged simply for following a particular page or account. These systems can also be tricked by fake content, which is easily produced at scale with new generative AI. And some companies offer sophisticated machine learning techniques, like deep learning, to identify content that appears angry, which is assumed to signal complaints that could result in unionization, though emotion detection has been shown to be biased and based on faulty assumptions.
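
To see how thin that logic can be, consider a minimal, hypothetical sketch of risk-by-association scoring in Python (using the networkx library; every account name, edge, and score below is invented for illustration and drawn from no vendor's actual product):

```python
import networkx as nx

# Toy "follows" graph: employee -> page or account they follow.
G = nx.DiGraph()
G.add_edges_from([
    ("alice", "union_news_page"),
    ("bob", "union_news_page"),
    ("bob", "cooking_blog"),
    ("carol", "cooking_blog"),
])

flagged = {"union_news_page"}  # whatever the vendor has labeled "risky"

def association_risk(person: str) -> float:
    """Score a person purely by what they follow: guilt by association."""
    followed = set(G.successors(person))
    if not followed:
        return 0.0
    return len(followed & flagged) / len(followed)

for person in ("alice", "bob", "carol"):
    print(person, association_risk(person))
# alice scores 1.0 and bob 0.5 merely for following one page; carol scores 0.0.
# Nothing about intent, behavior, or context is measured.
```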

But these systems’ capabilities are growing rapidly. Companies are advertising that they will soon include next-generation AI technologies in their surveillance tools. New features promise to make exploring varied data sources easier through prompting, but the ultimate goal appears to be a routinized, semi-automatic, union-busting surveillance system.

What’s more, these subscription services work even if they don’t work. It may not matter if an employee tarred as a troublemaker is truly disgruntled; executives and corporate security could still act on the accusation and unfairly retaliate against them. Vague aggregate judgments of a workforce’s “emotions” or a company’s public image are presently impossible to verify as accurate. And the mere presence of these systems likely has a chilling effect on legally protected behaviors, including labor organizing.

Better Government Tech Is Possible

Now, with the explosion of interest in artificial intelligence, Congress is turning its attention to ensuring that those who work in government learn more about the technology. US senators Gary Peters (D-Michigan) and Mike Braun (R-Indiana) are calling for universal leadership training in AI with the AI Leadership Training Act, which is moving forward to the full Senate for consideration. The bill directs the Office of Personnel Management (OPM), the federal government’s human resources department, to train federal leadership in AI basics and risks. However, it does not yet mandate teaching how to use AI to improve how the government works.

The AI Leadership Training Act is an important step in the right direction, but it needs to go beyond mandating basic AI training. It should require that the OPM teach public servants how to use AI technologies to enhance public service by making government services more accessible, providing constant access to city services, helping analyze data to understand citizen needs, and creating new opportunities for the public to participate in democratic decisionmaking.

For instance, cities are already experimenting with AI-based image generation for participatory urban planning, while San Francisco’s PAIGE AI chatbot is helping to answer business owners’ questions about how to sell to the city. Helsinki, Finland, uses an AI-powered decisionmaking tool to analyze data and provide recommendations on city policies. In Dubai, leaders are not just learning AI in general, but learning how to use ChatGPT specifically. The legislation, too, should mandate that the OPM not just teach what AI is, but how to use it to serve citizens.

In keeping with the practice in every other country, the legislation should require that the training be free. This is already the case for the military. On the civilian side, however, the OPM is required to charge a fee for its training programs. A course titled Enabling 21st-Century Leaders, for example, costs $2,200 per person. Even if the individual applies to their organization for reimbursement, too often programs do not have budgets set aside for up-skilling.

If we want public servants to understand AI, we cannot charge them for it. There is no need to do so, either. Building on a program created in New Jersey, six states are now collaborating on a project called InnovateUS to develop free live and self-paced learning in digital, data, and innovation skills. Because the content is all openly licensed and designed specifically for public servants, it can easily be shared across states and with the federal government as well.

The Act should also demand that the training be easy to find. Even if Congress mandates the training, public professionals will have a hard time finding it without the technical infrastructure to ensure that public servants can take and track their learning about tech and data. In Germany, the federal government’s Digital Academy offers a single site for digital up-skilling to ensure widespread participation. By contrast, in the United States, every federal agency has its own website (and sometimes more than one) where employees can look for training opportunities, and the OPM does not advertise its training across the federal government. While the Department of Defense has started building USALearning.gov so that all employees could eventually have access to the same content, this project needs to be accelerated.

Podcasts Could Unleash a New Age of Enlightenment

The attraction of interview podcasts is their DIY nature. It is a return to the intellectual imitation that marked the birth of the public, but at an entirely different scale and reach. The group that listens to hours-long intellectual conversations every week these days numbers in the millions. And many of them live, like my high school friends, in places where it would have been impossible to overhear an intellectual conversation only 15 years ago.

Anecdotally, people are picking up new behaviors and mental models from the conversations they overhear. They are imitating, at least on a superficial level, the strategies intellectuals use when confronting hard questions in real time (“You are saying …”, “Let me rephrase that question,” “There are several sub-questions here; let me start with …”). They absorb the tone that successful people use to establish casual rapport with someone they have just met. Podcast listeners also hear, again and again, how someone good at asking questions provides a context for someone else to be interesting.

We might also be picking up dysfunctional patterns. When I put these thoughts to my friends in the village, they played devil’s advocate (saying the phrase in English). One of them observed that he felt he and his friends were getting worse at turn-taking in conversation—a pattern that could come from listening to people who monologue while the podcast host does all the conversational labor.

As we consider the impact of the podcast phenomenon on a global scale, it is intriguing to ponder where the trend might lead us. The French Revolution, the founding of the United States, industrialization, the growth of science—these trends and events can be parsed as the Republic of Letters attempting to remake the world in its image: cosmopolitan, skeptical of received authority, and rational.

The values, ideas, and norms that spread through DIY broadcasting and parasocial imitation today—can they shape the world, too? It is tempting to be dismissive of such ideas. For every person listening to an eight-hour intellectual podcast, there are 10 who listen to gossip and entertainment.

But this was true of the early modern age too. When Erasmus sat on horseback sketching letters, it didn’t look like much. He was just talking to his friends, and what difference can a few antiquity nerds make? The world around them was descending into witch hunts and religious wars. The budding public, who listened in on the intellectual conversations, was a rounding error in the population statistics. Yet we now live in the world they wrote into being.

We shouldn’t underestimate the power of social learning, and what can happen when the social environment that intellectually curious people can access improves. Podcasts are an experiment in expanding access to specific types of intellectual conversations on a scale that has never been attempted before. People in rural Sweden listen in, as do millions in India, Nigeria, Brazil, and other areas that until recently had no access to the conversations and thought patterns at American research institutions or Silicon Valley startups. As they start identifying with these ways of being through parasocial relationships—as they start talking like this, as they start companies and blogs and engage in conversations about nuclear fusion or AI alignment or Georgist economics—what will happen then?

My Father’s Death in 7 Gigabytes

I set my scanner for JPEGs at 70 percent compression, then assembled them into PDFs. Fast and cheap. I also took photos of various ephemera with my phone at god-knows-what resolution. Not every version of every poem would survive. But I’d do my best to preserve the words themselves. 
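
In spirit, that step amounts to something like the following Python sketch, using the Pillow imaging library (an illustration only, not the actual setup; the folder and file names are placeholders):

```python
from pathlib import Path
from PIL import Image

# Illustrative only: compress scanned pages to JPEG at 70 percent quality,
# then bundle them into a single PDF. Paths are placeholders.
pages = sorted(Path("scans_raw").glob("*.png"))   # raw scanner output

compressed = []
for page in pages:
    img = Image.open(page).convert("RGB")
    jpg = page.with_suffix(".jpg")
    img.save(jpg, "JPEG", quality=70)             # fast, cheap, and lossy
    compressed.append(Image.open(jpg))

# Assemble the compressed pages into one PDF.
first, *rest = compressed
first.save("folder_scan.pdf", save_all=True, append_images=rest)
```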

I began to rip the hell out of his folders. Unbinding, yanking, feeding stacks through the scanner and watching some originals crumble as they came out the other side. It felt good being a bad librarian. A little destructive, drunken joy. (A large bottle of bourbon vanished over two weeks of night scans.) Ah well, Dad! What are you going to say now? I put many duplicate manuscripts in the recycling bin, at first relishing the idea that this heavy, heavy paper would go out of my life, and then, as I pulled the bag to the curb, well—lossy. 

But that was just the atoms. Dad also left a lot of bits. There was his daily poetry blog, which I spidered and parsed into a many-thousand-page virtual book. That was easy enough, one night’s work. He also wrote flash poems for decades—a few lines a few times a day, one file per thought, yielding thousands of documents with names like POEM12A.WPD, inside of hundreds of folders with names like COPYAAA.199. I loaded them into a database and threw away all the duplicates. I converted the remainder into more modern, tractable LibreOffice files. That format would preserve all the tabs and spaces that were so important to my father. He was a devotee of white space.
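
That kind of pass can be sketched in a few lines of Python. What follows is a hedged illustration rather than the actual script: it assumes LibreOffice's command-line converter (soffice) is installed, and the folder names are placeholders.

```python
import hashlib
import subprocess
from pathlib import Path

SOURCE = Path("poems_raw")   # folders full of POEM12A.WPD-style files
OUTPUT = Path("poems_odt")
OUTPUT.mkdir(exist_ok=True)

seen = set()
for wpd in SOURCE.rglob("*.WPD"):
    digest = hashlib.sha256(wpd.read_bytes()).hexdigest()
    if digest in seen:       # byte-for-byte duplicate: throw it away
        continue
    seen.add(digest)
    # Convert the unique file to ODT, keeping the tabs and white space intact.
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "odt",
         "--outdir", str(OUTPUT), str(wpd)],
        check=True,
    )
```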

I intended to organize the flash poems into one volume per year, but the time stamps were screwy after decades of moving files between computers. I loved my father, but not enough to undertake thousands of forensic poem investigations. So I fulfilled my filial duty through batch processing. I used all the wonder-tools at my disposal: text-chomping parsing code and Unix utilities galore; Pandoc, which can convert anything to text; spaCy, a Python natural language library that can extract subjects and tags (“New Haven,” “God,” “Korea,” “Shakespeare,” “Republican,” “Democrat,” “America”). I decided that my father wrote two things—Poems, which are less than 300 words, and Longer Works, which are longer. I let the computer sort the rest.
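
The triage itself is short enough to show. Here is a hedged sketch of the word-count rule and the tagging, assuming the files have already been converted to plain text (with Pandoc, say); the folder name and the choice of spaCy's small English model are illustrative:

```python
from pathlib import Path
import spacy

nlp = spacy.load("en_core_web_sm")   # spaCy's small English pipeline

def triage(path: Path):
    """Classify a file by word count and pull out rough subject tags."""
    text = path.read_text(errors="ignore")
    kind = "Poem" if len(text.split()) < 300 else "Longer Work"
    tags = sorted({ent.text for ent in nlp(text).ents})   # "New Haven", "Korea", ...
    return kind, tags

for poem_file in sorted(Path("poems_txt").glob("*.txt")):
    kind, tags = triage(poem_file)
    print(poem_file.name, kind, tags)
```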

My father’s last decade was one of relentless downsizing, from apartment to assisted living to nursing home, shedding belongings, throwing away clothes and furniture. And at the end: Two boxes and a tiny green urn. The ultimate zip file. After I parsed and processed and batched his digital legacy, it came to 7,382 files and around 7 gigabytes. 

The sum of Frank took two days and nights to upload to the Internet Archive, at a rate of a few files per minute. I wonder what the universe will make of this bundle of information. Who will care? Scholars of short plays about the Korean War? Sociologists studying 1930s Irish childhoods? I am sure his words will be ingested, digested, and excreted as chat by untold bots and search engines. Perhaps they’ll be able to make sense of all the modernist imagery. At least he’ll have slowed them down a little. In time, we all end up in a folder somewhere, if we’re lucky. Frank belongs to the world now; I released the files under Creative Commons 0, No Rights Reserved. And I know he would have loved his archive. 

The two boxes have become one, taped back up and placed in the attic. No one will worry about that box besides me, and one day my inner bad librarian may feel ready to throw it away. All the digital files are zipped up in one place too—partly because I don’t want his poems to show up every time I search my computer for something. Tomorrow I head to the interment, just my brother and I, and the green urn, too, will be filed away into the ground. I am glad this project is over, but I ended up welcoming the work, guiding these last phases of compression. My father needed a great deal of space, but now he takes up almost none. Almost. Death is a lossy process, but something always remains.