AI-Powered ‘Thought Decoders’ Won’t Just Read Your Mind—They’ll Change It

For centuries, mentalists astounded crowds by seeming to plumb the depths of their souls—effortlessly unearthing audience members’ memories, desires, and thoughts. Now, there’s concern that neuroscientists might be doing the same by developing technologies capable of “decoding” our thoughts and laying bare the hidden contents of our minds. Though neural decoding has been in development for decades, it broke into popular culture earlier this year, thanks to a slew of high-profile papers. In one, researchers used data from implanted electrodes to reconstruct the Pink Floyd song participants were listening to. In another paper, published in Nature Neuroscience, scientists combined brain scans with AI-powered language generators (like those undergirding ChatGPT and similar tools) to translate brain activity into coherent, continuous sentences. This method didn’t require invasive surgery, and yet it was able to reconstruct the meaning of a story from purely imagined, rather than spoken or heard, speech.

Dramatic headlines have boldly, and prematurely, announced that “mind-reading technology has arrived.” These methodologies currently require participants to spend an inordinate amount of time in fMRI scanners so the decoders can be trained on their specific brain data. The Nature Neuroscience study had research subjects spend up to 16 hours in the machine listening to stories, and even after that the subjects were able to misdirect the decoder if they wanted to. As Jerry Tang, one of the lead researchers, phrased it, at this stage these technologies aren’t so much all-powerful mind readers capable of deciphering our latent beliefs as “a dictionary between patterns of brain activity and descriptions of mental content.” Without a willing and active participant supplying brain activity, that dictionary is of little use.
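Tang’s dictionary metaphor is easy to make concrete. The sketch below is a toy illustration only, built on entirely synthetic data: the voxel counts, embedding dimensions, candidate sentences, and the ridge-regression decoder are invented for the example and are not the study’s actual method or numbers. It learns a per-participant linear map from simulated scans into a text-embedding space, then “decodes” a new scan by nearest-neighbor lookup against a fixed list of descriptions.

```python
# Toy illustration of a neural "decoder" as a trained dictionary between
# brain-activity patterns and descriptions of mental content.
# All data here is synthetic; a real decoder trains on hours of fMRI.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_voxels, n_dims = 500, 32  # simulated fMRI features and text-embedding size

# The "dictionary": candidate descriptions of mental content.
dictionary = [
    "a dog chased a ball across the yard",
    "she drove home through the rain",
    "he imagined the chorus of a song",
    "the crowd cheered as the race ended",
]
# Stand-in sentence embeddings (a real system would use a language model's).
embeddings = rng.normal(size=(len(dictionary), n_dims))

# Simulate training data: each scan is a noisy linear image of the embedding
# of whatever the participant was listening to at the time.
true_map = rng.normal(size=(n_dims, n_voxels))
train_idx = rng.integers(0, len(dictionary), size=200)
scans = embeddings[train_idx] @ true_map
scans += rng.normal(scale=0.5, size=scans.shape)

# "Training the decoder" = learning this one participant's scan-to-embedding
# correspondence. Without these paired examples, decoding is impossible.
decoder = Ridge(alpha=1.0).fit(scans, embeddings[train_idx])

# Decode a new scan by looking up the nearest dictionary entry.
new_scan = embeddings[2] @ true_map + rng.normal(scale=0.5, size=n_voxels)
predicted = decoder.predict(new_scan.reshape(1, -1))
best = cosine_similarity(predicted, embeddings).argmax()
print("decoded:", dictionary[best])  # most likely the imagined-song sentence
```

Even in this toy, the decoder can only retrieve descriptions of the kind it was trained against, which is why a participant can defeat the real thing simply by thinking of something else.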

Still, critics claim that we might lose the “last frontier of privacy” if we allow these technologies to progress without thoughtful oversight. Even if you don’t subscribe to this flavor of techno-dystopian pessimism, general skepticism is rarely a bad idea. The “father of public relations,” Edward L. Bernays, was not only Freud’s nephew; he also actively employed psychoanalysis in his approach to advertising. Today, a range of companies hire cognitive scientists to help “optimize” product experiences and hack your attention. History assures us that as soon as the financial calculus works out, businesses looking to make a few bucks will happily incorporate these tools into their operations.

A singular focus on privacy, however, has led us to misunderstand the full implications of these tools. Discourse has positioned this emergent class of technologies as invasive mind readers at worst and neutral translation mechanisms at best. But this picture ignores the truly porous and enmeshed nature of the human mind. We won’t appreciate the full scope of this tool’s capabilities and risks until we learn to reframe it as a part of our cognitive apparatus.

For most of history, the mind has been conceptualized as a sort of internal, private book or database—a self-contained domain that resides somewhere within ourselves, populated by fixed thoughts that only we have direct, intimate access to. Once we posit the mind as a privately accessible diary containing clearly defined thoughts (or “internalism,” as it’s sometimes called), it’s not much of a jump to begin asking how we might open this diary to the external world—how someone on the outside might decipher the hidden language of the mind to pierce this inner sanctum. Theologians thought that this access would come from the divine, through a God capable of reading our deepest thoughts. Freud thought that a trained psychoanalyst could make out the true contents of the mind through hermeneutical methods like dream interpretation. Descartes, ever the rationalist, had a more physicalist hypothesis: he argued that the soul is closely attached to the pineal gland, through which it expresses its will. In doing so, he opened us up to the idea that if we could establish the right correspondence between thought and bodily motion, we might be able to work backward to mental content itself.

More contemporary approaches have followed in these footsteps. Polygraphs, or lie detectors, attempt to use physiological changes to read the content of our beliefs. Tang’s own description of the thought decoder as a “dictionary” between brain scans and mental content expresses the modern version of this notion that we might decipher the mind through the neural body. Even critics of thought decoding, with their concerns about privacy, take this internalist theory of mind for granted. It’s precisely because of the supposedly sheltered nature of our thoughts that the threat of outside access is so profoundly disturbing.

Everyone Is a Girl Online

“What do you mean my actions have consequences? I’m literally just a girl.” This year, your feed has likely been blessed by the avatars of machinic girlhood: angels, bimbos, and the collective entity of “girls,” divine creatures who have transcended earthly bodies, curiously evacuated of anger, pain, attachment, who have nonetheless become wildly popular on every social platform. Which is to say that, while angels and girls have existed since time immemorial—and bimbos as we know them since at least the 1980s—it’s only recently that they’ve become a bit, floating away from history and into memetic shorthand. Whether it’s the girl in the “girl dinner” or the angels spied in Bella Hadid’s carousel, they appear as perfected conduits for collective consciousness—she’s just like me for real. As for man, once the king of the online condition? “Hit him with your car!” says head bimbo Chrissy Chlapecka with heavenly vocal fry, to the tune of 4 million TikTok hearts. It’s a girl’s world now; we’re just living in it.

Memes, obviously, don’t come out of nowhere. The angel-bimbo girl-swarm gives voice to something collectively experienced and soon-to-be historical, a kind of subconscious metabolization of recent events into a general dissociated vibe. Maybe you, too, are a side character in the story that supposedly ends all stories: the emergence of the postpolitical, delivering a smooth and tranquilized subjectivity so dispersed that it feels nothing and is moved to no action even as the Real delivers destruction to its door. The rise of the “NPC influencer”—smiling and spiritually lobotomized, fine-tuned for an increasingly instinctive response to live cash stimulus—is the endgame for all that terrifies people about digital culture and how it affects human minds. Be not afraid of this other type of angel, the super-evolved brainless doll slurping dollar-pegged ice cream at the end of the infinite scroll.

Haters will say that the girl has no access to individual agency and political autonomy, and is therefore an enemy of serious activism—or seriousness, at all. Lovers will reply that the girl is simply emptied of traditional humanist traits to make room for something else. She is closely networked with other minds, with an intelligence that is intuitive, cunning, and sophisticated, yet maligned and dismissed because it is little understood. In the post-platform economy, it is not just a question of wanting to be a girl as ironic posture or fun reality. The fact of the matter is that everyone has to be a girl online. Even an “everyone” that is not exactly human. As user @heartlocket tweeted, “All LLMs are girls.” I don’t make the rules. But why is that? To answer that question, we first have to answer: What are girls?

I understand that I have to get you, the reader, to accept the girl as a condition. As a term, “girl” is polarizing: feared for how tightly it connects youth and desire, reviled for its infantilizing, passivity-inducing properties. On the face of it, girlishness is simply dismissed as being frivolous, immature, unmasculine, disempowering, reductive. At worst, the girl is an apolitical neutralizer of direct action. At best, she is simply enjoying herself with the junk society has given her. In either state—harmless or neutralizing, hedonic or willfully ignorant—the girl becomes an attractor of hatred, envy, and fear. As opposed to mainstream narratives of female empowerment and their sliding scale of access to power and resources, the girl is a far more politically ambivalent state.

One: Consider that the girl is a symbolic category, unfixed from biological sex or social gender. It’s a perspective best articulated by Andrea Long Chu in her 2019 book Females. Long Chu updates old-school psychoanalysis, in which “female” denotes a subject formed through psychological, social, and symbolic aspects rather than springing from some essential biology. “The female [is] any psychic operation in which the self is sacrificed to make room for the desires of another,” she asserts. And since everyone’s desire arrives without their authorship, everyone is symbolically female. Desire for another, desire for recognition, desire for political change, desire for change within yourself, all riding in on un- and subconscious processes, afloat on a raft of experience and sociocultural codes.

Two: The girl is a consumer category that can’t be delinked from capital. This stems from Tiqqun’s contentious Preliminary Materials for a Theory of the Young-Girl (1999), a text that was such a horror of gender that its English-language translator, Ariana Reines, says she was repeatedly and violently ill while working on the project. Unfortunately for the rest of us, the text accurately describes reality. Turns out, we’re all sick for it. In 1999, Tiqqun wrote that “all the old figures of patriarchal authority, from statesmen to bosses and cops have become Young-Girlified, every last one of them, even the Pope.” Tiqqun describes the Young-Girl less as a person and more as a force. She is a “living currency,” a “war machine,” and a “technique of the self” driven by the “desire to be desired.” Her state is what coheres a society that has been empty of meaning and ritual since industrialization. Young-Girls are “beings that no longer have any intimacy with [themselves] except as value, and whose every activity, in every detail, is directed towards self-valorisation.”

In the post-platform age—where the base architecture of social engagement is still predicated on behavioral capture to achieve ever more accurate advertising—the subject of the Young-Girl has not become obsolete. She has only been intensified. Every ordinary person has to, in some way, pay attention to their semipublic image, even if that image is one that resists appearing on a platform. In 2012, reviewers of the translation sniffed at the cognitive dissonance of having the likes of Berlusconi cited within an otherwise girl-coded text: “They have offended the thing I most hold dear: my image.” Consider the proliferation of memes skinning trad daddies as “babygirls”—like Succession’s Kendall Roy, whether “he’s actively having a mental breakdown [or] the killer his father wanted him to be,” as Gita Jackson reports for Polygon. Could anything be more 2023?

Super Apps Are Terrible for People—and Great for Companies

Recently, Elon Musk told X employees that he wants the platform to become a WeChat-style super app, a one-stop shop for social media, communication, travel, on-demand labor, payments, and more. But the realities of our tech ecosystem—in which the largest players seem committed to surveillance, labor exploitation, weaponizing tech, algorithmic discrimination, and privatizing every part of the public sphere in the name of profit—indicate that a new super app might not be such a great idea.

At the moment, the US tech sector is all too eager to play up fears about an impending Cold War 2.0 with China, and snuffing out the competition has long been one of Silicon Valley’s favorite plays. And yet, each sermon by a patriotic tech executive preaching the gospel of techno-nationalism (we can’t trust anyone but ourselves to build X or Y tech), each blog post by a doe-eyed venture capitalist about the threats of “Chinese AI,” makes it that much easier to secure contracts with police departments and militaries and government agencies, to rationalize self-regulation here or abroad in the name of outpacing China, and to reframe the ongoing privatization of state infrastructure and public life as sleek, dynamic digitization. These processes and rationalizations often result in American technologies that can be deployed to augment racial discrimination, corporate and government surveillance, social control, worker immiseration, and above all else, profit maximization—an inconvenient fact that doesn’t matter until it does.

Still, if you ignore this reality, you can understand some of the super app hype. For some tech firms, like Microsoft, super apps may provide an opportunity to break the hold of more established monopolies like Apple or Google. And for consumers, one application, built around a core function, brings together a diverse array of services: calling a cab, investing money, even making a quick buck.

But app-based solutions to structural problems are just shining examples of insisting the disease is the cure. Silicon Valley has long exploited existing societal and infrastructural gaps. Ride-hail platforms have savaged public transit and the taxi industry, but the need for drivers in cities without adequate transport options remains. The same goes for platforms offering app-based solutions to housing or financial services: Their popularity is less a testament to their innovation than to how desiccated the nonmarket alternatives were, thanks to older deregulatory campaigns stretching back to the ascent of neoliberal governance in the 1970s. In the end, they perpetuate problems with the systems they claim to hack. The crypto industry preys on nonwhite communities without access to the traditional financial system, and on-demand labor platforms have been hard at work eroding this country’s threadbare labor laws.

Looking overseas, it’s easy to see how super apps could give tech companies further grounds to take advantage of existing structural holes. In China, Tencent’s WeChat started as an instant messaging app but eventually grew to include food delivery, utility payments, social media, banking, urban transit, health care appointments, air travel, biometrics, news, and more. This has led to the explosive growth of digital infrastructure focused largely on government and corporate surveillance, social control, and the creation of new markets. The sleek integration of various apps into a larger ecosystem may provide convenience, but these are still apps concerned with extracting as much as they can from each one of us, whether through labor exploitation or endless commodification.

In the US, where social welfare and public goods are, at best, neglected, there’s little reason to think the launch of super apps would go any differently. All too often, app-based platforms just make it easier for companies to skirt regulations and realize profits. They attract users with temporary below-cost prices (that are eventually hiked) and regulators with the promise of reducing public expense (in exchange for some sort of public subsidy). When investors look at these apps, they see an opportunity to keep users locked in and consuming goods and services that should be publicly provisioned.

Some of the odious investor logic underwriting super apps can be further explained by the concept of “luxury surveillance,” which Chris Gilliard and David Golumbia introduced in an article for Real Life in 2021. Luxury surveillance is a phenomenon where “some people pay to subject themselves to surveillance that others are forced to endure and would, if anything, pay to be free of.” You might buy a GPS bracelet to track your biometric data (which will be used by other firms), while others might be forced to wear one (and still pay for it) as part of their parole agreement. When you agree to surveillance, you are exercising discipline and control over yourself and affirming your sovereignty. When others are subjected to surveillance, it is for their own good because they’ve demonstrated a need to be controlled. The super app’s vision is an intensification of this approach: Luxury surveillance lets you opt into a regime that gives corporations greater latitude to reorganize the city, our social relations, our cultural production, the horizon of our politics and imagination—allegedly to help us realize our best selves, and for our own good.

Do Not Fear the Robot Uprising. Join It

It’s become a veritable meme subgenre at this point: a photo of Linda Hamilton as The Terminator’s Sarah Connor, glaring into the camera, steely-eyed, with some variant of the caption “Sarah Connor seeing you become friends with ChatGPT.” Our society has interpreted the sudden, dizzying rise of this new chatbot generation through the pop cultural lens of our youth.

With it comes the sense that the straightforward “robots will kill us all” stories were prescient (or at least accurately captured the current vibe), and that there was a staggering naivete in the more forgiving “AI civil rights” narratives—famously epitomized by Star Trek’s Commander Data, an android who fought to be treated the same as his organic Starfleet colleagues. Patrick Stewart’s Captain Picard, defending Data in a trial to prove his sapience, thundered, “Your honor, Starfleet was founded to seek out new life: Well, there it sits! Waiting.” But far from being a relic of a bygone, more optimistic age, the AI civil rights narrative is more relevant than ever. It just needs to be understood in its proper context.

There are understandable fears that seemingly naive narratives about AI or robots being “just like us” have only paved the way for the morally impoverished moment in which we now find ourselves. In this way of looking at things, surely we need more fear of AI in order to resist the exploitation we’re now faced with. Thus, we need to retrench into the other AI narrative cliché: They’re here to kill us all.

But analogizing ChatGPT or Google’s Bard to even embryonic forms of Skynet is priceless PR for tech companies, which benefit greatly from the “criti-hype” of such wild exaggerations. For example, during a 60 Minutes interview, Google vice president James Manyika remarked, “We discovered that with very few amounts of prompting in Bengali, [Bard] can now translate all of Bengali.” In his narration, CBS journalist Scott Pelley glossed this comment by saying “one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know”—suggesting that this learning was a potentially dangerous “emergent property” of Bard. But it also implied that Bard had no Bengali in its training data, when in fact it did. Such hyperbole, which portrays the algorithms as bordering on self-awareness, makes these tools seem far more capable than they really are.

That, of course, hasn’t stopped some of my fellow nerds, reared on C-3PO and Data, from being all too eager to join the final frontier of civil rights battles—even when every other one remains woefully unfinished.

So what’s the use in continuing to tell the happier “AI deserves civil rights” stories? After all, we’re a long way from boldly arguing for the rights of such beings in a Starfleet courtroom, and such stories might further engender anthropomorphization, which only helps companies profit from tools that fall short even at their stated functions. Well, those stories might help us keep our priorities straight.

It’s easy to forget that, in fiction, the AI/robot is almost always a metaphor. Even in Star Trek: The Next Generation, Data and the androids like him were analogized to humanity’s ugly history of slavery—the grotesque dream of free labor that never questions, never fights back. This was equally evident in Ex Machina, a horror film about how an AI woman, built to be a classic “fembot,” liberates herself from a male tech baron who wants nothing more than to build a woman who loves to be abused. What we yearn for in machines is so often a reflection of what we yearn for in humanity, for good and ill, and it asks us what we really want. Stories of such yearnings also illustrate a key requirement for sapience: resistance to oppression.

Such qualities take us back to the earliest forms of fiction that humans wove about the prospect of creating artificial life. Not just Karel Čapek’s 1921 Rossum’s Universal Robots (RUR), but the Jewish legend of the golem that it clearly drew inspiration from. In that tale, artificial life exists to defend people against violent oppression. Although the original fable sees the golem run amok, the idea of the creature endures as an empowering fantasy in a time of rising anti-Semitism. The myth has left its mark on everything from superhero fantasies to tales of benevolent robots—narratives where artificial or alien life is in communion with human life and arrayed against the ugliest forces that sapience can produce. If that isn’t relevant, nothing is.

AI Can Be an Extraordinary Force for Good—if It’s Contained

In a quaint Regency-era office overlooking London’s Russell Square, I cofounded a company called DeepMind with two friends, Demis Hassabis and Shane Legg, in the summer of 2010. Our goal, one that still feels as ambitious and crazy and hopeful as it did back then, was to replicate the very thing that makes us unique as a species: our intelligence.

To achieve this, we would need to create a system that could imitate and then eventually outperform all human cognitive abilities, from vision and speech to planning and imagination, and ultimately empathy and creativity. Since such a system would benefit from the massively parallel processing of supercomputers and the explosion of vast new sources of data from across the open web, we knew that even modest progress toward this goal would have profound societal implications.

It certainly felt pretty far-out at the time.

But AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound.

Further progress in one area accelerates the others in a chaotic and cross-catalyzing process beyond anyone’s direct control. It was clear that if we or others were successful in replicating human intelligence, this wasn’t just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks. Now, alongside a host of technologies including synthetic biology, robotics, and quantum computing, a wave of fast-developing and extremely capable AI is starting to break. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.

As a builder of these technologies, I believe they can deliver an extraordinary amount of good. But without what I call containment, every other aspect of a technology, every discussion of its ethical shortcomings or of the benefits it could bring, is inconsequential. I see containment as an interlocking set of technical, social, and legal mechanisms constraining and controlling technology, working at every possible level: a means, in theory, of evading the dilemma of how we can keep control of the most powerful technologies in history. We urgently need watertight answers for how the coming wave can be controlled and contained, and for how the safeguards and affordances of the democratic nation-state, critical to managing these technologies and yet threatened by them, can be maintained. Right now no one has such a plan. This points to a future that none of us wants, but it’s one I fear is increasingly likely.

Given the immense ingrained incentives driving technology forward, containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible.

It would seem that the key to containment is deft regulation on national and supranational levels, balancing the need to make progress alongside sensible safety constraints, spanning everything from tech giants and militaries to small university research groups and startups, tied up in a comprehensive, enforceable framework. We’ve done it before, so the argument goes; look at cars, planes, and medicines. Isn’t this how we manage and contain the coming wave?

If only it were that simple. Regulation is essential. But regulation alone is not enough. Governments should, on the face of it, be better primed for managing novel risks and technologies than ever before. National budgets for such things are generally at record levels. Truth is, though, novel threats are just exceptionally difficult for any government to navigate. That’s not a flaw with the idea of government; it’s an assessment of the scale of the challenge before us. Governments fight the last war, the last pandemic, regulate the last wave. Regulators regulate for things they can anticipate.