
“I can’t code, and this bums me out because—with so many books and courses and camps—there are so many opportunities to learn these days. I suspect I’ll understand the machine revolution a lot better if I speak their language. Should I at least try?” 

—Decoder


Dear Decoder,
Your desire to speak the “language” of machines reminds me of Ted Chiang’s short story “The Evolution of Human Science.” The story imagines a future in which nearly all academic disciplines have become dominated by superintelligent “metahumans” whose understanding of the world vastly surpasses that of human experts. Reports of new metahuman discoveries—although ostensibly written in English and published in scientific journals that anyone is welcome to read—are so complex and technically abstruse that human scientists have been relegated to a role akin to theologians, trying to interpret texts that are as obscure to them as the will of God was to medieval Scholastics. Instead of performing original research, these would-be scientists now practice the art of hermeneutics.

There was a time, not so long ago, when coding was regarded as among the most forward-looking skill sets, one that initiated a person into the technological elite who would determine our future. Chiang’s story, first published in 2000, was prescient about the limits of this knowledge. In fields like deep learning and other forms of advanced AI, many technologists already seem more like theologians or alchemists than “experts” in the modern sense of the word: Although they write the initial code, they’re often unable to explain the emergence of higher-level skills that their programs develop while training on data sets. (One still recalls the shock of hearing David Silver, principal research scientist at DeepMind, insist in 2016 that he could not explain how AlphaGo—a program he designed—managed to develop its winning strategy: “It discovered this for itself,” Silver said, “through its own process of introspection and analysis.”)

Meanwhile, algorithms like GPT-3 or GitHub’s Copilot have learned to write code, sparking debates about whether software developers, whose profession was once considered a placid island in the coming tsunami of automation, might soon become irrelevant—and stoking existential fears about self-programming machines. Runaway AI scenarios have long relied on the possibility that machines might learn to evolve on their own, and while coding algorithms are not about to initiate a Skynet takeover, they nevertheless raise legitimate concerns about the growing opacity of our technologies. AI has a well-established tendency, after all, to discover idiosyncratic solutions and invent ad hoc languages that are counterintuitive to humans. Many have understandably started to wonder: What happens when humans can’t read code anymore?

I mention all this, Decoder, by way of acknowledging the stark realities, not to disparage your ambitions, which I think are laudable. For what it’s worth, the prevailing fears about programmer obsolescence strike me as alarmist and premature. Automated code has existed in some form for decades (recall the web editors of the 1990s that generated HTML and CSS), and even the most advanced coding algorithms are, at present, prone to simple errors and require no small amount of human oversight. It sounds to me, too, as though you’re not looking to make a career out of coding so much as you’re motivated by a deeper sense of curiosity. Perhaps you are considering the creative pleasures of the hobbyist—contributing to open source projects or suggesting fixes to simple bugs in programs you regularly use. Or maybe you’re intrigued by the possibility of automating tedious aspects of your work. What you most desire, if I’m reading your question correctly, is a fuller understanding of the language that undergirds so much of modern life.

There’s a convincing case to be made that coding is now a basic form of literacy—that a grasp of data structures, algorithms, and programming languages is as crucial as reading and writing when it comes to understanding the larger ideologies in which we are enmeshed. It’s natural, of course, to distrust the dilettante. (Amateur developers are often disparaged for knowing just enough to cause havoc, having mastered the syntax of programming languages but possessing none of the foresight and vision required to create successful products.) But this limbo of expertise might also be seen as a discipline in humility. One benefit of amateur knowledge is that it tends to spark curiosity simply by virtue of impressing on the novice how little they know. In an age of streamlined, user-friendly interfaces, it’s tempting to take our technologies at face value without considering the incentives and agendas lurking beneath the surface. But the more you learn about the underlying structure, the more basic questions will come to preoccupy you: How does code get translated into electric impulses? How does software design subtly change the experience of users? What is the underlying value of principles like open access, sharing, and the digital commons? For instance, to the casual user, social platforms may appear to be designed to connect you with friends and impart useful information. An awareness of how a site is structured, however, inevitably leads one to think more critically about how its features are marshaled to maximize attention, create robust data trails, and monetize social graphs.

Ultimately, this knowledge has the potential to inoculate us against fatalism. Those who understand how a program is built and why are less likely to accept its design as inevitable. You spoke of a machine revolution, but it’s worth mentioning that the most celebrated historical revolutions (those initiated, that is, by humans) were the result of mass literacy combined with technological innovation. The invention of the printing press and the demand for books from a newly literate public laid the groundwork for the Protestant Reformation, as well as the French and American Revolutions. Once a substantial portion of the populace was capable of reading for themselves, they started to question the authority of priests and kings and the inevitability of ruling assumptions.

The cadre of technologists who are currently weighing our most urgent ethical questions—about data justice, automation, and AI values—frequently stress the need for a larger public debate, but nuanced dialogue is difficult when the general public lacks a fundamental knowledge of the technologies in question. (One need only glance at a recent US House subcommittee hearing, for example, to see how far lawmakers are from understanding the technologies they seek to regulate.) As New York Times technology writer Kevin Roose has observed, advanced AI models are being developed “behind closed doors,” and the curious laity are increasingly forced to wade through esoteric reports on their inner workings—or take the explanations of experts on faith. “When information about [these technologies] is made public,” he writes, “it’s often either watered down by corporate PR or buried in inscrutable scientific papers.”

If Chiang’s story is a parable about the importance of keeping humans “in the loop,” it also makes a subtle case for ensuring that the circle of knowledge is as large as possible. At a moment when AI is becoming more and more proficient in our languages, stunning us with its ability to read, write, and converse in a way that can feel plausibly human, the need for humans to understand the dialects of programming has become all the more urgent. The more of us who are capable of speaking that argot, the more likely it is that we will remain the authors of the machine revolution, rather than its interpreters.

Faithfully,

Cloud


Be advised that CLOUD SUPPORT is experiencing higher than normal wait times and appreciates your patience.


This article appears in the March 2023 issue.
