Should I Learn Coding as a Second Language?

“I can’t code, and this bums me out because—with so many books and courses and camps—there are so many opportunities to learn these days. I suspect I’ll understand the machine revolution a lot better if I speak their language. Should I at least try?” 

—Decoder


Dear Decoder,
Your desire to speak the “language” of machines reminds me of Ted Chiang’s short story “The Evolution of Human Science.” The story imagines a future in which nearly all academic disciplines have become dominated by superintelligent “metahumans” whose understanding of the world vastly surpasses that of human experts. Reports of new metahuman discoveries—although ostensibly written in English and published in scientific journals that anyone is welcome to read—are so complex and technically abstruse that human scientists have been relegated to a role akin to theologians, trying to interpret texts that are as obscure to them as the will of God was to medieval Scholastics. Instead of performing original research, these would-be scientists now practice the art of hermeneutics.

There was a time, not so long ago, when coding was regarded as among the most forward-looking skill sets, one that initiated a person into the technological elite who would determine our future. Chiang’s story, first published in 2000, was prescient in its ability to foresee the limits of this knowledge. In fields like deep learning and other forms of advanced AI, many technologists already seem more like theologians or alchemists than “experts” in the modern sense of the word: Although they write the initial code, they’re often unable to explain the emergence of higher-level skills that their programs develop while training on data sets. (One still recalls the shock of hearing David Silver, principal research scientist at DeepMind, insist in 2016 that he could not explain how AlphaGo—a program he designed—managed to develop its winning strategy: “It discovered this for itself,” Silver said, “through its own process of introspection and analysis.”)

Meanwhile, algorithms like GPT-3 or GitHub’s Copilot have learned to write code, sparking debates about whether software developers, whose profession was once considered a placid island in the coming tsunami of automation, might soon become irrelevant—and stoking existential fears about self-programming. Runaway AI scenarios have long relied on the possibility that machines might learn to evolve on their own, and while coding algorithms are not about to initiate a Skynet takeover, they nevertheless raise legitimate concerns about the growing opacity of our technologies. AI has a well-established tendency, after all, to discover idiosyncratic solutions and invent ad hoc languages that are counterintuitive to humans. Many have understandably started to wonder: What happens when humans can’t read code anymore?

I mention all this, Decoder, by way of acknowledging the stark realities, not to disparage your ambitions, which I think are laudable. For what it’s worth, the prevailing fears about programmer obsolescence strike me as alarmist and premature. Automated code has existed in some form for decades (recall the web editors of the 1990s that generated HTML and CSS), and even the most advanced coding algorithms are, at present, prone to simple errors and require no small amount of human oversight. It sounds to me, too, as though you’re not looking to make a career out of coding so much as you are motivated by a deeper sense of curiosity. Perhaps you are considering the creative pleasures of the hobbyist—contributing to open source projects or suggesting fixes to simple bugs in programs you regularly use. Or maybe you’re intrigued by the possibility of automating tedious aspects of your work. What you most desire, if I’m reading your question correctly, is a fuller understanding of the language that undergirds so much of modern life.

There’s a convincing case to be made that coding is now a basic form of literacy—that a grasp of data structures, algorithms, and programming languages is as crucial as reading and writing when it comes to understanding the larger ideologies in which we are enmeshed. It’s natural, of course, to distrust the dilettante. (Amateur developers are often disparaged for knowing just enough to cause havoc, having mastered the syntax of programming languages but possessing none of the foresight and vision required to create successful products.) But this limbo of expertise might also be seen as a discipline in humility. One benefit of amateur knowledge is that it tends to spark curiosity simply by virtue of impressing on the novice how little they know. In an age of streamlined, user-friendly interfaces, it’s tempting to take our technologies at face value without considering the incentives and agendas lurking beneath the surface. But the more you learn about the underlying structure, the more basic questions will come to preoccupy you: How does code get translated into electric impulses? How does software design subtly change the experience of users? What is the underlying value of principles like open access, sharing, and the digital commons? For instance, to the casual user, social platforms may appear to be designed to connect you with friends and impart useful information. An awareness of how a site is structured, however, inevitably leads one to think more critically about how its features are marshaled to maximize attention, create robust data trails, and monetize social graphs.

Ultimately, this knowledge has the potential to inoculate us against fatalism. Those who understand how a program is built and why are less likely to accept its design as inevitable. You spoke of a machine revolution, but it’s worth mentioning that the most celebrated historical revolutions (those initiated, that is, by humans) were the result of mass literacy combined with technological innovation. The invention of the printing press and the demand for books from a newly literate public laid the groundwork for the Protestant Reformation, as well as the French and American Revolutions. Once a substantial portion of the populace was capable of reading for themselves, they started to question the authority of priests and kings and the inevitability of ruling assumptions.

The cadre of technologists who are currently weighing our most urgent ethical questions—about data justice, automation, and AI values—frequently stress the need for a larger public debate, but nuanced dialog is difficult when the general public lacks a fundamental knowledge of the technologies in question. (One need only glance at a recent US House subcommittee hearing, for example, to see how far lawmakers are from understanding the technologies they seek to regulate.) As New York Times technology writer Kevin Roose has observed, advanced AI models are being developed “behind closed doors,” and the curious laity are increasingly forced to weed through esoteric reports on their inner workings—or take the explanations of experts on faith. “When information about [these technologies] is made public,” he writes, “it’s often either watered down by corporate PR or buried in inscrutable scientific papers.”

If Chiang’s story is a parable about the importance of keeping humans “in the loop,” it also makes a subtle case for ensuring that the circle of knowledge is as large as possible. At a moment when AI is becoming more and more proficient in our languages, stunning us with its ability to read, write, and converse in a way that can feel plausibly human, the need for humans to understand the dialects of programming has become all the more urgent. The more of us who are capable of speaking that argot, the more likely it is that we will remain the authors of the machine revolution, rather than its interpreters.

Faithfully,

Cloud


Be advised that CLOUD SUPPORT is experiencing higher than normal wait times and appreciates your patience.


The Real Harm of Crisis Text Line’s Data Sharing

Another week, another privacy horror show: Crisis Text Line, a nonprofit text message service for people experiencing serious mental health crises, has been using “anonymized” conversation data to power a for-profit machine learning tool for customer support teams. (After backlash, CTL announced it would stop.) Crisis Text Line’s response to the backlash focused on the data itself and whether it included personally identifiable information. But that response uses data as a distraction. Imagine this: Say you texted Crisis Text Line and got back a message that said “Hey, just so you know, we’ll use this conversation to help our for-profit subsidiary build a tool for companies who do customer support.” Would you keep texting?

That’s the real travesty—when the price of obtaining mental health help in a crisis is becoming grist for the profit mill. And it’s not just users of CTL who pay; it’s everyone who goes looking for help when they need it most.

Americans need help and can’t get it. The huge unmet demand for critical advice and help has given rise to a new class of organizations and software tools that exist in a regulatory gray area. They help people with bankruptcy or evictions, but they aren’t lawyers; they help people with mental health crises, but they aren’t care providers. They invite ordinary people to rely on them and often do provide real help. But these services can also avoid taking responsibility for their advice, or even abuse the trust people have put in them. They can make mistakes, push predatory advertising and disinformation, or just outright sell data. And the consumer safeguards that would normally protect people from malfeasance or mistakes by lawyers or doctors haven’t caught up.

This regulatory gray area can also constrain organizations that have novel solutions to offer. Take Upsolve, a nonprofit that develops software to guide people through bankruptcy. (The organization takes pains to claim it does not offer legal advice.) Upsolve wants to train New York community leaders to help others navigate the city’s notorious debt courts. One problem: These would-be trainees aren’t lawyers, so under New York (and nearly every other state) law, Upsolve’s initiative would be illegal. Upsolve is now suing to carve out an exception for itself. The nonprofit argues, quite rightly, that a lack of legal help means people effectively lack rights under the law.

The legal profession’s failure to grant Americans access to support is well-documented. But Upsolve’s lawsuit also raises new, important questions. Who is ultimately responsible for the advice given under a program like this, and who is responsible for a mistake—a trainee, a trainer, both? How do we teach people about their rights as a client of this service, and how to seek recourse? These are eminently answerable questions. There are lots of policy tools for creating relationships with elevated responsibilities: We could assign advice-givers a special legal status, establish a duty of loyalty for organizations that handle sensitive data, or create policy sandboxes to test and learn from new models for delivering advice.

But instead of using these tools, most regulators seem content to bury their heads in the sand. Officially, you can’t give legal advice or health advice without a professional credential. Unofficially, people can get such advice in all but name from tools and organizations operating in the margins. And while credentials can be important, regulators are failing to engage with the ways software has fundamentally changed how we give advice and care for one another, and what that means for the responsibilities of advice-givers.

And we need that engagement more than ever. People who seek help from experts or caregivers are vulnerable. They may not be able to distinguish a good service from a bad one. They don’t have time to parse terms of service dense with jargon, caveats, and disclaimers. And they have little to no negotiating power to set better terms, especially when they’re reaching out mid-crisis. That’s why the fiduciary duties that lawyers and doctors have are so necessary in the first place: not just to protect a person seeking help once, but to give people confidence that they can seek help from experts for the most critical, sensitive issues they face. In other words, a lawyer’s duty to their client isn’t just to protect that client from that particular lawyer; it’s to protect society’s trust in lawyers.

And that’s the true harm—when people won’t contact a suicide hotline because they don’t trust that the hotline has their sole interest at heart. That distrust can be contagious: Crisis Text Line’s actions might not just stop people from using Crisis Text Line. They might stop people from using any similar service. What’s worse than not being able to find help? Not being able to trust it.

Simulation Tech Can Help Predict the Biggest Threats

The character of conflict between nations has fundamentally changed. Governments and militaries now fight on our behalf in the “gray zone,” where the boundaries between peace and war are blurred. They must navigate a complex web of ambiguous and deeply interconnected challenges, ranging from political destabilization and disinformation campaigns to cyberattacks, assassinations, proxy operations, election meddling, and perhaps even human-made pandemics. Add to this list the existential threat of climate change (and its geopolitical ramifications) and it is clear that what now constitutes a national security issue has broadened, with each crisis straining or degrading the fabric of national resilience.

Traditional analysis tools are poorly equipped to predict and respond to these blurred and intertwined threats. Instead, in 2022 governments and militaries will use sophisticated and credible real-life simulations, putting software at the heart of their decision-making and operating processes. The UK Ministry of Defence, for example, is developing what it calls a military Digital Backbone. This will incorporate cloud computing, modern networks, and a new transformative capability called a Single Synthetic Environment, or SSE.

This SSE will combine artificial intelligence, machine learning, computational modeling, and modern distributed systems with trusted data sets from multiple sources to support detailed, credible simulations of the real world. This data will be owned by critical institutions, but will also be sourced via an ecosystem of trusted partners, such as the Alan Turing Institute.

An SSE offers a multilayered simulation of a city, region, or country, including high-quality mapping and information about critical national infrastructure, such as power, water, transport networks, and telecommunications. This can then be overlaid with other information, such as smart-city data, information about military deployment, or data gleaned from social listening. From this, models can be constructed that give a rich, detailed picture of how a region or city might react to a given event: a disaster, an epidemic, a cyberattack, or a combination of such events organized by state enemies.

Defense synthetics are not a new concept. However, previous solutions have been built in a standalone way that limits reuse, longevity, choice, and—crucially—the speed of insight needed to effectively counteract gray-zone threats.

National security officials will be able to use SSEs to identify threats early, understand them better, explore their response options, and analyze the likely consequences of different actions. They will even be able to use them to train, rehearse, and implement their plans. By running thousands of simulated futures, senior leaders will be able to grapple with complex questions, refining policies and complex plans in a virtual world before implementing them in the real one.
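To make the idea concrete, here is a minimal sketch in Python of what running many simulated futures looks like in principle. The toy scenario model, its parameters, and the numbers are invented for illustration and bear no relation to an actual SSE.

```python
# A toy Monte Carlo sketch of "thousands of simulated futures."
# The scenario model and all numbers are invented for illustration.
import random
import statistics

def simulate_future(days: int = 30) -> float:
    """Fraction of a city's power grid still online after a simulated
    cyberattack, with random daily disruption and repair."""
    capacity = 1.0
    for _ in range(days):
        disruption = random.uniform(0.0, 0.05)  # hypothetical attacker effect
        repair = random.uniform(0.0, 0.04)      # hypothetical recovery effort
        capacity = min(1.0, max(0.0, capacity - disruption + repair))
    return capacity

def run_futures(n_runs: int = 10_000) -> None:
    outcomes = sorted(simulate_future() for _ in range(n_runs))
    print(f"median capacity after 30 days: {statistics.median(outcomes):.0%}")
    print(f"worst 5 percent of futures:    {outcomes[n_runs // 20]:.0%}")

if __name__ == "__main__":
    run_futures()
```

A real SSE would replace the toy model with validated infrastructure and population data, but the underlying logic is the same: simulate many randomized futures, then study the distribution of outcomes rather than betting on a single forecast.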

One key question that will only grow in importance in 2022 is how countries can best secure their populations and supply chains against dramatic weather events driven by climate change. SSEs will be able to help answer this by pulling together regional infrastructure, networks, roads, and population data and combining them with meteorological models to see how and when such events might unfold.

The Turing Test Is Bad For Business

Fears of artificial intelligence fill the news: job losses, inequality, discrimination, misinformation, or even a superintelligence dominating the world. The one group everyone assumes will benefit is business, but the data seems to disagree. Amid all the hype, US businesses have been slow in adopting the most advanced AI technologies, and there is little evidence that such technologies are contributing significantly to productivity growth or job creation.

This disappointing performance is not merely due to the relative immaturity of AI technology. It also comes from a fundamental mismatch between the needs of business and the way AI is currently being conceived by many in the technology sector—a mismatch that has its origins in Alan Turing’s pathbreaking 1950 “imitation game” paper and the so-called Turing test he proposed therein.

The Turing test defines machine intelligence by imagining a computer program that can so successfully imitate a human in an open-ended text conversation that it isn’t possible to tell whether one is conversing with a machine or a person.

At best, this was only one way of articulating machine intelligence. Turing himself, and other technology pioneers such as Douglas Engelbart and Norbert Wiener, understood that computers would be most useful to business and society when they augmented and complemented human capabilities, not when they competed directly with us. Search engines, spreadsheets, and databases are good examples of such complementary forms of information technology. While their impact on business has been immense, they are not usually referred to as “AI,” and in recent years the success story that they embody has been submerged by a yearning for something more “intelligent.” This yearning is poorly defined, however, and with surprisingly little attempt to develop an alternative vision, it has increasingly come to mean surpassing human performance in tasks such as vision and speech, and in parlor games such as chess and Go. This framing has become dominant both in public discussion and in terms of the capital investment surrounding AI.

Economists and other social scientists emphasize that intelligence arises not only, or even primarily, in individual humans, but most of all in collectives such as firms, markets, educational systems, and cultures. Technology can play two key roles in supporting collective forms of intelligence. First, as emphasized in Douglas Engelbart’s pioneering research in the 1960s and the subsequent emergence of the field of human-computer interaction, technology can enhance the ability of individual humans to participate in collectives, by providing them with information, insights, and interactive tools. Second, technology can create new kinds of collectives. This latter possibility offers the greatest transformative potential. It provides an alternative framing for AI, one with major implications for economic productivity and human welfare.

Businesses succeed at scale when they successfully divide labor internally and bring diverse skill sets into teams that work together to create new products and services. Markets succeed when they bring together diverse sets of participants, facilitating specialization in order to enhance overall productivity and social welfare. This is exactly what Adam Smith understood nearly two and a half centuries ago. Translating his message into the current debate, technology should focus on the complementarity game, not the imitation game.

We already have many examples of machines enhancing productivity by performing tasks that are complementary to those performed by humans. These include the massive calculations that underpin the functioning of everything from modern financial markets to logistics, the transmission of high-fidelity images across long distances in the blink of an eye, and the sifting of reams of information to pull out relevant items.

What is new in the current era is that computers can now do more than simply execute lines of code written by a human programmer. Computers are able to learn from data and they can now interact, infer, and intervene in real-world problems, side by side with humans. Instead of viewing this breakthrough as an opportunity to turn machines into silicon versions of human beings, we should focus on how computers can use data and machine learning to create new kinds of markets, new services, and new ways of connecting humans to each other in economically rewarding ways.

An early example of such economics-aware machine learning is provided by recommendation systems, an innovative form of data analysis that came to prominence in the 1990s in consumer-facing companies such as Amazon (“You may also like”) and Netflix (“Top picks for you”). Recommendation systems have since become ubiquitous, and have had a significant impact on productivity. They create value by exploiting the collective wisdom of the crowd to connect individuals to products.
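As a rough illustration of how that wisdom-of-the-crowd mechanism works, here is a minimal item-based collaborative-filtering sketch in Python. The users, items, and ratings are invented, and production systems are vastly more sophisticated.

```python
# A minimal sketch of item-based collaborative filtering, the "wisdom of the
# crowd" idea behind recommendation systems. The ratings data is invented.
from math import sqrt

ratings = {
    "ann":  {"book_a": 5, "book_b": 3, "book_c": 4},
    "ben":  {"book_a": 4, "book_b": 2, "book_d": 5},
    "cara": {"book_b": 5, "book_c": 4, "book_d": 4},
}

def cosine(item_x: str, item_y: str) -> float:
    """Similarity between two items, based on users who rated both."""
    common = [u for u in ratings if item_x in ratings[u] and item_y in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][item_x] * ratings[u][item_y] for u in common)
    norm_x = sqrt(sum(ratings[u][item_x] ** 2 for u in common))
    norm_y = sqrt(sum(ratings[u][item_y] ** 2 for u in common))
    return dot / (norm_x * norm_y)

def recommend(user: str) -> list[str]:
    """Rank items the user has not rated by similarity to items they liked."""
    seen = ratings[user]
    candidates = {i for r in ratings.values() for i in r} - set(seen)
    scores = {c: sum(cosine(c, s) * seen[s] for s in seen) for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))  # "You may also like": items liked by similar readers
```

Notice that the sketch never inspects the products themselves; the value comes entirely from the pattern of links between people and the things they have already rated.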

Emerging examples of this new paradigm include the use of machine learning to forge direct connections between musicians and listeners, writers and readers, and game creators and players. Early innovators in this space include Airbnb, Uber, YouTube, and Shopify, and the phrase “creator economy” is being used as the trend gathers steam. A key aspect of such collectives is that they are, in fact, markets—economic value is associated with the links among the participants. Research is needed on how to blend machine learning, economics, and sociology so that these markets are healthy and yield sustainable income for the participants.

Democratic institutions can also be supported and strengthened by this innovative use of machine learning. The digital ministry in Taiwan has harnessed statistical analysis and online participation to scale up the kind of deliberative conversations that lead to effective team decisionmaking in the best managed companies.

Humans Can’t Be the Sole Keepers of Scientific Knowledge

There’s an old joke that physicists like to tell: Everything has already been discovered and reported in a Russian journal in the 1960s, we just don’t know about it. Though hyperbolic, the joke accurately captures the current state of affairs. The volume of knowledge is vast and growing quickly: The number of scientific articles posted on arXiv (the largest and most popular preprint server) in 2021 is expected to reach 190,000—and that’s just a subset of the scientific literature produced this year.

It’s clear that we do not really know what we know, because nobody can read the entire literature even in their own narrow field (which includes, in addition to journal articles, PhD theses, lab notes, slides, white papers, technical notes, and reports). Indeed, it’s entirely possible that in this mountain of papers, answers to many questions lie hidden, important discoveries have been overlooked or forgotten, and connections remain concealed.

Artificial intelligence is one potential solution. Algorithms can already analyze text without human supervision to find relations between words that help uncover knowledge. But far more can be achieved if we move away from writing traditional scientific articles whose style and structure have hardly changed in the past hundred years.
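To give a flavor of what that unsupervised analysis can look like at its simplest, here is a Python sketch that builds word co-occurrence vectors from raw text and compares them. The three-sentence corpus is invented; real systems learn embeddings over millions of papers.

```python
# A minimal sketch of finding relations between words without supervision:
# count which words appear near which, then compare the resulting vectors.
# The tiny corpus is invented for illustration.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the superconductor lost all resistance at low temperature",
    "the magnet levitated above the superconductor at low temperature",
    "the alloy showed high resistance at high temperature",
]

WINDOW = 2  # how many neighboring words count as "co-occurring"
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two words' co-occurrence vectors."""
    va, vb = cooc[a], cooc[b]
    dot = sum(va[k] * vb[k] for k in va)
    norm_a = sqrt(sum(v * v for v in va.values()))
    norm_b = sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Terms that appear in similar contexts end up with similar vectors.
print(similarity("superconductor", "alloy"))
print(similarity("resistance", "temperature"))
```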

Text mining comes with a number of limitations, including access to the full text of papers and legal concerns. But most importantly, AI does not really understand concepts and the relationships between them, and is sensitive to biases in the data set, like the selection of papers it analyzes. It is hard for AI—and, in fact, even for a nonexpert human reader—to understand scientific papers in part because the use of jargon varies from one discipline to another and the same term might be used with completely different meanings in different fields. The increasing interdisciplinarity of research means that it is often difficult to define a topic precisely using a combination of keywords in order to discover all the relevant papers. Making connections and (re)discovering similar concepts is hard even for the brightest minds.

As long as this is the case, AI cannot be trusted, and humans will need to double-check everything an AI outputs after text-mining, a tedious task that defeats the very purpose of using AI. To solve this problem we need to make science papers not only machine-readable but machine-understandable, by (re)writing them in a special type of programming language. In other words: Teach science to machines in the language they understand.

Writing scientific knowledge in a programming-like language will be dry, but it will be sustainable, because new concepts will be directly added to the library of science that machines understand. Plus, as machines are taught more scientific facts, they will be able to help scientists streamline their logical arguments; spot errors, inconsistencies, plagiarism, and duplications; and highlight connections. AI with an understanding of physical laws is more powerful than AI trained on data alone, so science-savvy machines will be able to help drive future discoveries. Machines with a great knowledge of science could assist rather than replace human scientists.

Mathematicians have already started this process of translation. They are teaching mathematics to computers by writing theorems and proofs in languages like Lean. Lean is a proof assistant and programming language in which one can introduce mathematical concepts in the form of objects. Using the known objects, Lean can reason about whether a statement is true or false, hence helping mathematicians verify proofs and identify places where their logic is insufficiently rigorous. The more mathematics Lean knows, the more it can do. The Xena Project at Imperial College London is aiming to input the entire undergraduate mathematics curriculum in Lean. One day, proof assistants may help mathematicians do research by checking their reasoning and searching the vast store of mathematical knowledge they possess.
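For a sense of what this looks like in practice, here is a minimal Lean 4 sketch. The toy definition of evenness is invented for illustration and is not drawn from the Xena Project; Nat.add_comm is a lemma from Lean's standard library.

```lean
-- A minimal Lean 4 sketch: a concept is introduced as an object, and claims
-- about it become statements the proof assistant checks mechanically.

-- A mathematical concept, written as a definition Lean understands:
def isEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- A concrete claim about that concept, with a machine-checked proof:
theorem four_isEven : isEven 4 := ⟨2, rfl⟩

-- A general statement, closed by citing a lemma Lean already knows:
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

Each line either typechecks or it does not; there is no ambiguity left for a later reader, human or machine, to interpret.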