The Turing Test Is Bad For Business

Fears of artificial intelligence fill the news: job losses, inequality, discrimination, misinformation, or even a superintelligence dominating the world. The one group everyone assumes will benefit is business, but the data seems to disagree. Amid all the hype, US businesses have been slow to adopt the most advanced AI technologies, and there is little evidence that such technologies are contributing significantly to productivity growth or job creation.

This disappointing performance is not merely due to the relative immaturity of AI technology. It also comes from a fundamental mismatch between the needs of business and the way AI is currently being conceived by many in the technology sector—a mismatch that has its origins in Alan Turing’s pathbreaking 1950 “imitation game” paper and the so-called Turing test he proposed therein.

The Turing test defines machine intelligence by imagining a computer program that can so successfully imitate a human in an open-ended text conversation that it isn’t possible to tell whether one is conversing with a machine or a person.

At best, this was only one way of articulating machine intelligence. Turing himself, and other technology pioneers such as Douglas Engelbart and Norbert Wiener, understood that computers would be most useful to business and society when they augmented and complemented human capabilities, not when they competed directly with us. Search engines, spreadsheets, and databases are good examples of such complementary forms of information technology. While their impact on business has been immense, they are not usually referred to as “AI,” and in recent years the success story that they embody has been submerged by a yearning for something more “intelligent.” This yearning is poorly defined, however, and with surprisingly little attempt to develop an alternative vision, it has increasingly come to mean surpassing human performance in tasks such as vision and speech, and in parlor games such as chess and Go. This framing has become dominant both in public discussion and in terms of the capital investment surrounding AI.

Economists and other social scientists emphasize that intelligence arises not only, or even primarily, in individual humans, but most of all in collectives such as firms, markets, educational systems, and cultures. Technology can play two key roles in supporting collective forms of intelligence. First, as emphasized in Douglas Engelbart’s pioneering research in the 1960s and the subsequent emergence of the field of human-computer interaction, technology can enhance the ability of individual humans to participate in collectives, by providing them with information, insights, and interactive tools. Second, technology can create new kinds of collectives. This latter possibility offers the greatest transformative potential. It provides an alternative framing for AI, one with major implications for economic productivity and human welfare.

Businesses succeed at scale when they divide labor internally and bring diverse skill sets into teams that work together to create new products and services. Markets succeed when they bring together diverse sets of participants, facilitating specialization in order to enhance overall productivity and social welfare. This is exactly what Adam Smith understood more than two and a half centuries ago. Translating his message into the current debate, technology should focus on the complementarity game, not the imitation game.

We already have many examples of machines enhancing productivity by performing tasks that are complementary to those performed by humans. These include the massive calculations that underpin the functioning of everything from modern financial markets to logistics, the transmission of high-fidelity images across long distances in the blink of an eye, and the sorting of reams of information to pull out relevant items.

What is new in the current era is that computers can now do more than simply execute lines of code written by a human programmer. Computers are able to learn from data and they can now interact, infer, and intervene in real-world problems, side by side with humans. Instead of viewing this breakthrough as an opportunity to turn machines into silicon versions of human beings, we should focus on how computers can use data and machine learning to create new kinds of markets, new services, and new ways of connecting humans to each other in economically rewarding ways.

An early example of such economics-aware machine learning is provided by recommendation systems, an innovative form of data analysis that came to prominence in the 1990s in consumer-facing companies such as Amazon (“You may also like”) and Netflix (“Top picks for you”). Recommendation systems have since become ubiquitous, and have had a significant impact on productivity. They create value by exploiting the collective wisdom of the crowd to connect individuals to products.
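To make the underlying mechanism concrete, here is a minimal, hypothetical sketch of item-based collaborative filtering, one common way such systems mine the crowd's ratings. The toy ratings matrix, function names, and scores below are invented for illustration and assume only NumPy; they do not describe any company's production system.

```python
# Toy item-based collaborative filtering: recommend items a user has not
# rated by looking at how the crowd's ratings relate items to one another.
import numpy as np

# Rows are users, columns are items; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Similarity between every pair of items (columns of the ratings matrix).
n_items = ratings.shape[1]
item_sim = np.array([[cosine(ratings[:, i], ratings[:, j])
                      for j in range(n_items)] for i in range(n_items)])

def recommend(user, k=2):
    """Score unrated items by their similarity to items the user has rated."""
    scores = item_sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf   # exclude items already rated
    return np.argsort(scores)[::-1][:k]

print(recommend(user=0))  # -> [2 3]: items the crowd's ratings suggest for user 0
```

At real scale the same idea is implemented with sparse matrices, learned embeddings, or neural models, but the economic logic is the one described above: the preferences of many users create value for each individual user.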

Emerging examples of this new paradigm include the use of machine learning to forge direct connections between musicians and listeners, writers and readers, and game creators and players. Early innovators in this space include Airbnb, Uber, YouTube, and Shopify, and the phrase “creator economy” is being used as the trend gathers steam. A key aspect of such collectives is that they are, in fact, markets—economic value is associated with the links among the participants. Research is needed on how to blend machine learning, economics, and sociology so that these markets are healthy and yield sustainable income for the participants.

Democratic institutions can also be supported and strengthened by this innovative use of machine learning. The digital ministry in Taiwan has harnessed statistical analysis and online participation to scale up the kind of deliberative conversations that lead to effective team decision-making in the best-managed companies.

Facebook Drops Facial Recognition to Tag People in Photos

Raji said it’s always good when companies take public steps to signal that technology is dangerous but cautioned that people shouldn’t have to rely on voluntary corporate actions for protection. Whether Facebook’s decision to limit facial recognition use makes a larger difference will depend on policymakers.

“If this prompts a policymaker to take the conversation about facial recognition seriously enough to actually pull some legislation through Congress and really advocate for and lean into it, then this would become a turning point or a critical moment,” she says.

Despite occasionally bipartisan rhetoric about the threat facial recognition poses to civil liberties, and despite the lack of standards governing law enforcement's use of it, Congress has not passed any laws regulating the technology or setting standards for how businesses or governments can use it.

In a statement shared with WIRED, the group Fight for the Future said Facebook knows facial recognition is dangerous and renewed calls to ban use of the technology.

“Even as algorithms improve, facial recognition will only be more dangerous,” the group says. “This technology will enable authoritarian governments to target and crack down on religious minorities and political dissent; it will automate the funneling of people into prisons without making us safer; it will create new tools for stalking, abuse, and identity theft.”

Sneha Revanur, founder of Encode Justice, a group for young people seeking an end to the use of algorithms that automate oppression, said in a statement that the news represents a hard-earned victory for privacy and racial justice advocates and youth organizers. She said it’s one reform out of many needed to address hate speech, misinformation, and surveillance enabled by social media companies.

Luke Stark is an assistant professor at the University of Western Ontario and a longtime critic of facial recognition. He has called facial recognition and computer vision pseudoscience, with implications for biometric data privacy, anti-discrimination law, and civil liberties. In 2019, he argued that facial recognition is “the plutonium of AI.”

Stark said he thinks Facebook’s action amounts to a PR tactic and a deflection meant to grab good headlines, not a core change in philosophy. But he said the move also shows a company that doesn’t want to be associated with toxic technology.

He connected the decision to Facebook’s recent focus on virtual reality and the metaverse. Powering personalized avatars will require collecting other kinds of physiological data and invite new privacy concerns, he said. Stark also questioned the impact of scrapping the facial recognition database because he doesn’t know anybody younger than 45 who posts photos on Facebook.

Facebook characterized its decision as “one of the largest shifts in facial recognition usage in the technology’s history.” But Stark predicts “the actual impact is going to be quite minor” because Facebook hasn’t completely abandoned facial recognition and others still use it.

“I think it can be a turning point if people who are concerned about these technologies continue to press the conversation,” he says.


Humans Can’t Be the Sole Keepers of Scientific Knowledge

There’s an old joke that physicists like to tell: Everything has already been discovered and reported in a Russian journal in the 1960s; we just don’t know about it. Though hyperbolic, the joke accurately captures the current state of affairs. The volume of knowledge is vast and growing quickly: The number of scientific articles posted on arXiv (the largest and most popular preprint server) in 2021 is expected to reach 190,000—and that’s just a subset of the scientific literature produced this year.

It’s clear that we do not really know what we know, because nobody can read the entire literature even in their own narrow field (which includes, in addition to journal articles, PhD theses, lab notes, slides, white papers, technical notes, and reports). Indeed, it’s entirely possible that in this mountain of papers, answers to many questions lie hidden, important discoveries have been overlooked or forgotten, and connections remain concealed.

Artificial intelligence is one potential solution. Algorithms can already analyze text without human supervision to find relations between words that help uncover knowledge. But far more can be achieved if we move away from writing traditional scientific articles whose style and structure have hardly changed in the past hundred years.
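As a rough illustration of what finding relations between words without human supervision can look like, here is a toy sketch built on a term co-occurrence matrix. The four invented abstracts stand in for a corpus of millions of papers, and the example assumes NumPy and scikit-learn; real systems rely on far larger corpora and richer models, such as learned word embeddings.

```python
# Build a term co-occurrence matrix from a tiny, invented "corpus" and list
# the terms most associated with a query term.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "thermoelectric materials convert waste heat into electricity",
    "doped semiconductors improve thermoelectric efficiency",
    "thermoelectric generators produce electricity from heat gradients",
    "perovskite solar cells convert sunlight into electricity",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)   # documents x terms
cooc = (doc_term.T @ doc_term).toarray()         # terms x terms co-occurrence
np.fill_diagonal(cooc, 0)                        # ignore self co-occurrence
terms = vectorizer.get_feature_names_out()

query = "thermoelectric"
idx = list(terms).index(query)
related = [terms[i] for i in np.argsort(cooc[idx])[::-1][:3]]
print(f"Terms most associated with '{query}':", related)
```

Even this crude statistic surfaces associations (here, “thermoelectric” with “heat” and “electricity”) without anyone labeling the data; the limitations discussed next are about what such statistics still fail to capture.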

Text mining comes with a number of limitations, including access to the full text of papers and legal concerns. But most importantly, AI does not really understand concepts and the relationships between them, and is sensitive to biases in the data set, like the selection of papers it analyzes. It is hard for AI—and, in fact, even for a nonexpert human reader—to understand scientific papers in part because the use of jargon varies from one discipline to another and the same term might be used with completely different meanings in different fields. The increasing interdisciplinarity of research means that it is often difficult to define a topic precisely using a combination of keywords in order to discover all the relevant papers. Making connections and (re)discovering similar concepts is hard even for the brightest minds.

As long as this is the case, AI cannot be trusted, and humans will need to double-check everything an AI outputs after text mining, a tedious task that defeats the very purpose of using AI. To solve this problem we need to make science papers not only machine-readable but machine-understandable, by (re)writing them in a special type of programming language. In other words: Teach science to machines in the language they understand.

Writing scientific knowledge in a programming-like language will be dry, but it will be sustainable, because new concepts will be directly added to the library of science that machines understand. Plus, as machines are taught more scientific facts, they will be able to help scientists streamline their logical arguments; spot errors, inconsistencies, plagiarism, and duplications; and highlight connections. AI with an understanding of physical laws is more powerful than AI trained on data alone, so science-savvy machines will be able to help drive future discoveries. Machines with a great knowledge of science could assist rather than replace human scientists.

Mathematicians have already started this process of translation. They are teaching mathematics to computers by writing theorems and proofs in languages like Lean. Lean is a proof assistant and programming language in which one can introduce mathematical concepts in the form of objects. Using the known objects, Lean can reason whether a statement is true or false, hence helping mathematicians verify proofs and identify places where their logic is insufficiently rigorous. The more mathematics Lean knows, the more it can do. The Xena Project at Imperial College London is aiming to input the entire undergraduate mathematics curriculum in Lean. One day, proof assistants may help mathematicians do research by checking their reasoning and searching the vast mathematics knowledge they possess.
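For a flavor of what mathematics written as machine-checkable objects looks like, here is a minimal sketch in Lean 4, using only the core library; the theorem name is arbitrary, and the example is far simpler than anything in the Xena Project's curriculum.

```lean
-- A statement and its proof written as objects that Lean can check.
-- Lean verifies the proof term against the statement; a gap in the
-- reasoning would simply fail to compile.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Once accepted, the result becomes a reusable object that later
-- proofs can build on.
#check add_comm_example
```

The point is not the triviality of the statement but the workflow: once a statement and its proof are objects the machine accepts, they join a searchable, reusable library of verified facts of exactly the kind the Xena Project is building.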

20 Years After 9/11, Surveillance Has Become a Way of Life

Two decades after 9/11, many simple acts that were once taken for granted now seem unfathomable: strolling with loved ones to the gate of their flight, meandering through a corporate plaza, using streets near government buildings. Our metropolises’ commons are now enclosed with steel and surveillance. Amid the perpetual pandemic of the past year and a half, cities have become even more walled off. With each new barrier erected, more of the city’s defining feature erodes: the freedom to move, wander, and even, as Walter Benjamin said, to “lose one’s way … as one loses one’s way in a forest.”

It’s harder to get lost amid constant tracking. It’s also harder to freely gather when the public spaces between home and work are stripped away. Known as third places, they are the connective tissue that stitches together the fabric of modern communities: the public park where teens can skateboard next to grandparents playing chess, the library where children can learn to read and unhoused individuals can find a digital lifeline. When third places vanish, as they have since the attacks, communities can falter.

Without these spaces holding us together, citizens live more like several separate societies operating in parallel. Just as social-media echo chambers have undermined our capacity for conversations online, the loss of third places can create physical echo chambers.

America has never been particularly adept at protecting our third places. For enslaved and Indigenous people, entering the town square alone could be a death sentence. Later, the racial terrorism of Jim Crow in the South denied Black Americans not only suffrage but also access to lunch counters, public transit, and even the literal water cooler. In northern cities like New York, Black Americans still faced arrest and violence for transgressing rigid but unseen segregation codes.

Throughout the 20th century, New York built an infrastructure of exclusion to keep our unhoused neighbors from sharing the city institutions that are, by law, every bit as much theirs to occupy. In 1999, then-mayor Rudy Giuliani warned unhoused New Yorkers that “streets do not exist in civilized societies for the purpose of people sleeping there.” His threats prompted thousands of NYPD officers to systematically target and push the unhoused out of sight, thus semi-privatizing the quintessential public place.

Despite these limitations, before 9/11 millions of New Yorkers could walk and wander through vast networks of modern commons—public parks, private plazas, paths, sidewalks, open lots, and community gardens, crossing paths with those whom they would never have otherwise met. These random encounters electrify our city and give us a unifying sense of self. That shared space began to slip away from us 20 years ago, and if we’re not careful, it’ll be lost forever.

In the aftermath of the attacks, we heard patriotic platitudes from those who promised to “defend democracy.” But in the ensuing years, their defense became democracy’s greatest threat, reconstructing cities as security spaces. The billions we spent to “defend our way of life” have proved to be its undoing, and it’s unclear if we’ll be able to turn back the trend.

In a country where the term “papers, please” was once synonymous with foreign authoritarianism, photo ID has become an ever-present requirement. Before 9/11, a New Yorker could spend their entire day traversing the city without any need for ID. Now it’s required to enter nearly any large building or institution.

While the ID check has become muscle memory for millions of privileged New Yorkers, it’s a source of uncertainty and fear for others. Millions of Americans lack a photo ID, and for millions more, using ID is a risk, a source of data for Immigration and Customs Enforcement.

According to Mizue Aizeki, interim executive director of the New York–based Immigrant Defense Project, “ID systems are particularly vulnerable to becoming tools of surveillance.” Aizeki added, “data collection and analysis has become increasingly central to ICE’s ability to identify and track immigrants,” noting that the Department of Homeland Security dramatically increased its support for surveillance systems since its post-9/11 founding.

ICE has spent millions partnering with firms like Palantir, the controversial data aggregator that sells information services to governments at home and abroad. Vendors can collect digital sign-in lists from the buildings where we show our IDs, feeds from facial recognition cameras in plazas, and data from countless other tools that track the areas around office buildings with an almost military level of scrutiny. According to Aizeki, “as mass policing of immigrants has escalated, advocates have been confronted by a rapidly expanding surveillance state.”

What Makes an Artist in the Age of Algorithms?

In 2021, technology’s role in how art is generated remains up for debate and discovery. From the rise of NFTs to the proliferation of techno-artists who use generative adversarial networks to produce visual expressions, to smartphone apps that write new music, creatives and technologists are continually experimenting with how art is produced, consumed, and monetized.

BT, the Grammy-nominated composer of 2010’s These Hopeful Machines, has emerged as a world leader at the intersection of tech and music. Beyond producing and writing for the likes of David Bowie, Death Cab for Cutie, Madonna, and the Roots, and composing scores for The Fast and the Furious, Smallville, and many other shows and movies, he’s helped pioneer production techniques like stutter editing and granular synthesis. This past spring, BT released GENESIS.JSON, a piece of software that contains 24 hours of original music and visual art. It features 15,000 individually sequenced audio and video clips that he created from scratch, which span different rhythmic figures, field recordings of cicadas and crickets, a live orchestra, drum machines, and myriad other sounds that play continuously. And it lives on the blockchain. It is, to my knowledge, the first composition of its kind.

Could ideas like GENESIS.JSON be the future of original music, where composers use AI and the blockchain to create entirely new art forms? What makes an artist in the age of algorithms? I spoke with BT to learn more.

What are your central interests at the interface of artificial intelligence and music?

I am really fascinated with this idea of what an artist is. Speaking in my common tongue—music—it’s a very small array of variables. We have 12 notes. There’s a collection of rhythms that we typically use. There’s a sort of vernacular of instruments, of tones, of timbres, but when you start to add them up, it becomes this really deep data set.

On its surface, it makes you ask, “What is special and unique about an artist?” And that’s something that I’ve been curious about my whole adult life. Seeing the research that was happening in artificial intelligence, my immediate thought was that music is low-hanging fruit.

These days, we can take the sum total of an artist’s output, take their artistic works, and quantify the entire thing into a training set: a massive, multivariable training set. And we don’t even name the variables. The RNNs (recurrent neural networks) and CNNs (convolutional neural networks) name them automatically.

So you’re referring to a body of music that can be used to “train” an artificial intelligence algorithm that can then create original music that resembles the music it was trained on. If we reduce the genius of artists like Coltrane or Mozart, say, into a training set and can recreate their sound, how will musicians and music connoisseurs respond?

I think that the closer we get, it becomes this uncanny valley idea. Some would say that things like music are sacrosanct and have to do with very base-level things about our humanity. It’s not hard to get into kind of a spiritual conversation about what music is as a language, and what it means, and how powerful it is, and how it transcends culture, race, and time. So the traditional musician might say, “That’s not possible. There’s so much nuance and feeling, and your life experience, and these kinds of things that go into the musical output.”

And the sort of engineer part of me goes, well, look at what Google has made. It’s a simple kind of MIDI-generation engine, where they’ve taken all of Bach’s works and it’s able to spit out [Bach-like] fugues. Because Bach wrote so many fugues, he’s a great example. Also, he’s the father of modern harmony. Musicologists listen to some of those Google Magenta fugues and can’t distinguish them from Bach’s original works. Again, this makes us question what constitutes an artist.

I’m both excited and have incredible trepidation about this space that we’re expanding into. Maybe the question I want to be asking is less “We can, but should we?” and more “How do we do this responsibly, because it’s happening?”

Right now, there are companies that are using something like Spotify or YouTube to train their models on artists who are alive, whose works are copyrighted and protected. But companies are allowed to take someone’s work and train models with it right now. Should we be doing that? Or should we be speaking to the artists themselves first? I believe there need to be protective mechanisms put in place for visual artists, for programmers, for musicians.