The Unnerving Rise of Video Games that Spy on You

Tech conglomerate Tencent caused a stir last year with the announcement that it would comply with China’s directive to incorporate facial recognition technology into its games in the country. The move was in line with China’s strict gaming regulation policies, which impose limits on how much time minors can spend playing video games—an effort to curb addictive behavior, since gaming is labeled by the state as “spiritual opium.”

The state’s use of biometric data to police its population is, of course, invasive, and especially undermines the privacy of underage users—but Tencent is not the only video game company to track its players, nor is this recent case an altogether new phenomenon. All over the world, video games, one of the most widely adopted digital media forms, are installing networks of surveillance and control.

In basic terms, video games are systems that translate physical inputs—such as hand movements or gestures—into machine-readable outputs. The user, by acting in ways that comply with the rules of the game and the specifications of the hardware, is parsed as data by the video game. Writing almost a decade ago, the sociologists Jennifer R. Whitson and Bart Simon argued that games are increasingly understood as systems that readily reduce human action to knowable and predictable formats.
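To make that translation concrete, here is a minimal sketch in Python of how a game might reduce a raw button press to a structured, timestamped record ready to be logged and analyzed. The names (`InputEvent`, `parse_input`, the key map) are hypothetical, not any real engine's API:

```python
import json
import time
from dataclasses import asdict, dataclass

# Hypothetical illustration: reducing a physical input (a key press)
# into a structured, machine-readable record.
@dataclass
class InputEvent:
    player_id: str    # who acted
    action: str       # the in-game verb the input mapped to
    timestamp: float  # when, in seconds since the epoch

def parse_input(player_id: str, raw_key: str) -> InputEvent:
    """Translate a raw hardware input into a game-defined action."""
    keymap = {"w": "move_forward", "e": "interact", "space": "jump"}
    return InputEvent(player_id, keymap.get(raw_key, "unknown"), time.time())

event = parse_input("player-123", "space")
print(json.dumps(asdict(event)))  # the gesture is now loggable, analyzable data
```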

Video games, then, are a natural medium for tracking, and researchers have long argued that large data sets about players’ in-game activities are a rich resource in understanding player psychology and cognition. In one study from 2012, Nick Yee, Nicolas Ducheneaut, and Les Nelson scraped player activity data logged on the World of Warcraft Armory website—essentially a database that records all the things a player’s character has done in the game (how many of a certain monster I’ve killed, how many times I’ve died, how many fish I’ve caught, and so on).

The researchers used this data, in combination with responses to a personality survey, to infer player characteristics. The paper suggests, for example, that respondents who scored as more conscientious also tended to spend more time on repetitive and dull in-game tasks, such as fishing. Conversely, players whose characters more often fell to their deaths from high places scored as less conscientious.
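As a toy illustration of the kind of analysis involved, the sketch below computes a Pearson correlation between conscientiousness scores and hours spent fishing in-game; the numbers are invented for illustration, not drawn from the study:

```python
from statistics import mean, pstdev

# Invented numbers for illustration, not the study's data.
conscientiousness = [3.1, 4.2, 2.5, 4.8, 3.9, 2.2]  # survey scores
hours_fishing     = [1.0, 6.5, 0.5, 8.0, 4.0, 0.2]  # in-game hours spent fishing

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (pstdev(xs) * pstdev(ys))

print(f"r = {pearson_r(conscientiousness, hours_fishing):.2f}")  # near 1: strong positive correlation
```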

Correlating personality with quantitative gameplay data is not without problems. The relationship between personality, identity, and video game activity is complex and idiosyncratic; research suggests, for instance, that gamer identity intersects with gender, racial, and sexual identity. There has also been broader pushback against claims that Big Data produces new knowledge rooted in correlation. Despite this, games companies increasingly recognize the value of big data sets for understanding what a player likes, how they play, what they play, what they’ll likely spend money on (in freemium games), how and when to offer the right content, and how to solicit the right kinds of player feelings.

While there are no firm numbers on how many video game companies surveil their players in-game (although, as a recent article suggests, large publishers and developers like Epic, EA, and Activision explicitly state in their license agreements that they capture user data), a new industry has sprung up of firms selling middleware “data analytics” tools to game developers. These tools promise to make users more amenable to continued consumption through data analysis at scale. Such analytics, once available only to the largest video game studios, which could hire data scientists to capture, clean, and analyze the data and software engineers to build in-house analytics tools, are now commonplace across the industry, pitched by companies like Unity, GameAnalytics, and Amazon Web Services as “accessible” tools that provide a competitive edge in a crowded marketplace. (Although, as a recent study shows, how “accessible” they really are is questionable, since they still demand technical expertise and time to implement.)

As demand for data-driven insight has grown, so has the range of services: dozens of tools have appeared in the past several years alone, each offering game developers a different form of insight. One, essentially Uber for playtesting, lets companies outsource quality assurance testing and provides data-driven analysis of the results. Another claims to use AI to understand player value and maximize retention (and spending, with a particular focus on high spenders).
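What such middleware does is conceptually simple, even if the vendors’ products are not. Below is a minimal sketch of a telemetry client of the kind described, with a made-up `AnalyticsClient` class and hypothetical event names rather than any vendor’s actual SDK; it batches gameplay events and, in a real product, would ship each batch to a collection endpoint:

```python
import json
import time

# A minimal sketch of the kind of telemetry middleware described above.
# The AnalyticsClient class and event names are made up for illustration;
# real tools ship their own SDKs and collection endpoints.
class AnalyticsClient:
    def __init__(self, batch_size: int = 3):
        self.batch_size = batch_size
        self.buffer = []

    def track(self, player_id: str, event: str, **properties):
        """Queue one gameplay event; flush when the batch is full."""
        self.buffer.append({
            "player_id": player_id,
            "event": event,
            "properties": properties,
            "ts": time.time(),
        })
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # A real client would POST this batch to the vendor's servers.
        print(json.dumps(self.buffer, indent=2))
        self.buffer.clear()

client = AnalyticsClient()
client.track("player-123", "level_start", level=4)
client.track("player-123", "item_purchased", sku="gold_pack_small", usd=4.99)
client.track("player-123", "level_fail", level=4, seconds_played=212)
```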

20 Years After 9/11, Surveillance Has Become a Way of Life

Two decades after 9/11, many simple acts that were once taken for granted now seem unfathomable: strolling with loved ones to the gate of their flight, meandering through a corporate plaza, using streets near government buildings. Our metropolises’ commons are now enclosed with steel and surveillance. Amid the perpetual pandemic of the past year and a half, cities have become even more walled off. With each new barrier erected, more of the city’s defining feature erodes: the freedom to move, wander, and even, as Walter Benjamin said, to “lose one’s way … as one loses one’s way in a forest.”

It’s harder to get lost amid constant tracking. It’s also harder to freely gather when the public spaces between home and work are stripped away. Known as third places, they are the connective tissue that stitches together the fabric of modern communities: the public park where teens can skateboard next to grandparents playing chess, the library where children can learn to read and unhoused individuals can find a digital lifeline. When third places vanish, as they have since the attacks, communities can falter.

Without these spaces holding us together, citizens live more like several separate societies operating in parallel. Just as social-media echo chambers have undermined our capacity for conversations online, the loss of third places can create physical echo chambers.

America has never been particularly adept at protecting our third places. For enslaved and Indigenous people, entering the town square alone could be a death sentence. Later, the racial terrorism of Jim Crow in the South denied Black Americans not only suffrage but also access to lunch counters, public transit, and even the literal water cooler. In northern cities like New York, Black Americans still faced arrest and violence for transgressing rigid but unseen segregation codes.

Throughout the 20th century, New York built an infrastructure of exclusion to keep our unhoused neighbors from sharing the city institutions that are, by law, every bit as much theirs to occupy. In 1999, then-mayor Rudy Giuliani warned unhoused New Yorkers that “streets do not exist in civilized societies for the purpose of people sleeping there.” His threats prompted thousands of NYPD officers to systematically target and push the unhoused out of sight, semi-privatizing the quintessential public place.

Despite these limitations, before 9/11 millions of New Yorkers could walk and wander through vast networks of modern commons—public parks, private plazas, paths, sidewalks, open lots, and community gardens, crossing paths with those whom they would never have otherwise met. These random encounters electrify our city and give us a unifying sense of self. That shared space began to slip away from us 20 years ago, and if we’re not careful, it’ll be lost forever.

In the aftermath of the attacks, we heard patriotic platitudes from those who promised to “defend democracy.” But in the ensuing years, their defense became democracy’s greatest threat, reconstructing cities as security spaces. The billions we spent to “defend our way of life” have proved to be its undoing, and it’s unclear if we’ll be able to turn back the trend.

In a country where the phrase “papers, please” was once synonymous with foreign authoritarianism, photo ID has become an ever-present requirement. Before 9/11, a New Yorker could spend their entire day traversing the city without any need for ID. Now it’s required to enter nearly any large building or institution.

While the ID check has become muscle memory for millions of privileged New Yorkers, it’s a source of uncertainty and fear for others. Millions of Americans lack a photo ID, and for millions more, using ID is a risk, a source of data for Immigration and Customs Enforcement.

According to Mizue Aizeki, interim executive director of the New York–based Immigrant Defense Project, “ID systems are particularly vulnerable to becoming tools of surveillance.” Aizeki added that “data collection and analysis has become increasingly central to ICE’s ability to identify and track immigrants,” noting that the Department of Homeland Security has dramatically increased its support for surveillance systems since its post-9/11 founding.

ICE has spent millions partnering with firms like Palantir, the controversial data aggregator that sells information services to governments at home and abroad. Vendors can collect digital sign-in lists from the buildings where we show our IDs, imagery from facial-recognition cameras in plazas, and data from countless other tools that monitor the areas around office buildings with an almost military level of scrutiny. According to Aizeki, “as mass policing of immigrants has escalated, advocates have been confronted by a rapidly expanding surveillance state.”

The All-Seeing Eyes of New York’s 15,000 Surveillance Cameras

A new video from human rights organization Amnesty International maps the locations of more than 15,000 cameras used by the New York Police Department, both for routine surveillance and in facial-recognition searches. A 3D model shows each camera’s 200-meter range, part of a sweeping dragnet that captures the unwitting movements of nearly half the city’s residents and puts them at risk of misidentification. The group says it is the first to map the locations of that many cameras in the city.

Amnesty International and a team of volunteer researchers mapped cameras that can feed NYPD’s much criticized facial-recognition systems in three of the city’s five boroughs—Manhattan, Brooklyn, and the Bronx—finding 15,280 in total. Brooklyn is the most surveilled, with over 8,000 cameras.
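To get a feel for what a 200-meter range means in practice, here is a small sketch that uses the haversine formula to test whether a pedestrian’s position falls within range of any mapped camera; the coordinates are invented for illustration, not taken from Amnesty’s dataset:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000
CAMERA_RANGE_M = 200  # the range the 3D model assigns each camera

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def in_camera_range(point, cameras):
    """True if the point lies within CAMERA_RANGE_M of any mapped camera."""
    return any(haversine_m(*point, *cam) <= CAMERA_RANGE_M for cam in cameras)

# Invented coordinates for illustration, not Amnesty's data.
cameras = [(40.6694, -73.8803), (40.6782, -73.9442)]
print(in_camera_range((40.6690, -73.8800), cameras))  # True: roughly 50 m from the first camera
```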

A video by Amnesty International shows how New York City surveillance cameras work.

“You are never anonymous,” says Matt Mahmoudi, the AI researcher leading the project. The NYPD has used the cameras in almost 22,000 facial-recognition searches since 2017, according to NYPD documents obtained by the Surveillance Technology Oversight Project, a New York privacy group.

“Whether you’re attending a protest, walking to a particular neighborhood, or even just grocery shopping, your face can be tracked by facial-recognition technology using imagery from thousands of camera points across New York,” Mahmoudi says.

The cameras are often placed on top of buildings, on street lights, and at intersections. The city itself owns thousands of cameras; in addition, private businesses and homeowners often grant access to police.

Police can compare faces captured by these cameras against criminal databases in search of potential suspects. Earlier this year, the NYPD was required to disclose the details of its facial-recognition systems for public comment. But those disclosures didn’t include the number or locations of cameras, or any details of how long data is retained or with whom it is shared.
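Schematically, such a search ranks database records by how similar their face “embeddings” are to the embedding of a probe image, with a similarity threshold deciding what counts as a candidate match. The sketch below illustrates the idea with random stand-in vectors rather than a real face-recognition model; in practice, where that threshold is set is one source of the misidentification risk described above:

```python
import numpy as np

# Schematic illustration with random stand-in vectors; a real system would
# use embeddings produced by a trained face-recognition model.
rng = np.random.default_rng(0)
gallery = {f"record-{i}": rng.normal(size=128) for i in range(1000)}  # enrolled images
probe = rng.normal(size=128)  # embedding of the image captured by a camera

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank every gallery record by similarity to the probe; the threshold
# decides what counts as a candidate "match."
THRESHOLD = 0.5
scores = sorted(((cosine_similarity(probe, v), k) for k, v in gallery.items()), reverse=True)
candidates = [(k, round(s, 3)) for s, k in scores if s >= THRESHOLD][:5]
print(candidates or "no candidates above threshold")
```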

The Amnesty International team found that the cameras are often clustered in majority nonwhite neighborhoods. NYC’s most surveilled neighborhood is East New York, Brooklyn, where the group found 577 cameras in less than 2 square miles. More than 90 percent of East New York’s residents are nonwhite, according to city data.

Facial-recognition systems often perform less accurately on darker-skinned people than on lighter-skinned people. In 2016, Georgetown University researchers found that police departments across the country disproportionately used facial recognition to identify nonwhite potential suspects.

In a statement, an NYPD spokesperson said the department never arrests anyone “solely on the basis of a facial-recognition match,” and only uses the tool to investigate “a suspect or suspects related to the investigation of a particular crime.”
 
“Where images are captured at or near a specific crime, comparison of the image of a suspect can be made against a database that includes only mug shots legally held in law enforcement records based on prior arrests,” the statement reads.

Amnesty International is releasing the map and accompanying videos as part of its #BantheScan campaign urging city officials to ban police use of the tool ahead of the city’s mayoral primary later this month. In May, Vice asked mayoral candidates if they’d support a ban on facial recognition. While most didn’t respond to the inquiry, candidate Dianne Morales told the publication she supported a ban, while candidates Shaun Donovan and Andrew Yang suggested auditing for disparate impact before deciding on any regulation.


Dumbed-Down AI Rhetoric Harms Everyone

When the European Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and the responses to it underscore democracies’ confusing rhetoric on AI.

Over the past decade, high-level stated goals for regulating AI have often conflicted with the specifics of regulatory proposals, and neither has clearly articulated what the desired end state should look like. Coherent and meaningful progress toward democratic AI regulation that is internationally attractive, even if it varies from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.

The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

This is certainly better than the approach of many national governments, especially the US, which continue to stagnate on rules of the road for the companies, government agencies, and other institutions deploying AI. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.

But to cast the EU’s regulation as “leading” simply because it’s first only masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges at hand with democratic AI strategy.

Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists on the harms facial recognition can inflict on marginalized communities and grave mass-surveillance risks.

The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.

The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander of the nonprofit European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that appears on the surface to restrict facial-recognition use but in fact contains many broad carve-outs.