Simulation Tech Can Help Predict the Biggest Threats

The character of conflict between nations has fundamentally changed. Governments and militaries now fight on our behalf in the “gray zone,” where the boundaries between peace and war are blurred. They must navigate a complex web of ambiguous and deeply interconnected challenges, ranging from political destabilization and disinformation campaigns to cyberattacks, assassinations, proxy operations, election meddling, and perhaps even human-made pandemics. Add to this list the existential threat of climate change (and its geopolitical ramifications) and it is clear that what now constitutes a national security issue has broadened, with each crisis straining or degrading the fabric of national resilience.

Traditional analysis tools are poorly equipped to predict and respond to these blurred and intertwined threats. Instead, in 2022 governments and militaries will use sophisticated and credible real-life simulations, putting software at the heart of their decision-making and operating processes. The UK Ministry of Defence, for example, is developing what it calls a military Digital Backbone. This will incorporate cloud computing, modern networks, and a new transformative capability called a Single Synthetic Environment, or SSE.

This SSE will combine artificial intelligence, machine learning, computational modeling, and modern distributed systems with trusted data sets from multiple sources to support detailed, credible simulations of the real world. This data will be owned by critical institutions, but will also be sourced via an ecosystem of trusted partners, such as the Alan Turing Institute.

An SSE offers a multilayered simulation of a city, region, or country, including high-quality mapping and information about critical national infrastructure, such as power, water, transport networks, and telecommunications. This can then be overlaid with other information, such as smart-city data, information about military deployments, or data gleaned from social listening. From this, models can be constructed that give a rich, detailed picture of how a region or city might react to a given event: a disaster, epidemic, or cyberattack, or a combination of such events orchestrated by hostile states.
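
To make that layering idea concrete, here is a minimal, hypothetical sketch. The class names, fields, and sample data are invented for illustration; the MOD has not published its SSE design. Each layer maps a grid cell to attributes, and a snapshot fuses every layer's view of one cell into a single picture.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One thematic data layer, e.g. power, transport, or social data."""
    name: str
    features: dict = field(default_factory=dict)  # grid cell id -> attributes

@dataclass
class SyntheticEnvironment:
    """A stack of layers queried together over the same map grid."""
    layers: dict = field(default_factory=dict)

    def add_layer(self, layer: Layer) -> None:
        self.layers[layer.name] = layer

    def snapshot(self, cell: str) -> dict:
        # Fuse every layer's view of one map cell into one composite record.
        return {name: layer.features.get(cell, {})
                for name, layer in self.layers.items()}

env = SyntheticEnvironment()
env.add_layer(Layer("power", {"cell_42": {"substations": 3, "load_mw": 120}}))
env.add_layer(Layer("transport", {"cell_42": {"bridges": 2, "rail_lines": 1}}))
print(env.snapshot("cell_42"))
```

The point of the design is that new data sources become new layers rather than new systems, which is what earlier standalone defense synthetics lacked.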

Defense synthetics are not a new concept. However, previous solutions have been built in a standalone way that limits reuse, longevity, choice, and—crucially—the speed of insight needed to effectively counteract gray-zone threats.

National security officials will be able to use SSEs to identify threats early, understand them better, explore their response options, and analyze the likely consequences of different actions. They will even be able to use them to train, rehearse, and implement their plans. By running thousands of simulated futures, senior leaders will be able to grapple with complex questions, refining policies and plans in a virtual world before implementing them in the real one.
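
A toy illustration of what “thousands of simulated futures” means in practice, under a deliberately simplified assumption: each run scores one policy option against a random shock, and the mean across runs estimates how the policy performs. The policy parameters and scoring function here are invented for the sketch, not drawn from any real SSE.

```python
import random

def simulate_once(policy: dict, rng: random.Random) -> float:
    # Stand-in for a full simulation run: benefit of investment minus
    # a random adverse shock whose size scales with remaining exposure.
    benefit = policy["investment"] * 0.8
    shock = max(0.0, rng.gauss(0.0, policy["exposure"]))
    return benefit - shock

def evaluate(policy: dict, runs: int = 10_000, seed: int = 0) -> float:
    # Average outcome across thousands of simulated futures.
    rng = random.Random(seed)
    return sum(simulate_once(policy, rng) for _ in range(runs)) / runs

policies = {
    "harden_grid": {"investment": 10.0, "exposure": 2.0},
    "status_quo": {"investment": 4.0, "exposure": 5.0},
}
for name, policy in policies.items():
    print(f"{name}: mean outcome {evaluate(policy):.2f}")
```

A real SSE would replace `simulate_once` with the full layered model, but the decision loop, many randomized runs compared across candidate policies, is the same.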

One key question that will only grow in importance in 2022 is how countries can best secure their populations and supply chains against the dramatic weather events driven by climate change. SSEs will be able to help answer this by combining regional data on infrastructure, networks, roads, and population with meteorological models to see how and when events might unfold.

Who Killed the Robot Dog?

George Jetson did not want his family to adopt a dog. For the patriarch of the futuristic family in the 1960s cartoon The Jetsons, apartment living in the age of flying cars and cities in the sky was incompatible with an animal in need of regular walking and grooming, so he instead purchased an electronic dog called ‘Lectronimo, which required no feeding and even attacked burglars. In a contest between Astro—basically a future Scooby-Doo—and the robot dog, ‘Lectronimo performed all the classic dog tasks better, but with zero personality. The machine ended up a farcical hunk of equipment, a laugh line for both the Jetsons and the audience. Robots aren’t menaces; they’re silly.

That’s how we have imagined the robot dog, and animaloids in general, for much of the 20th century, according to Jay Telotte, professor emeritus in the School of Literature, Media, and Communication at Georgia Tech. Disney’s 1927 cartoon “The Mechanical Cow” imagines a robot bovine on wheels, with a broom for a tail, skating around delivering milk to animal friends. The worst that could happen is that your mechanical farm could go haywire, as in the 1930s cartoon “Technoracket,” but even then robot animals presented no real threat to their biological counterparts. “In fact, many of the ‘animaloid’ visions in movies and TV over the years have been in cartoons and comic narratives,” says Telotte, where “the laughter they generate is typically assuring us that they are not really dangerous.” The same goes for most of the countless robot dogs in popular culture over the years, from Dynomutt, Dog Wonder, to the series of cyborg dogs named K9 in Doctor Who.

Our nearly 100-year romance with the robot dog, however, has come to a dystopian end. It seems that every month Boston Dynamics releases another dancing video of its robot Spot, and the media responds with initial awe, then with trepidation, and finally with night-terror editorials about our future under the brutal rule of robot overlords. While Boston Dynamics explicitly prohibits its dogs from being turned into weapons, Ghost Robotics’ SPUR is currently being tested at various Air Force bases (with a lovely variety of potential weapon attachments), and Chinese company Xiaomi hopes to undercut Spot with its much cheaper and somehow more terrifying CyberDog. All of which is to say, the robot dog as it once was—a symbol of a fun, high-tech future full of incredible, social, artificial life—is dead. How did we get here? Who killed the robot dog?

The quadrupeds we commonly call robot dogs are descendants of a long line of mechanical life, historically called automata. One of the earliest examples of such autonomous machines was the “defecating duck,” created by French inventor Jacques de Vaucanson nearly 300 years ago, in 1739. This mechanical duck—which appeared to eat little bits of grain, pause, and then promptly excrete digested grain on the other end—and numerous other automata of the era were “philosophical experiments, attempts to discern which aspects of living creatures could be reproduced in machinery, and to what degree, and what such reproductions might reveal about their natural subjects,” writes Stanford historian Jessica Riskin.

The defecating duck, of course, was an extremely weird and gross fraud, preloaded with a poop-like substance. Still, determining which aspects of life were purely mechanical was a dominant intellectual pursuit of the time, one that even inspired the use of soft, lightweight materials such as leather in the construction of another kind of biological model: prosthetic hands, which had previously been built out of metal. Even today, biologists build robot models of their animal subjects to better understand how they move. As with many of its mechanical brethren, much of the robot dog’s life has been an exercise in re-creating the beloved pet, perhaps even subconsciously, to learn which aspects of living things are merely mechanical and which are organic. A robot dog must look and act sufficiently doglike, but what actually makes a dog a dog?

American manufacturing company Westinghouse debuted perhaps the first electrical dog, Sparko, at the 1940 New York World’s Fair. The 65-pound metallic pooch served as a companion to the company’s electric man, Elektro. (The term robot did not come into popular usage until around the mid-20th century.) What was most interesting about both of these promotional robots was their seeming autonomy: Light stimuli set off their action sequences, so effectively, in fact, that Sparko’s sensors apparently responded to the lights of a passing car, causing it to speed into oncoming traffic. Part of a campaign to help sell washing machines, Sparko and Elektro represented Westinghouse’s engineering prowess, but they were also among the first attempts to bring sci-fi into reality, and they laid the groundwork for an imagined future full of robotic companionship. The idea that robots can also be fun companions endured throughout the 20th century.

When AIBO—the archetypal robot dog created by Sony—first appeared in 1999, it was its artificial intelligence that made it extraordinary. Ads for the second-generation AIBO promised “intelligent entertainment” that mimicked free will with individual personalities. AIBO’s learning capabilities made each dog at least somewhat unique, and therefore easier to consider special and easier to love. It was the AI that made these dogs doglike: playful, inquisitive, occasionally disobedient. When I, 10 years old, walked into FAO Schwarz in New York in 2001 and watched the AIBOs on display head-butt little pink balls, something about these little creations tore at my heartstrings—despite the unbridgeable rift between me and the machine, I still wanted to try to get to know it, to understand it. I wanted to love a robot dog.

20 Years After 9/11, Surveillance Has Become a Way of Life

Two decades after 9/11, many simple acts that were once taken for granted now seem unfathomable: strolling with loved ones to the gate of their flight, meandering through a corporate plaza, using streets near government buildings. Our metropolises’ commons are now enclosed with steel and surveillance. Amid the perpetual pandemic of the past year and a half, cities have become even more walled off. With each new barrier erected, more of the city’s defining feature erodes: the freedom to move, wander, and even, as Walter Benjamin said, to “lose one’s way … as one loses one’s way in a forest.”

It’s harder to get lost amid constant tracking. It’s also harder to freely gather when the public spaces between home and work are stripped away. Known as third places, they are the connective tissue that stitches together the fabric of modern communities: the public park where teens can skateboard next to grandparents playing chess, the library where children can learn to read and unhoused individuals can find a digital lifeline. When third places vanish, as they have since the attacks, communities can falter.

Without these spaces holding us together, citizens live more like several separate societies operating in parallel. Just as social-media echo chambers have undermined our capacity for conversations online, the loss of third places can create physical echo chambers.

America has never been particularly adept at protecting our third places. For enslaved and Indigenous people, entering the town square alone could be a death sentence. Later, the racial terrorism of Jim Crow in the South denied Black Americans not only suffrage but also access to lunch counters, public transit, and even the literal water cooler. In northern cities like New York, Black Americans still faced arrest and violence for transgressing rigid but unseen segregation codes.

Throughout the 20th century, New York built an infrastructure of exclusion to keep our unhoused neighbors from sharing the city institutions that are, by law, every bit as much theirs to occupy. In 1999, then-mayor Rudy Giuliani warned unhoused New Yorkers that “streets do not exist in civilized societies for the purpose of people sleeping there.” His threats prompted thousands of NYPD officers to systematically target and push the unhoused out of sight, thus semi-privatizing the quintessential public space.

Despite these limitations, before 9/11 millions of New Yorkers could walk and wander through vast networks of modern commons—public parks, private plazas, paths, sidewalks, open lots, and community gardens, crossing paths with those whom they would never have otherwise met. These random encounters electrify our city and give us a unifying sense of self. That shared space began to slip away from us 20 years ago, and if we’re not careful, it’ll be lost forever.

In the aftermath of the attacks, we heard patriotic platitudes from those who promised to “defend democracy.” But in the ensuing years, their defense became democracy’s greatest threat, reconstructing cities as security spaces. The billions we spent to “defend our way of life” have proved to be its undoing, and it’s unclear if we’ll be able to turn back the trend.

In a country where the term “papers, please” was once synonymous with foreign authoritarianism, photo ID has become an ever-present requirement. Before 9/11, a New Yorker could spend their entire day traversing the city without any need for ID. Now it’s required to enter nearly any large building or institution.

While the ID check has become muscle memory for millions of privileged New Yorkers, it’s a source of uncertainty and fear for others. Millions of Americans lack a photo ID, and for millions more, using ID is a risk, a source of data for Immigration and Customs Enforcement.

According to Mizue Aizeki, interim executive director of the New York–based Immigrant Defense Project, “ID systems are particularly vulnerable to becoming tools of surveillance.” Aizeki added that “data collection and analysis has become increasingly central to ICE’s ability to identify and track immigrants,” noting that the Department of Homeland Security has dramatically increased its support for surveillance systems since its post-9/11 founding.

ICE has spent millions partnering with firms like Palantir, the controversial data aggregator that sells information services to governments at home and abroad. Vendors can collect digital sign-in lists from the buildings where we show our IDs, facial-recognition imagery from plazas, and data from countless other tools, tracking the areas around office buildings with an almost military level of scrutiny. According to Aizeki, “as mass policing of immigrants has escalated, advocates have been confronted by a rapidly expanding surveillance state.”

The All-Seeing Eyes of New York’s 15,000 Surveillance Cameras

A new video from human rights organization Amnesty International maps the locations of more than 15,000 cameras used by the New York Police Department, both for routine surveillance and in facial-recognition searches. A 3D model shows the 200-meter range of a camera, part of a sweeping dragnet capturing the unwitting movements of nearly half of the city’s residents and putting them at risk of misidentification. The group says it is the first to map the locations of that many cameras in the city.

Amnesty International and a team of volunteer researchers mapped cameras that can feed the NYPD’s much-criticized facial-recognition systems in three of the city’s five boroughs—Manhattan, Brooklyn, and the Bronx—finding 15,280 in total. Brooklyn is the most surveilled, with over 8,000 cameras.

A video by Amnesty International shows how New York City surveillance cameras work.

“You are never anonymous,” says Matt Mahmoudi, the AI researcher leading the project. The NYPD has used the cameras in almost 22,000 facial-recognition searches since 2017, according to NYPD documents obtained by the Surveillance Technology Oversight Project, a New York privacy group.

“Whether you’re attending a protest, walking to a particular neighborhood, or even just grocery shopping, your face can be tracked by facial-recognition technology using imagery from thousands of camera points across New York,” Mahmoudi says.

The cameras are often placed on top of buildings, on streetlights, and at intersections. The city itself owns thousands of cameras; in addition, private businesses and homeowners often grant police access to their own.

Police can compare faces captured by these cameras against criminal databases to search for potential suspects. Earlier this year, the NYPD was required to disclose the details of its facial-recognition systems for public comment. But those disclosures didn’t include the number or locations of cameras, how long data is retained, or with whom it is shared.

The Amnesty International team found that the cameras are often clustered in majority nonwhite neighborhoods. NYC’s most surveilled neighborhood is East New York, Brooklyn, where the group found 577 cameras in less than 2 square miles. More than 90 percent of East New York’s residents are nonwhite, according to city data.
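
A rough back-of-the-envelope check makes those figures vivid. The helper functions below are illustrative, not Amnesty International's methodology: 577 cameras in roughly 2 square miles works out to about 289 cameras per square mile, and summing each camera's nominal 200-meter-radius disk, ignoring overlap between cameras, would blanket about 28 square miles, many times the neighborhood's actual area.

```python
import math

SQ_METERS_PER_SQ_MILE = 2_589_988.11

def cameras_per_sq_mile(cameras: int, area_sq_miles: float) -> float:
    return cameras / area_sq_miles

def nominal_coverage_sq_miles(cameras: int, radius_m: float = 200.0) -> float:
    # Sum of each camera's 200-meter-radius disk, ignoring overlap,
    # so this is an upper bound on ground actually covered.
    return cameras * math.pi * radius_m**2 / SQ_METERS_PER_SQ_MILE

print(cameras_per_sq_mile(577, 2.0))       # ~289 cameras per square mile
print(nominal_coverage_sq_miles(577))      # ~28 sq miles of nominal reach
```

That the nominal reach exceeds the neighborhood's area more than tenfold is the point: coverage in East New York is not sparse but redundantly overlapping.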

Facial-recognition systems often perform less accurately on darker-skinned people than on lighter-skinned people. In 2016, Georgetown University researchers found that police departments across the country used facial recognition to identify nonwhite potential suspects more often than white ones.

In a statement, an NYPD spokesperson said the department never arrests anyone “solely on the basis of a facial-recognition match,” and only uses the tool to investigate “a suspect or suspects related to the investigation of a particular crime.”
 
“Where images are captured at or near a specific crime, comparison of the image of a suspect can be made against a database that includes only mug shots legally held in law enforcement records based on prior arrests,” the statement reads.

Amnesty International is releasing the map and accompanying videos as part of its #BantheScan campaign urging city officials to ban police use of the tool ahead of the city’s mayoral primary later this month. In May, Vice asked mayoral candidates if they’d support a ban on facial recognition. While most didn’t respond to the inquiry, candidate Dianne Morales told the publication she supported a ban, while candidates Shaun Donovan and Andrew Yang suggested auditing for disparate impact before deciding on any regulation.


Dumbed-Down AI Rhetoric Harms Everyone

When the European Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and the responses to it underscore democracies’ confusing rhetoric on AI.

Over the past decade, high-level stated goals about regulating AI have often conflicted with the specifics of regulatory proposals, and neither has clearly articulated what desired end-states should look like. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even if it varies from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.

The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

This is certainly better than the approach of many national governments, especially the US, which continue to stagnate on rules of the road for the companies, government agencies, and other institutions that deploy AI. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or for operating buses in Málaga, Spain.

But to cast the EU’s regulation as “leading” simply because it’s first only masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges facing democratic AI strategy.

Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists about the harms facial recognition can inflict on marginalized communities and the grave risks of mass surveillance.

The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.

The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander of the nonprofit organization European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that on the surface appears to restrict facial-recognition use but in fact has many broad carve-outs.