Russia’s Killer Drone in Ukraine Raises Fears About AI in Warfare

But drones have also highlighted a key vulnerability in Russia’s invasion, which is now entering its third week. Ukrainian forces have used a remotely operated Turkish-made drone called the TB2 to great effect against Russian forces, shooting guided missiles at Russian missile launchers and vehicles. The paraglider-sized drone, which relies on a small crew on the ground, is slow and cannot defend itself, but it has proven effective against a surprisingly weak Russian air campaign.

This week, the Biden administration also said it would supply Ukraine with a small US-made loitering munition called Switchblade. This single-use drone, which comes equipped with explosives, cameras, and guidance systems, has some autonomous capabilities but relies on a person to make decisions about which targets to engage.

But Bendett questions whether Russia would unleash an AI-powered drone with advanced autonomy in such a chaotic environment, especially given how poorly coordinated the country’s overall air strategy seems to be. “The Russian military and its capabilities are now being severely tested in Ukraine,” he says. “If the [human] ground forces with all their sophisticated information gathering can’t really make sense of what’s happening on the ground, then how could a drone?”

Several other military experts question the purported capabilities of the KUB-BLA.

“The companies that produce these loitering drones talk up their autonomous features, but often the autonomy involves flight corrections and maneuvering to hit a target identified by a human operator, not autonomy in the way the international community would define an autonomous weapon,” says Michael Horowitz, a professor at the University of Pennsylvania, who keeps track of military technology.

Despite such uncertainties, the issue of AI in weapons systems has become contentious of late because the technology is rapidly finding its way into many military systems, for example to help interpret input from sensors. The US military maintains that a person should always make lethal decisions, but the US also opposes a ban on the development of such systems.

To some, the appearance of the KUB-BLA shows that we are on a slippery slope toward increasing use of AI in weapons that will eventually remove humans from the equation.

“We’ll see even more proliferation of such lethal autonomous weapons unless more Western nations start supporting a ban on them,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, an organization that campaigns against such weapons.

Others, though, believe that the situation unfolding in Ukraine shows how difficult it will really be to use advanced AI and autonomy.

William Alberque, Director of Strategy, Technology, and Arms Control at the International Institute for Strategic Studies, says that given the success Ukraine has had with the TB2, the Russians are not ready to deploy tech that is more sophisticated. “We’re seeing Russian morons getting owned by a system that they should not be vulnerable to.”


Simulation Tech Can Help Predict the Biggest Threats

The character of conflict between nations has fundamentally changed. Governments and militaries now fight on our behalf in the “gray zone,” where the boundaries between peace and war are blurred. They must navigate a complex web of ambiguous and deeply interconnected challenges, ranging from political destabilization and disinformation campaigns to cyberattacks, assassinations, proxy operations, election meddling, and perhaps even human-made pandemics. Add to this list the existential threat of climate change (and its geopolitical ramifications) and it is clear that the definition of what constitutes a national security issue has broadened, with each crisis straining or degrading the fabric of national resilience.

Traditional analysis tools are poorly equipped to predict and respond to these blurred and intertwined threats. Instead, in 2022 governments and militaries will use sophisticated and credible real-life simulations, putting software at the heart of their decision-making and operating processes. The UK Ministry of Defence, for example, is developing what it calls a military Digital Backbone. This will incorporate cloud computing, modern networks, and a new transformative capability called a Single Synthetic Environment, or SSE.

This SSE will combine artificial intelligence, machine learning, computational modeling, and modern distributed systems with trusted data sets from multiple sources to support detailed, credible simulations of the real world. This data will be owned by critical institutions, but will also be sourced via an ecosystem of trusted partners, such as the Alan Turing Institute.

An SSE offers a multilayered simulation of a city, region, or country, including high-quality mapping and information about critical national infrastructure, such as power, water, transport networks, and telecommunications. This can then be overlaid with other information, such as smart-city data, information about military deployments, or data gleaned from social listening. From this, models can be constructed that give a rich, detailed picture of how a region or city might react to a given event: a disaster, an epidemic, a cyberattack, or a combination of such events organized by state adversaries.

Defense synthetics are not a new concept. However, previous solutions have been built in a standalone way that limits reuse, longevity, choice, and—crucially—the speed of insight needed to effectively counteract gray-zone threats.

National security officials will be able to use SSEs to identify threats early, understand them better, explore their response options, and analyze the likely consequences of different actions. They will even be able to use them to train, rehearse, and implement their plans. By running thousands of simulated futures, senior leaders will be able to grapple with complex questions, refining policies and complex plans in a virtual world before implementing them in the real one.

One key question that will only grow in importance in 2022 is how countries can best secure their populations and supply chains against the dramatic weather events driven by climate change. SSEs will help answer this by combining regional infrastructure, network, road, and population data with meteorological models to see how and when such events might unfold.
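
The “thousands of simulated futures” idea can be sketched as a plain Monte Carlo loop: perturb an uncertain input many times, count the outcomes, and compare policy options before committing to one in the real world. Everything below, from the storm model to the `grid_hardening` parameter and its numbers, is invented for illustration and bears no relation to any real SSE.

```python
import random

def simulate_storm(severity, grid_hardening):
    """One simulated future: does a storm of the given severity cause a
    regional power outage, given how hardened the grid is?"""
    stress = severity * random.uniform(0.5, 1.5)  # uncertain storm impact
    return stress > grid_hardening                # True means an outage

def outage_probability(severity, grid_hardening, runs=10_000):
    """Estimate outage risk by running many simulated futures."""
    outages = sum(simulate_storm(severity, grid_hardening) for _ in range(runs))
    return outages / runs

random.seed(0)
# Compare two hypothetical grid-hardening policies before acting on one.
for hardening in (1.0, 1.4):
    p = outage_probability(severity=1.0, grid_hardening=hardening)
    print(f"hardening={hardening}: estimated outage risk {p:.1%}")
```

The point of the toy is the workflow, not the physics: a leader weighing the two hardening levels sees the estimated risk gap before any real-world spending.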

The History of Predicting the Future

The future has a history. The good news is that it’s one from which we can learn; the bad news is that we very rarely do. That’s because the clearest lesson from the history of the future is that knowing the future isn’t necessarily very useful. But that has yet to stop humans from trying.

Take Peter Turchin’s famed prediction for 2020. In 2010 he used a quantitative analysis of history, known as cliodynamics, to predict that the West would experience political chaos a decade later. Unfortunately, no one was able to act on that prophecy to prevent damage to US democracy. And of course, if they had, Turchin’s prediction would have been relegated to the ranks of failed futures. This situation is not an aberration.

Rulers from Mesopotamia to Manhattan have sought knowledge of the future in order to obtain strategic advantages—but time and again, they have failed to interpret it correctly, or they have failed to grasp either the political motives or the speculative limitations of those who proffer it. More often than not, they have also chosen to ignore futures that force them to face uncomfortable truths. Even the technological innovations of the 21st century have failed to change these basic problems—the results of computer programs are, after all, only as accurate as their data input.

There is an assumption that the more scientific the approach to predictions, the more accurate forecasts will be. But this belief causes more problems than it solves, not least because it often either ignores or excludes the lived diversity of human experience. Despite the promise of more accurate and intelligent technology, there is little reason to think the increased deployment of AI in forecasting will make prognostication any more useful than it has been throughout human history.

People have long tried to find out more about the shape of things to come. These efforts, while aimed at the same goal, have differed across time and space in several significant ways, with the most obvious being methodology—that is, how predictions were made and interpreted. Since the earliest civilizations, the most important distinction in this practice has been between individuals who have an intrinsic gift or ability to predict the future, and systems that provide rules for calculating futures.

The predictions of oracles, shamans, and prophets, for example, depended on the capacity of these individuals to access other planes of being and receive divine inspiration. Strategies of divination such as astrology, palmistry, numerology, and Tarot, however, depended on the practitioner’s mastery of a complex, rule-based (and sometimes highly mathematical) system, and their ability to interpret and apply it to particular cases. Interpreting dreams or the practice of necromancy might lie somewhere between these two extremes, depending partly on innate ability, partly on acquired expertise. And there are plenty of examples, past and present, that involve both strategies for predicting the future: any internet search on “dream interpretation” or “horoscope calculation” will throw up millions of hits.

In the last century, technology legitimized the latter approach, as developments in IT (predicted, at least to some extent, by Moore’s law) provided more powerful tools and systems for forecasting. In the 1940s, the analog computer MONIAC used actual tanks and pipes of colored water to model the UK economy. By the 1970s, the Club of Rome could turn to the World3 computer simulation to model the flow of energy through human and natural systems via key variables such as industrialization, environmental loss, and population growth. Its report, Limits to Growth, became a best seller, despite the sustained criticism it received for the assumptions at the core of the model and the quality of the data that was fed into it.
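
World3 was a system-dynamics model: a set of coupled stocks and flows stepped forward through time. The toy below captures only that general shape (two invented stocks with invented coefficients, integrated with simple Euler steps) and is in no way a reproduction of World3’s actual equations.

```python
def run_model(population, resources, years=100, dt=1.0):
    """Integrate a toy system-dynamics model: population growth
    constrained by a slowly depleting resource stock."""
    history = []
    for _ in range(int(years / dt)):
        # Growth slows as the resource stock shrinks (invented coefficients).
        growth = 0.02 * population * (resources / (resources + 100.0))
        depletion = 0.001 * population
        population += growth * dt
        resources = max(resources - depletion * dt, 0.0)
        history.append((population, resources))
    return history

trajectory = run_model(population=100.0, resources=1000.0)
final_population, final_resources = trajectory[-1]
```

Even a sketch this crude shows why World3 drew criticism: the trajectory is entirely determined by the chosen coefficients, so the output is only as credible as those assumptions.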

At the same time, rather than depending on technological advances, other forecasters have turned to crowdsourcing predictions of the future. Polling public and private opinion, for example, depends on something very simple: asking people what they intend to do or what they think will happen. It then requires careful interpretation, whether quantitative (like polls of voter intention) or qualitative (like the RAND Corporation’s Delphi technique). The latter strategy harnesses the wisdom of highly specific crowds: a panel of experts discussing a given topic, the thinking goes, is likely to be more accurate than an individual making predictions alone.
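
The Delphi idea can be caricatured in a few lines: experts submit estimates, see the panel’s median, and revise toward it over several rounds, narrowing the spread. The “pull 30 percent toward the median” revision rule below is an invented stand-in for the real, discussion-driven process.

```python
import statistics

def delphi(estimates, rounds=3, pull=0.3):
    """Toy Delphi aggregation: each round, every expert moves a fraction
    `pull` of the way toward the current panel median."""
    estimates = list(estimates)
    for _ in range(rounds):
        consensus = statistics.median(estimates)
        estimates = [e + pull * (consensus - e) for e in estimates]
    return statistics.median(estimates), estimates

# Five hypothetical expert estimates of, say, an event's probability (%).
consensus, revised = delphi([10, 40, 55, 60, 90])
```

Note what the sketch makes visible: the outliers converge toward the middle, but the final answer is still anchored to the panel’s initial median, which is exactly why the composition of the panel matters so much.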

Who Killed the Robot Dog?

George Jetson did not want his family to adopt a dog. For the patriarch of the futuristic family in the 1960s cartoon The Jetsons, apartment living in the age of flying cars and cities in the sky was incompatible with an animal in need of regular walking and grooming, so he instead purchased an electronic dog called ‘Lectronimo, which required no feeding and even attacked burglars. In a contest between Astro—basically future Scooby-Doo—and the robot dog, ‘Lectronimo performed all classic dog tasks better, but with zero personality. The machine ended up a farcical hunk of equipment, a laugh line for both the Jetsons and the audience. Robots aren’t menaces; they’re silly.

That’s how we have imagined the robot dog, and animaloids in general, for much of the 20th century, according to Jay Telotte, professor emeritus in the School of Literature, Media, and Communication at Georgia Tech. Disney’s 1927 cartoon “The Mechanical Cow” imagines a robot bovine on wheels with a broom for a tail skating around delivering milk to animal friends. The worst that could happen is your mechanical farm could go haywire, as in the 1930s cartoon “Technoracket,” but even then robot animals presented no real threat to their biological counterparts. “In fact, many of the ‘animaloid’ visions in movies and TV over the years have been in cartoons and comic narratives,” says Telotte, where “the laughter they generate is typically assuring us that they are not really dangerous.” The same goes for most of the countless robot dogs in popular culture over the years, from Dynomutt, Dog Wonder, to the series of cyborg dogs named K9 in Doctor Who.

Our nearly 100-year romance with the robot dog, however, has come to a dystopian end. It seems that every month Boston Dynamics releases another dancing video of its robot Spot, and the media responds first with awe, then with trepidation, and finally with night-terror editorials about our future under the brutal rule of robot overlords. While Boston Dynamics explicitly prohibits its dogs from being turned into weapons, Ghost Robotics’ SPUR is currently being tested at various Air Force bases (with a lovely variety of potential weapon attachments), and Chinese company Xiaomi hopes to undercut Spot with its much cheaper and somehow more terrifying CyberDog. All of which is to say, the robot dog as it once was—a symbol of a fun, high-tech future full of incredible, social, artificial life—is dead. How did we get here? Who killed the robot dog?

The quadrupeds we commonly call robot dogs are descendants of a long line of mechanical life, historically called automata. One of the earliest examples of such autonomous machines was the “defecating duck,” created by French inventor Jacques de Vaucanson nearly 300 years ago, in 1739. This mechanical duck—which appeared to eat little bits of grain, pause, and then promptly excrete digested grain on the other end—and numerous other automata of the era were “philosophical experiments, attempts to discern which aspects of living creatures could be reproduced in machinery, and to what degree, and what such reproductions might reveal about their natural subjects,” writes Stanford historian Jessica Riskin.

The defecating duck, of course, was an extremely weird and gross fraud, preloaded with a poop-like substance. But still, the question of which aspects of life were purely mechanical was a dominant intellectual preoccupation of the time, one that even inspired the use of soft, lightweight materials such as leather in the construction of another kind of biological model: prosthetic hands, which had previously been built out of metal. Even today, biologists build robot models of their animal subjects to better understand how they move. As with many of its mechanical brethren, much of the robot dog’s life has been an exercise in re-creating the beloved pet, perhaps even subconsciously, to learn which aspects of living things are merely mechanical and which are organic. A robot dog must look and act sufficiently doglike, but what actually makes a dog a dog?

American manufacturing company Westinghouse debuted perhaps the first electrical dog, Sparko, at the 1940 New York World’s Fair. The 65-pound metallic pooch served as a companion to the company’s electric man, Elektro. (The term robot did not come into popular usage until around the mid-20th century.) What was most interesting about both of these promotional robots was their seeming autonomy: Light stimuli set off their action sequences—so effectively, in fact, that Sparko’s sensors apparently responded to the lights of a passing car, sending it speeding into oncoming traffic. Part of a campaign to help sell washing machines, Sparko and Elektro represented Westinghouse’s engineering prowess, but they were also among the first attempts to bring sci-fi into reality, and they laid the groundwork for an imagined future full of robotic companionship. The idea that robots can also be fun companions endured throughout the 20th century.

When AIBO—the archetypal robot dog created by Sony—first appeared in 1999, it was its artificial intelligence that made it extraordinary. Ads for the second-generation AIBO promised “intelligent entertainment” that mimicked free will with individual personalities. AIBO’s learning capabilities made each dog at least somewhat unique, making it easier to consider special and easier to love. It was their AI that made them doglike: playful, inquisitive, occasionally disobedient. When I, 10 years old, walked into FAO Schwarz in New York in 2001 and watched the AIBOs on display head-butt little pink balls, something about these little creations tore at my heartstrings—despite the unbridgeable rift between me and the machine, I still wanted to try to get to know it, to understand it. I wanted to love a robot dog.