Russia’s Killer Drone in Ukraine Raises Fears About AI in Warfare


But drones have also highlighted a key vulnerability in Russia’s invasion, which is now entering its third week. Ukrainian forces have used a remotely operated Turkish-made drone called the TB2 to great effect against Russian forces, shooting guided missiles at Russian missile launchers and vehicles. The paraglider-sized drone, which relies on a small crew on the ground, is slow and cannot defend itself, but it has proven effective against a surprisingly weak Russian air campaign.

This week, the Biden administration also said it would supply Ukraine with a small US-made loitering munition called Switchblade. This single-use drone, which comes equipped with explosives, cameras, and guidance systems, has some autonomous capabilities but relies on a person to make decisions about which targets to engage.

But Bendett questions whether Russia would unleash an AI-powered drone with advanced autonomy in such a chaotic environment, especially given how poorly coordinated the country’s overall air strategy seems to be. “The Russian military and its capabilities are now being severely tested in Ukraine,” he says. “If the [human] ground forces with all their sophisticated information gathering can’t really make sense of what’s happening on the ground, then how could a drone?”

Several other military experts question the purported capabilities of the KUB-BLA.

“The companies that produce these loitering drones talk up their autonomous features, but often the autonomy involves flight corrections and maneuvering to hit a target identified by a human operator, not autonomy in the way the international community would define an autonomous weapon,” says Michael Horowitz, a professor at the University of Pennsylvania, who keeps track of military technology.

Despite such uncertainties, the issue of AI in weapons systems has become contentious of late because the technology is rapidly finding its way into many military systems, for example to help interpret input from sensors. The US military maintains that a person should always make lethal decisions, but the US also opposes a ban on the development of such systems.

To some, the appearance of the KUB-BLA shows that we are on a slippery slope toward increasing use of AI in weapons that will eventually remove humans from the equation.

“We’ll see even more proliferation of such lethal autonomous weapons unless more Western nations start supporting a ban on them,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, an organization that campaigns against such weapons.

Others, though, believe that the situation unfolding in Ukraine shows how difficult it will really be to use advanced AI and autonomy.

William Alberque, director of strategy, technology, and arms control at the International Institute for Strategic Studies, says that given the success Ukraine has had with the TB2, the Russians are not ready to deploy tech that is more sophisticated. “We’re seeing Russian morons getting owned by a system that they should not be vulnerable to.”



Google’s New Tech Can Read Your Body Language—Without Cameras


What if your computer decided not to blare out a notification jingle because it noticed you weren’t sitting at your desk? What if your TV saw you leave the couch to answer the front door and paused Netflix automatically, then resumed playback when you sat back down? What if our computers took more social cues from our movements and learned to be more considerate companions?

It sounds futuristic and perhaps more than a little invasive—a computer watching your every move? But it feels less creepy once you learn that these technologies don’t have to rely on a camera to see where you are and what you’re doing. Instead, they use radar. Google’s Advanced Technology and Products division—better known as ATAP, the department behind oddball projects such as a touch-sensitive denim jacket—has spent the past year exploring how computers can use radar to understand our needs or intentions and then react to us appropriately.

This is not the first time we’ve seen Google use radar to provide its gadgets with spatial awareness. In 2015, Google unveiled Soli, a sensor that can use radar’s electromagnetic waves to pick up precise gestures and movements. It was first seen in the Google Pixel 4’s ability to detect simple hand gestures so the user could snooze alarms or pause music without having to physically touch the smartphone. More recently, radar sensors were embedded inside the second-generation Nest Hub smart display to detect the movement and breathing patterns of the person sleeping next to it. The device was then able to track the person’s sleep without requiring them to strap on a smartwatch.

The same Soli sensor is being used in this new round of research, but instead of using the sensor input to directly control a computer, ATAP is instead using the sensor data to enable computers to recognize our everyday movements and make new kinds of choices.

“We believe as technology becomes more present in our life, it’s fair to start asking technology itself to take a few more cues from us,” says Leonardo Giusti, head of design at ATAP. In the same way your mom might remind you to grab an umbrella before you head out the door, perhaps your thermostat can relay the same message as you walk past and glance at it—or your TV can lower the volume if it detects you’ve fallen asleep on the couch.

Radar Research

[Image: Google ATAP demo showing a human entering a computer’s personal space. Courtesy of Google]

Giusti says much of the research is based on proxemics, the study of how people use space around them to mediate social interactions. As you get closer to another person, you expect increased engagement and intimacy. The ATAP team used this and other social cues to establish that people and devices have their own concepts of personal space. 

Radar can detect you moving closer to a computer and entering its personal space. This might mean the computer can then choose to perform certain actions, like booting up the screen without requiring you to press a button. This kind of interaction already exists in current Google Nest smart displays, though instead of radar, Google employs ultrasonic sound waves to measure a person’s distance from the device. When a Nest Hub notices you’re moving closer, it highlights current reminders, calendar events, or other important notifications. 
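The proximity logic described above can be sketched in a few lines. This is a toy illustration only, not Google’s implementation: the distance threshold, action names, and simulated sensor readings are all invented for the example.

```python
# Toy sketch of proximity-triggered device behavior, in the spirit of the
# Nest Hub interaction described above. All names and thresholds are
# hypothetical; a real system would read a radar or ultrasonic sensor.

APPROACH_THRESHOLD_M = 1.0  # assumed "personal space" boundary, in meters


def choose_action(distance_m: float, screen_on: bool) -> str:
    """Decide what a display might do given a person's measured distance."""
    if distance_m <= APPROACH_THRESHOLD_M and not screen_on:
        return "wake_screen_and_show_reminders"
    if distance_m > APPROACH_THRESHOLD_M and screen_on:
        return "dim_screen"
    return "no_change"


# Simulated stream of distance readings as someone walks up, then away.
readings = [3.2, 2.5, 1.4, 0.8, 0.6, 2.0, 3.5]
screen_on = False
for d in readings:
    action = choose_action(d, screen_on)
    if action == "wake_screen_and_show_reminders":
        screen_on = True
    elif action == "dim_screen":
        screen_on = False
    print(f"{d:.1f} m -> {action}")
```

The state argument (`screen_on`) matters: without it, the device would re-fire the wake action on every reading while the person stands nearby, rather than reacting only to transitions across the boundary.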

DeepMind Has Trained an AI to Control Nuclear Fusion


The inside of a tokamak—the doughnut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne in Switzerland.

That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.

In stars, which are also powered by fusion, the sheer gravitational mass is enough to pull hydrogen atoms together and overcome their opposing charges. On Earth, scientists instead use powerful magnetic coils to confine the nuclear fusion reaction, nudging it into the desired position and shaping it like a potter manipulating clay on a wheel. The coils have to be carefully controlled to prevent the plasma from touching the sides of the vessel: this can damage the walls and slow down the fusion reaction. (There’s little risk of an explosion, as the fusion reaction cannot survive without magnetic confinement.)

But every time researchers want to change the configuration of the plasma and try out different shapes that may yield more power or a cleaner plasma, it necessitates a huge amount of engineering and design work. Conventional systems are computer-controlled and based on models and careful simulations, but they are, Fasoli says, “complex and not always necessarily optimized.”

DeepMind has developed an AI that can control the plasma autonomously. A paper published in the journal Nature describes how researchers from the two groups taught a deep reinforcement learning system to control the 19 magnetic coils inside TCV, the variable-configuration tokamak at the Swiss Plasma Center, which is used to carry out research that will inform the design of bigger fusion reactors in the future. “AI, and specifically reinforcement learning, is particularly well suited to the complex problems presented by controlling plasma in a tokamak,” says Martin Riedmiller, control team lead at DeepMind.

The neural network—a type of AI setup designed to mimic the architecture of the human brain—was initially trained in a simulation. It started by observing how changing the settings on each of the 19 coils affected the shape of the plasma inside the vessel. Then it was given different shapes to try to re-create in the plasma. These included a D-shaped cross section close to what will be used inside ITER (formerly the International Thermonuclear Experimental Reactor), the large-scale experimental tokamak under construction in France, and a snowflake configuration that could help dissipate the intense heat of the reaction more evenly around the vessel.
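The reinforcement-learning loop behind this kind of training can be illustrated at miniature scale. The sketch below is emphatically not DeepMind’s system: instead of a deep network controlling 19 coils in a physics simulator, it uses tabular Q-learning to steer a one-dimensional toy “plasma position” toward a target by nudging a single “coil.” Everything here (states, dynamics, reward) is invented to show the trial-and-error structure: act, observe the shape, get a reward, update the policy.

```python
# Minimal tabular Q-learning sketch (illustrative only; the real work
# uses deep RL over 19 magnetic coils and a full plasma simulator).
import random

N_STATES = 11          # discretized toy plasma positions 0..10
ACTIONS = [-1, 0, 1]   # nudge coil: shift plasma left, hold, shift right
TARGET = 5             # desired plasma position
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
random.seed(0)


def step(state: int, action_idx: int):
    """Toy dynamics: the action shifts the plasma; reward favors the target."""
    nxt = max(0, min(N_STATES - 1, state + ACTIONS[action_idx]))
    return nxt, -abs(nxt - TARGET)


for _ in range(2000):                      # training episodes
    s = random.randrange(N_STATES)         # random initial plasma position
    for _ in range(20):                    # control steps per episode
        # epsilon-greedy: mostly exploit the current value estimates
        if random.random() < EPS:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
        s2, r = step(s, a)
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

# The learned greedy policy should push the plasma toward the target.
policy = [max(range(len(ACTIONS)), key=lambda i: q[s][i]) for s in range(N_STATES)]
print(policy)
```

The same act-observe-update skeleton scales up when the table is replaced by a neural network and the toy dynamics by a tokamak simulator, which is why training happens in simulation first: exploratory mistakes are free there.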

DeepMind’s neural network was able to manipulate the plasma inside a fusion reactor into a number of different shapes that fusion researchers have been exploring. Illustration: DeepMind & SPC/EPFL

DeepMind’s AI was able to autonomously figure out how to create these shapes by manipulating the magnetic coils in the right way—both in the simulation and when the scientists ran the same experiments for real inside the TCV tokamak to validate the simulation. It represents a “significant step,” says Fasoli, one that could influence the design of future tokamaks or even speed up the path to viable fusion reactors. “It’s a very positive result,” says Yasmin Andrew, a fusion specialist at Imperial College London, who was not involved in the research. “It will be interesting to see if they can transfer the technology to a larger tokamak.”

Fusion offered a particular challenge to DeepMind’s scientists because the process is both complex and continuous. Unlike a turn-based game like Go, which the company has famously conquered with its AlphaGo AI, the state of a plasma constantly changes. And to make things even harder, it can’t be continuously measured. It is what AI researchers call an “under-observed system.”

“Sometimes algorithms which are good at these discrete problems struggle with such continuous problems,” says Jonas Buchli, a research scientist at DeepMind. “This was a really big step forward for our algorithm, because we could show that this is doable. And we think this is definitely a very, very complex problem to be solved. It is a different kind of complexity than what you have in games.”

Simulation Tech Can Help Predict the Biggest Threats


The character of conflict between nations has fundamentally changed. Governments and militaries now fight on our behalf in the “gray zone,” where the boundaries between peace and war are blurred. They must navigate a complex web of ambiguous and deeply interconnected challenges, ranging from political destabilization and disinformation campaigns to cyberattacks, assassinations, proxy operations, election meddling, or perhaps even human-made pandemics. Add to this list the existential threat of climate change (and its geopolitical ramifications) and it is clear that the description of what now constitutes a national security issue has broadened, each crisis straining or degrading the fabric of national resilience.

Traditional analysis tools are poorly equipped to predict and respond to these blurred and intertwined threats. Instead, in 2022 governments and militaries will use sophisticated and credible real-life simulations, putting software at the heart of their decision-making and operating processes. The UK Ministry of Defence, for example, is developing what it calls a military Digital Backbone. This will incorporate cloud computing, modern networks, and a new transformative capability called a Single Synthetic Environment, or SSE.

This SSE will combine artificial intelligence, machine learning, computational modeling, and modern distributed systems with trusted data sets from multiple sources to support detailed, credible simulations of the real world. This data will be owned by critical institutions, but will also be sourced via an ecosystem of trusted partners, such as the Alan Turing Institute.

An SSE offers a multilayered simulation of a city, region, or country, including high-quality mapping and information about critical national infrastructure, such as power, water, transport networks, and telecommunications. This can then be overlaid with other information, such as smart-city data, information about military deployment, or data gleaned from social listening. From this, models can be constructed that give a rich, detailed picture of how a region or city might react to a given event: a disaster, epidemic, or cyberattack or a combination of such events organized by state enemies.

Defense synthetics are not a new concept. However, previous solutions have been built in a standalone way that limits reuse, longevity, choice, and—crucially—the speed of insight needed to effectively counteract gray-zone threats.

National security officials will be able to use SSEs to identify threats early, understand them better, explore their response options, and analyze the likely consequences of different actions. They will even be able to use them to train, rehearse, and implement their plans. By running thousands of simulated futures, senior leaders will be able to grapple with complex questions, refining policies and complex plans in a virtual world before implementing them in the real one.

One key question that will only grow in importance in 2022 is how countries can best secure their populations and supply chains against dramatic weather events coming from climate change. SSEs will be able to help answer this by pulling together regional infrastructure, networks, roads, and population data, with meteorological models to see how and when events might unfold.

Self-Driving Cars: The Complete Guide


In the past decade, autonomous driving has gone from “maybe possible” to “definitely possible” to “inevitable” to “how did anyone ever think this wasn’t inevitable?” to “now commercially available.” In December 2018, Waymo, the company that emerged from Google’s self-driving-car project, officially started its commercial self-driving-car service in the suburbs of Phoenix. At first, the program was underwhelming: available only to a few hundred vetted riders, and human safety operators remained behind the wheel. But in the past four years, Waymo has slowly opened the program to members of the public and has begun to run robotaxis without drivers inside. The company has since brought its act to San Francisco. People are now paying for robot rides.

And it’s just a start. Waymo says it will expand the service’s capability and availability over time. Meanwhile, its onetime monopoly has evaporated. Every significant automaker is pursuing the tech, eager to rebrand and rebuild itself as a “mobility provider.” Amazon bought a self-driving-vehicle developer, Zoox. Autonomous trucking companies are raking in investor money. Tech giants like Apple, IBM, and Intel are looking to carve off their slice of the pie. Countless hungry startups have materialized to fill niches in a burgeoning ecosystem, focusing on laser sensors, compressing mapping data, setting up service centers, and more.

This 21st-century gold rush is motivated by the intertwined forces of opportunity and survival instinct. By one account, driverless tech will add $7 trillion to the global economy and save hundreds of thousands of lives in the next few decades. Simultaneously, it could devastate the auto industry and its associated gas stations, drive-thrus, taxi drivers, and truckers. Some people will prosper. Most will benefit. Some will be left behind.

It’s worth remembering that when automobiles first started rumbling down manure-clogged streets, people called them horseless carriages. The moniker made sense: Here were vehicles that did what carriages did, minus the hooves. By the time “car” caught on as a term, the invention had become something entirely new. Over a century, it reshaped how humanity moves and thus how (and where and with whom) humanity lives. This cycle has restarted, and the term “driverless car” may soon seem as anachronistic as “horseless carriage.” We don’t know how cars that don’t need human chauffeurs will mold society, but we can be sure a similar gear shift is on the way.


The First Self-Driving Cars

Just over a decade ago, the idea of being chauffeured around by a string of zeros and ones was ludicrous to pretty much everybody who wasn’t at an abandoned Air Force base outside Los Angeles, watching a dozen driverless cars glide through real traffic. That event was the Urban Challenge, the third and final competition for autonomous vehicles put on by Darpa, the Pentagon’s skunkworks arm.


At the time, America’s military-industrial complex had already thrown vast sums and years of research at the problem of unmanned trucks. It had laid a foundation for this technology, but stalled when it came to making a vehicle that could drive at practical speeds, through all the hazards of the real world. So, Darpa figured, maybe someone else—someone outside the DOD’s standard roster of contractors, someone not tied to a list of detailed requirements but striving for a slightly crazy goal—could put it all together. It invited the whole world to build a vehicle that could drive across California’s Mojave Desert, and whoever’s robot did it the fastest would get a million-dollar prize.

The 2004 Grand Challenge was something of a mess. Each team grabbed some combination of the sensors and computers available at the time, wrote their own code, and welded their own hardware, looking for the right recipe that would take their vehicle across 142 miles of sand and dirt of the Mojave. The most successful vehicle went just seven miles. Most crashed, flipped, or rolled over within sight of the starting gate. But the race created a community of people—geeks, dreamers, and lots of students not yet jaded by commercial enterprise—who believed the robot drivers people had been craving for nearly forever were possible, and who were suddenly driven to make them real.

They came back for a follow-up race in 2005 and proved that making a car drive itself was indeed possible: Five vehicles finished the course. By the 2007 Urban Challenge, the vehicles were not just avoiding obstacles and sticking to trails but following traffic laws, merging, parking, even making safe, legal U-turns.

When Google launched its self-driving car project in 2009, it started by hiring a team of Darpa Challenge veterans. Within 18 months, they had built a system that could handle some of California’s toughest roads (including the famously winding block of San Francisco’s Lombard Street) with minimal human involvement. A few years later, Elon Musk announced Tesla would build a self-driving system into its cars. And the proliferation of ride-hailing services like Uber and Lyft weakened the link between being in a car and owning that car, helping set the stage for a day when actually driving that car falls away too. In 2015, Uber poached dozens of scientists from Carnegie Mellon University—a robotics and artificial intelligence powerhouse—to get its effort going.