DeepMind Has Trained an AI to Control Nuclear Fusion

The inside of a tokamak—the doughnut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne in Switzerland.

That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.

In stars, which are also powered by fusion, the sheer gravitational mass is enough to pull hydrogen atoms together and overcome their opposing charges. On Earth, scientists instead use powerful magnetic coils to confine the nuclear fusion reaction, nudging it into the desired position and shaping it like a potter manipulating clay on a wheel. The coils have to be carefully controlled to prevent the plasma from touching the sides of the vessel: this can damage the walls and slow down the fusion reaction. (There’s little risk of an explosion as the fusion reaction cannot survive without magnetic confinement).

But every time researchers want to change the configuration of the plasma and try out different shapes that may yield more power or a cleaner plasma, it necessitates a huge amount of engineering and design work. Conventional systems are computer-controlled and based on models and careful simulations, but they are, Fasoli says, “complex and not always necessarily optimized.”

DeepMind has developed an AI that can control the plasma autonomously. A paper published in the journal Nature describes how researchers from the two groups taught a deep reinforcement learning system to control the 19 magnetic coils inside TCV, the variable-configuration tokamak at the Swiss Plasma Center, which is used to carry out research that will inform the design of bigger fusion reactors in the future. “AI, and specifically reinforcement learning, is particularly well suited to the complex problems presented by controlling plasma in a tokamak,” says Martin Riedmiller, control team lead at DeepMind.

The neural network—a type of AI setup designed to mimic the architecture of the human brain—was initially trained in a simulation. It started by observing how changing the settings on each of the 19 coils affected the shape of the plasma inside the vessel. Then it was given different shapes to try to re-create in the plasma. These included a D-shaped cross section close to what will be used inside ITER (formerly the International Thermonuclear Experimental Reactor), the large-scale experimental tokamak under construction in France, and a snowflake configuration that could help dissipate the intense heat of the reaction more evenly around the vessel.
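To make that training loop concrete, here is a minimal, purely illustrative sketch in Python. The toy "simulator," the linear shape response, the reward, and the random-search policy update are all invented stand-ins rather than DeepMind's actual deep reinforcement learning system; only the count of 19 coils comes from the article.

```python
# A purely illustrative sketch of "learn to shape the plasma by adjusting coils."
# Everything here is a hypothetical stand-in: the plasma "shape" is a small
# vector, the simulator is linear, and training is naive random search.
import numpy as np

rng = np.random.default_rng(0)

N_COILS = 19        # TCV has 19 magnetic control coils (from the article)
SHAPE_DIM = 8       # hypothetical low-dimensional shape descriptor

# Hypothetical linear response of the shape descriptor to coil currents.
RESPONSE = rng.normal(size=(SHAPE_DIM, N_COILS)) * 0.1

def step(coil_currents, target_shape):
    """Toy 'simulator': the shape responds linearly to coil currents plus noise."""
    shape = RESPONSE @ coil_currents + rng.normal(scale=0.01, size=SHAPE_DIM)
    reward = -np.sum((shape - target_shape) ** 2)  # closer to the target = better
    return shape, reward

# A trivially simple controller: a linear map from (target - current shape) to
# coil adjustments, tuned by random search. Real work uses deep RL, not this.
policy = np.zeros((N_COILS, SHAPE_DIM))
target = rng.normal(size=SHAPE_DIM)

best_return = -np.inf
for _ in range(200):
    candidate = policy + rng.normal(scale=0.05, size=policy.shape)
    coils = np.zeros(N_COILS)
    shape = np.zeros(SHAPE_DIM)
    total = 0.0
    for _ in range(50):                        # one short simulated "discharge"
        coils += candidate @ (target - shape)  # nudge coils toward the target
        shape, reward = step(coils, target)
        total += reward
    if total > best_return:
        best_return, policy = total, candidate

print("best return after random-search training:", round(best_return, 3))
```

The point of the sketch is the workflow the researchers describe: observe how coil settings change the shape, score how close the result is to a requested shape, and improve the controller from that feedback before ever touching the real machine.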

DeepMind’s neural network was able to manipulate the plasma inside a fusion reactor into a number of different shapes that fusion researchers have been exploring. Illustration: DeepMind & SPC/EPFL

DeepMind’s AI was able to autonomously figure out how to create these shapes by manipulating the magnetic coils in the right way—both in the simulation and when the scientists ran the same experiments for real inside the TCV tokamak to validate the simulation. It represents a “significant step,” says Fasoli, one that could influence the design of future tokamaks or even speed up the path to viable fusion reactors. “It’s a very positive result,” says Yasmin Andrew, a fusion specialist at Imperial College London, who was not involved in the research. “It will be interesting to see if they can transfer the technology to a larger tokamak.”

Fusion offered a particular challenge to DeepMind’s scientists because the process is both complex and continuous. Unlike a turn-based game like Go, which the company has famously conquered with its AlphaGo AI, the state of a plasma constantly changes. And to make things even harder, it can’t be continuously measured. It is what AI researchers call an “under-observed system.”

“Sometimes algorithms which are good at these discrete problems struggle with such continuous problems,” says Jonas Buchli, a research scientist at DeepMind. “This was a really big step forward for our algorithm, because we could show that this is doable. And we think this is definitely a very, very complex problem to be solved. It is a different kind of complexity than what you have in games.”

What Makes an Artist in the Age of Algorithms?

In 2021, technology’s role in how art is generated remains up for debate and discovery. From the rise of NFTs to the proliferation of techno-artists who use generative adversarial networks to produce visual expressions, to smartphone apps that write new music, creatives and technologists are continually experimenting with how art is produced, consumed, and monetized.

BT, the Grammy-nominated composer of 2010’s These Hopeful Machines, has emerged as a world leader at the intersection of tech and music. Beyond producing and writing for the likes of David Bowie, Death Cab for Cutie, Madonna, and the Roots, and composing scores for The Fast and the Furious, Smallville, and many other shows and movies, he’s helped pioneer production techniques like stutter editing and granular synthesis. This past spring, BT released GENESIS.JSON, a piece of software that contains 24 hours of original music and visual art. It features 15,000 individually sequenced audio and video clips that he created from scratch, which span different rhythmic figures, field recordings of cicadas and crickets, a live orchestra, drum machines, and myriad other sounds that play continuously. And it lives on the blockchain. It is, to my knowledge, the first composition of its kind.

Could ideas like GENESIS.JSON be the future of original music, where composers use AI and the blockchain to create entirely new art forms? What makes an artist in the age of algorithms? I spoke with BT to learn more.

What are your central interests at the interface of artificial intelligence and music?

I am really fascinated with this idea of what an artist is. Speaking in my common tongue—music—it’s a very small array of variables. We have 12 notes. There’s a collection of rhythms that we typically use. There’s a sort of vernacular of instruments, of tones, of timbres, but when you start to add them up, it becomes this really deep data set.

On its surface, it makes you ask, “What is special and unique about an artist?” And that’s something that I’ve been curious about my whole adult life. Seeing the research that was happening in artificial intelligence, my immediate thought was that music is low-hanging fruit.

These days, we can take the sum total of an artist’s output, take their artistic works, and quantify the entire thing into a training set, a massive, multivariable training set. And we don’t even name the variables. RNNs (recurrent neural networks) and CNNs (convolutional neural networks) name them automatically.
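As a concrete illustration of that workflow, the sketch below quantizes a (made-up) corpus into a sequence over the 12 pitch classes and fits a tiny GRU in PyTorch to predict the next note. Every detail, the random stand-in corpus, the network size, the sampling loop, is an assumption for illustration; it is not BT's setup or any production music-generation system.

```python
# Illustrative sketch: turn a corpus into a training set of note sequences and
# fit a small recurrent model to predict the next note. The "corpus" here is
# random stand-in data; a real project would quantize actual recordings/MIDI.
import torch
import torch.nn as nn

N_NOTES = 12                                   # "We have 12 notes"
corpus = torch.randint(0, N_NOTES, (1, 256))   # hypothetical stand-in corpus

class NextNoteModel(nn.Module):
    def __init__(self, n_notes=N_NOTES, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_notes, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_notes)

    def forward(self, notes):
        h, _ = self.rnn(self.embed(notes))
        return self.head(h)                    # logits for the next note

model = NextNoteModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    logits = model(corpus[:, :-1])             # predict note t+1 from notes <= t
    loss = loss_fn(logits.reshape(-1, N_NOTES), corpus[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a short continuation in the style of the training data.
seq = corpus[:, :16]
with torch.no_grad():
    for _ in range(16):
        next_logits = model(seq)[:, -1]
        next_note = torch.distributions.Categorical(logits=next_logits).sample()
        seq = torch.cat([seq, next_note.unsqueeze(1)], dim=1)
print(seq[0, -16:].tolist())
```

Swap the random corpus for a real artist's quantized output and this is, in miniature, the "train on the body of work, then generate in that style" pipeline discussed in the question that follows.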

So you’re referring to a body of music that can be used to “train” an artificial intelligence algorithm that can then create original music that resembles the music it was trained on. If we reduce the genius of artists like Coltrane or Mozart, say, into a training set and can recreate their sound, how will musicians and music connoisseurs respond?

I think the closer we get, the more it becomes this uncanny-valley idea. Some would say that things like music are sacrosanct and have to do with very base-level things about our humanity. It’s not hard to get into kind of a spiritual conversation about what music is as a language, and what it means, and how powerful it is, and how it transcends culture, race, and time. So the traditional musician might say, “That’s not possible. There’s so much nuance and feeling, and your life experience, and these kinds of things that go into the musical output.”

And the engineer part of me goes, well, look at what Google has made. It’s a simple kind of MIDI-generation engine, where they’ve taken all of Bach’s works and it’s able to spit out [Bach-like] fugues. Because Bach wrote so many fugues, he’s a great example. Also, he’s the father of modern harmony. Musicologists listen to some of those Google Magenta fugues and can’t distinguish them from Bach’s original works. Again, this makes us question what constitutes an artist.

I’m both excited and have incredible trepidation about this space that we’re expanding into. Maybe the question I want to be asking is less “We can, but should we?” and more “How do we do this responsibly, because it’s happening?”

Right now, there are companies using something like Spotify or YouTube to train their models on living artists whose works are copyrighted and protected. Companies are allowed to take someone’s work and train models with it today. Should we be doing that? Or should we be speaking to the artists themselves first? I believe there need to be protective mechanisms put in place for visual artists, for programmers, and for musicians.