Leading figures in the development of artificial intelligence systems, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building may someday pose an existential threat to humanity comparable to that of nuclear war and pandemics.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement, released today by the Center for AI Safety, a nonprofit.
The idea that AI might become difficult to control, and either accidentally or deliberately destroy humanity, has long been debated by philosophers. But in the past six months, following some surprising and unnerving leaps in the performance of AI algorithms, the issue has become a lot more widely and seriously discussed.
In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup dedicated to developing AI with a focus on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio—two of three academics given the Turing Award for their work on deep learning, the technology that underpins modern advances in machine learning and AI—as well as dozens of entrepreneurs and researchers working on cutting-edge AI problems.
“The statement is a great initiative,” says Max Tegmark, a physics professor at the Massachusetts Institute of Technology and the director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI. In March, Tegmark’s Institute published a letter calling for a six-month pause on the development of cutting-edge AI algorithms so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.
Tegmark says he hopes the statement will encourage governments and the general public to take the existential risks of AI more seriously. “The ideal outcome is that the AI extinction threat gets mainstreamed, enabling everyone to discuss it without fear of mockery,” he adds.
Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the creation of nuclear weapons. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Hendrycks said in a quote issued along with his organization’s statement.
The current tone of alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models consist of a specific kind of artificial neural network that is trained on enormous quantities of human-written text to predict the words that should follow a given string. When fed enough data, and with additional training in the form of feedback from humans on good and bad answers, these language models are able to generate text and answer questions with remarkable eloquence and apparent knowledge—even if their answers are often riddled with mistakes.
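To make that training objective concrete, here is a minimal sketch of next-word prediction using the small, open source GPT-2 model via the Hugging Face transformers library. GPT-2 is a far smaller stand-in for the proprietary models discussed here, and the prompt is illustrative.

```python
# A minimal sketch of next-word prediction, the objective described above,
# using the open source GPT-2 model (a small stand-in for larger systems).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score to every token in its vocabulary.
    logits = model(**inputs).logits

# Its "answer" is simply the token it judges most likely to come next.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))  # prints the single most likely next word
```

The feedback-based fine-tuning mentioned above then nudges these raw predictions toward answers humans rate as helpful.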
These language models have proven increasingly coherent and capable as they have been fed more data and computing power. The most powerful model created so far, OpenAI’s GPT-4, is able to solve complex problems, including ones that appear to require some forms of abstraction and common sense reasoning.
In November, venerable tech outlet CNET began publishing articles generated by artificial intelligence, on topics such as personal finance, that proved to be riddled with errors. Today the human members of its editorial staff have unionized, calling on their bosses to provide better conditions for workers and more transparency and accountability around the use of AI.
“In this time of instability, our diverse content teams need industry-standard job protections, fair compensation, editorial independence, and a voice in the decisionmaking process, especially as automated technology threatens our jobs and reputations,” reads the mission statement of the CNET Media Workers Union, whose more than 100 members include writers, editors, video producers, and other content creators.
While the organizing effort started before CNET management began its AI rollout, the union could become one of the first to force bosses to set guardrails around the use of content produced by generative AI services like ChatGPT. Any agreement struck with CNET’s parent company, Red Ventures, could help set a precedent for how companies approach the technology. Multiple digital media outlets have recently slashed staff, with some, like BuzzFeed and Sports Illustrated, embracing AI-generated content at the same time. Red Ventures did not immediately respond to a request for comment.
In Hollywood, AI-generated writing has prompted a worker uprising. Striking screenwriters want studios to agree to prohibit AI authorship and to never ask writers to adapt AI-generated scripts. The Alliance of Motion Picture and Television Producers rejected that proposal, instead offering to hold annual meetings to discuss technological advancements. The screenwriters and CNET’s staff are both represented by the Writers Guild of America.
While CNET bills itself as “your guide to a better future,” the 30-year-old publication late last year stumbled clumsily into the new world of generative AI that can create text or images. In January, the science and tech website Futurism revealed that in November, CNET had quietly started publishing AI-authored explainers such as “What Is Zelle and How Does It Work?” The stories ran under the byline “CNET Money Staff,” and readers had to hover their cursor over it to learn that the articles had been written “using automation technology.”
A torrent of embarrassing disclosures followed. The Verge reported that more than half of the AI-generated stories contained factual errors, leading CNET to issue sometimes lengthy corrections on 41 out of its 77 bot-written articles. The tool that editors used also appeared to have plagiarized work from competing news outlets, as generative AI is wont to do.
Then-editor-in-chief Connie Guglielmo later wrote that a plagiarism-detection tool had been misused or failed and that the site was developing additional checks. One former staffer demanded that her byline be excised from the site, concerned that AI would be used to update her stories in an effort to lure more traffic from Google search results.
In response to the negative attention to CNET’s AI project, Guglielmo published an article saying that the outlet had been testing an “internally designed AI engine” and that “AI engines, like humans, make mistakes.” Nonetheless, she vowed to make some changes to the site’s disclosure and citation policies and forge ahead with its experiment in robot authorship. In March, she stepped down from her role as editor in chief and now heads up the outlet’s AI editorial strategy.
On March 19, Donald Trump Jr. sent an email via the firm that manages his father’s email list, Campaign Nucleus, announcing “a HUGE advance in the culture war.” That culture war, Trump wrote, is “coming to corporate America.” He added that conservatives have a “new” tool to fight back against “woke” workplaces: the “free to work” job board RedBalloon. As an incentive to create an account on the website, Trump offered 20 autographed copies of his latest book, Triggered.
“The big job boards like Indeed and ZipRecruiter are actually promoting ‘woke’ workplace policies,” Trump said in a promotional video posted on the right-wing video streaming site Rumble. “They’re a huge part of the problem.” Exactly what the problem is, or what the word “woke” is supposed to stand for other than a conservative shibboleth, is unclear.
Standing next to the eldest son of the recently indicted former president in the RedBalloon advertisement is the company’s unfortunately named founder, Andrew Crapuchettes. RedBalloon’s origin story goes back to 2021, when Crapuchettes claims he was fired from his role as CEO at his former company, EMSI, for being “too conservative and Christian.”
RedBalloon’s explicitly “anti-woke” positioning fits within a broader conservative push in recent years to create a “parallel economy” apart from progressive values. The idea has been promoted by the junior Trump and far-right pundits like Charlie Kirk of Turning Point USA. And while a parallel right-wing ecosystem of media outlets has gained some traction, other anti-woke projects haven’t fared so well. Consider the Peter Thiel-funded bank that faced self-cancellation, or the fact that right-wing Twitter alternative Parler is down to around 20 employees. As NBC News reported last month, conservative tech founders at the most recent Conservative Political Action Conference “said they believe some companies that were part of the ‘parallel economy’ movement got ahead of themselves in their aspirations.”
‘An Unapologetic Conservative Christian’
The particular nature of Crapuchettes’ Christian faith is something that may give pause to fans of the separation of church and state. In November 2021, The Guardian reported that Crapuchettes was an elder of Christ Church in Moscow, Idaho—a church led by a man who has “openly expressed the ambition of creating a ‘theocracy’ in America.”
“When I ran the company, I believed that everybody should bring their whole self to work,” Crapuchettes says when asked about his faith. “And as an unapologetic conservative Christian, that means that when we’re having our annual Christmas dinner, I’m going to pray for the meal.”
As for his role as an elder of a church that promotes Christian theocracy, Crapuchettes says it was “never brought up” as a conflict by his former board of directors.
“Was it an underlying issue? I have no idea,” Crapuchettes says.
Crapuchettes says the issues with his former employer started when he and the EMSI board of directors butted heads over various social issues. “The Covid-BLM-George Floyd social shift happened,” Crapuchettes says. “We came to a head on a number of things, and they ended up selling the business.”
You may have noticed some impressive video memes made with AI in recent weeks. Harry Potter reimagined as a Balenciaga commercial and nightmarish footage of Will Smith eating spaghetti both recently went viral. They highlight how quickly AI’s ability to create video is advancing, as well as how problematic some uses of the technology may be.
These videos remind me of the moment AI image-making tools became widespread last year, when programs like Craiyon (formerly known as DALL-E Mini) let anyone conjure up recognizable, if crude and often surreal, images, such as surveillance footage of babies robbing a gas station, Darth Vader courtroom sketches, and Elon Musk eating crayons.
Craiyon was an open source knockoff of the then carefully restricted DALL-E 2 image generator from OpenAI, the company behind ChatGPT. The tool was the first to show AI’s ability to take a text prompt and turn it into what looked like real photos and human-drawn illustrations. Since then, DALL-E has become open to everyone, and programs like Midjourney and DreamStudio have developed and honed similar tools, making it relatively trivial to craft complex and realistic images with a few taps on a keyboard.
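For a sense of how thin the interface to these systems now is, here is a minimal sketch of text-to-image generation using the openai Python package’s image endpoint as it existed at the time of writing; the prompt is illustrative, and an API key is assumed to be configured.

```python
# A minimal sketch of text-to-image generation via the openai package's
# image endpoint, as it existed at the time of writing. Assumes an API key
# is set in the OPENAI_API_KEY environment variable.
import openai

response = openai.Image.create(
    prompt="a courtroom sketch of Darth Vader",  # illustrative prompt
    n=1,               # number of images to generate
    size="512x512",    # supported sizes: 256x256, 512x512, 1024x1024
)
print(response["data"][0]["url"])  # URL of the generated image
```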
As engineers have tweaked the algorithmic knobs and levers behind these image generators, added more training data, and paid for more GPU chips to run everything, these image-making tools have become incredibly good at faking reality. To take a few examples from a subreddit dedicated to strange AI images, check out Alex Jones at a gay pride parade or the Ark of the Covenant at a yard sale.
Widespread access to this technology, and its sophistication, forces us to rethink how we view online imagery, as was highlighted after AI-made images purporting to show Donald Trump’s arrest went viral last month. The incident led Midjourney to announce that it would no longer offer a free trial of its service—a fix that might deter some cheapskate bad actors but leaves the broader problem untouched.
As WIRED’s Amanda Hoover writes this week, algorithms still struggle to generate convincing video from a prompt. Creating many individual frames is computationally expensive, and as today’s jittering and sputtering videos show, it is hard for algorithms to maintain enough coherence between them to produce a video that makes sense.
AI tools are, however, getting a lot more adept at editing videos. The Balenciaga meme, along with versions referencing Friends and Breaking Bad, was made by combining a few different AI tools, first to generate still images and then to add simple animation effects. But the end result is still impressive.
Runway ML, a startup that’s developing AI tools for professional image and video creation and editing, this week launched a new, more efficient technique for applying stylistic changes to videos. I used it to create this dreamlike footage of my cat, Leona, walking through a “cloudscape” from an existing video in just a few minutes.
Different machine learning techniques open new possibilities. A company called Luma AI, for instance, is using a technique known as neural radiance fields to turn 2D photographs into detailed 3D scenes. Feed a few snapshots into the company’s app, and you’ll have a fully interactive 3D scene to play with.
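At the heart of that technique is a surprisingly small idea: a neural network that maps a 3D position and a viewing direction to a color and a density, which a renderer then integrates along camera rays. The toy PyTorch model below shows only that input/output structure; real systems, presumably including Luma AI’s, add positional encoding, volume rendering, and training against the user’s photographs.

```python
# A toy sketch of the core of a neural radiance field (NeRF): an MLP that
# maps a 3D position plus a 2D viewing direction to RGB color and volume
# density. Illustrative only; not Luma AI's actual implementation.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden),    # input: (x, y, z, theta, phi)
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 4),    # output: (r, g, b, density)
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        out = self.net(coords)
        rgb = torch.sigmoid(out[..., :3])   # colors constrained to [0, 1]
        density = torch.relu(out[..., 3:])  # density must be non-negative
        return torch.cat([rgb, density], dim=-1)

model = TinyNeRF()
sample_points = torch.rand(8, 5)    # eight random points along camera rays
print(model(sample_points).shape)   # torch.Size([8, 4])
```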
These clips suggest we are at an inflection point for AI video making. As with AI image generation, a growing rush of memes could be followed by significant improvements in the quality and controllability of AI videos that lodge the technology in all sorts of places. AI may well become a muse for some auteurs. Runway’s tools were used by the visual effects artists working on the Oscar-winning Everything Everywhere All at Once. Darren Aronofsky, director of The Whale, Black Swan, and Pi, is also a fan of Runway.
But you only need to look at how advanced images from Midjourney and DreamStudio are now to sense where AI video is heading—and how difficult it may become to distinguish real clips from fake ones. Of course, people can already manipulate videos with existing technology, but it’s still relatively expensive and difficult to pull off.
The rapid advances in generative AI may prove dangerous in an era when social media has been weaponized and deepfakes are propagandists’ playthings. As Jason Parham wrote for WIRED this week, we also need to seriously consider how generative AI can recapture and repurpose ugly stereotypes.
For now, the instinct to trust video clips is mostly reliable, but it might not be long before the footage we see is less solid and truthful than it once was.
The big news this week was a call from tech luminaries to pause development and deployment of AI models more advanced than OpenAI’s GPT-4—the stunningly capable language algorithm behind ChatGPT—until risks including job displacement and misinformation can be better understood.
Even if OpenAI, Google, Microsoft, and other tech heavyweights were to stop what they’re doing—and they’re not going to stop what they’re doing—the AI models that have already been developed are likely to have profound impacts, especially in software development.
It might not look like a regular business deal, but Alphabet’s agreement to supply AI to Replit, a web-based coding tool with over 20 million users, is something of a seismic shift. Replit will use Google’s AI models, along with others, in Ghostwriter, a tool that recommends code and answers code-related questions in a manner similar to ChatGPT. Amjad Masad, Replit’s CEO, tells me that Google has “super cool technology” and that his company can get it into the hands of developers. Through this partnership, Google will also make Replit available to users of Google Cloud, helping it reach more business customers.
The move is particularly significant because Alphabet is squaring up to Microsoft and GitHub, which are likewise using AI to assist coders through Copilot, an add-on for Visual Studio. The same kind of AI that makes ChatGPT seem so clever also works on programming languages: when you start typing code, tools like Copilot will suggest a way to complete it.
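In practice, the interaction looks something like the hypothetical exchange below: the developer types a signature and a docstring, and the assistant proposes the body. The suggestion shown is typical of what such tools produce, not an actual Copilot or Ghostwriter transcript.

```python
# A hypothetical illustration of AI code completion. The developer types the
# signature and docstring; an assistant like Copilot proposes the rest.
def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forward and backward."""
    # --- suggested completion begins here ---
    cleaned = "".join(ch.lower() for ch in s if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
```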
Alphabet’s move also signals what could be the next big battleground for large tech companies. While so much attention is being paid to ChatGPT parlor tricks and Midjourney 5 versions of Donald Trump, the bigger story is about which company can offer developers the best AI tools—and the new software that developers will build with that AI by their side.
Research from Microsoft suggests that developers can perform tasks over 50 percent faster when using an AI assistant. Companies that offer cutting-edge AI can draw developers to their coding tools and get those users hooked on their clouds and other services. Amazon has developed an AI coding tool called CodeWhisperer, and Meta is working on one for internal use too. Presumably, Apple will not want to be left behind.
As well as helping developers write code, AI is starting to change the way code is put together. Last week, OpenAI announced that the first plugins for ChatGPT have been created. They will make it possible for the bot to perform tasks like searching for flights, booking restaurants, and ordering groceries. Incorporating AI into code can also accelerate software development. This week Masad of Replit shared a neat example—an app that will turn voice commands into working websites. “We think a lot of software projects will start that way in future,” Masad says.
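A minimal sketch of the pattern Masad describes, assuming the openai package’s chat API as it existed at the time of writing: a natural-language request goes in, and runnable code comes out. The prompt and file handling here are illustrative, not Replit’s actual pipeline.

```python
# A minimal sketch of prompt-to-software generation using the openai
# package's chat endpoint (as it existed at the time of writing).
# Illustrative only; not Replit's actual pipeline.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You produce complete, minimal HTML pages."},
        {"role": "user", "content": "Make a one-page website for a cat cafe."},
    ],
)

html = response["choices"][0]["message"]["content"]
with open("index.html", "w") as f:
    f.write(html)  # a working web page, scaffolded from a single sentence
```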
With things moving so quickly, it’s worth considering what the consequences of rapidly incorporating AI into software development might be. AI tools can reproduce security vulnerabilities in the code they suggest, and developers may not notice them or know how to fix them. Perhaps developers will become more complacent, or see their skills atrophy, if they rely too heavily on AI. And what kind of “technical debt” might emerge if programmers need to go back and fix software that no human has ever closely examined?