Over the two years lawmakers have been negotiating the rules agreed today, AI technology and the leading concerns about it have dramatically changed. When the AI Act was conceived in April 2021, policymakers were worried about opaque algorithms deciding who would get a job, be granted refugee status, or receive social benefits. By 2022, there were examples of AI actively harming people. In a Dutch scandal, decisions made by algorithms were linked to families being forcibly separated from their children, while students studying remotely alleged that AI systems discriminated against them based on the color of their skin.
Then, in November 2022, OpenAI released ChatGPT, dramatically shifting the debate. The leap in AI’s flexibility and popularity triggered alarm in some AI experts, who drew hyperbolic comparisons between AI and nuclear weapons.
That discussion manifested in the AI Act negotiations in Brussels as a debate over whether makers of so-called foundation models such as the one behind ChatGPT, like OpenAI and Google, should be treated as the root of potential problems and regulated accordingly—or whether new rules should instead focus on companies using those foundation models to build new AI-powered applications, such as chatbots or image generators.
Representatives of Europe’s generative AI industry expressed caution about regulating foundation models, saying it could hamper innovation among the bloc’s AI startups. “We cannot regulate an engine devoid of usage,” Arthur Mensch, CEO of French AI company Mistral, said last month. “We don’t regulate the C [programming] language because one can use it to develop malware. Instead, we ban malware.” Mistral’s foundation model, Mistral 7B, would be exempt under the rules agreed today because the company is still in the research and development phase, Carme Artigas, Spain’s Secretary of State for Digitalization and Artificial Intelligence, said at the press conference.
The major point of disagreement during the final discussions that ran late into the night twice this week was whether law enforcement should be allowed to use facial recognition or other types of biometrics to identify people either in real time or retrospectively. “Both destroy anonymity in public spaces,” says Daniel Leufer, a senior policy analyst at digital rights group Access Now. Real-time biometric identification can identify a person standing in a train station right now using live security camera feeds, he explains, while “post” or retrospective biometric identification can figure out that the same person also visited the train station, a bank, and a supermarket yesterday, using previously banked images or video.
Leufer said he was disappointed by the “loopholes” for law enforcement that appeared to have been built into the version of the act finalized today.
European regulators’ slow response to the rise of social media loomed over the discussions. Almost 20 years elapsed between Facebook’s launch and the Digital Services Act—the EU rulebook designed to protect human rights online—taking effect this year. In that time, the bloc was forced to deal with the problems created by US platforms while being unable to foster smaller European challengers. “Maybe we could have prevented [the problems] better by earlier regulation,” Brando Benifei, one of two lead negotiators for the European Parliament, told WIRED in July. AI technology is moving fast. But it will be many years before it’s possible to say whether the AI Act is any more successful in containing the downsides of Silicon Valley’s latest export.
Hoffman and others said that there’s no need to pause development of AI. He called that drastic measure, for which some AI researchers have petitioned, foolish and destructive. Hoffman identified himself as a rational “accelerationist”—someone who knows to slow down when driving around a corner but who, presumably, is happy to speed up when the road ahead is clear. “I recommend everyone come join us in the optimist club, not because it’s utopia and everything works out just fine, but because it can be part of an amazing solution,” he said. “That’s what we’re trying to build towards.”
Mitchell and Buolamwini, who is artist-in-chief and president of the AI harms advocacy group Algorithmic Justice League, said that relying on company promises to mitigate bias and misuse of AI would not be enough. In their view, governments must make clear that AI systems cannot undermine people’s rights to fair treatment or humanity. “Those who stand to be exploited or extorted, even exterminated” need to be protected, Buolamwini said, adding that systems like lethal drones should be stopped. “We’re already in a world where AI is dangerous,” she said. “We have AI as the angels of death.”
Applications such as weaponry are far from OpenAI’s core focus on aiding coders, writers, and other professionals. The company’s terms of use bar its tools from being used for military and warfare purposes—although OpenAI’s primary backer and enthusiastic customer Microsoft has a sizable business with the US military. But Buolamwini suggested that companies developing business applications deserve no less scrutiny. As AI takes over mundane tasks such as composition, companies must be ready to reckon with the social consequences of a world that offers workers fewer opportunities to learn the basics of a job, fundamentals that may turn out to be vital to becoming highly skilled. “What does it mean to go through that process of creation, finding the right word, figuring out how to express yourself, and learning something in the struggle to do it?” she said.
Fei-Fei Li, a Stanford University computer scientist who runs the school’s Institute for Human-Centered Artificial Intelligence, said the AI community has to focus on its impact on people, all the way from individual dignity to entire societies. “I should start a new club called the techno-humanist,” she said. “It’s too simple to say, ‘Do you want to accelerate or decelerate?’ We should talk about where we want to accelerate, and where we should slow down.”
Li is one of the pioneers of modern AI, having led the creation of ImageNet, the image dataset that helped catalyze the deep-learning boom in computer vision. Would OpenAI want a seemingly balanced voice like hers on its new board? OpenAI board chair Bret Taylor did not respond to a request for comment. But if the opportunity arose, Li said, “I will carefully consider that.”
There are other clues to what Q* could be. The name may be an allusion to Q-learning, a form of reinforcement learning that involves an algorithm learning to solve a problem through positive or negative feedback, which has been used to create game-playing bots and to tune ChatGPT to be more helpful. Some have suggested that the name may also be related to the A* search algorithm, widely used to have a program find the optimal path to a goal.
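For readers unfamiliar with the technique, here is a minimal, self-contained sketch of tabular Q-learning on a toy problem. It is purely illustrative: the corridor environment, the parameters, and every name in it are invented for this example and say nothing about how OpenAI’s Q* actually works.

```python
# A toy example of tabular Q-learning, the classic reinforcement-learning
# method the name "Q*" may allude to. Entirely illustrative; not OpenAI's code.
import random

N_STATES = 5          # a 5-cell corridor, cells 0..4; cell 4 holds the reward
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q[state][action index] starts at zero and is refined by trial and error.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; positive feedback arrives only at the goal cell."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # The Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action available in the next state.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, "move right" scores highest in every cell
```

A scaled-up version of this feedback loop, with neural networks in place of the small table, is the kind of reinforcement learning behind the game-playing bots and ChatGPT tuning mentioned above.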
The Information throws another clue into the mix: “Sutskever’s breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models,” its story says. “The research involved using computer-generated [data], rather than real-world data like text or images pulled from the internet, to train new models.” That appears to be a reference to the idea of training algorithms on so-called synthetic data, which has emerged as a way to build more powerful AI models.
Subbarao Kambhampati, a professor at Arizona State University who is researching the reasoning limitations of LLMs, thinks that Q* may involve using huge amounts of synthetic data, combined with reinforcement learning, to train LLMs on specific tasks such as simple arithmetic. Kambhampati notes that there is no guarantee the approach will generalize into something that can figure out how to solve any possible math problem.
For more speculation on what Q* might be, read this post by a machine-learning scientist who pulls together the context and clues in impressive and logical detail. The TLDR version is that Q* could be an effort to use reinforcement learning and a few other techniques to improve a large language model’s ability to solve tasks by reasoning through steps along the way. Although that might make ChatGPT better at math conundrums, it is far from clear that it would mean AI systems could evade human control.
That OpenAI would try to use reinforcement learning to improve LLMs seems plausible because many of the company’s early projects, like video-game-playing bots, were centered on the technique. Reinforcement learning was also central to the creation of ChatGPT, because it can be used to make LLMs produce more coherent answers by asking humans to provide feedback as they converse with a chatbot. When WIRED spoke with Demis Hassabis, the CEO of Google DeepMind, earlier this year, he hinted that the company was trying to combine ideas from reinforcement learning with advances seen in large language models.
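To make that idea concrete, the sketch below shows, in miniature, how pairwise human preferences can be distilled into a scalar reward signal of the sort used in reinforcement learning from human feedback. The candidate answers, the per-answer scores, and the update rule are all invented for illustration; this is not OpenAI’s or Google DeepMind’s actual pipeline.

```python
# A toy illustration of the feedback loop behind reinforcement learning from
# human feedback (RLHF). The data and scoring scheme are invented; real systems
# train a neural reward model on human rankings, then fine-tune the LLM on it.
import math
import random

# Pairs of candidate chatbot answers, plus the index (0 or 1) of the one a
# human preferred.
preference_data = [
    (("short vague answer", "clear step-by-step answer"), 1),
    (("clear step-by-step answer", "rambling answer"), 0),
    (("rambling answer", "short vague answer"), 1),
]

# A tiny "reward model": one learned score per answer.
reward = {answer: 0.0 for pair, _ in preference_data for answer in pair}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Bradley-Terry-style fit: push the preferred answer's score above the
# rejected one's, more strongly when the model currently gets the pair wrong.
for _ in range(2000):
    (a, b), preferred_index = random.choice(preference_data)
    preferred, rejected = (b, a) if preferred_index == 1 else (a, b)
    gap = reward[preferred] - reward[rejected]
    grad = 1.0 - sigmoid(gap)
    reward[preferred] += 0.1 * grad
    reward[rejected] -= 0.1 * grad

# The learned scores now rank answers the way the human did; a reinforcement
# learning step would then nudge the language model toward higher-scoring answers.
print(sorted(reward.items(), key=lambda kv: -kv[1]))
```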
Rounding up the available clues about Q*, it hardly sounds like a reason to panic. But then, it all depends on your personal P(doom) value—the probability you ascribe to the possibility that AI destroys humankind. Long before ChatGPT, OpenAI’s scientists and leaders were initially so freaked out by the development of GPT-2, a 2019 text generator that now seems laughably puny, that they said it could not be released publicly. Now the company offers free access to much more powerful systems.
OpenAI refused to comment on Q*. Perhaps we will get more details when the company decides it’s time to share more results from its efforts to make ChatGPT not just good at talking but good at reasoning too.
Sam Altman marked his formal return as CEO of OpenAI today in a company memo that confirmed changes to the company’s board, including a new nonvoting seat for the startup’s primary investor, Microsoft.
In a memo sent to staff and shared on OpenAI’s blog, Altman painted the chaos of the past two weeks, which was triggered by the board’s loss of trust in its CEO and saw almost the entire staff of the company threaten to quit, as a testament to the startup’s resilience rather than a sign of instability.
“You stood firm for each other, this company, and our mission,” Altman wrote. “One of the most important things for the team that builds [artificial general intelligence] safely is the ability to handle stressful and uncertain situations, and maintain good judgment throughout. Top marks.”
Altman was ousted on November 17. The company’s nonprofit board of directors said that a deliberative review had concluded that Altman “was not consistently candid in his communications with the board.” Under OpenAI’s unusual structure, the board’s duty was to the project’s original, nonprofit mission of developing AI that is beneficial to humanity, not the company’s business.
The board that ejected Altman included the company’s chief scientist, Ilya Sutskever, who later recanted and joined the staff members who threatened to quit if Altman was not reinstated.
Altman said there would be no hard feelings, although his note left questions about Sutskever’s future.
“I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being. I harbor zero ill will towards him,” Altman wrote, adding, “We hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” What was clear, however, was that Sutskever would not be returning to the board.
Altman’s note to staff confirmed that OpenAI’s new all-male board will consist of former Treasury secretary Larry Summers, Quora CEO Adam D’Angelo, and former Salesforce co-CEO Bret Taylor, with Taylor as chair. D’Angelo is the only remaining member of the previous board.
Previous board members Helen Toner, a director at the think tank CSET, and Tasha McCauley, an entrepreneur, both resigned.
Speaking at the New York Times DealBook summit shortly before the announcement, OpenAI cofounder Elon Musk expressed concerns about Altman and questioned why Sutskever had voted to fire him. “Either it was a serious thing and we should know what it is, or it’s not a serious thing and the board should resign,” Musk said. “I have mixed feelings about Sam, I do.”
Elon Musk needed fewer than 100 characters to add new chaos to the ongoing crisis swirling around OpenAI after the shock firing of CEO Sam Altman last week.
In a post on X on Tuesday, Musk drew attention to an anonymous letter accusing Altman of various examples of underhanded behavior as CEO of OpenAI.
The link shared by Musk was to a copy of the letter uploaded to GitHub, a platform for sharing code. That copy of the letter was removed less than an hour after Musk posted it. Sources with knowledge of Altman’s tenure at OpenAI told WIRED they were not familiar with the accusations. Altman did not immediately reply to a request for comment.
A person who responded to an email sent to an address listed on the GitHub profile told WIRED they saw the letter via a discussion on Hacker News and posted a copy of the original, found on Board.net, which allows anonymous posts. The person said they later removed their copy of the letter to preserve their privacy, adding, “I have no idea as to the veracity of any of the contents.”
An email sent to an address included in the letter did not immediately receive a response.
“These seem like concerns worth investigating,” Musk wrote in his post linking to the unsigned letter, which is addressed to OpenAI’s board and purports to have been written by concerned former employees of the company. WIRED has not been able to verify the letter’s authenticity or any of the claims it makes.
Since Altman’s exit last week, for reasons the board that fired him has yet to make clear, OpenAI’s current employees have shown striking loyalty. On Monday, more than 95 percent of the company’s staff signed an open letter saying they were willing to leave if Altman wasn’t restored.
The anonymous letter boosted by Musk makes allegations against Altman and also Greg Brockman, an OpenAI cofounder who was removed as board chair last week and then quit in protest at how Altman was treated. “Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI),” the letter alleges.
Musk has himself been accused of similar behavior at several of his own ventures, which include automaker Tesla, rocket maker SpaceX, and brain interface developer Neuralink.