Deadly bioweapons, automated cybersecurity attacks, powerful AI models escaping human control. Those are just some of the potential threats posed by artificial intelligence, according to a new UK government report. It was released to help set the agenda for an international summit on AI safety to be hosted by the UK next week. The report was compiled with input from leading AI companies such as Google’s DeepMind unit and multiple UK government departments, including intelligence agencies.
Joe White, the UK’s technology envoy to the US, says the summit provides an opportunity to bring countries and leading AI companies together to better understand the risks posed by the technology. Managing the potential downsides of algorithms will require old-fashioned organic collaboration, says White, who helped plan next week’s summit. “These aren’t machine-to-human challenges,” White says. “These are human-to-human challenges.”
UK prime minister Rishi Sunak will make a speech tomorrow about how, while AI opens up opportunities to advance humanity, it’s important to be honest about the new risks it creates for future generations.
The UK’s AI Safety Summit will take place on November 1 and 2 and will mostly focus on the ways people can misuse or lose control of advanced forms of AI. Some AI experts and executives in the UK have criticized the event’s focus, saying the government should prioritize more near-term concerns, such as helping the UK compete with global AI leaders like the US and China.
Some AI experts have warned that a recent uptick in discussion about far-off AI scenarios, including the possibility of human extinction, could distract regulators and the public from more immediate problems, such as biased algorithms or AI technology strengthening already dominant companies.
The UK report released today considers the national security implications of large language models, the AI technology behind ChatGPT. White says UK intelligence agencies are working with the Frontier AI Task Force, a UK government expert group, to explore scenarios like what could happen if bad actors combined a large language model with secret government documents. One doomy possibility discussed in the report suggests a large language model that accelerates scientific discovery could also boost projects trying to create biological weapons.
This July, Dario Amodei, CEO of AI startup Anthropic, told members of the US Senate that within the next two or three years it could be possible for a language model to suggest how to carry out large-scale biological weapons attacks. But White says the report is a high-level document that is not intended to “serve as a shopping list of all the bad things that can be done.”
In addition to UK government agencies, the report released today was reviewed by a panel including policy and ethics experts from Google’s DeepMind AI lab, which began as a London AI startup and was acquired by the search company in 2014, and Hugging Face, a startup developing open source AI software.
Yoshua Bengio, one of three “godfathers of AI” who won the Turing Award, computing’s highest honor, for machine-learning techniques central to the current AI boom, was also consulted. Bengio recently said his optimism about the technology he helped foster has soured and that a new “humanity defense” organization is needed to help keep AI in check.
During cross-examination, defense attorney Mark Cohen repeatedly stressed that Alameda’s total net asset value was the same across the alternative balance sheets, and Ellison kept responding that, yes, but the balance sheets were still misleading.
Things Sam Is Freaking Out About
According to Ellison’s “things Sam is freaking out about” document, Bankman-Fried was stressed about “getting regulators to crack down on Binance,” bad PR, raising money from Saudi Crown Prince Mohammed bin Salman, and possibly buying Snapchat.
In time, the bad PR (and worse than bad PR) came true, SBF didn’t raise money from Mohammed bin Salman, and he certainly didn’t buy Snapchat, but regulators have cracked down on Binance.
SBF’s Magic Hair and Loose Morals
Bankman-Fried got a haircut for the trial, which is somewhat ironic given that he allegedly saw it, Samson-like, as the source of his powers.
Ellison testified that Bankman-Fried credited his mop of hair with helping him get higher bonuses at trading firm Jane Street and considered it important to his image. Her testimony revealed the extent of Bankman-Fried’s obsession with his persona. For example, he and Ellison drove luxury cars in the Bahamas until he allegedly decreed that it was better for their image to drive a Toyota Corolla and Honda Civic, respectively. He courted the media as well, both by being easy to reach and by investing in media organizations such as Semafor and TheBlock, Ellison said.
In the media, Bankman-Fried tried to cultivate an aura of being obsessed with morals, specifically with the effective altruism movement, which focuses on evidence-based ways to improve the world. His more extreme moral beliefs, however, might not have passed muster if reported publicly.
According to Ellison, Bankman-Fried said that he was a utilitarian—and though some utilitarians still tried to live by rules like “Don’t lie” and “Don’t steal,” SBF didn’t agree with that. What mattered, and what he cared about most, she claimed he said, was maximizing the good.
He thought he had a 5 percent chance of becoming president, Ellison claimed, and would be willing to flip a coin if tails meant the world would be destroyed but heads meant it would be twice as good.
Old Friends Take the Stand
Two longtime friends of SBF—Adam Yedidia from MIT, and Gary Wang from math camp—testified this week. Yedidia, an FTX coder, claimed that customers who wanted to deposit fiat money (such as dollars or euros, rather than cryptocurrency) on the FTX exchange actually ended up sending that money to a bank account controlled by, and used by, Alameda. Yedidia testified under an agreement that he couldn’t be prosecuted for his testimony.
Wang, who cofounded both FTX and Alameda and served as chief technology officer, has already pled guilty, and he began by flat-out saying that he had committed financial crimes with SBF. In particular, Wang explained that FTX executives wrote code that gave Alameda special privileges: the ability to carry a negative balance on FTX and access to a $65 billion, essentially unlimited, line of credit.
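The special-case logic Wang described can be pictured as an ordinary balance check with a carve-out for one privileged account. This is a purely hypothetical reconstruction, not FTX’s actual code; every name and number below is invented for illustration, except the $65 billion figure from the testimony:

```python
ALAMEDA_ACCOUNT_ID = 9  # hypothetical ID for the privileged account
CREDIT_LINE_USD = 65_000_000_000  # the $65 billion line described in testimony

def can_withdraw(account_id: int, balance: float, amount: float) -> bool:
    """Ordinary accounts may not go negative; the privileged account
    may draw its balance down to the full credit line."""
    floor = -CREDIT_LINE_USD if account_id == ALAMEDA_ACCOUNT_ID else 0.0
    return balance - amount >= floor
```

A carve-out like this is invisible to ordinary users: every other account hits the zero floor, while one hardcoded ID can effectively borrow without limit.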
Random Number Generator
Hardly the most consequential revelation, but perhaps the funniest: During his testimony, Wang was shown an SBF tweet claiming that FTX had a $100 million insurance fund. This was not true; in fact, the figure FTX displayed had little to do with the actual amount in the fund. The number it publicized was calculated by taking the daily trading volume, multiplying it by a random number around 7,500, and dividing the result by 1 billion.
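As described, the publicized “insurance fund” figure was generated, not measured. A minimal sketch of that formula, assuming the function name, the noise range, and any input values are illustrative rather than taken from FTX’s actual code:

```python
import random

def fake_insurance_fund(daily_volume_usd: float) -> float:
    """Reproduce the formula from Wang's testimony: daily trading
    volume times a random number around 7,500, divided by 1 billion.
    All names here are illustrative, not FTX's real code."""
    noise = random.uniform(7_000, 8_000)  # "a random number around 7,500"
    return daily_volume_usd * noise / 1_000_000_000
```

The point of the sketch is that the output tracks trading volume and a roll of the dice, and nothing else; no actual fund balance enters the calculation.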
In the weeks after Sam Bankman-Fried’s FTX crypto exchange began to crumble last November, he chose to ignore the most basic piece of legal advice: Say nothing, or risk incriminating yourself. He took media interviews. He appeared on podcasts. He tweeted incessantly. He started his own Substack. He promised to testify in front of Congress, though he was arrested before he got the chance.
Starting today, Bankman-Fried will stand trial in a New York court, accused of seven separate counts of fraud against customers, investors, and lenders. FTX collapsed after users tried to withdraw their money from the exchange but were unable to because, the Department of Justice alleges, Bankman-Fried had funneled the money into a sibling business, Alameda Research, where it was spent on high-risk crypto trades, debt repayments, personal loans, luxury purchases, and other company expenses.
The trial, whose outcome will mean little for crypto businesses or the people who lost money in FTX, has already garnered plenty of public attention. The prosecution’s witnesses will include victims of the exchange’s collapse and Bankman-Fried’s one-time paramour, former Alameda CEO Caroline Ellison. It may seem intuitive that Bankman-Fried, the protagonist, should have a speaking role. But his lawyers might well advise him to plead the Fifth Amendment and decline to testify.
In his public appearances before his arrest, Bankman-Fried characterized the situation as one big mistake. There was negligence, he admitted, but no criminal intent to defraud. But his attempts to explain away the allegations could create headaches for his legal team in court. For the defense, the objective is to “create an immaculate narrative,” says Jason Allegrante, chief legal officer at crypto custody firm Fireblocks, to “present the best narrative the facts will support.” But when Bankman-Fried began “defending himself in the media and court of public opinion,” he risked “introducing into the public record a lot of information and material that can be used against him.”
As the trial progresses, Bankman-Fried’s defense team will need to take those same risks into consideration in deciding who to place on the stand.
Bankman-Fried’s trial will last four to six weeks. First, the prosecution will lay out its case, calling all its witnesses—from FTX customers to investors to alleged “coconspirators.” Then the defense will choose how to respond. Under the US justice system, the prosecution must demonstrate guilt beyond a reasonable doubt. Therefore, a viable defense strategy, says Jordan Estes, partner at law firm Kramer Levin, is to “just poke holes in the government’s case” and decline to offer up any additional witnesses.
Whether Bankman-Fried takes the stand or not will only be decided, says Estes, once the strength of the prosecution’s case becomes clear. He is by no means required to testify. “It’s his decision. We’ll just have to wait and see,” she says. “If the government’s case isn’t going well—if they call witnesses that don’t appear very credible or the cross-examination goes terribly—there’s a possibility the defense will feel it doesn’t need to do anything.”
In any criminal case, the decision to put the defendant on the stand is a “high-stakes moment,” says Allegrante. Doing so exposes them to questioning by the prosecution that they would otherwise avoid, but also to the way specific jurors might interpret their testimony. It introduces additional variables to an environment the defense hopes to carefully control.
It was meant to be a week for women in tech—but this year’s Grace Hopper Celebration was swamped by men who gate-crashed the event in search of lucrative tech jobs.
The annual conference and career fair aimed at women and non-binary tech workers, which takes its name from a pioneering computer scientist, took place last week in Orlando, Florida. The event bills itself as the largest gathering of women in tech worldwide, and has sought to unite women in the tech industry for nearly 30 years. Sponsors include Apple, Amazon, and Bloomberg, and it’s a major networking opportunity for aspiring tech workers. In-person admission costs between $649 and around $1,300.
This year, droves of men showed up with résumés in hand. AnitaB.org, the nonprofit that runs the conference, said there was “an increase in participation of self-identifying males” at this year’s event. The nonprofit says it believes allyship from men is important, and noted it cannot ban men from attending due to federal nondiscrimination protections in the US.
Organizers expressed frustration. Past iterations of the conference have “always felt safe and loving and embracing,” said Bo Young Lee, president of advisory at AnitaB.org, in a LinkedIn post. “And this year, I must admit, I didn’t feel this way.”
Cullen White, AnitaB.org’s chief impact officer, said in a video posted to X, formerly Twitter, that some registrants had lied about their gender identity when signing up, and men were now taking up space and time with recruiters that should go to women. “All of those are limited resources to which you have no right,” White said. AnitaB.org did not respond to a request for comment.
Tech jobs, once a fairly safe and lucrative bet, have become more elusive. In 2022 and 2023, tech companies around the world laid off more than 400,000 workers, according to Layoffs.fyi, a site that tracks job losses across the industry. Tens of thousands of those cuts have come from huge employers like Meta and Amazon, and some firms have instituted hiring freezes. The layoffs have been particularly brutal for immigrant workers, who have been left scrambling for sponsorship in the US after losing work.
The controversy at the Grace Hopper Celebration shows the fallout of those job losses, as women and non-binary people still struggle to find equal footing in an industry dominated by men. Women made up just a third of those working in STEM jobs as of 2021, according to the US National Center for Science and Engineering Statistics.
As job cuts bite, all prospective tech workers have become more desperate for opportunities. During the conference, videos posted to TikTok showed a sea of men waiting in line to enter the conference or speak with recruiters in the expo hall. Men and women are seen running into the expo as a staffer yells for them to slow down.
Avni Barman, the founder of the female-talent-focused media platform Gen She, says she immediately noticed “tons” more men and a more chaotic scene this year compared to previous ones.
Barman was at the conference to host a meet-up. During and after the conference, she heard from a number of women who were sad and frustrated by the experience. “This is a conference for women and non-binary people,” Barman says.
Nelly Azar, a student at The Ohio State University studying computer science and engineering, attended the conference and saw long lines of people waiting to speak to employers. That was entirely different from 2022, they say, when they attended and saw few men.
Azar says they could talk to only two of the companies they were interested in because others were inundated with applicants. Long lines zigzagged outside the entrance to the event’s expo hall. The frustration was palpable. This year’s conference shows “not only how fragile our spaces are, but why we need them more than ever,” Azar says. “Now is one of the most important times to advocate for gender equity.”
The AI mimics are available today in beta on Facebook Messenger, Instagram, and WhatsApp. They include a chatbot based on Paris Hilton playing a mystery-solving detective, another based on Snoop Dogg as a dungeon master, and one based on YouTuber Mr. Beast that Meta describes as “the big brother that roasts you—because he cares.”
The celebrity chatbots are also built on Llama 2. Meta says tools used to build them will be made available for Meta users and businesses to make their own versions in the future.
Building products based on open-sourced machine-learning models distinguishes Meta from competitors who are also racing to introduce new forms of AI. Google and OpenAI both keep their latest AI models proprietary.
Meta made the first Llama open to all in February and then released the more powerful Llama 2 in July. The models have been downloaded 30 million times altogether, and Meta estimates that 7,000 derivatives have been created. Adaptations of Meta’s open source AI code by outsiders can help inform how the company uses the project for its own apps and services, such as a version of Llama designed to generate programming code that Meta released last month.
The integration of Llama, a model that made its debut in February, into every Meta app and service shows how open-sourcing AI models can help big companies move faster, says Nathan Lambert, an AI researcher at Hugging Face. He says Meta’s AI announcements today appear to be a notable moment in the brief history of competition that has sprung up around large language model technology.
Some people following AI developments have a less favorable view of Meta’s open source AI strategy. Ahead of today’s news, Holly Elmore, who grew passionate about AI safety after an open letter called for a pause in AI development, announced she is holding a protest outside Meta offices in San Francisco this week asking the company to stop distributing the most detailed versions of the Llama model.
Speaking before the announcement today, Elmore told WIRED she fears that the way Meta released Llama appears in violation of an AI risk-management framework from the US National Institute of Standards and Technology.
Today’s launch of Meta AI isn’t the company’s first venture into creating an AI assistant. After acquiring AI startups working on conversational AI, it introduced a virtual assistant named M in 2015 to challenge the likes of Alexa and Google Assistant.
The assistant responded to users with a combination of software-generated text and answers from human workers. Meta, then known as Facebook, said it aimed to have algorithms do more of the work over time, but a source familiar with the project says the majority of responses sent to early users came from humans. M was quietly shut down in 2018.