The destination for this yet-untargeted bounty? Livestock feed, says Payne.
This exploitation of the mesopelagic required a huge harvesting effort in the southwest Indian Ocean and southern Atlantic, including employing boats with helicopters and fish-processing facilities to support a fleet of smaller fishing vessels. After the Soviet Union collapsed—along with its fisheries subsidies—momentum in the fishery collapsed, too.
Forty years later, interest in fishing the mesopelagic was revived, particularly among countries in northern Europe, after the 2010 Malaspina Circumnavigation Expedition delivered the revised estimate of mesopelagic biomass. This interest is what sparked initiatives like the MEESO project, which is attempting to answer both economic and biological questions about mesopelagic fisheries.
The work of Runar Gjerp Solstad, a researcher with Nofima, a Norwegian research institute that has been collaborating on the MEESO project, suggests it’s unlikely a mesopelagic fish will end up on anyone’s dinner plate. Solstad’s work has focused on assessing the food potential of one of the target species, Mueller’s pearlside, a mesopelagic fish. For the human palate, results have not been promising.
“It tastes really bad,” he says. “There is no other way of putting it.”
Still, as was the case with the defunct mesopelagic fisheries of the USSR, much of the interest is in using mesopelagic fish as food for other animals, like Atlantic salmon. With demand for seafood, especially from aquaculture, expected to double by 2050, some scientists and fishers say the eventual exploitation of the mesopelagic is likely—but it’s a harvest that could have unintended consequences.
A look at existing commercial fisheries suggests how severe these consequences could be. In 2020, scientists publishing in the journal Science Advances estimated that by removing fish that would otherwise be pooping and dying—another way for carbon to reach the deep ocean—humans have effectively prevented the sequestration of 22 million tonnes of carbon.
But beyond fishing, the greater change to the mesopelagic zone may come from climate change.
Approximately 1.5 million years ago, Earth’s climate was flip-flopping roughly 4 degrees Celsius between glacial and balmier periods. Paleontologist Konstantina Agiadi’s research suggests this rapid fluctuation—at least on a geological timescale—in the early middle Pleistocene had a significant effect on the twilight zone.
By studying the fossilized otoliths, or ear stones, of lanternfish from this period, Agiadi, a postdoctoral researcher at the University of Vienna in Austria, found that median body size of mesopelagic fish shrank by 35 percent as the climate warmed. (Warmer water speeds up fish metabolism, causing them to mature, and stop growing, at a smaller body size.) This would have had implications for the biological carbon pump, Agiadi says, as smaller fish travel shorter distances, meaning less carbon exported to the deep ocean.
Over the two years lawmakers have been negotiating the rules agreed today, AI technology and the leading concerns about it have dramatically changed. When the AI Act was conceived in April 2021, policymakers were worried about opaque algorithms deciding who would get a job, be granted refugee status, or receive social benefits. By 2022, there were examples of AI actively harming people. In a Dutch scandal, decisions made by algorithms were linked to families being forcibly separated from their children, while students studying remotely alleged that AI systems discriminated against them based on the color of their skin.
Then, in November 2022, OpenAI released ChatGPT, dramatically shifting the debate. The leap in AI’s flexibility and popularity triggered alarm in some AI experts, who drew hyperbolic comparisons between AI and nuclear weapons.
That discussion manifested in the AI Act negotiations in Brussels in the form of a debate about whether makers of so-called foundation models such as the one behind ChatGPT, like OpenAI and Google, should be considered the root of potential problems and regulated accordingly—or whether new rules should instead focus on companies using those foundation models to build new AI-powered applications, such as chatbots or image generators.
Representatives of Europe’s generative AI industry expressed caution about regulating foundation models, saying it could hamper innovation among the bloc’s AI startups. “We cannot regulate an engine devoid of usage,” Arthur Mensch, CEO of French AI company Mistral, said last month. “We don’t regulate the C [programming] language because one can use it to develop malware. Instead, we ban malware.” Mistral’s foundation model 7B would be exempt under the rules agreed today because the company is still in the research and development phase, Carme Artigas, Spain’s Secretary of State for Digitalization and Artificial Intelligence, said in the press conference.
The major point of disagreement during the final discussions that ran late into the night twice this week was whether law enforcement should be allowed to use facial recognition or other types of biometrics to identify people either in real time or retrospectively. “Both destroy anonymity in public spaces,” says Daniel Leufer, a senior policy analyst at digital rights group Access Now. Real-time biometric identification can identify a person standing in a train station right now using live security camera feeds, he explains, while “post” or retrospective biometric identification can figure out that the same person also visited the train station, a bank, and a supermarket yesterday, using previously banked images or video.
Leufer said he was disappointed by the “loopholes” for law enforcement that appeared to have been built into the version of the act finalized today.
European regulators’ slow response to the emergence of the social media era loomed over discussions. Almost 20 years elapsed between Facebook’s launch and the Digital Services Act—the EU rulebook designed to protect human rights online—taking effect this year. In that time, the bloc was forced to deal with the problems created by US platforms, while being unable to foster smaller European challengers. “Maybe we could have prevented [the problems] better by earlier regulation,” Brando Benifei, one of two lead negotiators for the European Parliament, told WIRED in July. AI technology is moving fast. But it will still be many years before it’s possible to say whether the AI Act is any more successful at containing the downsides of Silicon Valley’s latest export.
Following two years of preproduction, game developer 10 Chambers finally announced its new heist game—Den of Wolves—Thursday during the 2023 Game Awards. Set in 2097 in a highly corrupt city located in the middle of the Pacific Ocean, it is, according to narrative director Simon Viklund, the kind of game “where you’re supposed to feel like a badass.” For Viklund, who also serves as the game’s composer (he did the compositions for PayDay: The Heist and PayDay 2, too), that means “the music needs to, like, [grunt noise].”
True to its name, Den of Wolves’ fictional city is a place where basically anything is legal as long as it is done in the pursuit of supercharged innovation and groundbreaking technology. Imagine PayDay meets Cyberpunk 2077, set in a metropolis that’s a mixture of Venice and Hong Kong. The concept is quite different from 10 Chambers’ previous work on the horror game GTFO, but it structurally plays to the studio’s core strength: four-person co-op games.
A lot is on the line as the studio works on its second release. 10 Chambers received an investment from Chinese tech and entertainment conglomerate Tencent to build this game and expand from a small staff of around 10 people to nearly 100. Viklund emphasizes that the game will have a highly detailed environment but that gamers should not expect an open-world experience. The overall vibe, Viklund adds, pulls from a litany of sci-fi and thriller movies, like Heat and Judge Dredd (the Stallone one, not the 2012 reboot).
While he enjoyed working on horror game music for GTFO, Viklund is excited to move away from that genre and back to a PayDay-esque heist experience. “My wheelhouse is this power fantasy type of music,” he says. Never played that franchise before? Give “Razormind” from PayDay 2 a listen any morning you forget your coffee at home and need a quick jolt of adrenaline.
So, what can players expect from the music in Den of Wolves? “So, there’s going to be elements, of course, that are similar to PayDay,” says Viklund. “But I’m keen on taking it somewhere else in terms of tempo. Making it heavier, slower paced.” He also looks forward to incorporating different elements of percussion inspired by the Pacific Ocean setting.
Since the game is still in early development and won’t be released for a while, WIRED did not see any actual game footage during a recent preview event 10 Chambers held for the title. Similar to the launch of GTFO, the company plans to release the game at first to players through Steam early access. Den of Wolves doesn’t have a release date yet, but PC gamers can anticipate receiving it before their console counterparts.
Fans of GTFO may be disappointed that their game’s content updates are ending, but Viklund points to 10 Chambers’ first game as critical for building the company’s confidence around design. “It was very freeing to be able to have a project where we could have that ‘fuck it—we’ll just do it’ sort of attitude,” he says. This type of confidence is a driving force behind 10 Chambers’ decision to develop something fresh for players rather than relying on a franchise concept that already exists.
Hoffman and others said that there’s no need to pause development of AI. He called that drastic measure, for which some AI researchers have petitioned, foolish and destructive. Hoffman identified himself as a rational “accelerationist”—someone who knows to slow down when driving around a corner but who, presumably, is happy to speed up when the road ahead is clear. “I recommend everyone come join us in the optimist club, not because it’s utopia and everything works out just fine, but because it can be part of an amazing solution,” he said. “That’s what we’re trying to build towards.”
Mitchell and Buolamwini, who is artist-in-chief and president of the AI harms advocacy group Algorithmic Justice League, said that relying on company promises to mitigate bias and misuse of AI would not be enough. In their view, governments must make clear that AI systems cannot undermine people’s rights to fair treatment or humanity. “Those who stand to be exploited or extorted, even exterminated” need to be protected, Buolamwini said, adding that systems like lethal drones should be stopped. “We’re already in a world where AI is dangerous,” she said. “We have AI as the angels of death.”
Applications such as weaponry are far from OpenAI’s core focus on aiding coders, writers, and other professionals. The company’s terms bar the use of its tools for military and warfare purposes—although OpenAI’s primary backer and enthusiastic customer Microsoft has a sizable business with the US military. But Buolamwini suggested that companies developing business applications deserve no less scrutiny. As AI takes over mundane tasks such as composition, companies must be ready to reckon with the social consequences of a world that offers workers fewer meaningful opportunities to learn the basics of a job, fundamentals that may turn out to be vital to becoming highly skilled. “What does it mean to go through that process of creation, finding the right word, figuring out how to express yourself, and learning something in the struggle to do it?” she said.
Fei-Fei Li, a Stanford University computer scientist who runs the school’s Institute for Human-Centered Artificial Intelligence, said the AI community has to be focused on its impacts on people, all the way from individual dignity to large societies. “I should start a new club called the techno-humanist,” she said. “It’s too simple to say, ‘Do you want to accelerate or decelerate?’ We should talk about where we want to accelerate, and where we should slow down.”
Li is one of the modern AI pioneers, having developed the computer vision system known as ImageNet. Would OpenAI want a seemingly balanced voice like hers on its new board? OpenAI board chair Bret Taylor did not respond to a request for comment. But if the opportunity arose, Li said, “I will carefully consider that.”
23andMe has maintained that attackers used a technique known as credential stuffing to compromise the 14,000 user accounts—finding instances where leaked login credentials from other services were reused on 23andMe. In the wake of the incident, the company forced all of its users to reset their passwords and began requiring two-factor authentication for all customers. In the weeks after 23andMe initially disclosed its breach, other similar services, including Ancestry and MyHeritage, also began promoting or requiring two-factor authentication on their accounts.
In October and again this week, though, WIRED pressed 23andMe on its finding that the user account compromises were attributable solely to credential-stuffing attacks. The company has repeatedly declined to comment, but multiple users have noted that they are certain their 23andMe account usernames and passwords were unique and could not have been exposed somewhere else in another leak.
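Credential stuffing, as described above, works only when people reuse the same email-and-password pair across services: an attacker replays pairs leaked from one site against the login page of another. The sketch below illustrates that mechanic with made-up accounts (the names, passwords, and `stuffing_hits` helper are hypothetical, not anything from 23andMe's systems; real attacks replay credentials against live login endpoints at scale):

```python
# Hypothetical illustration of credential stuffing: credentials leaked
# from Service A are replayed against Service B. Accounts whose owners
# reused a password fall; accounts with unique passwords survive.

leaked_from_service_a = {
    "alice@example.com": "hunter2",      # Alice reused this password
    "bob@example.com": "correcthorse",   # Bob's password is unique to A
}

service_b_accounts = {
    "alice@example.com": "hunter2",        # same password -> vulnerable
    "bob@example.com": "batterystaple",    # different password -> safe
}

def stuffing_hits(leaked, target):
    """Return the accounts on the target service that accept leaked credentials."""
    return [email for email, password in leaked.items()
            if target.get(email) == password]

print(stuffing_hits(leaked_from_service_a, service_b_accounts))
# -> ['alice@example.com']
```

This is why unique passwords (and, in Joyce's case below, unique email addresses) blunt the attack: a replayed pair simply fails to match.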
In at least one example, though, 23andMe eventually provided an explanation to the user. On Tuesday, US National Security Agency cybersecurity director Rob Joyce noted on his personal X (formerly Twitter) account: “They disclose the credential stuffing attacks, but they don’t say how the accounts were targeted for stuffing. This was unique and not an account that could be scraped from the web or other sites.” Joyce wrote that he creates a unique email address for each company he uses to make an account. “That account is used NOWHERE else and it was unsuccessfully stuffed,” he wrote, adding: “Personal opinion: @23andMe hack was STILL worse than they are owning with the new announcement.”
Hours after Joyce publicly raised these concerns (and WIRED asked 23andMe about his case), Joyce said that the company had contacted him to determine what had happened with his account. Joyce did use a unique email address for his 23andMe account, but the company partnered with MyHeritage in 2014 and 2015 to enhance the DNA Relatives “Family Tree” functionality, which Joyce says he subsequently used. Then, separately, MyHeritage suffered a data breach in 2018 in which Joyce’s unique 23andMe email address was apparently exposed. He adds that because he used strong, unique passwords on both his MyHeritage and 23andMe accounts, neither was ever successfully compromised by attackers.
The anecdote underscores the stakes of user data sharing between companies and of software features that promote social sharing when the information involved is deeply personal and relates directly to identity. It may be that the larger number of impacted users was not in the SEC report because 23andMe (like many companies that have suffered security breaches) does not want to include scraped data in the category of breached data. These delineations, though, ultimately make it difficult for users to grasp the scale and impact of security incidents.
“I firmly believe that cyber-insecurity is fundamentally a policy problem,” says Brett Callow, a threat analyst at the security firm Emsisoft. “We need standardized and uniform disclosure and reporting laws, prescribed language for those disclosures and reports, regulation and licensing of negotiators. Far too much happens in the shadows or is obfuscated by weasel words. It’s counterproductive and helps only the cybercriminals.”
Meanwhile, apparent 23andMe user Kendra Fee flagged on Tuesday that 23andMe is notifying customers about changes to its terms of service related to dispute resolutions and arbitration. The company says that the changes will “encourage a prompt resolution of any disputes” and “streamline arbitration proceedings where multiple similar claims are filed.” Users can opt out of the new terms by notifying the company that they decline within 30 days of receiving notice of the change.
Updated at 10:35 pm ET, December 5, 2023, to include new information about NSA cybersecurity director Rob Joyce’s 23andMe account and the broader implications of his experience.
The first trailer for Rockstar Games’ Grand Theft Auto VI has arrived, and it’s promising a new female protagonist. More of a teaser than anything, the trailer introduces viewers to Lucia, a woman who blames her incarceration on “bad luck, I guess.”
To say Grand Theft Auto VI is hotly anticipated is an understatement. Rockstar’s last car larceny game dropped more than 10 years ago, and despite a circus of speculation and some leaks in 2022, fans have seen almost nothing official from the franchise.
Today’s trailer doesn’t offer much in the way of details. A series of flashing scenes set to Tom Petty’s “Love Is a Long Road” feature beaches, late-night parties, and a lot of butts, as Lucia plans a robbery with a partner. It’s the first time Grand Theft Auto has included a female antihero, and it appears players will be returning to Vice City.
Rockstar says GTAVI will be the “biggest, most immersive evolution” of the series yet.
The game is currently set for release in 2025 for PlayStation 5 and Xbox Series X/S. A Bloomberg report from 2022 suggested that the game would feature a Bonnie and Clyde-esque duo in a fictionalized version of Miami.
Rockstar released the trailer for Grand Theft Auto VI a day earlier than its planned Tuesday, December 5 debut, after the video leaked. “Please watch the real thing on YouTube,” the developer tweeted. A clip that appeared to include debug footage of the game briefly surfaced on TikTok before being taken down.
It’s not the first time Rockstar has dealt with GTA VI leaks. Last year, a hacker released a huge trove of data from the game, including 90 videos of unfinished development.