Google’s App Store Ruled an Illegal Monopoly, as a Jury Sides With Epic Games

More bad news for Google could come in mid-2024 when US district judge Amit Mehta in Washington, DC, is expected to issue his ruling on whether Google has unlawfully maintained its monopoly over web search. Testimony in that case, which was brought by the US Department of Justice and attorneys general for nearly every US state and territory, concluded last month.

A similar case two years ago had not gone too well for Epic. In Epic v. Apple, a federal judge in Oakland, California, ordered that Apple make just one change to its App Store practices. The judge found that most of the other Apple practices that Epic viewed as anticompetitive were justified, because the iPhone maker needed to recoup its investment in developing the app marketplace. Apple still has not had to comply as it awaits the Supreme Court’s decision early next year about whether to review the case.

Google hasn’t said much about why it chose to have a jury rather than a judge decide its fate in the trial that concluded today, though it tried unsuccessfully to reverse course on the eve of jury selection.

Judge Donato also tried to prevent the case from even going to trial, repeatedly ordering Epic and Google to attempt a settlement instead. In a last-second push, Google CEO Sundar Pichai and Epic CEO Tim Sweeney met for an hour on December 7 but failed to reach a deal, according to a court filing.

Google previously agreed to settle with as many as 48,000 app developers but without making major changes to its business practices. It also settled with a group of consumers and attorneys general for all 50 US states. Details of the latter settlement had not been published, pending the verdict in the Epic trial.

‘Shut Rivals Off’

In closing arguments today, Gary Bornstein, an attorney for Epic, told jurors that Google’s Android operating system was the only choice for smartphone makers, because Apple keeps iOS to itself and there aren’t any viable alternatives. Google used that power with device makers and wireless carriers who sell phones to ensure they promoted the Play store, he said, often more than they encouraged the lesser-known alternatives.

Google binds app developers who sell digital items in the Play store to use its billing system and pockets up to 30 percent of sales. The search giant also paid developers millions of dollars not to pursue alternatives to Play, Epic alleged.

The EU Just Passed Sweeping New Rules to Regulate AI

Over the two years lawmakers have been negotiating the rules agreed today, AI technology and the leading concerns about it have dramatically changed. When the AI Act was conceived in April 2021, policymakers were worried about opaque algorithms deciding who would get a job, be granted refugee status, or receive social benefits. By 2022, there were examples of AI actively harming people. In a Dutch scandal, decisions made by algorithms were linked to families being forcibly separated from their children, while students studying remotely alleged that AI systems discriminated against them based on the color of their skin.

Then, in November 2022, OpenAI released ChatGPT, dramatically shifting the debate. The leap in AI’s flexibility and popularity triggered alarm in some AI experts, who drew hyperbolic comparisons between AI and nuclear weapons.

That discussion manifested in the AI Act negotiations in Brussels in the form of a debate about whether makers of so-called foundation models such as the one behind ChatGPT, like OpenAI and Google, should be considered as the root of potential problems and regulated accordingly—or whether new rules should instead focus on companies using those foundational models to build new AI-powered applications, such as chatbots or image generators.

Representatives of Europe’s generative AI industry expressed caution about regulating foundation models, saying it could hamper innovation among the bloc’s AI startups. “We cannot regulate an engine devoid of usage,” Arthur Mensch, CEO of French AI company Mistral, said last month. “We don’t regulate the C [programming] language because one can use it to develop malware. Instead, we ban malware.” Mistral’s foundation model 7B would be exempt under the rules agreed today because the company is still in the research and development phase, Carme Artigas, Spain’s Secretary of State for Digitalization and Artificial Intelligence, said in the press conference.

The major point of disagreement during the final discussions that ran late into the night twice this week was whether law enforcement should be allowed to use facial recognition or other types of biometrics to identify people either in real time or retrospectively. “Both destroy anonymity in public spaces,” says Daniel Leufer, a senior policy analyst at digital rights group Access Now. Real-time biometric identification can identify a person standing in a train station right now using live security camera feeds, he explains, while “post” or retrospective biometric identification can figure out that the same person also visited the train station, a bank, and a supermarket yesterday, using previously banked images or video.

Leufer said he was disappointed by the “loopholes” for law enforcement that appeared to have been built into the version of the act finalized today.

European regulators’ slow response to the emergence of the social media era loomed over discussions. Almost 20 years elapsed between Facebook’s launch and the Digital Services Act, the EU rulebook designed to protect human rights online, taking effect this year. In that time, the bloc was forced to deal with the problems created by US platforms, while being unable to foster smaller European challengers to them. “Maybe we could have prevented [the problems] better by earlier regulation,” Brando Benifei, one of two lead negotiators for the European Parliament, told WIRED in July. AI technology is moving fast. But it will still be many years before it’s possible to say whether the AI Act is more successful in containing the downsides of Silicon Valley’s latest export.

OpenAI Cofounder Reid Hoffman Gives Sam Altman a Vote of Confidence

Hoffman and others said that there’s no need to pause development of AI. He called that drastic measure, for which some AI researchers have petitioned, foolish and destructive. Hoffman identified himself as a rational “accelerationist”: someone who knows to slow down when driving around a corner but, presumably, is happy to speed up when the road ahead is clear. “I recommend everyone come join us in the optimist club, not because it’s utopia and everything works out just fine, but because it can be part of an amazing solution,” he said. “That’s what we’re trying to build towards.”

Mitchell and Buolamwini, who is artist-in-chief and president of the AI harms advocacy group Algorithmic Justice League, said that relying on company promises to mitigate bias and misuse of AI would not be enough. In their view, governments must make clear that AI systems cannot undermine people’s rights to fair treatment or humanity. “Those who stand to be exploited or extorted, even exterminated” need to be protected, Buolamwini said, adding that systems like lethal drones should be stopped. “We’re already in a world where AI is dangerous,” she said. “We have AI as the angels of death.”

Applications such as weaponry are far from OpenAI’s core focus on aiding coders, writers, and other professionals. The company’s terms prohibit its tools from being used for military and warfare applications, although OpenAI’s primary backer and enthusiastic customer Microsoft has a sizable business with the US military. But Buolamwini suggested that companies developing business applications deserve no less scrutiny. As AI takes over mundane tasks such as composition, companies must be ready to reckon with the social consequences of a world that offers workers fewer meaningful opportunities to learn the basics of a job, skills that may turn out to be vital to becoming highly proficient. “What does it mean to go through that process of creation, finding the right word, figuring out how to express yourself, and learning something in the struggle to do it?” she said.

Fei-Fei Li, a Stanford University computer scientist who runs the school’s Institute for Human-Centered Artificial Intelligence, said the AI community has to be focused on its impacts on people, all the way from individual dignity to large societies. “I should start a new club called the techno-humanist,” she said. “It’s too simple to say, ‘Do you want to accelerate or decelerate?’ We should talk about where we want to accelerate, and where we should slow down.”

Li is one of the modern AI pioneers, having developed the computer vision system known as ImageNet. Would OpenAI want a seemingly balanced voice like hers on its new board? OpenAI board chair Bret Taylor did not respond to a request to comment. But if the opportunity arose, Li said, “I will carefully consider that.”

These Clues Hint at the True Nature of OpenAI’s Shadowy Q* Project

There are other clues to what Q* could be. The name may be an allusion to Q-learning, a form of reinforcement learning that involves an algorithm learning to solve a problem through positive or negative feedback, which has been used to create game-playing bots and to tune ChatGPT to be more helpful. Some have suggested that the name may also be related to the A* search algorithm, widely used to have a program find the optimal path to a goal.
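To make the Q-learning idea concrete, here is a minimal sketch of the tabular form of the technique the name may allude to. Everything in it is illustrative: the toy corridor environment, the hyperparameters, and the function names are assumptions for demonstration, not anything drawn from OpenAI's actual work.

```python
import random

# Toy environment: a 5-state corridor. States 0..4, actions 0 (left) and 1 (right).
# Reaching state 4 ends the episode with reward 1; every other step gives 0.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Deterministic transition: move one cell left or right along the corridor."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q(s, a) table, initialized to zero
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: usually exploit the current estimate, sometimes explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Core Q-learning update: nudge Q(s, a) toward reward + gamma * max_a' Q(s', a').
            target = reward + GAMMA * max(q[next_state])
            q[state][action] += ALPHA * (target - q[state][action])
            state = next_state
    return q

q = train()
# The greedy policy read off the table should always move right, toward the reward.
policy = [0 if values[0] > values[1] else 1 for values in q]
```

The same positive-or-negative-feedback loop, scaled up with human raters standing in for the reward function, is roughly how reinforcement learning was used to tune ChatGPT's answers.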

The Information throws another clue into the mix: “Sutskever’s breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models,” its story says. “The research involved using computer-generated [data], rather than real-world data like text or images pulled from the internet, to train new models.” That appears to be a reference to the idea of training algorithms with so-called synthetic training data, which has emerged as a way to train more powerful AI models.

Subbarao Kambhampati, a professor at Arizona State University who is researching the reasoning limitations of LLMs, thinks that Q* may involve using huge amounts of synthetic data, combined with reinforcement learning, to train LLMs on specific tasks such as simple arithmetic. Kambhampati notes that there is no guarantee that the approach will generalize into something that can figure out how to solve any possible math problem.

For more speculation on what Q* might be, read this post by a machine-learning scientist who pulls together the context and clues in impressive and logical detail. The TLDR version is that Q* could be an effort to use reinforcement learning and a few other techniques to improve a large language model’s ability to solve tasks by reasoning through steps along the way. Although that might make ChatGPT better at math conundrums, it’s unclear whether it would automatically suggest AI systems could evade human control.

That OpenAI would try to use reinforcement learning to improve LLMs seems plausible because many of the company’s early projects, like video-game-playing bots, were centered on the technique. Reinforcement learning was also central to the creation of ChatGPT, because it can be used to make LLMs produce more coherent answers by asking humans to provide feedback as they converse with a chatbot. When WIRED spoke with Demis Hassabis, the CEO of Google DeepMind, earlier this year, he hinted that the company was trying to combine ideas from reinforcement learning with advances seen in large language models.

Rounding up the available clues about Q*, it hardly sounds like a reason to panic. But then, it all depends on your personal P(doom) value—the probability you ascribe to the possibility that AI destroys humankind. Long before ChatGPT, OpenAI’s scientists and leaders were initially so freaked out by the development of GPT-2, a 2019 text generator that now seems laughably puny, that they said it could not be released publicly. Now the company offers free access to much more powerful systems.

OpenAI refused to comment on Q*. Perhaps we will get more details when the company decides it’s time to share more results from its efforts to make ChatGPT not just good at talking but good at reasoning too.

Sam Altman Officially Returns to OpenAI—With a New Board Seat for Microsoft

Sam Altman marked his formal return as CEO of OpenAI today in a company memo that confirmed changes to the company’s board, including a new nonvoting seat for the startup’s primary investor, Microsoft.

In a memo sent to staff and shared on OpenAI’s blog, Altman painted the chaos of the past two weeks, triggered by the board’s loss of trust in its CEO, during which almost the entire staff threatened to quit, as a testament to the startup’s resilience rather than a sign of instability.

“You stood firm for each other, this company, and our mission,” Altman wrote. “One of the most important things for the team that builds [artificial general intelligence] safely is the ability to handle stressful and uncertain situations, and maintain good judgment throughout. Top marks.”

Altman was ousted on November 17. The company’s nonprofit board of directors said that a deliberative review had concluded that Altman “was not consistently candid in his communications with the board.” Under OpenAI’s unusual structure, the board’s duty was to the project’s original, nonprofit mission of developing AI that is beneficial to humanity, not the company’s business.

The board that ejected Altman included the company’s chief scientist, Ilya Sutskever, who later recanted and joined the staff members threatening to quit if Altman was not reinstated.

Altman said that there would be no hard feelings over that, although his note left open questions about Sutskever’s future.

“I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being. I harbor zero ill will towards him,” Altman wrote, adding, “We hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” What was clear, however, was that Sutskever would not be returning to the board.

Altman’s note to staff confirmed that OpenAI’s new all-male board will consist of former Treasury secretary Larry Summers, Quora CEO Adam D’Angelo, and former Salesforce co-CEO Bret Taylor, with Taylor as chair. D’Angelo is the only remaining member of the previous board.

Previous board members Helen Toner, a director at CSET, a think tank, and Tasha McCauley, an entrepreneur, both resigned.

Speaking at the New York Times DealBook summit shortly before the announcement, OpenAI cofounder Elon Musk expressed concerns about Altman and questioned why Sutskever had voted to fire him. “Either it was a serious thing and we should know what it is, or it’s not a serious thing and the board should resign,” Musk said. “I have mixed feelings about Sam, I do.”