If Vision Pro is mostly meant to be used from a couch cushion or desk chair, the external battery pack may not factor in as much. As I pointed out last spring, it’s an unusual choice for a consumer tech company that has, over the past two decades, created products that we transport with us, literally everywhere we go.
Industry experts are split on the external battery design. Bailenson, for one, believes that headset computing should be optimized for shorter durations. “After 30 minutes, it’s probably time to take off the headset and go about your day and touch some walls and drink some water,” he says. “So in this instance there really shouldn’t be a need for an external battery pack, in my opinion, because most experiences are short.”
Sam Cole, the cofounder and chief executive of FitXR, a fitness app popular on the Meta Quest, says that, “controversially,” he doesn’t believe the Vision Pro battery pack will be “as much of a factor for fitness apps as it will be for sitting and working for hours.”
“Even when headsets are bulkier, our users tend to forget about the cable, forget about the battery pack, because you’re so focused on punches being thrown at you,” Cole says. “The weight distribution and the accessories become much more topical when you’re thinking about working on a headset or sitting on calls for four hours.”
But Cole also says, battery pack aside, “all of the Vision Pro’s factors put together have led us to believe it’s a really high-quality experience. This is going to be as good as Meta Quest 3 if not better.”
Prior examples might not necessarily help read the battery tea leaves, either. Early versions of the Magic Leap AR goggles had an external “compute pack” that was designed for the wearer’s waistband. Microsoft’s HoloLens, on the other hand, packed what felt like an entire PC on your head. Neither product was successful; the placement of the battery pack was moot.
Apple did not respond to an inquiry as to why journalists and influencers were not able to take their own photos of Vision Pro or if the company plans to share more images of the battery.
Spirit AeroSystems, the Wichita-based aerospace manufacturer that built the door plug that blew out on the Alaska Airlines flight, declined to comment on the incident. However, in a statement published on its website, Spirit says its “primary focus is the quality and product integrity of the aircraft structures we deliver.”
The company’s parts have caused issues for Boeing in the past. The Seattle Times reported back in October on defects in Spirit components that contributed to months-long delayed deliveries of Boeing 787 aircraft. Tom Gentile, the then CEO of Spirit, resigned following these and other production errors by the company.
But Fehrm hypothesizes the blowout may have been due to alleged oversights that happened after Spirit had installed the door plug, once Boeing took possession of the fuselage. Fehrm claims Boeing uses the door in question to access parts of the plane during its checks ahead of the aircraft being cleared to fly. And so, in his opinion: “Someone has taken away the bolts, opened the door, done the work, closed the door, and forgot to put the pins in.”
In other words, he is leaning toward processes being at fault, not the plane’s design. This, though, raises concerns about the way plane safety checks are conducted.
In theory, in the US the FAA checks aircraft for their airworthiness, granting them certification to fly safely. Aircraft designs are studied and reviewed on paper, with ground and flight tests taking place on the finished aircraft alongside an evaluation of the required maintenance routine to keep a plane flightworthy.
In practice, these reviews are often delegated to third-party organizations that are designated to grant certification. Planes can fly without the FAA inspecting them first-hand. “You won’t find an FAA inspector in a set of coveralls walking down a production line at Renton,” says Tim Atkinson, a former pilot and aircraft accident investigator and current aviation consultant, referring to Boeing’s Washington state–based 737 factory.
The FAA relies on third parties because it’s already overstretched and needs to focus on safety-critical new technologies that push forward the latest innovations in flight. “It can’t [check all aircraft itself], because you’re producing 30 to 60 aircraft a month, and there are 4 million parts in an aircraft,” says Fehrm.
“Designated examiners have always been part of the landscape,” says Mann, but he believes the latest series of events adds to existing questions about whether this is the right approach. On the other hand, there are currently no practical alternatives, he says.
The plane in the Alaska Airlines incident was granted an airworthiness certificate on October 25, 2023, and was issued a seven-year certificate by the FAA on November 2. FAA records do not include who granted the certificate on behalf of the FAA, and the administration declined to identify the organization or individual who approved the plane’s airworthiness. The plane’s first flight took place in early November.
With this being a third major and potentially life-threatening incident for Boeing in little over five years—all involving a single type of aircraft—the company’s reputation has taken a hit.
Do AI companies need to pay for the training data that powers their generative AI systems? The question is hotly contested in Silicon Valley and in a wave of lawsuits levied against tech behemoths like Meta, Google, and OpenAI. In Washington, DC, though, there seems to be a growing consensus that the tech giants need to cough up.
Today, at a Senate hearing on AI’s impact on journalism, lawmakers from both sides of the aisle agreed that OpenAI and others should pay media outlets for using their work in AI projects. “It’s not only morally right,” said Richard Blumenthal, the Democrat who chairs the Judiciary Subcommittee on Privacy, Technology, and the Law that held the hearing. “It’s legally required.”
Josh Hawley, a Republican working with Blumenthal on AI legislation, agreed. “It shouldn’t be that just because the biggest companies in the world want to gobble up your data, they should be able to do it,” he said.
Media industry leaders at the hearing today described how AI companies were imperiling their industry by using their work without compensation. Curtis LeGeyt, CEO of the National Association of Broadcasters; Danielle Coffey, CEO of the News Media Alliance; and Roger Lynch, CEO of Condé Nast, all spoke in favor of licensing. (WIRED is owned by Condé Nast.)
Coffey claimed that AI companies “eviscerate the quality content they feed upon,” and Lynch characterized training data scraped without permission as “stolen goods.” Coffey and Lynch also both said that they believe AI companies are infringing on copyright under current law. Lynch urged lawmakers to clarify that using journalistic content without first brokering licensing agreements is not protected by fair use, a legal doctrine that permits certain unlicensed uses of copyrighted material.
Senate hearings can be adversarial, but the mood today was largely congenial. The lawmakers and media industry insiders often applauded each other’s statements. “If Congress could clarify that the use of our content, or other publisher content, for the training and output of AI models is not fair use, then the free market will take care of the rest,” Lynch said at one point. “That seems eminently reasonable to me,” Hawley replied.
Journalism professor Jeff Jarvis was the hearing’s only discordant voice. He asserted that training on data obtained without payment is, indeed, fair use, and spoke against compulsory licensing, arguing that it would damage the information ecosystem rather than safeguard it. “I must say that I am offended to see publishers lobby for protectionist legislation, trading on the political capital earned through journalism,” he said, jabbing at his fellow speakers. (Jarvis was also subject to the hearing’s only real contentious line of questioning, from Republican Marsha Blackburn, who needled Jarvis about whether AI is biased against conservatives and recited an AI-generated poem praising President Biden as evidence.)
Outside of the committee room, there is less agreement that mandatory licensing is necessary. OpenAI and other AI companies have argued that it’s not viable to license all training data, and some independent AI experts agree.
Fleming believed that growth has natural limits. Things grow to maturity—kids into adults, saplings into trees, startups into full-fledged companies—but growth beyond that point is, in his words, a “pathology” and an “affliction.” The bigger and more productive an economy gets, he argued, the more resources it needs to burn to maintain its own infrastructure. It becomes less and less efficient at keeping any one person clothed, fed, and sheltered. He called this the “intensification paradox”: The harder everyone works to make the GDP line point up, the harder everyone has to work to make the GDP line point up. Inevitably, Fleming believed, growth will turn to degrowth, intensification to deintensification. These are things to prepare for, plan for, and the way to do that is with the missing metric: resilience.
Fleming offers several definitions of resilience, the briefest of which is “the ability of a system to cope with shock.” He describes two kinds: preventive resilience, which helps you maintain an existing state in spite of shocks, and recovery-elastic resilience, which helps you adapt quickly to a new post-shock state. Growth won’t help you with resilience, Fleming argues. Only community will. He’s big on the “informal economy”—think Craigslist and Buy Nothing, not Amazon. People helping people.
So I began to imagine, in my hypocritical heart, an analytics platform that would measure resilience in those terms. As growth shot too high, notifications would fire off to your phone: Slow down! Stop selling! Instead of revenue, it would measure relationships formed, barters fulfilled, products loaned and reused. It would reflect all sorts of non-transactional activities that make a company resilient: Is the sales team doing enough yoga? Are the office dogs getting enough pets? In the analytics meeting, we would ask questions like “Is the product cheap enough for everyone?” I even tried to sketch out a resilience funnel, where the juice that drips down is people checking in on their neighbors. It was an interesting exercise, but what I ended up imagining was basically HR software for Burning Man, which, well, I’m not sure that’s the world I want to live in either. If you come up with a good resilience funnel, let me know. Such a product would perform very badly in the marketplace (assuming you could even measure that).
The fundamental problem is that the stuff that creates resilience won’t ever show up in the analytics. Let’s say you were building a chat app. If people chat more using your app, that’s good, right? That’s community! But the really good number, from a resilience perspective, is how often they put down the app and meet up in person to hash things out. Because that will lead to someone coming by the house with lasagna when someone else has Covid, or someone giving someone’s kid an old acoustic guitar from the attic in exchange for, I don’t know, a beehive. Whole Earth stuff. You know how it works.
All of this somewhat guilty running around led me back to the simplest answer: I can’t measure resilience. I mean, sure, I could wing a bunch of vague, abstract stats and make pronouncements. God knows I’ve done a lot of that before. But there’s no metric, really, that can capture it. Which means I have to talk to strangers, politely, about problems they’re trying to solve.
I hate this conclusion. I want to push out content and see lines move and make no more small talk. I want my freaking charts. That’s why I like tech. Benchmarks, CPU speeds, hard drive sizes, bandwidth, users, point releases, revenue. I love when the number goes up. It’s almost impossible to imagine a world where it doesn’t. Or rather it used to be.
This article appears in the November 2023 issue.
On January 1, Mike Neville gave Midjourney the following prompt: “Steamboat Willie drawn in a vintage Disney style, black and white. He is dripping all over with white gel.”
There’s no polite way to describe what this prompt conjured from the AI image generator. It looks, very much, like Mickey Mouse is drenched in ejaculate.
At the start of every year, a crop of cultural works enters the public domain in the United States. When copyright expires on particularly beloved characters, people get excited, and this year was especially eagerly anticipated: An early version of Mickey Mouse, colloquially known as Steamboat Willie, entered the public domain in 2024 after nearly a century of rigorously enforced copyright protection. Within days, an explosion of homebrewed Steamboat Willie art hit the internet, including a horror movie trailer, a meme coin—and, of course, a glut of AI-generated Willies. Some are G-rated. Others, like “Creamboat Willie,” are decidedly not. (Willie doing drugs is another popular theme.)
While a contingent of the people sharing naughty Willie images are simply goofing around, others have surprisingly sober-minded intentions. Neville, an art director who posted his image on social media using the handle “Olivia Mutant-John,” has a lively sense of humor, but his experiment wasn’t solely a scatological joke. “My interest in generating the assets was to explore copyright thresholds and where the tools are currently,” he says. He’d noticed that it was easy to find examples of copyrighted characters on popular image-generating tools (a point also recently made by AI scientist Gary Marcus, who posted AI-generated depictions of SpongeBob SquarePants as an example) and wanted to see how far he could push an image generator now that Steamboat Willie was in the public domain.
Neville isn’t the only person conducting AI Willie experiments with copyright on his mind. Pierre-Carl Langlais, head of research at the AI data research firm OpSci, created a fine-tuned version of Stable Diffusion he called “Mickey-1928” based on 96 public domain stills of Mickey Mouse from the 1928 films Steamboat Willie, Plane Crazy, and Gallopin’ Gaucho. “It’s a political stance,” he says.
Langlais firmly believes that people should be paying closer attention to where AI tools get their training data; to that end, he’s working on several separate projects focused on creating models that train exclusively on public domain works. He whipped up Mickey-1928 in a matter of hours, because it’s essentially a filter laid atop Stable Diffusion, not a model built on a totally custom data set. (That would be a far more labor-intensive project.)