The Watch That Made Everything Now

The time of our lives begins April 4, 1972. That’s the day Hamilton released the first digital watch: the Pulsar Time Computer. Originally designed for a Stanley Kubrick film, the prototype was displayed in 1970 on The Tonight Show with Johnny Carson, although the late-night host was not impressed and mocked the expensive device. He couldn’t imagine how much times were about to change.

This first digital watch may seem unimpressive by current standards, but its features were novel when it debuted. Its blank screen revealed the time at the push of a button, while a second push displayed the seconds; its sensor adjusted to ambient light, an ordinary feature now but remarkable then; its LED screen was at the edge of innovation; and while quartz technology was still being perfected, this watch sold it. With every purchase of the Pulsar, people strapped on a new way of seeing and experiencing the world. It presented a space-age future. It offered private, on-demand time. And in that instant, all became now.

The Pulsar emerged in the era of the space race and a future imagined as sleek, glossy, smooth—frictionless. Moon landings, new home appliances eliminating the drudgery of labor, faster transportation contraptions, the rapid growth of science fiction with aliens and cyborgs: all spoke to an urge to inhabit an existence beyond our planetary limitations. Speed and space require frictionless designs, and the Pulsar embodied that design aesthetic.

Even the name Pulsar was meant to evoke a space-age future. Hamilton’s design was an extension of the company’s digital clock and wristwatch prototypes for Kubrick’s 2001: A Space Odyssey, though only the clock made it into the 1968 film. That the device originated in a movie about artificial intelligence and evolution only reinforced the imperative to make time itself look different.

An ad for the watch from 1973 boasted that it could survive shocks up to 2,500 times the force of gravity. Humans can’t withstand anything past about 90 g, but sometimes what is on offer is utterly irrelevant. New designs frequently present superfluous options in order to make users feel their lives require the extraordinary gadgetry of a superhuman. The term “early adopters” describes a population that identifies with exploring and using new technological designs even if the objects offer little beyond a redesign of the interface.

By the second half of the 20th century, the notion that aesthetics operated as “an engine for consumer demand”—that engagement with design was a value in itself, separate from any novel application the underlying technology might offer—had already been recognized within design communities. The Pulsar’s lack of new functionality was irrelevant because the revolution it wrought occurred through the digital interface, which allowed people to imagine themselves peering into the future.

The watch visualized the future for the “everyman”: notably, it was initially designed for and marketed to men. Though James Bond’s watch would soon shift back to the Rolex, famed British actor Roger Moore can be seen wearing the Pulsar in Live and Let Die (1973). Elvis Presley, Sammy Davis Jr., Yul Brynner, and political celebrities like the Shah of Iran all wore one in assorted photo opportunities. Whether their sporting the Pulsar was an early example of product placement or simply preference, the watch was seen on men who epitomized a traditional kind of masculine power and success. In 1974, a Washington Post photographer captured President Ford wearing one while testifying before Congress about Nixon’s pardon. Keith Richards and Jack Nicholson, who both embodied a new kind of machismo, were also spotted wearing a slightly less expensive version (Cureau).

My Music App Knows Me Way Too Well. Am I Stuck in a Groove?

One of the streaming music apps I use creates customized playlists for me, and it’s scarily good at predicting songs I’m going to like. Does that make me boring? 

—Playing It Safe


Dear Playing It Safe, 

I once read somewhere that if you want to slowly drive someone mad, resolve, for a week or so, to occasionally mutter, “I knew you were going to say that” after they make some casual remark. The logic, as far as I can tell, is that by convincing a person that their thoughts are entirely predictable, you steadily erode their sense of agency until they can no longer conceive of themselves as an autonomous being. I have no idea whether this actually works—I’ve never been sadistic enough to try it. But if its premise is correct, we all must be slowly losing our minds. How many times a day are we reminded that our actions can be precisely anticipated? Predictive text successfully guesses how we’re going to respond to emails. Amazon suggests the very book that we’ve been meaning to read. It’s rare these days to finish typing a Google query before autocomplete finishes our thought, a reminder that our medical anxieties, our creative projects, and our relationship dilemmas are utterly unoriginal.

For those of us raised in the crucible of late-capitalist individualism, we who believe our souls to be as unique as our thumbprints and as unduplicable as a snowflake, the idea that our interests fall into easily discernible patterns is deeply, perhaps even existentially, unsettling. In fact, Playing It Safe, I’m willing to bet that your real anxiety is not that you’re boring but that you’re not truly free. If your taste can be so easily inferred from your listening history and the data streams of “users like you” (to borrow the patronizing argot of prediction engines), are you actually making a choice? Is it possible that your ineffable and seemingly spontaneous delight at hearing that Radiohead song you loved in college is merely the inflexible mathematical endpoint of the vector of probabilities that have determined your personality since birth?
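To make the “users like you” logic concrete, here is a minimal sketch of user-based collaborative filtering, one common shape such inference takes. The ratings matrix, the listeners, and every number in it are invented for illustration; a real engine folds in vastly more signals.

```python
import numpy as np

# Toy ratings matrix: rows are listeners, columns are songs.
# A 0 means the listener hasn't heard that song yet.
ratings = np.array([
    [5, 4, 0, 1],   # you (song 2 is unheard)
    [5, 5, 2, 1],   # a "user like you"
    [1, 0, 5, 4],   # a very different listener
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

you, others = ratings[0], ratings[1:]
sims = np.array([cosine(you, other) for other in others])

# Predict your rating for song 2 as a similarity-weighted average
# of what the other listeners gave it.
song = 2
predicted = sims @ others[:, song] / sims.sum()
print(f"predicted rating for song {song}: {predicted:.2f}")
```

The prediction leans almost entirely on the listener whose history overlaps with yours, which is the whole trick: no model of your soul, just an average over your statistical neighbors.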

While this anxiety may feel new, it stems from a much older problem about prediction and personal freedom, one that first emerged in response to the belief in divine foreknowledge. If God can see the future with perfect accuracy, then aren’t human actions necessarily predetermined? How could we act otherwise? A scientific version of the problem was posed by the 19th-century French physicist Pierre-Simon Laplace, who imagined a cosmic superintelligence that knew every detail about the universe, down to the exact position of all its atoms. If this entity (now known as Laplace’s demon) understood everything about the present world and possessed an intellect “vast enough to submit the data to analysis,” it could perfectly predict the future, revealing that all events, including our own actions, belong to a long domino chain of cause-and-effect that extends back to the birth of the universe.

The algorithm that predicts your musical preferences is less sophisticated than the cosmic intellect Laplace had in mind. But it still reveals, to a lesser degree, the extent to which your actions are constrained by your past choices and certain generalized probabilities of human behavior. And it’s not difficult to extrapolate what predictive technologies might expose about our sense of agency once they become even better at anticipating our actions and emotional states—perhaps even surpassing our own self-knowledge. Will we accept their recommendations for whom to marry, or whom to vote for, just as we now do their suggestions for what to watch and what to read? Will police departments arrest likely criminals before they commit the crime, as they do in Minority Report, tipped off by the oracular predictions of digital precogs? Several years ago, Amazon filed a patent for “anticipatory shipping,” banking on the hope the company would soon be able to correctly guess our orders (and start preparing them for dispatch) before we made the purchase.

If the revelation of your own dullness is merely the first stirrings of this new reality, how should you respond? One option would be to rebel and try to prove its assumptions false. Act out of character. When you have an inclination to do something, do the precise opposite. Listen to music you hate. Make choices that will reroute your data stream. This is the solution arrived at by Dostoevsky’s narrator in Notes From the Underground, who takes up irrational and self-damaging actions simply to prove that he is not enslaved to the inflexible calculations of rational self-interest. The novel was written during the heyday of rational egoism, when certain utopian thinkers believed that human behavior could be reduced to a series of logical rules so as to maximize well-being and create the ideal society. The narrator insists that most people would find such a world intolerable because it would destroy their belief in individual freedom. We value our autonomy over all the comforts and the advantages that scientific determinism offers—so much so, he argues, that we would seek out absurdity or even self-harm in order to prove that we are free. If science ever definitively proves that humans act according to these fatalistic rules, we would destroy ourselves “for the sole purpose of sending all these logarithms to the devil and living once more according to our own stupid will!”

It’s a rousing passage, though as predictions go it’s not especially prescient. Few of us today appear to be tormented by the comforts of predictive analytics. In fact, the conveniences they offer are deemed so desirable that we often collude with them. On Spotify, we “like” the songs we enjoy, contributing one more shard to the emerging mosaic of our digital personhood. On TikTok, we quickly scroll past posts that don’t reflect our dominant interests, lest the all-seeing algorithm mistake passing curiosity for genuine interest. Perhaps you have paused, once or twice, before watching a Netflix film that diverges from your usual taste, or hesitated before Googling a religious question, lest it take you for a true believer and skew your future search results. If you want to optimize your recommendations, the best thing to do is to act as much like “yourself” as possible, to remain resolutely and eternally in character—which is to say, to act in a way that is entirely contrary to the real complexities of human nature.

With that said, I don’t advise embracing the irrational or acting against your own interests. It will not make you happy, nor will it prove a point. Randomness is a poor substitute for genuine freedom. Instead, perhaps you should reconsider the unstated premise of your query, which is that your identity is defined by your consumer choices. Your fear that you’ve become boring might have less to do with your supposedly vanilla taste than the fact that these platforms have conditioned us to see our souls through the lens of formulaic categories that are designed to be legible to advertisers. It’s all too easy to mistake our character for the bullet points that grace our bios: our relationship status, our professional affiliations, the posts and memes and threads that we’ve liked, the purchases we’ve made, and the playlists we’ve built.

Looking for Alien Life? Seek Out Alien Tech

Back in 1950, Enrico Fermi posed the question now known as the Fermi Paradox: Given the countless galaxies, stars, and planets out there, the odds are that life exists elsewhere—so why haven’t we found it? The size of the universe is only one possible answer. Maybe humans have already encountered extraterrestrial (ET) life but didn’t recognize it. Maybe it doesn’t want to be found. Maybe it’s monitoring Earth without us knowing. Maybe it doesn’t find us interesting.

And there’s another reason: The search for advanced aliens is constrained by human assumptions, including the idea that advanced ET would be “alive.”

Scientists who engage in the search for extraterrestrial life look for what life on Earth needs—carbon and water—as well as for biosignatures: gases and organic matter, such as methane, that living things exhale, excrete, or secrete. Searching for biosignatures is arduous for many reasons, and biosignatures don’t necessarily indicate the presence of life, as they could come from geological or other natural processes (for instance, a whiff of methane detected on Mars has tantalized scientists for years, but they have yet to reach a consensus on its origin).

The assumption that biological life on other planets would look or function like biological life on Earth is flawed and constrained by anthropocentrism. The same is true of assuming that advanced intelligent life on other planets would be biological just because humans are. Maybe we haven’t found aliens because advanced alien species have transcended biology altogether.

In the grand scheme, Earth is a relatively young planet. If we assume that biological life of some sort emerged on other planets, then we can also make some educated assumptions about how that life evolved—namely, that other species also invented technology, such as tools, transport vehicles, factories, and computers. Maybe those species invented artificial intelligence (AI) or virtual worlds. Advanced ET may have reached the “technological singularity,” the point at which AI exceeds human or biological intelligence. Maybe they experienced what many scientists believe is in store for Homo sapiens—the merging of biological beings and machines. Maybe they’ve become nanosats. Maybe they’re data or are part of a digital network that functions like a collective consciousness. In fact, the last variable of the Drake Equation—a framework for estimating the likelihood of advanced, intelligent species existing in the cosmos—posits that technologically advanced civilizations broadcast detectable signals for a finite amount of time, suggesting they eventually go extinct or become post-biological.
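For reference, the Drake Equation in its standard form reads

\[
N = R_{\ast} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L,
\]

where N is the number of civilizations in our galaxy whose signals we could detect, R* is the rate of star formation, the middle terms cover the fraction of stars with planets, the number of habitable planets per such system, and the fractions of those planets that go on to develop life, intelligence, and signal-emitting technology, and L is that last variable: how long a civilization keeps broadcasting before it goes quiet.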

The idea that ET intelligence might exist as “super” AI has been proposed by scientists like Susan Schneider, founding director of the Center for the Future Mind; SETI senior astronomer Seth Shostak; and others. In an op-ed for The Guardian, Shostak posits that aliens intelligent enough to seek out Earth “will probably have gone beyond biological smarts and, indeed, beyond biology itself.” Caleb Scharf, director of Columbia’s Astrobiology Program, argues that “Just as someone living on the steppe in 12th-century Mongolia would find a self-driving car both magical and meaningless, we might be quite incapable of registering or interpreting the presence of billion-year-old machine savants.”

The potential of AI to become super AI and vastly eclipse the limits of human intelligence has long concerned scientists like Nick Bostrom and entrepreneurs like Elon Musk, and so the possible existence of super AI aliens raises important considerations about the risks of searching for—and finding—them. It also provokes questions about the potential dangers of them finding us. Dark Forest Theory teases out these threats, suggesting that the universe is akin to a dark forest full of predators and prey and that stealth is the best, and perhaps only, survival strategy.

Curb Your Food Tech Enthusiasm

There are two problems with this approach. First, the promises of technologies meant to reduce emissions from agriculture often far exceed what they can actually deliver. For instance, as Matthew Hayek and I wrote in WIRED earlier this year, widely publicized claims that feeding cows algae feed additives could cut their emissions by 80 percent work out to be closer to 10 percent once you take into account when and under what conditions you can actually change a cow’s diet. Biodigesters, meanwhile, are very expensive and address only the 10 or so percent of agricultural methane emissions that come from manure. And whether either of these can be massively scaled is an open question. With these realities in mind, the modest 18 percent decrease in emissions from currently available technology outlined by the Breakthrough Institute’s report looks dubious. But even if its more ambitious goal of developing new technology that reduces beef’s methane by 48 percent were met, the resulting emissions would still be higher than those of even the worst-emitting pork and chicken, well over twice those of plant-based meats, and four times those of tofu. The clean cow, in other words, is a lame duck.
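The gap between the headline figure and the delivered one is mostly a matter of scope: the additive only suppresses methane while it is actually being fed, which for pastured cattle is a small slice of their lives. A back-of-the-envelope sketch, with the affected-emissions share chosen purely for illustration (it is an assumption, not a figure from the Hayek analysis):

```python
# Why an 80 percent headline cut can shrink to roughly 10 percent overall.
# 'affected_share' is an illustrative assumption: the fraction of a cow's
# lifetime enteric emissions occurring while its diet can actually include
# the additive (e.g., during grain finishing in a feedlot).
headline_cut = 0.80
affected_share = 0.125  # assumed for illustration only

effective_cut = headline_cut * affected_share
print(f"effective lifetime reduction: {effective_cut:.0%}")  # ~10%
```

Scale the same multiplication by real-world deployment rates across herds and the delivered number shrinks further still.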

The second issue with this techno-optimistic approach is that even if these technological fixes are as effective as promised, they will perpetuate a food production system that will continue to be harmful to animals, workers, and the planet. There are scores of other impacts of beef production, including overgrazing of land, deforestation, harmful runoff and odors, animal welfare issues, and the treatment of workers in slaughterhouses. What good is investing in technologies to reduce emissions if their sources are industries that should be phased out rather than saved? Indeed, an exclusive focus on emissions reductions in food systems can lead to potentially far worse outcomes, like replacing high-emitting beef with lower-emitting chicken. Chicken production emits relatively little, but it does so at the cost of cramming animals into factory farms, where they suffer horribly, are more prone to disease outbreaks, and can be pumped full of antibiotics, contributing to the global crisis of antibiotic resistance.

Then there’s the technology-driven “solution” of alternative proteins such as plant-based and cellular meat. On the one hand, these products actually aim to create a more sustainable way of producing meat, both lowering emissions and removing many of the other harms of conventional meat production, including factory farms and slaughterhouses. Investing in the development of this technology might help usher in a far more ethical food system, one better for animals, consumers, and the planet. What the clean cow is to clean coal, clean meat is to renewables like solar.

But alternative protein still operates within the confines of existing, highly problematic systems. To realize its full potential in creating a better food system, we need to look beyond its advantages over conventional meat. The technology itself does little to address other major structural and ethical issues within the food system, including corporate concentration and the treatment of workers. As alternative protein companies break into the mainstream, many are being bought up by large incumbent food companies, including those they are ostensibly trying to disrupt. Most recently, the Brazilian cattle behemoth JBS invested $100 million in a Spanish cellular agriculture startup. Given JBS’ abysmal environmental record, this is hardly good news unless the company actively reduces its meat production to focus on alternative proteins.

Why Is It So Hard to Believe In Other People’s Pain?

Hostile suspicion of others, encompassing everything from the position of their mask to their stance on mandates, has marked this wretched pandemic from the start. Now, in perhaps the unkindest cut, suspicion is aimed at people with long Covid—the symptoms that may afflict as many as a third of those who survive a first hit of the virus. One theory is that Covid infection riles up the body’s defenses and can leave the immune system in a frenzy, causing shortness of breath, extreme fatigue, and brain fog. In The Invisible Kingdom, her forthcoming book about chronic illness, Meghan O’Rourke reports that doctors often reject these symptoms as meaningless. When medical tests for these patients come up negative, “Western medicine wants to say, ‘You’re fine,’ ” says Dayna McCarthy, a physician focused on long Covid.

This is not surprising. Skepticism about chronic conditions, including post-polio syndrome and fibromyalgia, is exceedingly common—and it nearly always alienates patients, deepens their suffering, and impedes treatment. Until researchers can find the biomarkers that might certify long Covid as a “real” disease, the best clinicians can do is listen to testimony and treat symptoms. But the project of addressing long Covid might also be served by a more rigorous epistemology of pain—that is, a theory of how we come to believe or doubt the suffering of other people.

In her 1985 book The Body in Pain: The Making and Unmaking of the World, Elaine Scarry makes a profound assertion: “To have great pain is to have certainty; to hear about pain is to have doubt.” Because the claim illuminates both pain and knowledge, and because women rarely attach their names to philosophical assertions, I’d like, belatedly, to dub this elegant proposition “Scarry’s axiom.”

The axiom came to mind this fall for two reasons: I was trying to support a friend with long Covid, and I participated in a forum about how the media contends with racism. It was the second experience that illuminated the first and suggested Scarry’s axiom as a way to understand the acute distrust that now pervades our pluralistic country.

At the forum, a socialist and a libertarian each lodged complaints. The socialist charged that the media’s focus on racism leaves out a more significant battle—the never-ending class struggle. The libertarian argued that the media’s focus on race fails to understand the individual, with his or her pressing fear of death and aspirations to art, money, and transcendence. The libertarian then took shots at undergraduates who put emotion before reason and are forever getting “offended” and needing “safety,” postures he said were incompatible with education.

This familiar debate ground on. As far as I can tell, no one on any side—and I disagreed with both the socialist and the libertarian—ever budged. But perhaps that’s because we kept missing a truth in front of our faces: that we were all dismissing as somehow less than real the pain of others while elevating our own, and that of our confreres, as hard fact.