There’s an Alternative to the Infinite Scroll

Sometime in the summer of 2020, I noticed an occasional, searing pain shooting up my right forearm. It soon became clear this was a byproduct of a gesture that had become as commonplace as breathing or blinking that season, if not long before: scrolling. This was how I spent most of the day, it seemed. Smartphone welded to my palm, thumb compulsively brushing upward, extracting content out of the empty space beneath my phone charger port, pulling an endless succession of rabbits out of hats, feverishly yanking the lever on the largest and most addictive slot machine in the world. The acupuncturist I saw to help repair my inflamed tendon implored me to stop, so I did, for a while—I just awkwardly used my left index finger instead.

Of course, it wasn’t always this way. While a desktop computer has its own hazardous ergonomics, the experience of being online was once far more “embodied,” both literally and conceptually. Interfacing with a screen involved arms, hands, and fingers all in motion on clacking keyboards and roving mice. Accordingly, the first dominant metaphors for navigating digital space, especially the nascent World Wide Web, were athletic and action-oriented: wandering, trekking, and most of all, surfing. In the 1980s and ’90s, the virtual landscape of “cyberspace” was seen as just that, a multidimensional “frontier” to be traversed in any direction one pleased (with all the troubling colonial subtext that implies), echoed in the name of browsers like Netscape Navigator and Internet Explorer. As media scholar Lev Manovich argues in his 2002 book The Language of New Media, by the early 1990s, computer media had rendered time “a flat image or a landscape, something to look at or navigate through.”

But when the screens became stowaways in our purses and pockets, this predominant metaphor, however problematic, shifted. Like the perspectival evolution that occurred when frescoes affixed to walls gave way to portable paintings, shrinking the screen down to the size of a smartphone altered the content coming through it and our sense of free movement within it. No longer chairbound behind a desktop, we were liberated to move our actual bodies through the world. Meanwhile, that sense of “surfing” virtual space got constrained to just our fingertips, repeatedly tapping a tiny rectangle to retrieve chunks of content.

A user could “scroll” through lines of data using keyboard commands on the first 1960s computer terminals, and the word appeared as a verb as early as 1971, in a computer guidebook. The act became more sophisticated with the introduction of the scroll-wheel mouse, the trackpad, and the touchscreen, all of which could more fluidly scroll vertically or horizontally across large canvases of content that stretched beyond the boundaries of a given screen. Ever since the arrival of the smartphone, “scroll” has been the default verb for the activity of refreshing the content that flows over our screens. The dawn of the infinite scroll (supposedly invented in 2006 by designer Aza Raskin, who has now made a second career out of his regret for it) and the implementation of algorithmic instead of strictly chronological social media feeds (which Facebook did in 2011, with Twitter and Instagram following in 2016) fully transformed the experience of scrolling through a screen. Now, it is less like surfing and more like being strapped in place for an exposure-therapy experiment, eyes held open for the deluge.

The infinite scroll is a key element of the infrastructure of our digital lives, enabled by and reinforcing the corporate algorithms of social media apps and the entire profit-driven online attention economy. The rise of the term “doomscrolling” underscores the practice’s darker, dopamine-driven extremes, but even lamenting the addictive and extractive qualities of this cursed UX has become cliché. Have we not by now scrolled across dozens of op-eds about how we can’t stop scrolling?

The first form of portable, editable media was, of course, the scroll. Originating in ancient Egypt, scrolls were made from papyrus (and later, silk or parchment) rolled up with various types of binding. The Roman codex eventually began to supplant the scroll in Europe, but Asia was a different story. Evolving in countless ways against the backdrop of political, philosophical, and material change in China, Japan, and Korea, scrolls persisted in art and literature for centuries and continue to be used as a medium by fine artists today.

ChatGPT Isn’t Coming for Your Coding Job

Software engineers have joined the ranks of copy editors, translators, and others who fear that they’re about to be replaced by generative AI. But it might be surprising to learn that coders have been under threat before. New technologies have long promised to “disrupt” engineering, and these innovations have always failed to get rid of the need for human software developers. If anything, they often made these workers that much more indispensable.

To understand where handwringing about the end of programmers comes from—and why it’s overblown—we need to look back at the evolution of coding and computing. Software was an afterthought for many early computing pioneers, who considered hardware and systems architecture the true intellectual pursuits within the field. To the computer scientist John Backus, for instance, calling coders “programmers” or “engineers” was akin to relabeling janitors “custodians,” an attempt at pretending that their menial work was more important than it was. What’s more, many early programmers were women, and sexist colleagues often saw their work as secretarial. But while programmers might have held a lowly position in the eyes of somebody like Backus, they were also indispensable—they saved people like him from having to bother with the routine business of programming, debugging, and testing.

Even though they performed a vital—if underappreciated—role, software engineers often fit poorly into company hierarchies. In the early days of computers, they were frequently self-taught and worked on programs that they alone had devised, which meant that they didn’t have a clear place within preexisting departments and that managing them could be complicated. As a result, many modern features of software development were created to simplify, and even eliminate, interactions with coders. FORTRAN was supposed to allow scientists and others to write programs without any support from a programmer. COBOL’s English syntax was intended to be so simple that managers could bypass developers entirely. Waterfall-based development was invented to standardize and make routine the development of new software. Object-oriented programming was supposed to be so simple that eventually all computer users could do their own software engineering.

In some cases, programmers were resistant to these changes, fearing that programs like compilers might drive them out of work. Ultimately, though, their concerns were unfounded. FORTRAN and COBOL, for instance, both proved to be durable, long-lived languages, but they didn’t replace computer programmers. If anything, these innovations introduced new complexity into the world of computing that created even greater demand for coders. Other changes like Waterfall made things worse, creating more complicated bureaucratic processes that made it difficult to deliver large features. At a conference sponsored by NATO in 1968, organizers declared that there was a “crisis” in software engineering. There were too few people to do the work, and large projects kept grinding to a halt or experiencing delays.

Set against this history, claims that ChatGPT will replace all software engineers seem almost assuredly misplaced. Firing engineers and throwing AI at blocked feature development would probably result in disaster, followed by the rehiring of those engineers in short order. More reasonable assessments suggest that large language models (LLMs) can take over some of the duller work of engineering. They can offer autocomplete suggestions or methods to sort data, if they’re prompted correctly. As an engineer, I can imagine using an LLM to “rubber duck” a problem, giving it prompts for potential solutions that I can review. It wouldn’t replace conferring with another engineer, because LLMs still don’t understand the actual requirements of a feature or the interconnections within a code base, but it would speed up those conversations by getting rid of the busy work.
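To make that duller work concrete, here is a minimal sketch of the kind of routine helper an engineer might ask an LLM to draft and then review; the function name and the sample records are hypothetical, not drawn from any real code base.

```python
from datetime import datetime

def sort_records_by_date(records, key="created_at", newest_first=True):
    """Sort a list of dicts by an ISO 8601 timestamp stored under `key`.
    Records missing the field are placed at the end instead of raising,
    the sort of edge case a human reviewer would still want to confirm."""
    dated = [r for r in records if r.get(key) is not None]
    undated = [r for r in records if r.get(key) is None]
    dated.sort(key=lambda r: datetime.fromisoformat(r[key]), reverse=newest_first)
    return dated + undated

# Hypothetical usage:
rows = [
    {"id": 1, "created_at": "2023-03-01T12:00:00"},
    {"id": 2, "created_at": "2023-01-15T08:30:00"},
    {"id": 3},  # no timestamp
]
print(sort_records_by_date(rows))  # id 1, then id 2, then id 3
```

Nothing here is architecture; it is exactly the sort of boilerplate a developer would rather check than type, which is why it does not threaten the parts of the job that matter.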

ChatGPT could still upend the tech labor market through expectations of greater productivity. If it eliminates some of the more routine tasks of development (and puts Stack Overflow out of business), managers may be able to make more demands of the engineers who work for them. But computing history has already demonstrated that attempts to reduce the presence of developers or streamline their role only end up adding complexity to the work and making those workers even more necessary. If anything, ChatGPT stands to eliminate the duller work of coding much the same way that compilers ended the drudgery of having to work in binary, which would make it easier for developers to focus more on building out the actual architecture of their creations.

The computer scientist Edsger Dijkstra once observed, “As long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.” We’ve introduced more and more complexity to computers in the hopes of making them so simple that they don’t need to be programmed at all. Unsurprisingly, throwing complexity at complexity has only made it worse, and we’re no closer to letting managers cut out the software engineers. If LLMs can match the promises of their creators, we may very well cause that complexity to accelerate further.


Welcome to Halal Hinge

Swoosh. Ping. Slow and studied scrolling. Typing with the index finger. My mother is on the prowl for a man. Not for herself, of course, but for any of her three daughters, all of whom have the great misfortune of being single in their late twenties and early thirties. “I’m depressed,” she tells us when the subject of suitors comes up. She, a Bangladeshi woman who got married at 19 and had three kids by 27, can’t believe that none of us have procured a husband or given her any grandchildren. So, she decided to take matters into her own hands. She is now part of several WhatsApp groups where hundreds of fussy parents are on the hunt.

Instead of debating the merits of The Office or pineapple on pizza as one does on Hinge, mums and aunties in these groups discuss deal-breakers such as level of piety, education, willingness to relocate—and the ever-controversial question of living with in-laws. They trade CVs, often called “biodata” in South Asian communities. When a parent is happy with what they’ve heard, they may forward the information to their children. The whole process is more bureaucratic than you might think. Each biodata comes with a unique code, and there are subgroups depending on your preferences, including those for people living in London or seeking older suitors or divorcées. If you like the sound of a suitor, you can find their preferred contact details (frequently a different number than the one in the WhatsApp chat) and message them privately.

Though these groups are a totally different experience than an app like Bumble or Tinder, they do have a few things in common with other options for seeking love online. Notably, this halal Hinge has introduced my mum and other parents like her to all the pitfalls of modern dating, including but not limited to ghosting, gaslighting, and trolling.

My mum used to think of the internet as a special place for YouTube videos introducing new recipes, Islamic sermons that tug on her heartstrings, and sincere Facebook posts from strangers. So when she was added to her first matchmaking WhatsApp group during the pandemic, she assumed it would be filled with hope and potential. After all, it was an opportunity to network with other parents yearning for decent partners for their children.

But what she didn’t expect was to get a crash course in all the strange conundrums and bad manners that dating app users encounter. “Why do people leave you on ‘read’?” she asks us. “Is it okay to double message a person?” “How long is long enough for a follow-up?” To the modern dater, these questions are par for the course. To my mum and other older parents just getting involved in these groups, these behaviors are a shock.

During her time in these WhatsApp groups, my mum has been breadcrumbed by fellow parents who offer one-line tidbits before eventually ghosting altogether. She has carried on conversations only to learn that the other person’s child has found someone more suitable to talk to. She has even experienced the most soul-crushing dating peril of all: getting invested in people who aren’t even single. One time she sent me a biodata of a man who seemed promising, only to discover I already knew him—because I knew his long-term partner. Perhaps some of the children of matchmaking parents find it hard to come clean about their dating status (Islamically at least, you’re not supposed to date unless it’s for the purpose of marriage). But upon discovering he was taken, she was horrified. “How can people lie like this?” she protested, blissfully unaware of the ubiquity of not-so-single folks on dating apps. Still, she marched on.

Will the Real David Sosa Please Stand Up?

In the coming weeks, the US Supreme Court will decide whether to hear a pending case that includes an amicus brief filed by David Sosas. Plural. The David Sosas who signed the brief include “David Sosa, age 32, from Iredell County, North Carolina; David Sosa, age 51, from Mecklenburg, North Carolina; David Sosa, age 32, from Los Angeles, California; and David Sosa, age 50, also from Los Angeles, California.” They are among several thousand David Sosas living in the US.

The problem is that Martin County, Florida, law enforcement can’t seem to tell these David Sosas apart: officers arrested and wrongfully detained one David Sosa on an open warrant belonging to a different David Sosa. Twice.

The David Sosa named in the case was stopped in 2014 by Martin County police for a traffic violation. The officer ran his name through an electronic warrant database and uncovered a hit for an open 1992 warrant in Harris County, Texas, related to a crack cocaine conviction. David Sosa pointed out that the David Sosa in the database had a different date of birth, height, weight, and tattoo. He was arrested anyway, but three hours later was released after fingerprinting revealed the mismatch.

In 2018 it happened again, but this time (the same) David Sosa was prepared. He explained to the officer that a warrant for a person with the same name had caused a wrongful arrest years earlier in the same county. He was arrested again, and this time held in jail for three days before the mistake was acknowledged. David Sosa sued the police officers for Constitutional violations, including overdetention and false arrest, and he appealed after his case was dismissed.

After a series of losses, David Sosa is bringing his case to the Supreme Court. Why hadn’t the officers updated their records after the 2014 mistake? Was David Sosa at constant risk of being thrown in jail because he shared a name with a wanted drug trafficker residing in Texas in the early 1990s? And in the age of enormous new capabilities for managing and sharing data, why are such mistakes even occurring?

Warrant problems have probably existed since police began maintaining warrants. In 1967 the FBI launched the National Crime Information Center (NCIC) to share warrant information across fragmented systems maintained independently by the thousands of police departments in the US. Fifty years later, the system was handling 14 million transactions a day. But as early as the 1980s, analysts warned of errors in the data that could cause significant due process issues. One study noted that, even then, expanding access to other jurisdictions’ warrants would do little to improve data quality, as “computerized information [is] not necessarily more accurate than manual file systems, and because computer databases increase accessibility, the effect of inaccuracies is magnified.”

The issue has reached the Supreme Court before. The appellate court in David Sosa’s latest ruling based its decision on a 1979 Supreme Court case in which a man used his brother’s name during an arrest, resulting in a warrant being issued against the wrong man. It took three days of jail time before the mistake was sorted out, creating a rather arbitrary 72-hour benchmark in some jurisdictions for how long a wrongful detention can last before it amounts to a Constitutional violation.

Marie Kondo and the Manhattan Project

Stan Ulam knew he was moving to New Mexico, but he didn’t know exactly why. Ulam was a Polish-born mathematician—and later, physicist—who first came to the United States in the late 1930s. In 1943, after Ulam had obtained American citizenship and a job at the University of Wisconsin, his colleague John von Neumann invited him to work on a secret project. All von Neumann could reveal about the project was that it would involve relocating, along with his family, to New Mexico.

So Ulam went to the library. He checked out a book on New Mexico. Instead of skipping to the section about the state’s history or culture or climate, he turned to the opening flap, where the names of the book’s previous borrowers were listed.

This list was a curious one. It happened to include the names of fellow physicists, many of whom Ulam knew, and many of whom had mysteriously disappeared from their university posts in prior months. Ulam then cross-referenced the scientists’ names with their fields of specialty and was able to make an educated guess at the nature of the secret project.

Indeed, with World War II underway, Ulam had been invited to Los Alamos, New Mexico, to work on what would come to be known as the Manhattan Project.

The atmosphere in Los Alamos was one of collaborative fraternity. There was, indeed, something egalitarian about this whole period, at least on the surface. The Manhattan Project would come to be seen as a triumph of American ingenuity and scientific collaboration, even as it left pockmarks on the face of the earth. It destroyed cities, ended a war, and introduced the new prospect of nuclear destruction. And then, postwar America saw one of its highest rates of economic growth, with relatively low inequality and inflation. Marriage rates were high. World war was over, or at least on hold. It was a time of economic stability.

Ulam’s wife, Françoise said: “In retrospect I think that we were all a little light-headed from the altitude.”

It was in this postwar aftermath that Ulam would make his most important contribution to the field of optimization. He and his family decamped from Los Alamos to the University of Southern California, where in 1946 he fell ill with encephalitis. It was a difficult illness, and while Ulam recuperated in bed, he kept busy with a deck of cards and game after game of solitaire. It was in these games that an idea about optimization was born.

As he laid out cards, Ulam wondered: What are my odds of winning this round? He thought about how to calculate the odds. If he played enough times and kept track of the cards in each round, he’d have data to describe his chances of winning. He could calculate, for example, which beginning sequences were most likely to lead to a win. The more games he played, the better this data would become. And instead of actually playing a large number of games, he could run a simulation that would eventually come to approximate the distribution of all possible outcomes.
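A minimal sketch of that idea in modern terms, with a toy game standing in for solitaire (a shuffle counts as a “win” if no card lands back in its original position): play many randomized rounds and let the observed win rate approximate the true probability. The game and the numbers here are illustrative assumptions, not Ulam’s actual calculation.

```python
import random

def play_toy_round(rng, deck_size=20):
    """Stand-in for one game of solitaire: shuffle a small deck and
    count it as a win if no card ends up in its original position."""
    deck = list(range(deck_size))
    rng.shuffle(deck)
    return all(card != position for position, card in enumerate(deck))

def estimate_win_probability(num_games=100_000, seed=0):
    """Monte Carlo estimate: the fraction of simulated rounds that are
    wins approaches the true probability as num_games grows."""
    rng = random.Random(seed)
    wins = sum(play_toy_round(rng) for _ in range(num_games))
    return wins / num_games

# For this toy game the exact answer is known (about 1/e, roughly 0.368),
# so the estimate can be checked against it.
print(estimate_win_probability())
```

The same loop, with the toy game swapped out for a model of particle diffusion or any other hard-to-enumerate process, is what Ulam and his colleagues would turn into a general tool.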

When Ulam recovered from his illness and returned to work, he began to think about applications, beyond games of solitaire, for this method of random sampling. A number of questions in physics could benefit from this style of calculation, he surmised, from the diffusion of particles to problems in cryptography. A Los Alamos colleague with whom he still corresponded, Nick Metropolis, had often heard Ulam refer to an uncle with a gambling problem. Because Ulam had conceived of the idea while playing cards, Metropolis settled on a code name, echoing the uncle’s frequent adieus as he made for the casino: “I’m going to Monte Carlo.” The method became known as the Monte Carlo method.