The Shady Business of Selling Futures

Future predictions always proliferate at the end of the year, but in 2021 something different is joining the usual speculations about gadgets and lifestyles: existential introspection. Amid Covid-19 variants and surging nationalisms, global economic meltdown and climate crisis, evolving emergencies are heightening the feeling that nearly everything is up for overhaul—from food to queerness, marriage to gaming, and aging to music. And with endemic uncertainty as the soul of the age, the future is as trendy as it’s ever been, which promises to exacerbate uncertainty.

“The future” itself has become a catchall catchphrase. Slack has branded itself as the future of work and launched its own Future Forum. Everyone from Facebook (now Meta) to Atari to the city of Seoul has declared the metaverse the imminent future of our reality. Universities are convening “futures committees.” Governments are committing to sustainable futures. This “future” is less a specific moment in time than an act of promotion. Invoking it is such a powerful signifier of progress and optimism that it can burnish questionable or staid ideas and initiatives and motivate people even in the face of the most dismal realities. “What the future offers,” wrote German historian Reinhart Koselleck, “is compensation for the misery of the present.” But if we buy into these visions too readily, the rosy futures being sold to us threaten to prolong misery. Riding out this fad of futurism requires understanding how we got here, who profits from it, and how to tell serious futures from schlock.

Humans have looked beyond their present for most of human history, whether expressed as prayers for rain or for salvation. But using prediction to strategize the future is an idea with only a brief past—widespread adoption in the West dates back only to the 1800s. In her book Looking Forward: Prediction and Uncertainty in Modern America, Jamie Pietruska explains how, amid the late 19th century’s scientific advancements and rising secularism, “prediction became a ubiquitous scientific, economic, and cultural practice,” manifesting in things like weather forecasting, fortune-telling, and prophecies about how business would grow or contract. These shifts coincided with the rise of modernity, the onslaught of social and technological changes that continues to steep developed societies in newness, progress, and creative ruination. As the Marxist philosopher Marshall Berman wrote, “To be modern is to find ourselves in an environment that promises us adventure, power, joy, growth, transformation of ourselves and the world—and, at the same time, that threatens to destroy everything we have, everything we know, everything we are.” He wrote this in 1982 and was describing the 19th and 20th centuries, but it applies even more aptly to a future Berman wouldn’t see, our current moment. Nonstop upheaval can be exciting, bewildering, and scary all at once. It triggers a desire to understand and control the chaos. The answer to future shock is future forecasting.

But not everyone experiences or imagines “the future” the same way. The linear march toward a future filled with progress is also a historical and cultural construct, one that has especially benefited the wealthy white men who have thought the future is theirs for the taking. If the future is conceived as a resource, then it’s been pillaged and exploited primarily by one kind of vision. Inequality and injustice limit access to the future just as they do to land or capital. For instance, as sociologist Alondra Nelson has observed, “Blackness gets constructed as always oppositional to technologically driven chronicles of progress.” Take the dream that the future world might be raceless, a view that ignores the ills of racism while discounting the needs of Black people and culture. Other marginalized groups also find themselves bearing the brunt of dystopian futures while not being included within utopian ones. Consider what it means when futuristic technologies aim to simply erase disability or age, without taking into consideration either what older or disabled people desire, or what they can access. Power influences what kinds of changes emerge and who benefits from them.

While the capacity to plan for the future is often a luxury, it is also central to capitalism, which banks on things like returns on investments, prospective earnings, and coordinating supply and demand. (In large part, the supply chain’s current woes are a failure to anticipate the future.) Since the turn of the 20th century, there have been ever-expanding ways to profit off the future, as more and more areas of social life become terrains of speculative economic opportunity. Companies like WGSN forecast fabrics, silhouettes, and fashion moods; think tanks like the Institute for the Future advise foundations and nonprofits on the future of health care or governance; and cultural trend forecasters like The Future Laboratory explain the effects of virtual reality on Generation Z to their Fortune 500 clientele. Not to mention (white, male) business titans like Elon Musk, Jeff Bezos, or Mark Zuckerberg, who move markets with their hyperbolic and self-serving proclamations.

As someone who has been studying professional futurists for years, I can attest that the barrier to entry feels lower every day. Because the future is so top-of-mind, seemingly all it takes to be taken seriously as a futurist is to claim to be one. On the one hand, democratizing futurism means more voices, more imaginaries, and more possibilities—more capacity for more of us to plan. But there is also a price to pay when something as important as the future becomes subject to the whims of an attention economy where hype makes headlines and misinformation crowds out truth. It means we take foolish ideas from prominent people more seriously than we should. (Nuking Mars, anyone?) It means impractical technologies (like a laundry-folding robot) and unpublished studies (like this one about Covid transmissibility) get treated as though they are sound and verified. It means the concerns about the future can distract us from engaging in the present. It means, too, that those whose platform gives them the authority to speak and be heard about the future are rarely asked to question their assumptions and motivations. Take the breathless predictions about driverless cars, which were supposed to be ubiquitous by 2020 yet are still stymied by regulatory, infrastructural, and technological woes. When cultural change becomes a product, cheap versions abound, which threatens to cheapen our future, too.

A surfeit of predictions might make it seem like there is more certainty about the world. And of course, the rapid changes coming from all corners deserve our attention, action, and care. But forecasting is notoriously fickle, and there is little accountability for misguided predictions. (Many trained futurists will tell you that they don’t do predictions, preferring terms such as forecasts, foresight, or alternative futures, but this distinction is too insidery for most people to grasp.) What’s certain is that selling futures is a business that feeds off uncertainty—and uncertainty is its true product. A glut of futures, from too many places, with too many agendas, doesn’t invalidate the enterprise of prediction, but it adds to the confusion, thereby making prediction feel even more necessary. The future will stay trendy so long as the times feel turbulent—and so long as there is money to be made and attention to be gained from guiding those who feel, and will always be, behind the curve.

That’s why it’s important for everyone to be aware when the future is being used as snake oil to persuade us of the inevitability of what is really just another marketing plan. Asking who will benefit from a particular future vision is a good start; so is following the money. Interpreting and creating future forecasts is also a good reason for everyone to learn basic futuring methods like scenarios, environmental scanning, and backcasting. It’s also important to support organizations looking to reshape what futures mean, including Teach the Future, which brings futures curricula to schools, and Afrotectopia, which empowers radical Black futures. We may not be able to stop the future from being trendy, but we can meet it more on our terms. Our vigilance toward the futures being sold to us in the present is essential to ensuring a better future for the next generation, whom Neil Postman called “the living messages we send to a time we will not see.”


African Voices Must Lead the Global Climate Conversation

Next fall, an African country, most likely Egypt, will host COP27—the 27th UN Climate Change conference. This will come on the heels of two more Intergovernmental Panel on Climate Change (IPCC) reports, due to be released next year, that will outline the worsening impacts of climate change, the adaptations the world needs to make, and our vulnerabilities to the climate crisis. These are issues that particularly affect the African continent. The combined focus that COP27 and the IPCC bring will mean Africa’s climate story is at last in the global media spotlight.

Africa has been facing escalating climate-related disasters for years. This summer, 6 million people in Angola faced starvation as a result of the worst drought the country has seen in 40 years. Thousands of Angolan “climate refugees” have been forced to cross the border into Namibia. Similar droughts have crippled the north and the south of the continent, with Algeria and Madagascar both devastated by water shortages. Meanwhile, locusts—exacerbated by cyclones—are swarming East Africa, and agriculture in West Africa is being deeply affected by a shifting monsoon.

Africa has long suffered a lack of attention from countries and populations outside the continent. Climate events such as flooding in Germany and China and wildfires in Canada and Greece this year have, rightly, been covered around the world. Flooding in Nigeria and Uganda has largely been ignored.

In 2022, this balance will shift. As bodies such as the IPCC focus on how climate change is already affecting people and what we must do to adapt, Africa cannot be left out. The continent has contributed only 3 percent of global historic emissions, yet it is experiencing some of the worst impacts of climate change and has the fewest resources with which to adapt. Conversations will begin to center on how rich countries—which are also the biggest polluters—can help African countries (and others without the means) become more resilient to the inevitable devastation they face. The UN’s “Loss and Damage” policy proposal, the idea that big polluters compensate affected nations for the damage and destruction they have already experienced due to climate change (an idea often opposed by developed nations), will be brought back to the international climate agenda by African voices.

Africa, although historically a very small contributor to pollution, will also need to play its part in reducing global carbon emissions. In particular, it will need help transitioning to clean energy, as electricity demand on the continent is predicted to double by 2030. However, money and investment still continue to pour into African countries from non-African corporations and governments seeking to extract and burn fossil fuels. The 1,400-km East Africa Crude Oil Pipeline from Hoima, Uganda, to the port of Tanga in Tanzania—currently being built by French oil company Total—is a potent example of this. The project will displace local people and destroy farmlands and biodiversity, yet profits will largely be taken out of the continent.

Next year, we will need that money to stop flowing into fossil fuels and be used instead to scale the adoption of renewables and invest in nature. The Congo, for example, is home to the world’s second-largest tropical rainforest. Like the Amazon, it is a vital global component for regulating the Earth’s climate. Unlike the Amazon, however, it is not the focus of the world’s attention, even though escalating deforestation there threatens us all.

Predicting Death Could Change the Value of a Life

If you could predict your death, would you want to? For most of human history, the answer has been a qualified yes. In Neolithic China, seers practiced pyro-osteomancy, or the reading of bones; ancient Greeks divined the future by the flight of birds; Mesopotamians even attempted to plot the future in the attenuated entrails of dead animals. We’ve looked to the stars and the movement of planets, we’ve looked to weather patterns, and we’ve even looked to bodily divinations like the “child born with a caul” superstition to assure future good fortune and long life. By the 1700s, the art of prediction had grown slightly more scientific, with mathematician and probability expert Abraham de Moivre attempting to calculate his own death by equation, but truly accurate predictions remained out of reach.

Then, in June 2021, de Moivre’s fondest wish appeared to come true: Scientists discovered the first reliable measurement for determining the length of your life. Using a dataset of some 5,000 protein measurements from each of around 23,000 Icelanders, researchers working for deCODE Genetics in Reykjavik, Iceland, developed a predictor for the time of death—or, as their press release explains it, “how much is left of the life of a person.” It’s an unusual claim, and it comes with particular questions about method, ethics, and what we mean by life.

A technology for accurately predicting death promises to upend the way we think about our mortality. For most people, most of the time, death remains a vague consideration, haunting the shadowy recesses of our minds. But knowing when our life ends, having an understanding of the days and hours left, removes that comfortable shield of abstraction. It also makes us see risk differently; we are, for instance, more likely to try unproven therapies in an attempt to beat the odds. If the prediction came far enough in advance, most of us might even try to prevent the eventuality or avert the outcome. Science fiction often tantalizes us with that possibility; movies like Minority Report, Thrill Seekers, and the Terminator franchise use advance knowledge of the future to avert death and catastrophe (or not) before it happens. Indeed, when healthy and abled people think about predicting death, they tend to think of these sci-fi possibilities—futures where death and disease are eradicated before they can begin. But for disabled people like me, the technology of death prediction serves as a reminder that we’re already often treated as better off dead. A science for predicting the length of life carries with it a judgment of its value: that more life equates to better or more worthwhile life. It’s hard not to see the juggernaut of a technocratic authority bearing down on the most vulnerable.

This summer’s discovery was the work of researchers Kari Stefansson and Thjodbjorg Eiriksdottir, who found that individual proteins circulating in our blood plasma relate to overall mortality—and that various causes of death have similar “protein profiles.” Eiriksdottir claims that they can measure these profiles in a single draw of blood, seeing in the plasma a sort of hourglass for the time left. The scientists call these mortality-tracking indicators biomarkers, and there are up to 106 of them that help to predict all-cause (rather than illness-specific) mortality. But the breakthrough for Stefansson, Eiriksdottir, and their research team is scale. The process they used, a SOMAmer-based multiplex proteomic assay, lets the group measure thousands of proteins at once.

The result of all these measurements isn’t an exact date and time. Instead, it provides medical professionals with the ability to accurately identify the percentage of patients most likely to die (at highest risk, about 5 percent of the total) and the percentage least likely to die (at lowest risk), just from a prick of the needle and a small vial of blood. That might not seem like much of a crystal ball, but it’s clear this is merely a jumping-off point. The deCODE researchers plan to improve the process to make it more “useful,” and this effort joins other projects racing to be first in death-prediction tech, including an artificial intelligence algorithm for palliative care. The creators of this algorithm hope to use “AI’s cold calculus” to nudge clinicians’ decisions and to force loved ones to have the dreaded conversation—because there’s a world of difference between “I am dying” and “I am dying now.”
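
To make that kind of ranking concrete, here is a minimal, hypothetical sketch of how an all-cause risk score might be used to flag the highest- and lowest-risk 5 percent of a cohort. The patient names and scores are invented for illustration; this is not deCODE's actual pipeline.

```python
# Hypothetical illustration only: invented scores, not deCODE's method.
def flag_risk_tails(scores: dict, tail: float = 0.05):
    """Return the patients in the top and bottom `tail` fraction by risk."""
    ranked = sorted(scores, key=scores.get, reverse=True)  # highest risk first
    k = max(1, int(len(ranked) * tail))
    return ranked[:k], ranked[-k:]

# A toy cohort of 20 patients with made-up risk scores between 0 and 1.
cohort = {f"patient_{i:02d}": s for i, s in enumerate(
    [0.91, 0.12, 0.55, 0.08, 0.77, 0.33, 0.69, 0.21, 0.84, 0.47,
     0.05, 0.62, 0.15, 0.73, 0.29, 0.58, 0.96, 0.40, 0.88, 0.10])}

highest, lowest = flag_risk_tails(cohort)
print("highest-risk 5%:", highest)  # ['patient_16']
print("lowest-risk 5%:", lowest)    # ['patient_10']
```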

If AI Is Predicting Your Future, Are You Still Free?

As you read these words, there are likely dozens of algorithms making predictions about you. It was probably an algorithm that determined that you would be exposed to this article because it predicted you would read it. Algorithmic predictions can determine whether you get a loan or a job or an apartment or insurance, and much more.

These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through the academic literature shows that the ethics of prediction is an underexplored field of knowledge. As a society, we haven’t thought through the ethical implications of making predictions about people—beings who are supposed to be infused with agency and free will.

Defying the odds is at the heart of what it means to be human. Our greatest heroes are those who defied their odds: Abraham Lincoln, Mahatma Gandhi, Marie Curie, Helen Keller, Rosa Parks, Nelson Mandela, and beyond. They all succeeded wildly beyond expectations. Every schoolteacher knows kids who have achieved more than the hand they were dealt. In addition to improving everyone’s baseline, we want a society that allows and stimulates actions that defy the odds. Yet the more we use AI to categorize people, predict their future, and treat them accordingly, the more we narrow human agency, which will in turn expose us to uncharted risks.

Human beings have been using prediction since before the Oracle of Delphi. Wars were waged on the basis of those predictions. In more recent decades, prediction has been used to inform practices such as setting insurance premiums. Those forecasts tended to be about large groups of people—for example, how many people out of 100,000 will crash their cars. Some of those individuals would be more careful and lucky than others, but premiums were roughly homogeneous (except for broad categories like age groups) under the assumption that pooling risks allows the higher costs of the less careful and lucky to be offset by the relatively lower costs of the careful and lucky. The larger the pool, the more predictable and stable premiums were.
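
The arithmetic of pooling is easy to sketch. Below is a minimal, illustrative calculation (the crash probability and cost are invented numbers) showing why per-person premiums stabilize as the pool grows: the expected cost per person stays the same, but its spread shrinks with the square root of the pool size.

```python
import math

def premium_stats(pool_size: int, crash_prob: float = 0.03,
                  crash_cost: float = 10_000):
    """Mean and standard deviation of the per-person cost in a risk pool.

    With n independent drivers who each crash with probability p at cost c,
    the shared per-person cost is c * Binomial(n, p) / n: its mean is p * c,
    and its standard deviation shrinks like 1 / sqrt(n).
    """
    mean = crash_prob * crash_cost
    std = crash_cost * math.sqrt(crash_prob * (1 - crash_prob) / pool_size)
    return mean, std

for n in (100, 10_000, 1_000_000):
    mean, std = premium_stats(n)
    print(f"pool of {n:>9,}: expected ${mean:.0f} per person, swing ±${std:.2f}")
```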

Today, prediction is mostly done through machine learning algorithms that use statistics to fill in the blanks of the unknown. Text algorithms use enormous language databases to predict the most plausible ending to a string of words. Game algorithms use data from past games to predict the best possible next move. And algorithms that are applied to human behavior use historical data to infer our future: what we are going to buy, whether we are planning to change jobs, whether we are going to get sick, whether we are going to commit a crime or crash our car. Under such a model, insurance is no longer about pooling risk from large sets of people. Rather, predictions have become individualized, and you are increasingly paying your own way, according to your personal risk scores—which raises a new set of ethical concerns.
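
By contrast with the pooled arithmetic sketched earlier, a purely individualized scheme prices each person at their own predicted risk. Again a toy illustration with invented risk scores, not any insurer's actual model:

```python
# Toy contrast with pooling: each person pays their own expected cost.
# The risk scores below are invented; real pricing models are proprietary.
def individualized_premium(risk_score: float, crash_cost: float = 10_000) -> float:
    """Premium pegged to an individual's own predicted risk, with no pooling."""
    return risk_score * crash_cost

for label, risk in [("low-risk", 0.01), ("average", 0.03), ("high-risk", 0.09)]:
    print(f"{label:>9}: ${individualized_premium(risk):,.0f} per year")
```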

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments and biases regarding risk and values are built into it. There can be forecasts that are more or less accurate, to be sure, but the relationship between probability and actuality is much more tenuous and ethically problematic than some assume.

Institutions today, however, often try to pass off predictions as if they were a model of objective reality. And even when AI’s forecasts are merely probabilistic, they are often interpreted as deterministic in practice—partly because human beings are bad at understanding probability and partly because the incentives around avoiding risk end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with a lower risk score.)
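
A tiny, hypothetical sketch makes that collapse from probability to verdict visible: the model emits a likelihood, but the decision rule behaves as if the outcome were certain. The threshold and scores here are invented for illustration.

```python
def hiring_decision(p_bad: float, threshold: float = 0.5) -> str:
    """Collapse a probabilistic score into a binary verdict.

    The model only claims a likelihood, but anything above the cutoff
    is treated as if failure were a foregone conclusion.
    """
    return "reject" if p_bad > threshold else "proceed"

# A candidate scored 75% likely to underperform is simply rejected,
# even though the forecast itself says they succeed one time in four.
for score in (0.75, 0.40):
    print(f"predicted risk {score:.0%} -> {hiring_decision(score)}")
```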

The History of Predicting the Future

The future has a history. The good news is that it’s one from which we can learn; the bad news is that we very rarely do. That’s because the clearest lesson from the history of the future is that knowing the future isn’t necessarily very useful. But that has yet to stop humans from trying.

Take Peter Turchin’s famed prediction for 2020. In 2010 he developed a quantitative analysis of history, known as cliodynamics, that allowed him to predict that the West would experience political chaos a decade later. Unfortunately, no one was able to act on that prophecy in order to prevent damage to US democracy. And of course, if they had, Turchin’s prediction would have been relegated to the ranks of failed futures. This situation is not an aberration. 

Rulers from Mesopotamia to Manhattan have sought knowledge of the future in order to obtain strategic advantages—but time and again, they have failed to interpret it correctly, or they have failed to grasp either the political motives or the speculative limitations of those who proffer it. More often than not, they have also chosen to ignore futures that force them to face uncomfortable truths. Even the technological innovations of the 21st century have failed to change these basic problems—the results of computer programs are, after all, only as accurate as their data input.

There is an assumption that the more scientific the approach to predictions, the more accurate forecasts will be. But this belief causes more problems than it solves, not least because it often either ignores or excludes the lived diversity of human experience. Despite the promise of more accurate and intelligent technology, there is little reason to think the increased deployment of AI in forecasting will make prognostication any more useful than it has been throughout human history.

People have long tried to find out more about the shape of things to come. These efforts, while aimed at the same goal, have differed across time and space in several significant ways, with the most obvious being methodology—that is, how predictions were made and interpreted. Since the earliest civilizations, the most important distinction in this practice has been between individuals who have an intrinsic gift or ability to predict the future, and systems that provide rules for calculating futures. The predictions of oracles, shamans, and prophets, for example, depended on the capacity of these individuals to access other planes of being and receive divine inspiration. Strategies of divination such as astrology, palmistry, numerology, and Tarot, however, depend on the practitioner’s mastery of a complex theoretical rule-based (and sometimes highly mathematical) system, and their ability to interpret and apply it to particular cases. Interpreting dreams or the practice of necromancy might lie somewhere between these two extremes, depending partly on innate ability, partly on acquired expertise. And there are plenty of examples, in the past and present, that involve both strategies for predicting the future. Any internet search on “dream interpretation” or “horoscope calculation” will throw up millions of hits.

In the last century, technology legitimized the latter approach, as developments in IT (predicted, at least to some extent, by Moore’s law) provided more powerful tools and systems for forecasting. In the 1940s, the analog computer MONIAC had to use actual tanks and pipes of colored water to model the UK economy. By the 1970s, the Club of Rome could turn to the World3 computer simulation to model the flow of energy through human and natural systems via key variables such as industrialization, environmental loss, and population growth. Its report, Limits to Growth, became a best seller, despite the sustained criticism it received for the assumptions at the core of the model and the quality of the data that was fed into it.
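
For a flavor of what such stock-and-flow models do, here is a deliberately tiny simulation in the same spirit: a population whose growth is throttled by a depleting resource. It is not World3, which coupled many more sectors, and every parameter in it is invented.

```python
# A toy stock-and-flow model in the spirit of (but far simpler than) World3.
# All parameters are invented; the point is the feedback structure:
# growth draws down a finite resource, and scarcity then throttles growth.
def simulate(years: int = 200, population: float = 1.0, resource: float = 100.0,
             growth_rate: float = 0.05, use_per_capita: float = 0.04):
    history = []
    for year in range(years):
        scarcity = resource / 100.0  # 1.0 = abundant, 0.0 = exhausted
        population += population * growth_rate * scarcity
        resource = max(0.0, resource - population * use_per_capita)
        history.append((year, population, resource))
    return history

for year, pop, res in simulate()[::40]:
    print(f"year {year:3d}: population {pop:6.2f}, resource {res:6.1f}")
```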

At the same time, rather than depending on technological advances, other forecasters have turned to the strategy of crowdsourcing predictions of the future. Polling public and private opinions, for example, depends on something very simple—asking people what they intend to do or what they think will happen. It then requires careful interpretation, whether quantitative (like polls of voter intention) or qualitative (like the RAND Corporation’s Delphi technique). The latter strategy harnesses the wisdom of highly specific crowds. Assembling a panel of experts to discuss a given topic, the thinking goes, is likely to be more accurate than individual prognostication.