
As you read these words, there are likely dozens of algorithms making predictions about you. It was probably an algorithm that determined that you would be exposed to this article because it predicted you would read it. Algorithmic predictions can determine whether you get a loan or a job or an apartment or insurance, and much more.

These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through academic literature for the ethics of prediction shows it is an underexplored field of knowledge. As a society, we haven’t thought through the ethical implications of making predictions about people—beings who are supposed to be infused with agency and free will.

Defying the odds is at the heart of what it means to be human. Our greatest heroes are those who defied their odds: Abraham Lincoln, Mahatma Gandhi, Marie Curie, Helen Keller, Rosa Parks, Nelson Mandela, and beyond. They all succeeded wildly beyond expectations. Every schoolteacher knows kids who have achieved more than the hand they were dealt. In addition to improving everyone’s baseline, we want a society that allows and stimulates actions that defy the odds. Yet the more we use AI to categorize people, predict their future, and treat them accordingly, the more we narrow human agency, which in turn exposes us to uncharted risks.

Human beings have been using prediction since before the Oracle of Delphi. Wars were waged on the basis of such forecasts. In more recent decades, prediction has been used to inform practices such as setting insurance premiums. Those forecasts tended to be about large groups of people—for example, how many people out of 100,000 will crash their cars. Some of those individuals would be more careful or lucky than others, but premiums were roughly homogeneous (apart from broad categories like age group), on the assumption that pooling risk lets the lower costs of the careful and lucky offset the higher costs of the careless and unlucky. The larger the pool, the more predictable and stable premiums were.

Today, prediction is mostly done through machine learning algorithms that use statistics to fill in the blanks of the unknown. Text algorithms use enormous language databases to predict the most plausible ending to a string of words. Game algorithms use data from past games to predict the best possible next move. And algorithms that are applied to human behavior use historical data to infer our future: what we are going to buy, whether we are planning to change jobs, whether we are going to get sick, whether we are going to commit a crime or crash our car. Under such a model, insurance is no longer about pooling risk from large sets of people. Rather, predictions have become individualized, and you are increasingly paying your own way, according to your personal risk scores—which raises a new set of ethical concerns.
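To make that shift concrete, here is a minimal illustrative sketch in Python. The figures are entirely made up (the pool size, expected claims, risk scores, and average claim cost are hypothetical, as are the function names): under pooling, everyone pays roughly the same share of expected losses, while under individualized scoring each person's premium tracks their own predicted risk.

```python
# Illustrative sketch only, with made-up numbers: contrasting a pooled
# premium with individualized, risk-score-based pricing.

def pooled_premium(expected_total_claims: float, pool_size: int) -> float:
    """Everyone in the pool pays the same share of the expected losses."""
    return expected_total_claims / pool_size

def individualized_premium(personal_risk_score: float, average_claim_cost: float) -> float:
    """Each person pays in proportion to their own predicted risk."""
    return personal_risk_score * average_claim_cost

# Hypothetical figures: 100,000 drivers, $50 million in expected claims.
print(pooled_premium(50_000_000, 100_000))       # 500.0 for everyone in the pool
print(individualized_premium(0.002, 25_000))     # 50.0 for a driver scored as low risk
print(individualized_premium(0.04, 25_000))      # 1000.0 for a driver scored as high risk
```

In the pooled case the careful and the careless pay the same 500; in the individualized case the same expected losses are redistributed so that each person carries their own predicted share, which is precisely what raises the new ethical concerns.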

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments and biases regarding risk and values are built into it. There can be forecasts that are more or less accurate, to be sure, but the relationship between probability and actuality is much more tenuous and ethically problematic than some assume.

Institutions today, however, often try to pass off predictions as if they were a model of objective reality. And even when AI’s forecasts are merely probabilistic, they are often interpreted as deterministic in practice—partly because human beings are bad at understanding probability and partly because the incentives around avoiding risk end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with a lower risk score.)
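As a rough illustration of how a probability becomes a verdict, the sketch below applies a fixed cutoff to a predicted risk score. The threshold, function name, and numbers are invented for illustration; the point is only that once an institution screens on a cutoff, a 51 percent risk and a 75 percent risk receive the same deterministic outcome.

```python
# Illustrative sketch only: how a probabilistic prediction gets treated as
# a deterministic verdict once a screening pipeline applies a fixed cutoff.

RISK_CUTOFF = 0.5  # hypothetical threshold chosen by the hiring institution

def screen_candidate(predicted_risk: float) -> str:
    """Any candidate at or above the cutoff is rejected outright."""
    return "reject" if predicted_risk >= RISK_CUTOFF else "advance"

for risk in (0.75, 0.51, 0.49):
    print(risk, screen_candidate(risk))
# 0.75 reject
# 0.51 reject
# 0.49 advance
```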