The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy

In 2018, while the Argentine Congress was hotly debating whether to decriminalize abortion, the Ministry of Early Childhood in the northern province of Salta and the American tech giant Microsoft presented an algorithmic system to predict teenage pregnancy. They called it the Technology Platform for Social Intervention.

“With technology you can foresee five or six years in advance, with first name, last name, and address, which girl—future teenager—is 86 percent predestined to have an adolescent pregnancy,” Juan Manuel Urtubey, then the governor of the province, proudly declared on national television. The stated goal was to use the algorithm to predict which girls from low-income areas would become pregnant in the next five years. It was never made clear what would happen once a girl or young woman was labeled as “predestined” for motherhood or how this information would help prevent adolescent pregnancy. The social theories informing the AI system, like its algorithms, were opaque.

The system was based on data—including age, ethnicity, country of origin, disability, and whether the subject’s home had hot water in the bathroom—from 200,000 residents in the city of Salta, including 12,000 women and girls between the ages of 10 and 19. Though there is no official documentation, from reviewing media articles and two technical reviews, we know that “territorial agents” visited the houses of the girls and women in question, asked survey questions, took photos, and recorded GPS locations. What did those subjected to this intimate surveillance have in common? They were poor, some were migrants from Bolivia and other countries in South America, and others were from Indigenous Wichí, Qulla, and Guaraní communities.

Although Microsoft spokespersons proudly announced that the technology in Salta was “one of the pioneering cases in the use of AI data” in state programs, it presents little that is new. Instead, it is an extension of a long Argentine tradition: controlling the population through surveillance and force. And the reaction to it shows how grassroots Argentine feminists were able to take on this misuse of artificial intelligence.

In the 19th and early 20th centuries, successive Argentine governments carried out a genocide of Indigenous communities and promoted immigration policies based on ideologies designed to attract European settlement, all in hopes of blanquismo, or “whitening” the country. Over time, a national identity was constructed along social, cultural, and most of all racial lines.

This type of eugenic thinking has a propensity to shapeshift and adapt to new scientific paradigms and political circumstances, according to historian Marisa Miranda, who tracks Argentina’s attempts to control the population through science and technology. Take the case of immigration. Throughout Argentina’s history, opinion has oscillated between celebrating immigration as a means of “improving” the population and considering immigrants to be undesirable and a political threat to be carefully watched and managed.

More recently, the Argentine military dictatorship of 1976 to 1983 controlled the population through systematic political violence. During the dictatorship, women had the “patriotic task” of populating the country, and contraception was prohibited by a 1977 law. The cruelest expression of the dictatorship’s interest in motherhood was the practice of kidnapping pregnant women considered politically subversive. Most of these women were murdered after giving birth, and many of their children were illegally adopted by the military to be raised by “patriotic, Catholic families.”

While Salta’s AI system to “predict pregnancy” was hailed as futuristic, it can only be understood in light of this long history, particularly, in Miranda’s words, the persistent eugenic impulse that always “contains a reference to the future” and assumes that reproduction “should be managed by the powerful.”

Due to the complete lack of national AI regulation, the Technology Platform for Social Intervention was never subject to formal review and no assessment of its impacts on girls and women has been made. There has been no official data published on its accuracy or outcomes. Like most AI systems all over the world, including those used in sensitive contexts, it lacks transparency and accountability.

Though it is unclear whether the technology program was ultimately suspended, everything we know about the system comes from the efforts of feminist activists and journalists who led what amounted to a grassroots audit of a flawed and harmful AI system. By quickly activating a well-oiled machine of community organizing, these activists brought national media attention to how an untested, unregulated technology was being used to violate the rights of girls and women.

“The idea that algorithms can predict teenage pregnancy before it happens is the perfect excuse for anti-women and anti-sexual and reproductive rights activists to declare abortion laws unnecessary,” wrote feminist scholars Paz Peña and Joana Varon at the time. Indeed, it was soon revealed that an Argentine nonprofit called the Conin Foundation, run by doctor Abel Albino, a vocal opponent of abortion rights, was behind the technology, along with Microsoft.

The Real Harm of Crisis Text Line’s Data Sharing

Another week, another privacy horror show: Crisis Text Line, a nonprofit text message service for people experiencing serious mental health crises, has been using “anonymized” conversation data to power a for-profit machine learning tool for customer support teams. (After backlash, CTL announced it would stop.) Crisis Text Line’s response to the backlash focused on the data itself and whether it included personally identifiable information. But that response uses data as a distraction. Imagine this: Say you texted Crisis Text Line and got back a message that said “Hey, just so you know, we’ll use this conversation to help our for-profit subsidiary build a tool for companies who do customer support.” Would you keep texting?

That’s the real travesty—when the price of obtaining mental health help in a crisis is becoming grist for the profit mill. And it’s not just users of CTL who pay; it’s everyone who goes looking for help when they need it most.

Americans need help and can’t get it. The huge unmet demand for critical advice and help has given rise to a new class of organizations and software tools that exist in a regulatory gray area. They help people with bankruptcy or evictions, but they aren’t lawyers; they help people with mental health crises, but they aren’t care providers. They invite ordinary people to rely on them and often do provide real help. But these services can also avoid taking responsibility for their advice, or even abuse the trust people have put in them. They can make mistakes, push predatory advertising and disinformation, or just outright sell data. And the consumer safeguards that would normally protect people from malfeasance or mistakes by lawyers or doctors haven’t caught up.

This regulatory gray area can also constrain organizations that have novel solutions to offer. Take Upsolve, a nonprofit that develops software to guide people through bankruptcy. (The organization takes pains to claim it does not offer legal advice.) Upsolve wants to train New York community leaders to help others navigate the city’s notorious debt courts. One problem: These would-be trainees aren’t lawyers, so under New York (and nearly every other state) law, Upsolve’s initiative would be illegal. Upsolve is now suing to carve out an exception for itself. The company claims, quite rightly, that a lack of legal help means people effectively lack rights under the law.

The legal profession’s failure to grant Americans access to support is well-documented. But Upsolve’s lawsuit also raises new, important questions. Who is ultimately responsible for the advice given under a program like this, and who is responsible for a mistake—a trainee, a trainer, both? How do we teach people about their rights as a client of this service, and how to seek recourse? These are eminently answerable questions. There are lots of policy tools for creating relationships with elevated responsibilities: We could assign advice-givers a special legal status, establish a duty of loyalty for organizations that handle sensitive data, or create policy sandboxes to test and learn from new models for delivering advice.

But instead of using these tools, most regulators seem content to bury their heads in the sand. Officially, you can’t give legal advice or health advice without a professional credential. Unofficially, people can get such advice in all but name from tools and organizations operating in the margins. And while credentials can be important, regulators are failing to engage with the ways software has fundamentally changed how we give advice and care for one another, and what that means for the responsibilities of advice-givers.

And we need that engagement more than ever. People who seek help from experts or caregivers are vulnerable. They may not be able to distinguish a good service from a bad one. They don’t have time to parse terms of service dense with jargon, caveats, and disclaimers. And they have little to no negotiating power to set better terms, especially when they’re reaching out mid-crisis. That’s why the fiduciary duties that lawyers and doctors have are so necessary in the first place: not just to protect a person seeking help once, but to give people confidence that they can seek help from experts for the most critical, sensitive issues they face. In other words, a lawyer’s duty to their client isn’t just to protect that client from that particular lawyer; it’s to protect society’s trust in lawyers.

And that’s the true harm—when people won’t contact a suicide hotline because they don’t trust that the hotline has their sole interest at heart. That distrust can be contagious: Crisis Text Line’s actions might not just stop people from using Crisis Text Line. It might stop people from using any similar service. What’s worse than not being able to find help? Not being able to trust it.

A New Formula May Help Black Patients’ Access to Kidney Care

For decades, doctors and hospitals saw kidney patients differently based on their race. A standard equation for estimating kidney function applied a correction for Black patients that made their health appear rosier, inhibiting access to transplants and other treatments.

On Thursday, a task force assembled by two leading kidney care societies said the practice is unfair and should end.

The group, a collaboration between the National Kidney Foundation and the American Society of Nephrology, recommended use of a new formula that does not factor in a patient’s race. In a statement, Paul Palevsky, the foundation’s president, urged “all laboratories and health care systems nationwide to adopt this new approach as rapidly as possible.” That call is significant because recommendations and guidelines from professional medical societies play a powerful role in shaping how specialists care for patients.

A study published in 2020 that reviewed records for 57,000 people in Massachusetts found that one-third of Black patients would have had their disease classified as more severe if they had been assessed using the same version of the formula as white patients. The traditional kidney calculation is one example of a class of medical algorithms and calculators that have recently come under fire for conditioning patient care on race, which is a social category, not a biological one.

A review published last year listed more than a dozen such tools, in areas such as cardiology and cancer care. It helped prompt a surge of activism against the practice from diverse groups, including medical students and lawmakers such as Senator Elizabeth Warren (D-Massachusetts) and the chair of the House Ways and Means Committee, Richard Neal (D-Massachusetts).

Recently, there have been signs the tide is turning. The University of Washington dropped the use of race in kidney calculations last year after student protests prompted a reconsideration of the practice. Mass General Brigham and Vanderbilt hospitals also abandoned the practice in 2020.

In May, a tool used to predict the chance a woman who previously had a cesarean section could safely give birth via vaginal delivery was updated to no longer automatically assign lower scores to Black and Hispanic women. A calculator that estimates the chances a child has a urinary tract infection was updated to no longer slash the scores for patients who are Black.

The prior formula for assessing kidney disease, known as CKD-EPI, was introduced in 2009, updating a 1999 formula that used race in a similar way. It converts the level of a waste product called creatinine in a person’s blood into a measure of overall kidney function called estimated glomerular filtration rate, or eGFR. Doctors use eGFR to help classify the severity of a person’s illness and determine if they qualify for various treatments, including transplants. Healthy kidneys produce higher scores.

The equation’s design factored in a person’s age and sex but also boosted the score of any patient classified as Black by 15.9 percent. That feature was included to account for statistical patterns seen in the patient data used to inform the design of CKD-EPI, which included relatively few people who were Black or from other racial minorities. But it meant a person’s perceived race could shift how their disease was measured or treated. A person with both Black and white heritage, for example, could flip a health system’s classification of their illness depending on how their doctor saw them or how they identified.
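The mechanics of that adjustment can be made concrete. Below is a minimal sketch of the 2009 CKD-EPI equation as commonly published, showing how the race coefficient alone shifts the score; the function name is our own, and the coefficients are the widely cited published values, not taken from this article.

```python
def egfr_ckd_epi_2009(creatinine_mg_dl, age, female, black):
    """Estimated glomerular filtration rate (mL/min/1.73 m^2),
    per the 2009 CKD-EPI equation as commonly published."""
    # The kappa and alpha constants differ by sex in the 2009 equation.
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = creatinine_mg_dl / kappa
    return (141
            * min(ratio, 1.0) ** alpha      # applies below the sex-specific creatinine threshold
            * max(ratio, 1.0) ** -1.209     # applies above it
            * 0.993 ** age                  # score declines with age
            * (1.018 if female else 1.0)
            * (1.159 if black else 1.0))    # the contested 15.9 percent race adjustment

# With identical labs, age, and sex, classifying a patient as Black
# multiplies the estimate by 1.159, making kidney function appear healthier.
```

Because higher eGFR means healthier-looking kidneys, that single multiplier could push a Black patient's score above a treatment or transplant-eligibility threshold that an otherwise identical white patient would fall below.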

Nwamaka Eneanya, an assistant professor at the University of Pennsylvania and a member of the task force behind Thursday’s recommendation, says she knows of one biracial patient with severe kidney disease who, after learning how the equation worked, requested to be classified as white to increase her chances of being listed for advanced care. Eneanya says a shift away from the established equation is long overdue. “Using someone’s skin color to guide their clinical pathway is wholeheartedly wrong—you introduce racial bias into medical care when you do that,” she says.