The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy

In 2018, while the Argentine Congress was hotly debating whether to decriminalize abortion, the Ministry of Early Childhood in the northern province of Salta and the American tech giant Microsoft presented an algorithmic system to predict teenage pregnancy. They called it the Technology Platform for Social Intervention.

“With technology you can foresee five or six years in advance, with first name, last name, and address, which girl—future teenager—is 86 percent predestined to have an adolescent pregnancy,” Juan Manuel Urtubey, then the governor of the province, proudly declared on national television. The stated goal was to use the algorithm to predict which girls from low-income areas would become pregnant in the next five years. It was never made clear what would happen once a girl or young woman was labeled as “predestined” for motherhood or how this information would help prevent adolescent pregnancy. The social theories informing the AI system, like its algorithms, were opaque.

The system was based on data—including age, ethnicity, country of origin, disability, and whether the subject’s home had hot water in the bathroom—from 200,000 residents in the city of Salta, including 12,000 women and girls between the ages of 10 and 19. There is no official documentation, but media articles and two technical reviews indicate that “territorial agents” visited the houses of the girls and women in question, asked survey questions, took photos, and recorded GPS locations. What did those subjected to this intimate surveillance have in common? They were poor, some were migrants from Bolivia and other countries in South America, and others were from Indigenous Wichí, Qulla, and Guaraní communities.
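Neither the province nor Microsoft ever published the model, so any reconstruction is speculative. Still, the reported inputs describe a conventional binary classifier trained on demographic survey features. A minimal sketch, using entirely invented data and an assumed scikit-learn logistic regression rather than the actual system, shows why a headline number like “86 percent” is easy to produce and hard to interpret when only a small minority of the surveyed girls would ever receive a positive label:

```python
# Hypothetical sketch only: the real Salta system was never published, so this
# invents data with the reported survey fields and fits an assumed logistic
# regression, purely to illustrate how misleading a raw accuracy figure can be.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 12_000  # roughly the number of girls and women reportedly surveyed

# Synthetic stand-ins for the reported fields (entirely invented values).
X = np.column_stack([
    rng.integers(10, 20, n),  # age
    rng.integers(0, 2, n),    # migrant household (yes/no)
    rng.integers(0, 2, n),    # Indigenous community (yes/no)
    rng.integers(0, 2, n),    # disability reported (yes/no)
    rng.integers(0, 2, n),    # hot water in the bathroom (yes/no)
])
# Imbalanced, randomly assigned label: ~12% positives, unrelated to the features.
y = rng.binomial(1, 0.12, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("model accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("always-predict-'no' baseline:", 1 - y_test.mean())
```

On labels this imbalanced, predicting “no” for everyone scores about as well as the fitted model, which is why an accuracy figure alone says little about whether such a system identifies anyone correctly.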

Although Microsoft spokespersons proudly announced that the technology in Salta was “one of the pioneering cases in the use of AI data” in state programs, it presents little that is new. Instead, it is an extension of a long Argentine tradition: controlling the population through surveillance and force. And the reaction to it shows how grassroots Argentine feminists were able to take on this misuse of artificial intelligence.

In the 19th and early 20th centuries, successive Argentine governments carried out a genocide of Indigenous communities and promoted immigration policies based on ideologies designed to attract European settlement, all in hopes of blanqueamiento, or “whitening” the country. Over time, a national identity was constructed along social, cultural, and most of all racial lines.

This type of eugenic thinking has a propensity to shapeshift and adapt to new scientific paradigms and political circumstances, according to historian Marisa Miranda, who tracks Argentina’s attempts to control the population through science and technology. Take the case of immigration. Throughout Argentina’s history, opinion has oscillated between celebrating immigration as a means of “improving” the population and considering immigrants to be undesirable and a political threat to be carefully watched and managed.

More recently, the Argentine military dictatorship between 1976 and 1983 controlled the population through systematic political violence. During the dictatorship, women had the “patriotic task” of populating the country, and contraception was prohibited by a 1977 law. The cruelest expression of the dictatorship’s interest in motherhood was the practice of kidnapping pregnant women considered politically subversive. Most of these women were murdered after giving birth, and many of their children were illegally adopted by the military to be raised by “patriotic, Catholic families.”

While Salta’s AI system to “predict pregnancy” was hailed as futuristic, it can only be understood in light of this long history, particularly, in Miranda’s words, the persistent eugenic impulse that always “contains a reference to the future” and assumes that reproduction “should be managed by the powerful.”

Due to the complete lack of national AI regulation, the Technology Platform for Social Intervention was never subject to formal review and no assessment of its impacts on girls and women has been made. There has been no official data published on its accuracy or outcomes. Like most AI systems all over the world, including those used in sensitive contexts, it lacks transparency and accountability.

Though it is unclear whether the technology program was ultimately suspended, everything we know about the system comes from the efforts of feminist activists and journalists who led what amounted to a grassroots audit of a flawed and harmful AI system. By quickly activating a well-oiled machine of community organizing, these activists brought national media attention to how an untested, unregulated technology was being used to violate the rights of girls and women.

“The idea that algorithms can predict teenage pregnancy before it happens is the perfect excuse for anti-women and anti-sexual and reproductive rights activists to declare abortion laws unnecessary,” wrote feminist scholars Paz Peña and Joana Varon at the time. Indeed, it was soon revealed that an Argentine nonprofit called the Conin Foundation, run by doctor Abel Albino, a vocal opponent of abortion rights, was behind the technology, along with Microsoft.

The Real Harm of Crisis Text Line’s Data Sharing

Another week, another privacy horror show: Crisis Text Line, a nonprofit text message service for people experiencing serious mental health crises, has been using “anonymized” conversation data to power a for-profit machine learning tool for customer support teams. (After backlash, CTL announced it would stop.) Crisis Text Line’s response to the backlash focused on the data itself and whether it included personally identifiable information. But that response uses data as a distraction. Imagine this: Say you texted Crisis Text Line and got back a message that said “Hey, just so you know, we’ll use this conversation to help our for-profit subsidiary build a tool for companies who do customer support.” Would you keep texting?

That’s the real travesty—when the price of obtaining mental health help in a crisis is becoming grist for the profit mill. And it’s not just users of CTL who pay; it’s everyone who goes looking for help when they need it most.

Americans need help and can’t get it. The huge unmet demand for critical advice and help has given rise to a new class of organizations and software tools that exist in a regulatory gray area. They help people with bankruptcy or evictions, but they aren’t lawyers; they help people with mental health crises, but they aren’t care providers. They invite ordinary people to rely on them and often do provide real help. But these services can also avoid taking responsibility for their advice, or even abuse the trust people have put in them. They can make mistakes, push predatory advertising and disinformation, or just outright sell data. And the consumer safeguards that would normally protect people from malfeasance or mistakes by lawyers or doctors haven’t caught up.

This regulatory gray area can also constrain organizations that have novel solutions to offer. Take Upsolve, a nonprofit that develops software to guide people through bankruptcy. (The organization takes pains to claim it does not offer legal advice.) Upsolve wants to train New York community leaders to help others navigate the city’s notorious debt courts. One problem: These would-be trainees aren’t lawyers, so under New York (and nearly every other state) law, Upsolve’s initiative would be illegal. Upsolve is now suing to carve out an exception for itself. The organization claims, quite rightly, that a lack of legal help means people effectively lack rights under the law.

The legal profession’s failure to grant Americans access to support is well-documented. But Upsolve’s lawsuit also raises new, important questions. Who is ultimately responsible for the advice given under a program like this, and who is responsible for a mistake—a trainee, a trainer, both? How do we teach people about their rights as a client of this service, and how to seek recourse? These are eminently answerable questions. There are lots of policy tools for creating relationships with elevated responsibilities: We could assign advice-givers a special legal status, establish a duty of loyalty for organizations that handle sensitive data, or create policy sandboxes to test and learn from new models for delivering advice.

But instead of using these tools, most regulators seem content to bury their heads in the sand. Officially, you can’t give legal advice or health advice without a professional credential. Unofficially, people can get such advice in all but name from tools and organizations operating in the margins. And while credentials can be important, regulators are failing to engage with the ways software has fundamentally changed how we give advice and care for one another, and what that means for the responsibilities of advice-givers.

And we need that engagement more than ever. People who seek help from experts or caregivers are vulnerable. They may not be able to distinguish a good service from a bad one. They don’t have time to parse terms of service dense with jargon, caveats, and disclaimers. And they have little to no negotiating power to set better terms, especially when they’re reaching out mid-crisis. That’s why the fiduciary duties that lawyers and doctors have are so necessary in the first place: not just to protect a person seeking help once, but to give people confidence that they can seek help from experts for the most critical, sensitive issues they face. In other words, a lawyer’s duty to their client isn’t just to protect that client from that particular lawyer; it’s to protect society’s trust in lawyers.

And that’s the true harm—when people won’t contact a suicide hotline because they don’t trust that the hotline has their sole interest at heart. That distrust can be contagious: Crisis Text Line’s actions might stop people from using not just Crisis Text Line but any similar service. What’s worse than not being able to find help? Not being able to trust it.

The Future of Robot Nannies

Childcare is the most intimate of activities. Evolution has generated drives so powerful that we will risk our lives to protect not only our own children, but quite often any child, and even the young of other species. Robots, by contrast, are products created by commercial entities with commercial goals, which may—and should—include the well-being of their customers, but will never be limited to such. Robots, corporations, and other legal or non-legal entities do not possess the instinctual nature of humans to care for the young—even if our anthropomorphic tendencies may prompt some children and adults to overlook this fact.

As a result, it is important to take into account the likelihood of deception, both commercial deception through advertising and self-deception on the part of parents, even though robots are unlikely to cause significant psychological damage to children and to others who may come to love them.

Neither television manufacturers, broadcasters, nor online game manufacturers are deemed liable when children are left for too long in front of the television. Robotics companies will want to be in the same position: no company wants to be liable for damage to children, so manufacturers are likely to undersell the artificial intelligence (AI) and interactive capacities of their robots. It is therefore likely that any robots (and certainly those in jurisdictions with strong consumer protection) will be marketed primarily as toys, surveillance devices, and possibly household utilities. They will be brightly colored and deliberately designed to appeal to parents and children. We expect a variety of products, some with advanced capabilities and some with humanoid features. Parents will quickly discover a robot’s ability to engage and distract their child. Robotics companies will program experiences geared toward parents and children, just as television broadcasters do. But robots will always have disclaimers, such as “this device is not a toy and should only be used with adult supervision” or “this device is provided for entertainment only. It should not be considered educational.”

Nevertheless, parents will notice that they can leave their children alone with robots, just as they can leave them to watch television or to play with other children. Humans are phenomenal learners and very good at detecting regularities and exploiting affordances. Parents will quickly notice the educational benefits of robot nannies that have advanced AI and communication skills. Occasional horror stories, such as the robot nanny and toddler tragedy in the novel Scarlett and Gurl, will make headline news and remind parents how to use robots responsibly.

This will likely continue until or unless the incidence of injuries necessitates redesign, a revision of consumer safety standards, statutory notice requirements, and/or risk-based uninsurability, all of which will further refine the industry. Meanwhile, the media will also seize on stories of robots saving children in unexpected ways, as it does now when children (or adults) are saved by other young children and dogs. This should not make people think that they should leave children alone with robots, but given the propensity we already have to anthropomorphize robots, it may make parents feel that little bit more comfortable—until the next horror story makes headlines.

When it comes to liability, we should be able to hold the manufacturers of robot nannies to the same model of liability that applies to toys: Make your robots reliable, describe what they do accurately, and provide sufficient notice of reasonably foreseeable danger from misuse. Then, apart from the exceptional situation of errors in design or manufacture, such as parts that come off and choke children, legal liability will rest entirely with the parent or responsible adult, as it does now, and as it should under existing product liability law.

20 Years After 9/11, Surveillance Has Become a Way of Life

Two decades after 9/11, many simple acts that were once taken for granted now seem unfathomable: strolling with loved ones to the gate of their flight, meandering through a corporate plaza, using streets near government buildings. Our metropolises’ commons are now enclosed with steel and surveillance. Amid the perpetual pandemic of the past year and a half, cities have become even more walled off. With each new barrier erected, more of the city’s defining feature erodes: the freedom to move, wander, and even, as Walter Benjamin said, to “lose one’s way … as one loses one’s way in a forest.”

It’s harder to get lost amid constant tracking. It’s also harder to freely gather when the public spaces between home and work are stripped away. Known as third places, they are the connective tissue that stitches together the fabric of modern communities: the public park where teens can skateboard next to grandparents playing chess, the library where children can learn to read and unhoused individuals can find a digital lifeline. When third places vanish, as they have since the attacks, communities can falter.

Without these spaces holding us together, citizens live more like several separate societies operating in parallel. Just as social-media echo chambers have undermined our capacity for conversations online, the loss of third places can create physical echo chambers.

America has never been particularly adept at protecting our third places. For enslaved and Indigenous people, entering the town square alone could be a death sentence. Later, the racial terrorism of Jim Crow in the South denied Black Americans not only suffrage, but also access to lunch counters, public transit, and even the literal water cooler. In northern cities like New York, Black Americans still faced arrest and violence for transgressing rigid, but unseen, segregation codes.

Throughout the 20th century, New York built an infrastructure of exclusion to keep our unhoused neighbors from sharing the city institutions that are, by law, every bit as much theirs to occupy. In 1999, then mayor Rudy Giuliani warned unhoused New Yorkers that “streets do not exist in civilized societies for the purpose of people sleeping there.” His threats prompted thousands of NYPD officers to systematically target and push the unhoused out of sight, thus semi-privatizing the quintessential public place.

Despite these limitations, before 9/11 millions of New Yorkers could walk and wander through vast networks of modern commons—public parks, private plazas, paths, sidewalks, open lots, and community gardens, crossing paths with those whom they would never have otherwise met. These random encounters electrify our city and give us a unifying sense of self. That shared space began to slip away from us 20 years ago, and if we’re not careful, it’ll be lost forever.

In the aftermath of the attacks, we heard patriotic platitudes from those who promised to “defend democracy.” But in the ensuing years, their defense became democracy’s greatest threat, reconstructing cities as security spaces. The billions we spent to “defend our way of life” have proved to be its undoing, and it’s unclear if we’ll be able to turn back the trend.

In a country where the term “papers, please” was once synonymous with foreign authoritarianism, photo ID has become an ever-present requirement. Before 9/11, a New Yorker could spend their entire day traversing the city without any need for ID. Now it’s required to enter nearly any large building or institution.

While the ID check has become muscle memory for millions of privileged New Yorkers, it’s a source of uncertainty and fear for others. Millions of Americans lack a photo ID, and for millions more, using ID is a risk, a source of data for Immigration and Customs Enforcement.

According to Mizue Aizeki, interim executive director of the New York–based Immigrant Defense Project, “ID systems are particularly vulnerable to becoming tools of surveillance.” Aizeki added, “data collection and analysis has become increasingly central to ICE’s ability to identify and track immigrants,” noting that the Department of Homeland Security has dramatically increased its support for surveillance systems since its post-9/11 founding.

ICE has spent millions partnering with firms like Palantir, the controversial data aggregator that sells information services to governments at home and abroad. Vendors can collect digital sign-in lists from the buildings where we show our IDs, run facial recognition in plazas, and deploy countless other tools that track the areas around office buildings with an almost military level of surveillance. According to Aizeki, “as mass policing of immigrants has escalated, advocates have been confronted by a rapidly expanding surveillance state.”

Biden’s ‘Antitrust Revolution’ Overlooks AI—at Americans’ Peril

Despite the executive orders and congressional hearings of the “Biden antitrust revolution,” the most profound anti-competitive shift is happening under policymakers’ noses: the cornering of artificial intelligence and automation by a handful of tech companies. This needs to change.

There is little doubt that the impact of AI will be widely felt. It is shaping product innovations, creating new research, discovery, and development pathways, and reinventing business models. AI is making inroads in the development of autonomous vehicles, which may eventually improve road safety, reduce urban congestion, and help drivers make better use of their time. AI recently predicted the molecular structure of almost every protein in the human body, and it helped develop and roll out a Covid vaccine in record time. The pandemic itself may have accelerated AI’s incursion—in emergency rooms for triage; in airports, where robots spray disinfecting chemicals; in increasingly automated warehouses and meatpacking plants; and in our remote workdays, with the growing presence of chatbots, speech recognition, and email systems that get better at completing our sentences.

Exactly how AI will affect the future of human work, wages, or productivity overall remains unclear. Though service and blue-collar wages have lately been on the rise, they’ve stagnated for three decades. According to MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo, 50 to 70 percent of this languishing can be attributed to the loss of mostly routine jobs to automation. White-collar occupations are also at risk as machine learning and smart technologies take on complex functions. According to McKinsey, while only about 10 percent of these jobs could disappear altogether, 60 percent of them may see at least a third of their tasks subsumed by machines and algorithms. Some researchers argue that while AI’s overall productivity impact has been so far disappointing, it will improve; others are less sanguine. Despite these uncertainties, most experts agree that on net, AI will “become more of a challenge to the workforce,” and we should anticipate a flat to slightly negative impact on jobs by 2030.

Without intervention, AI could also help undermine democracy by amplifying misinformation or enabling mass surveillance. The past year and a half has also underscored the impact of algorithmically powered social media, not just on the health of democracy, but on health care itself.

The overall direction and net impact of AI sit on a knife’s edge, unless AI R&D and applications are appropriately channeled with wider societal and economic benefits in mind. How can we ensure that?

A handful of tech companies, including the US giants Amazon, Alphabet, Facebook, and Netflix, along with Chinese mega-players such as Alibaba and Baidu, are responsible for $2 of every $3 spent globally on AI. They’re also among the top AI patent holders. Not only do their outsize budgets for AI dwarf others’, including the federal government’s, they also emphasize building internally rather than buying AI. Even though they buy comparatively little, they’ve still cornered the AI startup acquisition market. Many of these are early-stage acquisitions, meaning the tech giants integrate the products from these companies into their own portfolios or take IP off the market if it doesn’t suit their strategic purposes and redeploy the talent. According to research from my Digital Planet team, US AI talent is intensely concentrated. The median number of AI employees in the field’s top five employers—Amazon, Google, Microsoft, Facebook, and Apple—is some 18,000, while the median for companies six to 24 is about 2,500—and it drops significantly from there. Moreover, these companies have near-monopolies of data on key behavioral areas. And they are setting the stage to become the primary suppliers of AI-based products and services to the rest of the world.

Each key player has areas of focus consistent with its business interests: Google/Alphabet spends disproportionately on natural language and image processing and on optical character, speech, and facial recognition. Amazon does the same on supply chain management and logistics, robotics, and speech recognition. Many of these investments will yield socially beneficial applications, while others, such as IBM’s Watson—which aspired to become the go-to digital decision tool in fields as diverse as health care, law, and climate action—may not deliver on initial promises, or may fail altogether. Moonshot projects, such as level 4 driverless cars, may have an excessive amount of investment put against them simply because the Big Tech players choose to champion them. Failures, disappointments, and pivots are natural to developing any new technology. We should, however, worry about the concentration of investments in a technology so fundamental and ask how investments are being allocated overall. AI, arguably, could have more profound impact than social media, online retail, or app stores—the current targets of antitrust. Google CEO Sundar Pichai may have been a tad overdramatic when he declared that AI will have more impact on humanity than fire, but that alone ought to light a fire under the policy establishment to pay closer attention.