To Fix Tech, Democracy Needs to Grow Up


There isn’t much we can agree on these days. But two sweeping statements that might garner broad support are “We need to fix technology” and “We need to fix democracy.”

There is growing recognition that rapid technology development is producing society-scale risks: state and private surveillance, widespread labor automation, ascending monopoly and oligopoly power, stagnant productivity growth, algorithmic discrimination, and the catastrophic risks posed by advances in fields like AI and biotechnology. Less often discussed, but in my view no less important, is the loss of potential advances that lack short-term or market-legible benefits. These include vaccine development for emerging diseases and open source platforms for basic digital affordances like identity and communication.

At the same time, as democracies falter in the face of complex global challenges, citizens (and increasingly, elected leaders) around the world are losing trust in democratic processes and are being swayed by autocratic alternatives. Nation-state democracies are, to varying degrees, beset by gridlock and hyper-partisanship, little accountability to the popular will, inefficiency, flagging state capacity, inability to keep up with emerging technologies, and corporate capture. While smaller-scale democratic experiments are growing, locally and globally, they remain far too fractured to handle consequential governance decisions at scale.

This puts us in a bind. Clearly, we could be doing a better job directing the development of technology towards collective human flourishing—in fact, this may be one of the greatest challenges of our time. If actually existing democracy is so riddled with flaws, it doesn’t seem up to the task. This is what rings hollow in many calls to “democratize technology”: Given the litany of complaints, why subject one seemingly broken system to governance by another?

At the same time, as we deal with everything from surveillance to space travel, we desperately need ways to collectively negotiate complex value trade-offs with global consequences, and ways to share in their benefits. This definitely seems like a job for democracy, albeit a much better iteration. So how can we radically update democracy so that we can successfully navigate toward long-term, shared positive outcomes?

The Case for Collective Intelligence

To answer these questions, we must realize that our current forms of democracy are only early and highly imperfect manifestations of collective intelligence—coordination systems that incorporate and process decentralized, agentic, and meaningful decision-making across individuals and communities to produce best-case decisions for the collective.

Collective intelligence, or CI, is not the purview of humans alone. Networks of trees, enabled by mycelia, can exhibit intelligent characteristics, sharing nutrients and sending out distress signals about drought or insect attacks. Bees and ants manifest swarm intelligence through complex processes of selection, deliberation, and consensus, using the vocabulary of physical movement and pheromones. In fact, humans are not even the only animals that vote. African wild dogs, when deciding whether to move locations, will engage in a bout of sneezing to determine whether quorum has been reached, with the tipping point determined by context—for example, lower-ranked individuals require a minimum of 10 sneezes to achieve what a higher-ranked individual could get with only three. Buffaloes, baboons, and meerkats also make decisions via quorum, with flexible “rules” based on behavior and negotiation. 
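
The rank-dependent quorum rule described above can be sketched as a toy model. The sneeze thresholds of three and 10 come from the observations cited in the text; the function, rank labels, and structure are illustrative, not drawn from the underlying research:

```python
def quorum_reached(sneezes: int, initiator_rank: str) -> bool:
    """Toy model of the rank-dependent quorum observed in African
    wild dogs: a lower-ranked initiator needs more 'votes' (sneezes)
    than a higher-ranked one to trigger a group move."""
    thresholds = {"high": 3, "low": 10}  # sneeze counts from the text
    return sneezes >= thresholds[initiator_rank]

# A high-ranked initiator moves the pack with far fewer sneezes.
assert quorum_reached(3, "high")
assert not quorum_reached(9, "low")
assert quorum_reached(10, "low")
```

The point of the sketch is that the decision rule itself is context-sensitive: the same number of votes carries different weight depending on who initiates.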

But humans, unlike meerkats or ants, don’t have to rely on the pathways to CI that our biology has hard-coded into us, or wait until the slow, invisible hand of evolution tweaks our processes. We can do better on purpose, recognizing that progress and participation don’t have to trade off. (This is the thesis on which my organization, the Collective Intelligence Project, is predicated.)

Our stepwise innovations in CI systems—such as representative, nation-state democracy, capitalist and noncapitalist markets, and bureaucratic technocracy—have already shaped the modern world. And yet, we can do much better. These existing manifestations of collective intelligence are only crude versions of the structures we could build to make better collective decisions over collective resources.

Europe Is in Danger of Using the Wrong Definition of AI


A company could choose the most obscure, nontransparent systems architecture available, claiming (rightly, under this bad definition) that it was “more AI,” in order to access the prestige, investment, and government support that claim entails. For example, a single giant deep neural network could be tasked not only with learning language but also with debiasing that language on several criteria, say, race, gender, and socioeconomic class. The company might then sneak in a little slant so the model also favors preferred advertisers or a political party. This would be called AI under either system, so it would certainly fall into the remit of the AIA. But would anyone really be able to tell, reliably, what was going on inside this system? Under the original AIA definition, a simpler way to get the job done would equally be considered “AI,” so there would not be the same incentive to use intentionally complicated systems.

Of course, under the new definition, a company could also switch to using more traditional AI, like rule-based systems or decision trees (or just conventional software). And then it would be free to do whatever it wanted—this is no longer AI, and there’s no longer a special regulation to check how the system was developed or where it’s applied. Programmers can code up bad, corrupt instructions that deliberately or just negligently harm individuals or populations. Under the new presidency draft, this system would no longer get the extra oversight and accountability procedures it would under the original AIA draft. Incidentally, this route also avoids tangling with the extra law enforcement resources the AIA mandates member states fund in order to enforce its new requirements.
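
As a minimal illustration of this loophole, a consequential decision can be automated with plain conventional software that a narrow, technique-based definition of AI would not cover. All names, postcodes, and thresholds here are hypothetical:

```python
# Hypothetical rule-based loan screener: no learning, no neural
# network, arguably "not AI" under a narrow technique-based
# definition, yet it still automates a life-altering decision and
# can encode bias (here, a crude postcode proxy).
DENY_POSTCODES = {"99991", "99992"}  # illustrative placeholder values

def screen_loan_application(income: int, postcode: str) -> bool:
    """Return True if the application passes the automated screen."""
    if postcode in DENY_POSTCODES:   # proxy discrimination, hard-coded
        return False
    return income >= 30_000          # arbitrary income cutoff

# High income is irrelevant if the postcode is on the deny list.
assert not screen_loan_application(80_000, "99991")
assert screen_loan_application(35_000, "10115")
```

Under a definition that regulates only certain techniques, this screener would escape oversight entirely, even though its effects on applicants are identical to those of a statistical model.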

Limiting where the AIA applies by complicating and constraining the definition of AI is presumably an attempt to reduce the costs of its protections for both businesses and governments. Of course, we do want to minimize the costs of any regulation or governance—both public and private resources are precious. But the AIA already does that, and does it in a better, safer way. As originally proposed, the AIA applies only to systems we really need to worry about, which is as it should be.

In the AIA’s original form, the vast majority of AI—like that in computer games, vacuum cleaners, or standard smartphone apps—would be left to ordinary product law and receive no new regulatory burden at all, or only basic transparency obligations; for example, a chatbot should identify that it is AI, not an interface to a real human.

The most important part of the AIA is where it describes what sorts of systems are potentially hazardous to automate. It then regulates only these. Both drafts of the AIA say that there are a small number of contexts in which no AI system should ever operate—for example, identifying individuals in public spaces from their biometric data, creating social credit scores for governments, or producing toys that encourage dangerous behavior or self-harm. These are all simply banned, more or less. There are far more application areas for which using AI requires government and other human oversight: situations with life-altering outcomes, such as deciding who gets what government services, or who gets into which school or is awarded what loan. In these contexts, European residents would be granted certain rights, and their governments certain obligations, to ensure that the artifacts have been built and are functioning correctly and justly.
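
The tiered structure described above can be summarized in a rough sketch. The categories are paraphrased from the draft as characterized in this article; they are not legal text, and the function name and labels are illustrative:

```python
# Rough sketch of the AIA's risk tiers as described in the text.
BANNED = {
    "remote biometric identification in public spaces",
    "government social credit scoring",
    "toys encouraging dangerous behavior or self-harm",
}
HIGH_RISK = {  # oversight required; rights for residents, duties for states
    "allocating government services",
    "school admissions",
    "loan decisions",
}

def aia_tier(use_case: str) -> str:
    """Map an application context to its (paraphrased) AIA tier."""
    if use_case in BANNED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk: oversight and conformity obligations"
    return "minimal risk: ordinary product law, transparency only"
```

The key design feature is that regulation attaches to the context of use, not to the technique used to build the system, which is exactly what a narrowed definition of AI would undermine.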

Making the AIA not apply to some of the systems we need to worry about—as the “presidency compromise” draft could do—would leave the door open for corruption and negligence. It would also legalize practices the European Commission was trying to protect us from, like social credit systems and generalized facial recognition in public spaces, as long as a company could claim its system wasn’t “real” AI.

The Future of Robot Nannies


Childcare is the most intimate of activities. Evolution has generated drives so powerful that we will risk our lives to protect not only our own children, but quite often any child, and even the young of other species. Robots, by contrast, are products created by commercial entities with commercial goals, which may—and should—include the well-being of their customers, but will never be limited to such. Robots, corporations, and other legal or non-legal entities do not possess the instinctual nature of humans to care for the young—even if our anthropomorphic tendencies may prompt some children and adults to overlook this fact.

As a result, it is important to account for the likelihood of deception—both commercial deception through advertising and self-deception on the part of parents—even though robots are unlikely to cause significant psychological damage to children or to others who may come to love them.

Neither television manufacturers, broadcasters, nor online game makers are deemed liable when children are left for too long in front of the television. Robotics companies will want to be in the same position. Since no company wants to be liable for damage to children, manufacturers are likely to undersell the artificial intelligence (AI) and interactive capacities of their robots. Any robots (certainly those in jurisdictions with strong consumer protection) will therefore likely be marketed primarily as toys, surveillance devices, and possibly household utilities. They will be brightly colored and deliberately designed to appeal to parents and children. We expect a variety of products, some with advanced capabilities and some with humanoid features. Parents will quickly discover a robot’s ability to engage and distract their child. Robotics companies will program experiences geared toward parents and children, just as television broadcasters do. But robots will always carry disclaimers, such as “this device is not a toy and should only be used with adult supervision,” or “this device is provided for entertainment only; it should not be considered educational.”

Nevertheless, parents will notice that they can leave their children alone with robots, just as they can leave them to watch television or to play with other children. Humans are phenomenal learners and very good at detecting regularities and exploiting affordances. Parents will quickly notice the educational benefits of robot nannies that have advanced AI and communication skills. Occasional horror stories, such as the robot nanny and toddler tragedy in the novel Scarlett and Gurl, will make headline news and remind parents how to use robots responsibly.

This will likely continue until or unless the incidence of injuries necessitates redesign, a revision of consumer safety standards, statutory notice requirements, and/or risk-based uninsurability, all of which will further refine the industry. Meanwhile, the media will also seize on stories of robots saving children in unexpected ways, as it does now when children (or adults) are saved by other young children and dogs. This should not make people think that they should leave children alone with robots, but given the propensity we already have to anthropomorphize robots, it may make parents feel that little bit more comfortable—until the next horror story makes headlines.

When it comes to liability, we should be able to communicate the same model of liability applied to toys to the manufacturers of robot nannies: Make your robots reliable, describe what they do accurately, and provide sufficient notice of reasonably foreseeable danger from misuse. Then, apart from the exceptional situation of errors in design or manufacture, such as parts that come off and choke children, legal liability will rest entirely with the parent or responsible adult, as it does now, and as it should under existing product liability law.

Biden’s ‘Antitrust Revolution’ Overlooks AI—at Americans’ Peril


Despite the executive orders and congressional hearings of the “Biden antitrust revolution,” the most profound anti-competitive shift is happening under policymakers’ noses: the cornering of artificial intelligence and automation by a handful of tech companies. This needs to change.

There is little doubt that the impact of AI will be widely felt. It is shaping product innovations, creating new research, discovery, and development pathways, and reinventing business models. AI is making inroads in the development of autonomous vehicles, which may eventually improve road safety, reduce urban congestion, and help drivers make better use of their time. AI recently predicted the molecular structure of almost every protein in the human body, and it helped develop and roll out a Covid vaccine in record time. The pandemic itself may have accelerated AI’s incursion—in emergency rooms for triage; in airports, where robots spray disinfecting chemicals; in increasingly automated warehouses and meatpacking plants; and in our remote workdays, with the growing presence of chatbots, speech recognition, and email systems that get better at completing our sentences.

Exactly how AI will affect the future of human work, wages, or productivity overall remains unclear. Though service and blue-collar wages have lately been on the rise, they’ve stagnated for three decades. According to MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo, 50 to 70 percent of this languishing can be attributed to the loss of mostly routine jobs to automation. White-collar occupations are also at risk as machine learning and smart technologies take on complex functions. According to McKinsey, while only about 10 percent of these jobs could disappear altogether, 60 percent of them may see at least a third of their tasks subsumed by machines and algorithms. Some researchers argue that while AI’s overall productivity impact has been so far disappointing, it will improve; others are less sanguine. Despite these uncertainties, most experts agree that on net, AI will “become more of a challenge to the workforce,” and we should anticipate a flat to slightly negative impact on jobs by 2030.

Without intervention, AI could also help undermine democracy, whether by amplifying misinformation or enabling mass surveillance. The past year and a half has also underscored the impact of algorithmically powered social media, not just on the health of democracy, but on health care itself.

The overall direction and net impact of AI sit on a knife’s edge unless AI R&D and applications are appropriately channeled, with wider societal and economic benefits in mind. How can we ensure that?

A handful of US tech companies, including Amazon, Alibaba, Alphabet, Facebook, and Netflix, along with Chinese mega-players such as Baidu, are responsible for $2 of every $3 spent globally on AI. They’re also among the top AI patent holders. Not only do their outsize budgets for AI dwarf others’, including the federal government’s, they also emphasize building internally rather than buying AI. Even though they buy comparatively little, they’ve still cornered the AI startup acquisition market. Many of these are early-stage acquisitions: the tech giants integrate the acquired products into their own portfolios or, if the IP doesn’t suit their strategic purposes, take it off the market and redeploy the talent. According to research from my Digital Planet team, US AI talent is intensely concentrated. The median number of AI employees in the field’s top five employers—Amazon, Google, Microsoft, Facebook, and Apple—is some 18,000, while the median for companies six to 24 is about 2,500—and it drops significantly from there. Moreover, these companies have near-monopolies of data on key behavioral areas. And they are setting the stage to become the primary suppliers of AI-based products and services to the rest of the world.

Each key player has areas of focus consistent with its business interests: Google/Alphabet spends disproportionately on natural language and image processing and on optical character, speech, and facial recognition. Amazon focuses on supply chain management and logistics, robotics, and speech recognition. Many of these investments will yield socially beneficial applications, while others, such as IBM’s Watson—which aspired to become the go-to digital decision tool in fields as diverse as health care, law, and climate action—may not deliver on initial promises, or may fail altogether. Moonshot projects, such as level 4 driverless cars, may attract excessive investment simply because the Big Tech players choose to champion them. Failures, disappointments, and pivots are natural to developing any new technology. We should, however, worry about the concentration of investments in a technology so fundamental and ask how investments are being allocated overall. AI, arguably, could have more profound impact than social media, online retail, or app stores—the current targets of antitrust. Google CEO Sundar Pichai may have been a tad overdramatic when he declared that AI will have more impact on humanity than fire, but that alone ought to light a fire under the policy establishment to pay closer attention.

The World Needs Deepfake Experts to Stem This Chaos


Recently the military coup government in Myanmar added serious allegations of corruption to a set of existing spurious cases against Burmese leader Aung San Suu Kyi. These new charges build on the statements of a prominent detained politician that were first released in a March video that many in Myanmar suspected of being a deepfake.

In the video, the political prisoner’s voice and face appear distorted and unnatural as he makes a detailed claim about providing gold and cash to Aung San Suu Kyi. Social media users and journalists in Myanmar immediately questioned whether the statement was real. This incident illustrates a problem that will only get worse. As deepfakes get better, people become more willing to dismiss real footage as fake. What tools and skills will be available to investigate both types of claims, and who will use them?

In the video, Phyo Min Thein, the former chief minister of Myanmar’s largest city, Yangon, sits in a bare room, apparently reading from a statement. His speech sounds odd, unlike his normal voice; his face is static; and in the poor-quality version that first circulated, his lips look out of sync with his words. Seemingly everyone wanted to believe it was a fake. Screenshotted results from an online deepfake detector spread rapidly, showing a red box around the politician’s face and asserting, with 90-percent-plus confidence, that the confession was a deepfake. Burmese journalists lacked the forensic skills to make a judgment. The state’s past actions and the military’s present ones reinforced cause for suspicion: government spokespeople have shared staged images targeting the Rohingya ethnic group, while military coup organizers have denied that social media evidence of their killings could be real.

But was the prisoner’s “confession” really a deepfake? Along with deepfake researcher Henry Ajder, I consulted deepfake creators and media forensics specialists. Some noted that the video was sufficiently low-quality that the mouth glitches people saw were as likely to be artifacts from compression as evidence of deepfakery. Detection algorithms are also unreliable on low-quality compressed video. His unnatural-sounding voice could be a result of reading a script under extreme pressure. If it is a fake, it’s a very good one, because his throat and chest move at key moments in sync with words. The researchers and makers were generally skeptical that it was a deepfake, though not certain. At this point it is more likely to be what human rights activists like myself are familiar with: a coerced or forced confession on camera. Additionally, the substance of the allegations should not be trusted given the circumstances of the military coup unless there is a legitimate judicial process.

Why does this matter? Regardless of whether the video is a forced confession or a deepfake, the results are most likely the same: words digitally or physically compelled out of a prisoner’s mouth by a coup d’état government. However, while the usage of deepfakes to create nonconsensual sexual images currently far outstrips political instances, deepfake and synthetic media technology is rapidly improving, proliferating, and commercializing, expanding the potential for harmful uses. The case in Myanmar demonstrates the growing gap between the capabilities to make deepfakes, the opportunities to claim a real video is a deepfake, and our ability to challenge that.

It also illustrates the challenges of having the public rely on free online detectors without understanding the strengths and limitations of detection or how to second-guess a misleading result. Deepfake detection is still an emerging technology, and a detection tool applicable to one approach often does not work on another. We must also be wary of counter-forensics—where someone deliberately takes steps to confuse a detection approach. And it’s not always possible to know which detection tools to trust.

How do we avoid conflicts and crises around the world being blindsided by deepfakes and supposed deepfakes?

We should not be turning ordinary people into deepfake spotters, parsing the pixels to discern truth from falsehood. Most people will do better relying on simpler approaches to media literacy, such as the SIFT method, that emphasize checking other sources or tracing the original context of videos. In fact, encouraging people to be amateur forensics experts can send people down the conspiracy rabbit hole of distrust in images.