Automation Isn’t the Biggest Threat to US Factory Jobs

The number of American workers who quit their jobs during the pandemic—over a fifth of the workforce—may constitute one of the largest American labor movements in recent history. Workers demanded higher pay and better conditions, spurred by rising inflation and the pandemic realization that employers expected them to risk their lives for low wages, mediocre benefits, and few protections from abusive customers—often while corporate stock prices soared. At the same time, automation has become cheaper and smarter than ever. Robot adoption hit record highs in 2021. This wasn’t a surprise, given prior trends in robotics, but it was likely accelerated by pandemic-related worker shortages and Covid-19 safety requirements. Will robots automate away the jobs of entitled millennials who “don’t want to work,” or could this technology actually improve workers’ jobs and help firms attract more enthusiastic employees?

The answer depends on more than what’s technologically feasible. It also depends on what actually happens when a factory installs a new robot or a cashier aisle is replaced by a self-checkout booth—and on what future possibilities await displaced workers and their children. So far, the gains from automation have proved notoriously unequal. A key component of 20th-century productivity growth came from replacing workers with technology, and economist Carl Benedikt Frey notes that American productivity grew by 400 percent from 1930 to 2000, while average leisure time increased by only 3 percent. (Since 1979, American labor productivity, or dollars created per worker, has increased eight times faster than workers’ hourly compensation.) During this period, technological luxuries became necessities and new types of jobs flourished—while the workers’ unions that used to ensure livable wages dissolved and less-educated workers fell further behind those with high school and college degrees. But the trend has differed across industrialized countries: From 1995 to 2013, America saw a 1.3-percentage-point annual gap between productivity growth and median wage growth, while in Germany the gap was only 0.2 points.

Technology adoption will continue to increase, whether America can equitably distribute the technological benefits or not. So the question becomes, how much control do we actually have over automation? How much of this control is dependent on national or regional policies, and how much power might individual firms and workers have within their own workplaces? Is it inevitable that robots and artificial intelligence will take all of our jobs, and over what time frame? While some scholars believe that our fates are predetermined by the technologies themselves, emerging evidence indicates that we may have considerable influence over how such machines are employed within our factories and offices—if we can only figure out how to wield this power.

While 8 percent of German manufacturing workers left their jobs (voluntarily or involuntarily) between 1993 and 2009, 34 percent of US manufacturing workers left their jobs over the same period. Thanks to workplace bargaining and sectoral wage-setting, German manufacturing workers have better financial incentives to stay at their jobs; The Conference Board reports that the average German manufacturing worker earned $43.18 (plus $8.88 in benefits) per hour in 2016, while the average American manufacturing worker earned $39.03 with only $3.66 in benefits. Overall, Germans across the economy with a “medium-skill” high school or vocational certificate earned $24.31 per hour in 2016, while Americans with comparable education averaged $14.55 per hour. Two case studies illustrate the differences between American and German approaches to manufacturing workers and automation, from policies to supply chains to worker training systems.

In a town on the outskirts of the Black Forest in Baden-Württemberg, Germany, complete with winding cobblestone streets and peaked red rooftops, there’s a 220-person factory that’s spent decades as a global leader in safety-critical fabricated metal equipment for sites such as highway tunnels, airports, and nuclear reactors. It’s a wide, unassuming warehouse next to a few acres of golden mustard flowers. When I visited with my colleagues from the MIT Interactive Robotics Group and the Fraunhofer Institute for Manufacturing Engineering and Automation’s Future Work Lab (part of the diverse German government-supported Fraunhofer network for industrial research and development), the senior factory manager informed us that his workers’ attitudes, like the 14th-century church downtown, hadn’t changed much in his 25-year tenure at the factory. Teenagers still entered the firm as apprentices in metal fabrication through Germany’s dual work-study vocational system, and wages were high enough that most young people expected to stay at the factory and move up the ranks until retirement, earning a respectable living along the way. Smaller German manufacturers can also get government subsidies to help send their workers back to school to learn new skills that often translate into higher wages. This manager had worked closely with a nearby technical university to develop advanced welding certifications, and he was proud to rely on his “welding family” of local firms, technology integrators, welding trade associations, and educational institutions for support with new technology and training.

Our research team also visited a 30-person factory in urban Ohio that makes fabricated metal products for the automotive industry, not far from the empty warehouses and shuttered office buildings of downtown. This factory owner, a grandson of the firm’s founder, complained about losing his unskilled, minimum-wage technicians to any nearby job willing to offer a better salary. “We’re like a training company for big companies,” he said. He had given up on finding workers with the relevant training and resigned himself to hiring unskilled workers who could, he hoped, be trained on the job. Around 65 percent of his firm’s business used to go to one automotive supplier, which outsourced its metal fabrication to China in 2009, forcing the Ohio firm to shrink to a third of its prior workforce.

While the Baden-Württemberg factory commanded market share by selling specialized final products at premium prices, the Ohio factory made commodity components to sell to intermediaries, who then sold to powerful automotive firms. So the Ohio firm had to compete with low-wage, bulk producers in China, while the highly specialized German firm had few foreign or domestic competitors forcing it to shrink its skilled workforce or lower wages.

Welding robots have replaced some of the workers’ tasks in the two factories, but both are still actively hiring new people. The German firm’s first robot, purchased in 2018, was a new “collaborative” welding arm (with a friendly user interface) designed to be operated by workers with welding expertise, rather than professional robot programmers who don’t know the intricacies of welding. Training welders to operate the robot isn’t a problem in Baden-Württemberg, where everyone who arrives as a new welder has a vocational degree representing at least two years of education and hands-on apprenticeship in welding, metal fabrication, and 3D modeling. Several of the firm’s welders had already learned to operate the robot, assisted by their prior training. And although the German firm manager was pleased to save labor costs, his main reason for the robot acquisition was to improve workers’ health and safety and minimize boring, repetitive welding sequences—so he could continue to attract skilled young workers who would stick around. Another German factory we visited had recently acquired a robot to tend a machine during the night shift so fewer workers would have to work overtime or come in at night.

TikTok Must Not Fail Ukrainians

Vietnam was known as the first televised war. The Iran Green Movement and the Arab Spring were called the first Twitter Revolutions. And now the Russian invasion of Ukraine is being dubbed the first TikTok War. As The Atlantic and others have pointed out, it’s not, literally or figuratively: TikTok is merely the latest social media platform to see its profitable expansion turn into a starring role in a crisis.

But as its #ukraine and #украина posts near a combined 60 billion views, TikTok should learn from the failings of other platforms over the past decade, failings that have exacerbated the horrors of war, facilitated misinformation, and impeded access to justice for human rights crimes. It should take steps now to better support the creators sharing evidence and firsthand experience, the viewers encountering it, and the people and institutions who use these videos for reliable information and human rights accountability.

First, TikTok can help people on the ground in Ukraine who want to galvanize action and be trusted as frontline witnesses. The company should provide targeted guidance directly to these vulnerable creators. This could include notifications or videos on their For You page that demonstrate (1) how to film in a way that is more verifiable and trustworthy to outside sources, (2) how to protect themselves and others in case a video shot in crisis becomes a tool of surveillance and outright targeting, and (3) how to share their footage without it getting taken down or made less visible as graphic content. TikTok should begin the process of incorporating emerging approaches (such as the C2PA standards) that allow creators to choose to show a video’s provenance. And it should offer easy ways, prominently available when recording, to protectively and not just aesthetically blur the faces of vulnerable people.
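TikTok’s internal tooling is not public, but the protective blur this paragraph calls for is straightforward to sketch. Below is a minimal, hypothetical Python example using OpenCV’s bundled face detector; the file names are placeholders. The key point is that the blur must be destructive (pixelation that throws pixel data away), not a light aesthetic filter that can sometimes be partially undone.

```python
# A minimal sketch of "protective" (irreversible) face blurring, assuming a
# hypothetical frame grabbed from a phone; TikTok's own tooling is not public.
# Uses OpenCV's bundled Haar-cascade detector, then pixelates each face by
# destructive downsampling.
import cv2

def blur_faces(in_path: str, out_path: str, blocks: int = 12) -> None:
    img = cv2.imread(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = img[y:y+h, x:x+w]
        # Downscale to a handful of blocks, then upscale: the detail is gone.
        small = cv2.resize(face, (blocks, blocks))
        img[y:y+h, x:x+w] = cv2.resize(small, (w, h),
                                       interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, img)

blur_faces("frame.jpg", "frame_blurred.jpg")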

TikTok should also be investing in robust, localized, contextual content moderation and appeals routing for this conflict and the next crisis. Social media creators are at the mercy of capricious algorithms that cannot tell the difference between harmful violent content and victims of war sharing their experiences. If a clip or account is taken down or suspended—often for breaching a rule the user never knew about—it’s unlikely the creator will get a rapid or transparent appeals process, particularly if they live outside North America and Western Europe. The company should bolster its content moderation in Ukraine immediately.

The platform is poorly designed for accurate information but brilliantly designed for quick human engagement. The instant fame that the For You page can grant has brought the everyday life and dark humor of young Ukrainians like Valeria Shashenok (@valerissh) from the city of Chernihiv into people’s feeds globally. Human rights activists know that one of the best ways to engage people in meaningful witnessing, and to counter the natural impulse to look away, is to let them experience others’ realities in a personal, human way. Undoubtedly some of this insight into real people’s lives in Ukraine is moving people toward greater solidarity. Yet the more decontextualized the suffering of others is—and the For You page encourages flitting between disparate stories—the more that suffering is experienced as spectacle. This risks a turn toward narcissistic self-validation or worse: trolling of people at their most vulnerable.

And that’s assuming that the content we’re viewing is shared in good faith. The ability to remix audio, along with TikTok’s intuitive ease in editing, combining, and reusing existing footage, among other factors, makes the platform vulnerable to misinformation and disinformation. Unless a deceptive video is spotted by an automated match with a known fake, labeled as state-affiliated media, or flagged by a fact-checker as incorrect or by TikTok’s teams as part of a coordinated influence campaign, it circulates without any guidance or tools to help viewers exercise basic media literacy.

TikTok should do more to ensure that it promptly identifies, reviews, and labels these fakes for viewers, and takes them down or removes them from recommendations. It should ramp up its capacity to fact-check on the platform and address how its business model, and the resulting algorithm, continue to promote deceptive videos with high engagement. We, the people viewing the content, also need better direct support. One of the first steps professional fact-checkers take to verify footage is a reverse image search, which reveals whether a photo or video existed before the date it claims to have been made, or comes from a different location or event than claimed. As the TikTok misinfo expert Abbie Richards has pointed out, TikTok doesn’t even indicate the date a video was posted when it appears in the For You feed. Like other platforms, TikTok also doesn’t offer its users an easy in-platform reverse image or video search, or in-feed indications that a video duplicates earlier footage. It’s past time to make it simpler to check whether a video in your feed comes from a different time and place than it claims, for example with an intuitive reverse image/video search or a simple one-click provenance trail for videos created in-platform.
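To make that first fact-checking step concrete, here is a minimal sketch of a reverse-image-style duplicate check using perceptual hashing via the open source imagehash library; the file names and distance threshold are assumptions for illustration, not a feature TikTok ships. Visually similar frames hash to values a small Hamming distance apart, flagging likely re-uploads of older footage.

```python
# A minimal sketch of a perceptual-hash duplicate check, the same idea
# behind a fact-checker's reverse image search. File names are hypothetical.
import imagehash
from PIL import Image

known = imagehash.phash(Image.open("archived_2014_frame.png"))
candidate = imagehash.phash(Image.open("viral_clip_frame.png"))

distance = known - candidate  # subtracting hashes gives the Hamming distance in bits
if distance <= 8:  # threshold is a judgment call
    print(f"Likely the same footage (distance {distance}); check the original's date.")
else:
    print(f"No match (distance {distance}).")
```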

No one visits the “Help Center.” Tools need to be accompanied by guidance in videos that appear on people’s For You pages. Viewers need to build the media-literacy muscles to make good judgments about the footage they encounter. This includes sharing principles like SIFT, as well as tips specific to the way TikTok works, such as what to look for on its extremely popular livestreams: check the comments, look at the creator’s previous content, and, on any video, always check whether the audio is original (as both Richards and Marcus Bösch, another TikTok misinfo expert, have suggested). Reliable news sources also need to be part of the feed, as TikTok appears to have begun doing.

TikTok also demonstrates a problem that arises when content recommender algorithms intersect with the good media-literacy practice of “lateral reading.” Perversely, the more attention you pay to a suspicious video, returning to it after looking for other sources, the more the TikTok algorithm feeds you similar content and prioritizes sharing that potentially false video with other people.
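TikTok’s actual recommender is proprietary, but a toy model shows why scrutiny can read as endorsement: any system that scores purely on watch events cannot distinguish a lateral reader’s repeat visits from genuine interest. The sketch below is illustrative only.

```python
# A toy model of the feedback loop described above, not TikTok's algorithm.
from collections import Counter

watch_log = [
    "cat_video", "suspicious_clip", "news_clip",
    "suspicious_clip",  # viewer returns to re-examine the clip...
    "suspicious_clip",  # ...and again after checking other sources
]

engagement = Counter(watch_log)
ranking = sorted(engagement, key=engagement.get, reverse=True)
print(ranking)  # ['suspicious_clip', ...]: scrutiny reads as endorsement
```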

Content moderation policies are meant to be a safeguard against the spread of violent, inciting, or otherwise banned content. Platforms take down vast quantities of footage, which often includes content that can help investigate human rights violations and war crimes. AI algorithms and humans—correctly and incorrectly—identify these videos as dangerous speech, terrorist content, or graphic violence unacceptable for viewing. A high percentage of the content is taken down by a content moderation algorithm, in many cases before it’s seen by a human eye. This can have a catastrophic effect on the quest for justice and accountability. How can investigators request information they don’t know exists? How much material is lost forever because human rights organizations haven’t had the chance to see and preserve it? For example, in 2017 the independent human rights archiving organization Syrian Archive found that hundreds of thousands of videos from the Syrian Civil War had been swept away by the YouTube algorithm. In the blink of an eye, it removed critical evidence that could contribute to accountability, community memory, and justice.

It’s beyond time that we had far better transparency about what is lost and why, and clarity about how platforms will be regulated, compelled, or persuaded to create so-called digital “evidence lockers” that selectively and appropriately safeguard material critical for justice. We need this both to preserve content that falls afoul of platform policy and to preserve content that is incorrectly removed, particularly knowing that content moderation is broken. Groups like WITNESS, Mnemonic, the Human Rights Center at Berkeley, and Human Rights Watch are working on ways these archives could be set up—balancing accountability with human rights, privacy, and, ideally, ultimate community control of the archives. TikTok now joins the other major social media platforms in needing to step up to this challenge. To start, it should take proactive action to understand what needs to be preserved, and engage with the accountability mechanisms and civil society groups that have been preserving video evidence.
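What might one primitive of such an evidence locker look like? As a hypothetical sketch (the field names and storage layout are invented, and a real system would add encryption, access controls, and legal process), a platform could preserve a copy of flagged footage plus a cryptographic hash and minimal metadata before removing it from public view, so investigators can later verify the material’s integrity.

```python
# A hypothetical evidence-locker primitive: content-addressed preservation
# of flagged footage before takedown. Names and layout are invented.
import hashlib
import json
import shutil
import time
from pathlib import Path

LOCKER = Path("evidence_locker")
LOCKER.mkdir(exist_ok=True)

def preserve(video_path: str, takedown_reason: str) -> dict:
    data = Path(video_path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    shutil.copy(video_path, LOCKER / f"{digest}.mp4")  # content-addressed copy
    record = {
        "sha256": digest,            # lets investigators verify integrity later
        "preserved_at": time.time(),
        "takedown_reason": takedown_reason,
    }
    (LOCKER / f"{digest}.json").write_text(json.dumps(record))
    return record

print(preserve("flagged_clip.mp4", "graphic-violence policy"))
```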

The invasion of Ukraine is not the first social media war. But it can be the first time a social media company does what it should do for people bearing witness on the front lines, from a distance, and in the courtroom.



The Unnerving Rise of Video Games that Spy on You

Tech conglomerate Tencent caused a stir last year with the announcement that it would comply with China’s directive to incorporate facial recognition technology into its games in the country. The move was in line with China’s strict gaming regulation policies, which impose limits on how much time minors can spend playing video games—an effort to curb addictive behavior, since state media has labeled gaming “spiritual opium.”

The state’s use of biometric data to police its population is, of course, invasive, and especially undermines the privacy of underage users—but Tencent is not the only video game company to track its players, nor is this recent case an altogether new phenomenon. All over the world, video games, one of the most widely adopted digital media forms, are installing networks of surveillance and control.

In basic terms, video games are systems that translate physical inputs—such as hand movement or gesture—into various electric or electronic machine-readable outputs. The user, by acting in ways that comply with the rules of the game and the specifications of the hardware, is parsed as data by the video game. Writing almost a decade ago, the sociologists Jennifer R. Whitson and Bart Simon argued that games are increasingly understood as systems that easily allow the reduction of human action into knowable and predictable formats.
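To see how naturally play reduces to data, consider a toy input handler; nothing below comes from a real game engine, but even this trivial loop turns gestures into structured, analyzable records.

```python
# A toy illustration of play becoming data: each physical input is reduced
# to a timestamped, machine-readable event ready for an analytics backend.
import json
import time

telemetry = []

def on_input(player_id: str, action: str) -> None:
    """Record each input as a structured event."""
    telemetry.append({"t": time.time(), "player": player_id, "action": action})

on_input("player_1", "jump")
on_input("player_1", "cast_fishing_line")
on_input("player_1", "fall_from_height")

print(json.dumps(telemetry, indent=2))
```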

Video games, then, are a natural medium for tracking, and researchers have long argued that large data sets about players’ in-game activities are a rich resource for understanding player psychology and cognition. In one study from 2012, Nick Yee, Nicolas Ducheneaut, and Les Nelson scraped player activity data logged on the World of Warcraft Armory website—essentially a database that records everything a player’s character has done in the game (how many of a certain monster they’ve killed, how many times they’ve died, how many fish they’ve caught, and so on).

The researchers used this data to infer personality characteristics (in combination with data from a survey). The paper suggests, for example, that respondents classified as more conscientious tended to spend more time on repetitive and dull in-game tasks, such as fishing. Conversely, those whose characters more often fell to their deaths from high places were less conscientious, according to their survey responses.
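The core computation in this kind of study is simple to sketch. The numbers below are invented (the 2012 paper used real Armory logs and real survey responses); the example just shows a Pearson correlation between a survey trait score and a gameplay statistic.

```python
# A sketch of the study's core computation, run on invented numbers.
import numpy as np

conscientiousness = np.array([3.1, 4.5, 2.2, 4.8, 3.9, 2.7])    # survey scores
fishing_hours     = np.array([5.0, 22.0, 1.5, 30.0, 12.0, 4.0])  # from game logs

r = np.corrcoef(conscientiousness, fishing_hours)[0, 1]
print(f"Pearson r = {r:.2f}")  # a positive r echoes the paper's finding
```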

Correlating personality with quantitative gameplay data is certainly not unproblematic. The relationship between personality and identity and video game activity is complex and idiosyncratic; for instance, research suggests that gamer identity intersects with gender, racial, and sexual identity. Additionally, there has been general pushback against claims that Big Data produces new knowledge rooted in correlation. Despite this, game companies increasingly realize the value of big data sets for gaining insight into what a player likes, how they play, what they play, what they’ll likely spend money on (in freemium games), how and when to offer the right content, and how to solicit the right kinds of player feelings.

While there are no numbers on how many video game companies are surveilling their players in-game (although, as a recent article suggests, large publishers and developers like Epic, EA, and Activision explicitly state they capture user data in their license agreements), a new industry of firms selling middleware “data analytics” tools, often used by game developers, has sprung up. These data analytics tools promise to make users more amenable to continued consumption through the use of data analysis at scale. Such analytics, once available only to the largest video game studios—which could hire data scientists to capture, clean, and analyze the data, and software engineers to develop in-house analytics tools—are now commonplace across the entire industry, pitched as “accessible” tools that provide a competitive edge in a crowded marketplace by companies like Unity, GameAnalytics, or Amazon Web Services. (Although, as a recent study shows, the extent to which these tools are truly “accessible” is questionable, requiring technical expertise and time to implement.) As demand for data-driven insight has grown, so has the range of services—dozens of tools launched in the past several years alone—providing game developers with different forms of insight. One tool—essentially Uber for playtesting—allows companies to outsource quality assurance testing and provides data-driven insight into the results. Another supposedly uses AI to understand player value and maximize retention (and spending, with a focus on high-spenders).
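As a concrete, hypothetical illustration of the kind of metric this middleware reports, here is a minimal day-N retention calculation over an invented login table; it stands in for the “insight” described above, not for any vendor’s actual product or API.

```python
# A minimal day-N retention calculation over an invented login table.
logins = {  # player -> days since install with at least one session
    "p1": {0, 1, 2, 7},
    "p2": {0, 1},
    "p3": {0, 7},
    "p4": {0},
}

def retention(day: int) -> float:
    """Share of the day-0 cohort that came back on the given day."""
    cohort = [p for p, days in logins.items() if 0 in days]
    return sum(day in logins[p] for p in cohort) / len(cohort)

print(f"D1: {retention(1):.0%}, D7: {retention(7):.0%}")  # D1: 50%, D7: 50%
```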