Amazon Workers Walk Out Over Layoffs and Broken Climate Promises

One month after Amazon ordered its corporate employees to return to the office, some of them have walked back out. Rallies took place today outside the company’s Seattle headquarters and at Amazon offices in several other cities. The employees are protesting Amazon’s return-to-office mandate and a lack of meaningful progress on its Climate Pledge.

“Morale is the lowest I’ve seen since I’ve been working here,” says a Seattle-based employee who started in 2020 and survived two rounds of layoffs this year that put 27,000 Amazonians out of work. “People have lost trust in leadership because they have made these unilateral decisions that impact workers’ lives.”

Walkout organizers say more than 1,000 workers joined the Seattle rally, with demonstrations in other cities bringing overall participation to more than 2,000. Amazon spokesperson Brad Glasser says the company estimates that about 300 people attended the Seattle demonstration. Amazon currently has roughly 350,000 corporate and tech employees globally and about 65,000 in the Seattle area.

While there has been a surge in protests and walkouts from Amazon’s warehouse workers in recent years, today marks the largest demonstration by corporate workers since a 2019 climate protest in which thousands of workers walked off the job. It comes with tech workers across the industry still reeling from an unprecedented number of layoffs, as companies cut back after pandemic hiring sprees.

In February, Andy Jassy, who took over as CEO from Amazon founder Jeff Bezos in 2021, became the latest tech boss to announce that his workers must return to the office, ordering staff to appear in person three days a week starting on May 1. The day of that announcement, employees formed a Slack channel to rally support for remote work and sent a petition signed by 20,000 workers to Amazon’s leadership asking them to reconsider the mandate. Employees say the policy reversed an earlier promise that remote work decisions would be left up to individual teams and add that some workers had relocated as a result. Amazon bosses rejected the request. 

That defeat amplified a wider malaise also fed by Amazon’s sweeping layoffs and the company’s soaring emissions—despite a pledge to achieve net-zero carbon emissions by 2040. The return-to-office Slack channel “created a place where a lot of people suddenly had a reason to talk about their gripes with Amazon,” says a Los Angeles-based employee who is walking out of his office today. “In doing so, we realized there was a lot of common ground and an overarching theme of Amazon taking us backward in a lot of big ways.”

“We’re always listening and will continue to do so, but we’re happy with how the first month of having more people back in the office has been,” writes Glasser, the Amazon spokesperson. “There’s more energy, collaboration, and connections happening, and we’ve heard this from lots of employees and the businesses that surround our offices.”

Over the past year, remote work has become a flashpoint for many tech workers who grew to enjoy the flexibility it afforded during the pandemic and in some cases reorganized their lives around the freedom to live away from tech hubs.

Runaway AI Is an Extinction Risk, Experts Warn

Leading figures in the development of artificial intelligence systems, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building may someday pose an existential threat to humanity comparable to that of nuclear war and pandemics. 

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement, released today by the Center for AI Safety, a nonprofit. 

The idea that AI might become difficult to control, and either accidentally or deliberately destroy humanity, has long been debated by philosophers. But in the past six months, following some surprising and unnerving leaps in the performance of AI algorithms, the issue has become a lot more widely and seriously discussed.

In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup dedicated to developing AI with a focus on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio—two of the three academics who shared the Turing Award for their work on deep learning, the technology that underpins modern advances in machine learning and AI—as well as dozens of entrepreneurs and researchers working on cutting-edge AI problems.

“The statement is a great initiative,” says Max Tegmark, a physics professor at the Massachusetts Institute of Technology and the director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI. In March, Tegmark’s Institute published a letter calling for a six-month pause on the development of cutting-edge AI algorithms so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.

Tegmark says he hopes the statement will encourage governments and the general public to take the existential risks of AI more seriously. “The ideal outcome is that the AI extinction threat gets mainstreamed, enabling everyone to discuss it without fear of mockery,” he adds.

Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the creation of nuclear weapons. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Hendrycks said in a quote issued along with his organization’s statement. 

The current tone of alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models consist of a specific kind of artificial neural network that is trained on enormous quantities of human-written text to predict the words that should follow a given string. When fed enough data, and with additional training in the form of feedback from humans on good and bad answers, these language models are able to generate text and answer questions with remarkable eloquence and apparent knowledge—even if their answers are often riddled with mistakes. 
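
To make that mechanism concrete, here is a toy sketch of next-token prediction, the operation at the core of these systems. A hand-built word-frequency table stands in for the trained neural network, so this illustrates only the prediction loop, not how real models work at scale.

from collections import Counter, defaultdict

# Toy stand-in for a language model: count which word follows each word
# in a tiny "training corpus," then generate text by repeatedly picking
# the most likely next word. Real models learn these predictions with
# neural networks trained on vast amounts of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

token = "the"
output = [token]
for _ in range(5):
    token = predict_next(token)
    if token is None:
        break
    output.append(token)

print(" ".join(output))  # e.g. "the cat sat on the cat"

The human-feedback step mentioned above has no analog in this toy; it is what steers real models away from fluent nonsense and toward useful answers.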

These language models have proven increasingly coherent and capable as they have been fed more data and computing power. The most powerful model created so far, OpenAI’s GPT-4, can solve complex problems, including ones that appear to require some forms of abstraction and common-sense reasoning.

CNET Published AI-Generated Stories. Then Its Staff Pushed Back

In November, venerable tech outlet CNET began publishing articles generated by artificial intelligence, on topics such as personal finance, that proved to be riddled with errors. Today the human members of its editorial staff have unionized, calling on their bosses to provide better conditions for workers and more transparency and accountability around the use of AI.

“In this time of instability, our diverse content teams need industry-standard job protections, fair compensation, editorial independence, and a voice in the decisionmaking process, especially as automated technology threatens our jobs and reputations,” reads the mission statement of the CNET Media Workers Union, whose more than 100 members include writers, editors, video producers, and other content creators.

While the organizing effort started before CNET management began its AI rollout, the new union could become one of the first to force bosses to set guardrails around the use of content produced by generative AI services like ChatGPT. Any agreement struck with CNET’s parent company, Red Ventures, could help set a precedent for how companies approach the technology. Multiple digital media outlets have recently slashed staff, with some, like BuzzFeed and Sports Illustrated, at the same time embracing AI-generated content. Red Ventures did not immediately respond to a request for comment.

In Hollywood, AI-generated writing has prompted a worker uprising. Striking screenwriters want studios to agree to prohibit AI authorship and to never ask writers to adapt AI-generated scripts. The Alliance of Motion Picture and Television Producers rejected that proposal, instead offering to hold annual meetings to discuss technological advancements. The screenwriters and CNET’s staff are both represented by the Writers Guild of America.

While CNET bills itself as “your guide to a better future,” the 30-year-old publication stumbled late last year into the new world of generative AI that can create text or images. In January, the science and tech website Futurism revealed that in November, CNET had quietly started publishing AI-authored explainers such as “What Is Zelle and How Does it Work?” The stories ran under the byline “CNET Money Staff,” and readers had to hover their cursor over it to learn that the articles had been written “using automation technology.”

A torrent of embarrassing disclosures followed. The Verge reported that more than half of the AI-generated stories contained factual errors, leading CNET to issue sometimes lengthy corrections on 41 out of its 77 bot-written articles. The tool that editors used also appeared to have plagiarized work from competing news outlets, as generative AI is wont to do.

Then-editor-in-chief Connie Guglielmo later wrote that a plagiarism-detection tool had been misused or failed and that the site was developing additional checks. One former staffer demanded that her byline be excised from the site, concerned that AI would be used to update her stories in an effort to lure more traffic from Google search results.

In response to the negative attention to CNET’s AI project, Guglielmo published an article saying that the outlet had been testing an “internally designed AI engine” and that “AI engines, like humans, make mistakes.” Nonetheless, she vowed to make some changes to the site’s disclosure and citation policies and forge ahead with its experiment in robot authorship. In March, she stepped down from her role as editor in chief and now heads up the outlet’s AI edit strategy.

Meet RedBalloon, the ‘Anti-Woke’ Job Board for Christian Nationalists

On March 19, Donald Trump Jr. sent an email via the firm that manages his father’s email list, Campaign Nucleus, announcing “a HUGE advance in the culture war.” That culture war, Trump wrote, is “coming to corporate America.” He added that conservatives have a “new” tool to fight back against “woke” workplaces: the “free to work” job board RedBalloon. As an incentive to create an account on the website, Trump offered 20 autographed copies of his latest book, Triggered.

“The big job boards like Indeed and ZipRecruiter are actually promoting ‘woke’ workplace policies,” Trump said in a promotional video posted on the right-wing video streaming site Rumble. “They’re a huge part of the problem.” Exactly what the problem is, or what the word “woke” is supposed to stand for other than a conservative shibboleth, is unclear. 

Standing next to the eldest son of the recently indicted former president in the RedBalloon advertisement is the company’s unfortunately named founder, Andrew Crapuchettes. RedBalloon’s origin story goes back to 2021, when Crapuchettes claims he was fired from his role as CEO at his former company, EMSI, for being “too conservative and Christian.” 

RedBalloon’s explicitly “anti-woke” positioning fits within a broader conservative push in recent years to create a “parallel economy” apart from progressive values. The idea has been promoted by the junior Trump and far-right pundits like Charlie Kirk of Turning Point USA. And while a parallel right-wing ecosystem of media outlets has gained some traction, other anti-woke projects haven’t fared so well. Consider the Peter Thiel-funded bank that faced self-cancellation, or the fact that the right-wing Twitter alternative Parler is down to around 20 employees. As NBC News reported last month, conservative tech founders at the most recent Conservative Political Action Conference “said they believe some companies that were part of the ‘parallel economy’ movement got ahead of themselves in their aspirations.”

‘An Unapologetic Conservative Christian’

The particular nature of Crapuchettes’ Christian faith is something that may give pause to fans of the separation of church and state. In November 2021, The Guardian reported that Crapuchettes is an elder of Christ Church in Moscow, Idaho—a church led by a man who has “openly expressed the ambition of creating a ‘theocracy’ in America.”

“When I ran the company, I believed that everybody should bring their whole self to work,” Crapuchettes says when asked about his faith. “And as an unapologetic conservative Christian, that means that when we’re having our annual Christmas dinner, I’m going to pray for the meal.” 

Crapuchettes says his role as an elder of a church that promotes Christian theocracy was “never brought up” as a conflict by his former board of directors.

“Was it an underlying issue? I have no idea,” Crapuchettes says. 

Crapuchettes says the issues with his former employer started when he and the EMSI board of directors butted heads over various social issues. “The Covid-BLM-George Floyd social shift happened,” Crapuchettes says. “We came to a head on a number of things, and they ended up selling the business.” 

Free AI Video Generators Are Nearing a Crucial Tipping Point

You may have noticed some impressive video memes made with AI in recent weeks. Harry Potter reimagined as a Balenciaga commercial and nightmarish footage of Will Smith eating spaghetti both recently went viral. They highlight how quickly AI’s ability to create video is advancing, as well as how problematic some uses of the technology may be.

These videos remind me of the moment AI image-making tools became widespread last year, when programs like Craiyon (formerly known as DALL-E Mini) let anyone conjure up recognizable, if crude and often surreal, images, such as surveillance footage of babies robbing a gas station, Darth Vader courtroom sketches, and Elon Musk eating crayons.

Craiyon was an open source knockoff of the then tightly restricted DALL-E 2 image generator from OpenAI, the company behind ChatGPT. The tool was the first to show AI’s ability to take a text prompt and turn it into what looked like real photos and human-drawn illustrations. Since then, DALL-E has become open to everyone, and programs like Midjourney and Dream Studio have developed and honed similar tools, making it relatively trivial to craft complex and realistic images with a few taps on a keyboard.

As engineers have tweaked the algorithmic knobs and levers behind these image generators, added more training data, and paid for more GPU chips to run everything, these image-making tools have become incredibly good at faking reality. To take a few examples from a subreddit dedicated to strange AI images, check out Alex Jones at a gay pride parade or the Ark of the Covenant at a yard sale. 

Widespread access to this technology, and its sophistication, forces us to rethink how we view online imagery, as was highlighted after AI-made images purporting to show Donald Trump’s arrest went viral last month. The incident led Midjourney to announce that it would no longer offer a free trial of its service—a fix that might deter some cheapskate bad actors but leaves the broader problem untouched.

As WIRED’s Amanda Hoover writes this week, algorithms still struggle to generate convincing video from a prompt. Creating many individual frames is computationally expensive, and as today’s jittering and sputtering videos show, it is hard for algorithms to maintain enough coherence between them to produce a video that makes sense. 
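
Some back-of-the-envelope arithmetic shows how the cost stacks up. The numbers below are illustrative assumptions, not measurements of any particular system:

# A generated video is, at minimum, many generated images: a short clip
# at a standard frame rate already multiplies the work of a single still,
# before the harder problem of frame-to-frame coherence is even addressed.
fps = 24          # a common cinematic frame rate
seconds = 10      # a short, meme-length clip
frames = fps * seconds
print(f"{seconds}s at {fps} fps = {frames} frames,")
print(f"roughly {frames}x the compute of one still image.")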

AI tools are, however, getting a lot more adept at editing videos. The Balenciaga meme, along with versions referencing Friends and Breaking Bad, was made by combining a few different AI tools, first to generate still images and then to add simple animation effects. But the end result is still impressive.

Runway ML, a startup that’s developing AI tools for professional image and video creation and editing, this week launched a new, more efficient technique for applying stylistic changes to videos. I used it to create this dreamlike footage of my cat, Leona, walking through a “cloudscape” from an existing video in just a few minutes.

Video: Will Knight/Runway

Different machine learning techniques open new possibilities. A company called Luma AI, for instance, is using a technique known as neural radiance fields to turn 2D photographs into detailed 3D scenes. Feed a few snapshots into the company’s app, and you’ll have a fully interactive 3D scene to play with. 

These clips suggest we are at an inflection point for AI video making. As with AI image generation, a growing rush of memes could be followed by significant improvements in the quality and controllability of AI videos that lodge the technology in all sorts of places. AI may well become a muse for some auteurs. Runway’s tools were used by the visual effects artists working on the Oscar-winning Everything Everywhere All at Once. Darren Aronofsky, director of The Whale, Black Swan, and Pi, is also a fan of Runway.

But you only need to look at how advanced images from Midjourney and Dream Studio are now to sense where AI video is heading—and how difficult it may become to distinguish real clips from fake ones. Of course, people can already manipulate videos with existing technology, but it’s still relatively expensive and difficult to pull off.

The rapid advances in generative AI may prove dangerous in an era when social media has been weaponized and deepfakes are propagandists’ playthings. As Jason Parham wrote for WIRED this week, we also need to seriously consider how generative AI can recapture and repurpose ugly stereotypes.

For now, the instinct to trust video clips is mostly reliable, but it might not be long before the footage we see is less solid and truthful than it once was.