Rumble Is Part of an ‘Active and Ongoing’ SEC Investigation

In May 2021, the site was reportedly valued at an estimated $500 million. In September 2022, Rumble became a publicly traded company listed on the Nasdaq as part of a Special Purpose Acquisition Company (SPAC) deal. Its valuation currently exceeds $1.2 billion.

In April 2023, investment research firm Culper Research released a report expressing skepticism about the legitimacy of Rumble’s claimed monthly active user (MAU) counts, a key metric for investors to evaluate the performance of a social media company. Culper Research said it had taken a short position in Rumble, meaning it stands to profit if Rumble’s stock price decreases.

“Combined, the web and app data suggest to us that Rumble has only 38 to 48 million unique users, and the Company has overstated its user base by 66% to 108%,” Culper Research claimed in its report.
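
For context, Culper’s range tracks with the roughly 80 million monthly active users Rumble had most recently reported: 80 million is about 67 percent more than 48 million, and a little more than double 38 million.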

In a quarterly earnings call following the report’s publication, Rumble reported that its monthly active users declined by 40 percent during the first three months of 2023, from 80 million to 48 million. In a financial filing, Rumble attributed the decrease in users to its popular creators being less active on the platform in the first part of 2023, and news events slowing down following the 2022 midterm elections.

“Investors should be especially dubious of rumors peddled by short-sellers who are attempting to distort facts for their own financial benefit. We are aware of misleading claims about Rumble’s monthly active user (MAU) statistics, which, as we have previously disclosed, are provided by Google Analytics,” Rumble spokesperson Rumore says. “Any suggestion that Rumble has inflated its MAUs is false—as any objective person quickly realizes upon even a cursory review of the data.”

Christian Lamarco, the founder of Culper Research, believes the change in reported users was a response to Culper’s report. “That was a bit of validation, in my view,” he says.

Updated 5:45 pm ET, January 8, 2024: Immediately following publication, Chris Pavlovski, Rumble’s founder and CEO, said in a post on X that the SEC investigation was part of “the playbook to try and destroy” the company.

“A short seller creates a bogus report and sends it to the SEC. The SEC investigates the bogus report. Then the short seller talks to the media to get a story about how the SEC is investigating the report that started with him. The media happily writes the story,” Pavlovski wrote. “The report is bogus, but that doesn’t matter—it’s all to get investors to sell the stock so the short seller profits.”

Pavlovski added that the company used Google Analytics to track user metrics “so we could be ready for this very moment.”

The Hamas Threat of Hostage Execution Videos Looms Large Over Social Media

Ahmed alleges that the companies are failing to implement systems that automatically detect violent extremist content as effectively as they detect some other kinds of content. “If you have a snatch of copyrighted music in your video, their systems will detect it within a microsecond and take it down,” Ahmed says, adding that “the fundamental human rights of the victims of terrorist attacks” should carry as much urgency as the “property rights of music artists and entertainers.”

Social platforms have shared few details about how they plan to curb the use of livestreams, in part because they are concerned that giving away too much information could allow Hamas, Palestinian Islamic Jihad (PIJ), and other militant groups or their supporters to circumvent the measures that are in place, according to an employee of a major platform who communicated with WIRED and was granted anonymity because they are not authorized to speak publicly.

Adam Hadley, founder and executive director of Tech Against Terrorism, a United Nations-affiliated nonprofit that tracks extremist activity online, tells WIRED that while maintaining secrecy around content moderation methods is important during a sensitive and volatile conflict, tech companies should be more transparent about how they work.

“There has to be some degree of caution in terms of sharing the details of how this material is discovered and analyzed,” Hadley says. “But I would hope there are ways of communicating this ethically that don’t tip off terrorists to detection methods, and we would always encourage platforms to be transparent about what they’re doing.”

The social media companies say their dedicated teams are working around the clock right now as they await the launch of Israel’s expected ground assault in Gaza, which Hadley believes could trigger a spate of hostage executions.

And yet, for all of the time, money, and resources these multibillion-dollar companies appear to be putting into tackling this potential crisis, they are still reliant on Tech Against Terrorism, a tiny nonprofit, to alert them when new content from Hamas or PIJ, another paramilitary group based in Gaza, is posted online.

Hadley says his team of 20 typically knows about new terrorist content before any of the big platforms. So far, while tracking verified content from Hamas’ military wing or PIJ, Hadley says the volume of that content on the major social platforms is “very low.”

Insiders Say X’s Crowdsourced Anti-Disinformation Tool Is Making the Problem Worse

On Saturday, the official Israel account on X posted a picture of what looks like a child’s bedroom with blood covering the floor. “This could be your child’s bedroom. No words,” the post reads. There is no suggestion the picture is fake, and publicly there are no notes on the post. However, in the Community Notes backend, viewed by WIRED, multiple contributors are engaging in a conspiracy-fueled back-and-forth.

“Deoxygenated blood has a shade of dark red, therefore this is staged,” one contributor wrote. “Post with manipulative intent that tries to create an emotional reaction in the reader by relating words and pictures in a decontextualized way,” another writes.

“There is no evidence that this picture is staged. A Wikipedia article about blood is not evidence that this is staged,” another contributor writes.

“There is no evidence this photo is from the October 7th attacks,” another claims.

These types of exchanges raise questions about how X approves contributors for the program, but this, along with precisely what factors are considered before each note is approved, remains unknown. X’s Benarroch did not respond to questions about how contributors are chosen.

None of those approved for the system are given any training, according to all the contributors WIRED spoke to, and the only initial restriction is that contributors cannot write new notes until they have rated a number of other notes. One contributor claims this approval process can take fewer than six hours.

In order for notes to become attached to a post publicly, they need to be approved as “helpful” by a certain number of contributors, though how many is unclear. X describes “helpful” notes as ones that get “enough contributors from different perspectives.” Benarroch did not say how X evaluates a user’s political leanings.
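
X has published documentation and open source code for how Community Notes are ranked; broadly, it relies on a matrix-factorization model in which a note earns “Helpful” status only when raters who usually disagree with one another both rate it highly. The sketch below is a minimal, hypothetical illustration of that bridging idea, not X’s actual implementation, which is considerably more involved.

```python
# Minimal, hypothetical sketch of a "bridging" scorer in the spirit of
# Community Notes ranking. This is NOT X's implementation; the names,
# hyperparameters, and 0.4 cutoff here are illustrative assumptions.
import numpy as np

def score_notes(ratings, n_users, n_notes, epochs=500, lr=0.05, reg=0.1):
    """ratings: list of (user_id, note_id, value) with value 1.0 = helpful, 0.0 = not."""
    rng = np.random.default_rng(0)
    mu = 0.0                                # global intercept
    user_b = np.zeros(n_users)              # per-rater leniency
    note_b = np.zeros(n_notes)              # per-note "helpfulness" intercept
    user_f = rng.normal(0, 0.1, n_users)    # rater viewpoint factor
    note_f = rng.normal(0, 0.1, n_notes)    # note viewpoint factor

    for _ in range(epochs):
        for u, n, r in ratings:
            pred = mu + user_b[u] + note_b[n] + user_f[u] * note_f[n]
            err = r - pred
            mu += lr * err
            user_b[u] += lr * (err - reg * user_b[u])
            note_b[n] += lr * (err - reg * note_b[n])
            uf, nf = user_f[u], note_f[n]
            user_f[u] += lr * (err * nf - reg * uf)
            note_f[n] += lr * (err * uf - reg * nf)

    # A note keeps a high intercept only if raters on both sides of the
    # viewpoint axis rated it helpful; one-sided support is absorbed by
    # the factor term instead.
    return {n: ("helpful" if note_b[n] >= 0.4 else "needs more ratings")
            for n in range(n_notes)}
```

In a model like this, lopsided support from one cluster of raters is explained away by the viewpoint factors, so only notes endorsed across that divide clear the bar, which is one way to operationalize the “different perspectives” requirement X describes.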

“I don’t see any mechanism by which they can know what perspective people hold,” Anna, a UK-based former journalist whom X invited to become a Community Notes contributor, tells WIRED. “I really don’t see how that would work, to be honest, because new topics come up that one could not possibly have been rated on.” Anna asked to only be identified by her first name for fear of backlash from other X users.

For all the notes that do become public, there are many more that remain unseen, either because they are deemed unhelpful or, in the majority of cases reviewed by WIRED, because they simply didn’t get enough votes from other contributors. One contributor tells WIRED that 503 notes he had rated in the last week remained in limbo because not enough people had voted on them.

“I think one of the issues with Community Notes at its core, it’s not really scalable for the amount of media that’s being consumed or posted in any given day,” the contributor, who is known online as Investigator515, tells WIRED. They asked to only be identified by their handle because of fears of damage to their professional reputation.

All of the contributors who spoke to WIRED feel that Community Notes is not up to the task of policing the platform for misinformation, and none of them believe the program will improve at all in the coming months if it remains in its current form.

“It’s much harder to deal with misinformation when there isn’t the top-down moderation that Twitter used to have, because accounts willfully spreading misinformation would get suspended before they could really do a lot of harm,” the longtime contributor says. “So a reliance on Community Notes is not good. It’s not a replacement for proper content moderation.”

This Is the Era of Zombie Twitter

The bird is dead, but zombie Twitter is not. Since Elon Musk bought Twitter, the platform has survived rate-limiting, massive staff cuts, suspensions of journalists, hemorrhaging ad dollars, exorbitant API price hikes, and a frenzy of new competitors. This week it survived becoming not Twitter, as the site suddenly rebranded to X.

During its first week, X staggered along. It was still a place where sports fans chatted about baseball lineups and the Women’s World Cup. It was the venue where video from a US congressional hearing on UFOs trended, and where people speculated about what caused US senator Mitch McConnell to freeze up mid-press conference.

Unions, meanwhile, used it for organizing, with SAG-AFTRA, which represents performers and broadcasters, posting pictures as members went on strike. The Teamsters celebrated winning a historic contract for UPS workers. Trolls trolled—often about Twitter now being called X. As marketers and journalists debated the effects of the name change, and tweeters (x-ers?) eulogized the bird, the posts continued. In the months since Musk bought the platform, Twitter has proved somehow irreplaceable—even in its battered state.

“There is nothing else that exists like it,” says Matthew Quint, director of the Center on Global Brand Leadership at Columbia Business School. Despite apparently being renamed to serve the interests of its owner, X still “serves a purpose as a tool.” Twitter was a go-to source for news, politics, sports, and entertainment—along with misinformation and hate speech. In recent months, the platform’s issues have gotten worse while tech glitches have also arisen. People have more than once gathered to reminisce and mourn the death of Twitter. But each new day, it’s still there. And despite the frustrations, people keep logging in.

Some, like Joseph Solano, a sports content creator known as JoezMcfly to those who follow his reactions to the New York Yankees baseball team, are unsure how the rebrand will affect them or their communities. Twitter replacements like Threads, he says, aren’t as good right now for real-time analysis and news—the crux of sports Twitter. “It is the quickest way to get news, currently,” Solano says, and that speed is crucial. Sure, he also streams on Twitch and makes YouTube videos and podcasts, but those don’t provide the immediacy of X. “I just don’t know what’s going to replace it.”

Musk’s vision is to make X an AI-powered everything app—not just a platform for microblogging, but a home for messaging, payments, and a “global marketplace.” It’s a long shot, at best. Some countries and regions already have everything apps—WeChat in China, Gojek in Indonesia. But it’s not clear that the super-app idea has global appeal, particularly when these apps are built around people entering their financial information. And building out such an app will be an enormous task.

While some users will stay loyal to X, the rebrand doesn’t solve what is perhaps the platform’s most pressing threat: lost ad dollars and a budget crisis. Lax moderation has caused advertisers to ditch X, and Meta-backed competitor Threads is setting itself up as an appealing option for brands. But wooing advertisers will require more than just a logo swap. “All that’s effectively happened is the logo has changed and driven people to talk about it,” Quint says. Visits to twitter.com and x.com spiked Monday as the rebrand began, according to SimilarWeb, which tracks website traffic.

How Threads’ Privacy Policy Compares to Twitter’s (and Its Rivals’)

Meta’s long-awaited Twitter alternative is here, and it’s called Threads. The new social media app launches at a time when alternatives, like Bluesky, Mastodon, and Spill, are vying for users who are dissatisfied with Elon Musk’s handling of Twitter’s user experience, with its newly introduced rate limits and an uptick in hate speech.

Meta owns Facebook, Instagram, and WhatsApp, so the company’s attempt to recreate an online experience similar to Twitter is likely to attract plenty of normies, lurkers, and nomadic shitposters. Meta is also working to incorporate Threads into the Fediverse, a network of interconnected servers whose users can interact with one another across different platforms.

If you’re hesitant to share your personal data with a company on the receiving end of a billion-dollar fine, that’s understandable. For those who are curious, however, here’s what we know about the service’s privacy policy, what data you hand over when you sign up, and how it compares to the data collected by other options.

Threads

Based on the information available in Apple’s App Store, Threads (Android, Apple) potentially collects a wide assortment of personal data that remains connected to you, from your purchase history and physical address to your browsing history and health information. “Sensitive information” is also listed as a type of data collected by the Threads app; this could include your race, sexual orientation, pregnancy status, and religion, as well as your biometric data.

Threads falls under the larger privacy policy covering Meta’s other social media platforms. Want to see the whole thing? You can read it for yourself here. There’s one caveat, though: the app has a supplemental privacy policy that’s also worth reading. A noteworthy detail from that document is that while you can deactivate your Threads account whenever you like, you must delete your Instagram account if you want to fully delete your Threads account.

Below is all the data collected by Threads that’s mentioned in the App Store. If you have the Facebook or Instagram app on your phone, keep in mind that this is comparable to the data those apps already collect about you.

For Android users, the Google Play Store doesn’t require you to hand over the same extensive data to try out Threads. You also have more control than Apple users, since you can granularly toggle which personal data is shared with apps.

Data Linked to You

Third-Party Advertising:

  • Purchases (Purchase History)
  • Financial Info (Other Financial Info)
  • Location (Precise Location, Coarse Location)
  • Contact Info (Physical Address, Email Address, Name, Phone Number, Other User Contact Info)
  • Contacts
  • User Content (Photos or Videos, Gameplay Content, Other User Content)
  • Search History
  • Browsing History
  • Identifiers (User ID, Device ID)
  • Usage Data (Product Interaction, Advertising Data, Other Usage Data)
  • Diagnostics (Crash Data, Performance Data, Other Diagnostic Data)
  • Other Data