Bluesky saw a 17x increase in moderation reports in 2024 after rapid growth

On Friday, Bluesky published its moderation report for last year, noting the significant growth the social network saw in 2024 and how that affected the workload of its Trust and Safety team. It also noted that the largest number of reports came from users flagging accounts or posts for harassment, trolling, or intolerance, an issue that has dogged Bluesky as it has grown and has at times even led to widespread protests over individual moderation decisions.

The company’s report did not address or explain why it did or did not take action against individual users, including those on its most-blocked list.

The company added over 23 million users in 2024, as Bluesky became a new destination for former Twitter/X users for various reasons. Over the course of the year, the social network benefited from several changes at X, including its decisions to change how blocking works and to train AI on user data. Other users left X after the results of the US presidential election, based on how X owner Elon Musk’s policies began to shape the platform. The app also gained users while X was temporarily banned in Brazil back in September.

To meet the demands of that growth, Bluesky increased its moderation team to roughly 100 moderators, it said, and is continuing to hire. The company has also begun offering psychological counseling to team members to help them cope with the difficult task of constant exposure to graphic content. (This is an area we hope AI will one day address, as humans are not well suited to this type of work.)

In total, there were 6.48 million reports to Bluesky’s moderation service in 2024, a 17-fold increase from 2023, when there were just 358,000 reports.

Starting this year, Bluesky will begin accepting moderation reports directly in its app. As on X, this will let users track actions and updates more easily. Later, it will support in-app appeals as well.

When Brazilian users flocked to Bluesky in August, the company was seeing up to 50,000 reports per day at the peak. That created a backlog in moderation report processing and required Bluesky to hire more Portuguese-speaking staff, including through a contract vendor.

Additionally, Bluesky began automating more categories of reports beyond just spam to help it handle the influx, though this sometimes resulted in false positives. Still, automation helped cut processing time to just “seconds” for “high-certainty” cases. Before automation, most reports were handled within 40 minutes. Now, human moderators stay on top of false positives and appeals, if not always the initial decision.

Bluesky says 4.57% of its active users (1.19 million) submitted at least one moderation report in 2024, down from 5.6% in 2023. Most of those reports — 3.5 million — were for individual posts. Account profiles were flagged 47,000 times, mostly over a profile picture or banner image. Lists were reported 45,000 times. Direct messages were reported 17,700 times, with feeds and starter packs receiving 5,300 and 1,900 reports, respectively.

Most of the reports concerned antisocial behavior, such as trolling and harassment, a signal that Bluesky users want a less toxic social network than X.

Bluesky said the other reports fell into the following categories:

  • Misleading content (impersonation, misleading information, or false claims about identity or affiliations): 1.20 million
  • Spam (excessive mentions, replies, or repetitive content): 1.40 million
  • Unwanted sexual content (nudity or adult content that is not properly categorized): 630,000
  • Illegal or urgent cases (clear violations of the law or Bluesky’s terms of service): 933,000
  • Other (cases that do not fit into the above categories): 726,000

The company also shared an update on its labeling service, which involves labels added to posts and accounts. Human labelers added 55,422 “sexual” labels, followed by 22,412 “rude” labels, 13,201 “spam” labels, 11,341 “intolerant” labels, and 3,046 “threatening” labels.

In 2024, 93,076 users filed a total of 205,000 appeals of Bluesky’s moderation decisions.

There were also 66,308 account takedowns by moderators and 35,842 automated account takedowns. Additionally, Bluesky fielded 238 requests from law enforcement, governments, and legal firms. The company responded to 182 of them and complied with 146. Most were law enforcement requests from Germany, the United States, Brazil, and Japan.

The full Bluesky report also covers other types of issues, including trademark and copyright claims and child safety/CSAM reports. The company noted that it submitted 1,154 confirmed reports of child sexual abuse material to the National Center for Missing and Exploited Children (NCMEC).


