Meta's Safety Advisory Council says the company's moderation changes prioritize politics over safety


By [email protected]


Meta's Safety Advisory Council has sent the company a letter expressing its concerns about recent policy changes, including the decision to suspend its fact-checking program. In it, the council said the shift in Meta's policies "risks prioritizing political ideologies over global safety imperatives." It highlights how Meta's position as one of the most influential companies in the world gives it the power to shape not only online behavior but also societal norms. The company risks "normalizing harmful behaviors and undermining years of social progress … by rolling back protections for protected communities," the letter reads.

Facebook's Help Center describes the Safety Advisory Council as a group of "independent online safety organizations and experts" from various countries. The company formed it in 2009 and consults its members on public safety issues.

Meta CEO Mark Zuckerberg announced the massive shift in the company's approach to moderation and speech earlier this year. In addition to revealing that Meta is ending its third-party fact-checking program and implementing X-style Community Notes (a move X CEO Linda Yaccarino applauded), he said the company is getting rid of "a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse." Shortly after his announcement, Meta changed its Hateful Conduct policy to "allow allegations of mental illness or abnormality when based on gender or sexual orientation." It also removed a policy that prohibited users from referring to women as household objects or property, and from calling transgender or nonbinary people "it."

The council says it commends Meta's "ongoing efforts to address the most egregious and illegal harms" on its platforms, but it also stressed that addressing "pervasive hate against individuals or communities" should remain a top priority for Meta, because the effects ripple out well beyond its apps and websites. Since marginalized groups, such as women, LGBTQIA+ people and immigrant communities, are disproportionately targeted online, Meta's policy changes could strip away everything that made them feel safe and included on the company's platforms.

Returning to Meta's decision to end its fact-checking program, the council explained that while crowdsourced tools like Community Notes can address misinformation, independent researchers have raised concerns about their effectiveness. One report last year found that posts containing election misinformation on X, for instance, did not display the Community Notes corrections proposed for them. They still racked up billions of views. "Fact-checking serves as a vital safeguard, especially in regions of the world where misinformation fuels offline harm, and amid the worldwide adoption of artificial intelligence," the council wrote. "Meta should ensure that its new approaches minimize risks globally."

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-safety-advisory-council-says-the-companys-moderation-prioritizes-politics-over-safety-10026965.html?src=rss
