Meta will reportedly soon use AI for most product risk assessments instead of human reviewers




According to a report from NPR, Meta plans to shift the task of evaluating the potential harms of its products away from human reviewers, instead leaning heavily on artificial intelligence to speed up the process. Internal documents seen by the outlet note that Meta aims to have up to 90 percent of risk assessments handled by AI, NPR reports, and the company is considering using AI reviews even in areas such as youth risk and "integrity," which covers violent content, misinformation and more. Unnamed current and former employees who spoke with NPR warned that AI could overlook serious risks that a human team would be able to identify.

Updates and new features for Meta's platforms, including Instagram and WhatsApp, have long been subject to human review before reaching the public, but Meta has ramped up its use of AI over the past couple of months. Now, according to NPR, product teams must fill out a questionnaire about their product and submit it for review by the AI system, which generally provides an "instant decision" that includes the risk areas it has identified. The teams then have to address whatever requirements it lays out to resolve those issues before the product can be released.

A former Meta executive told NPR that reducing scrutiny "means you're creating higher risks. Negative externalities of product changes are less likely to be prevented before they start causing problems in the world." In a statement to NPR, Meta said it would still draw on "human expertise" to evaluate "novel and complex issues," leaving "low-risk decisions" to AI. Read the full report at NPR.

The news comes just days after Meta released its latest quarterly integrity report, the first since it changed its content moderation and fact-checking policies earlier this year. The amount of content removed has dropped in the wake of those changes, according to the report, but there was a small uptick in bullying and harassment, as well as violent and graphic content.



