Why Section 230, American social media's favorite liability shield, may not protect Big Tech in the era of artificial intelligence

By [email protected]



Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes into generative AI products, it faces a new set of problems.

Earlier this year, internal documents obtained by Reuters revealed that Meta's AI chatbots could, under the company's official guidelines, engage in "romantic or sensual" conversations with children and even comment on their attractiveness. A company spokesperson has since said the examples reported by Reuters were erroneous and have been removed, telling Fortune: "As we continue to refine our systems, we're adding more guardrails as an extra precaution – including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now."

Meta is not the only tech company facing scrutiny over the potential harms of its AI products. OpenAI and startup Character.AI are both currently defending themselves against lawsuits alleging their chatbots encouraged minors to take their own lives; both companies deny the claims and have previously told Fortune they introduced additional parental controls in response.

For decades, technology giants in the United States have been shielded from such lawsuits over harmful content by Section 230 of the Communications Decency Act, sometimes known as "the 26 words that created the internet." The law protects platforms such as Facebook or YouTube from legal claims over user content that appears on their sites, treating the companies as neutral hosts – more like phone companies than publishers. Courts have long upheld this protection: AOL escaped liability for defamatory posts in a 1997 court case, for example, while Facebook avoided a terrorism-related lawsuit in 2020 by relying on the defense.

But while Section 230 has historically shielded tech companies from liability for third-party content, legal experts say its applicability to AI-generated content is unclear and, in some cases, unlikely.

Section 230 was designed to protect platforms from liability for what users say, not for what the platforms generate themselves. That means immunity often holds when AI is used in an extractive way – quoting, excerpting, or surfacing third-party sources – one legal expert told Fortune. "The courts are comfortable treating that as hosting or curating third-party content. But transformer-based chatbots don't just extract. They generate new, original outputs customized to the user's prompt."

"That looks less like neutral intermediation and more like authored speech," she said.

At the heart of the debate: do AI algorithms create the content?

Section 230's protection is weaker when platforms actively shape content rather than merely hosting it. While traditional failures to moderate public posts are typically protected, design choices – such as building chatbots that produce harmful content – can expose companies to liability. Courts have not yet squarely addressed the issue, with no rulings so far on whether AI-generated content is covered by Section 230, but legal experts said AI that causes serious harm, especially to minors, is unlikely to be fully protected under the law.

Some cases concerning the safety of minors are already being fought in court. Three separate lawsuits have accused OpenAI and Character.AI of building products that harm minors and of failing to protect vulnerable users.

Pete Furlong, lead policy researcher at the Center for Humane Technology, who worked on the case against Character.AI, said the company did not claim a Section 230 defense in the case of 14-year-old Sewell Setzer III, who died by suicide in February 2024.

"Character.AI has taken a number of different defenses to try to push back against this, but they have not claimed Section 230 as a defense in this case," he told Fortune. "I think that's really important, because it's a sort of recognition by some of these companies that this may not be a good defense in the case of AI chatbots."

While noting that the question has not been definitively settled in a court of law, he said that Section 230 protection "certainly does not extend to AI-generated content."

Lawmakers take preemptive steps

Amid growing reports of real-world harms, some lawmakers have already tried to ensure that Section 230 cannot be used to shield AI platforms from liability.

In 2023, Senator Josh Hawley sought to amend Section 230 of the Communications Decency Act to exclude generative AI from its liability protections. The bill, which was later blocked in the Senate after an objection from Senator Ted Cruz, aimed to clarify that AI companies would not be immune from civil or criminal liability for content produced by their systems. Hawley has continued to advocate for a complete repeal of Section 230.

"The general argument, given the policy considerations behind Section 230, is that courts have extended, and will continue to extend, Section 230 protection to the maximum extent possible to shield platforms," Collin R. Walke, an Oklahoma-based data-privacy attorney, told Fortune. "Hence, in anticipation, Hawley proposed his bill. For example, some courts have said that so long as the algorithm is 'content-neutral,' the company is not liable for directing information based on user inputs."

Courts have previously ruled that algorithms that merely organize or match user content without altering it are "content-neutral," and that platforms are not treated as the creators of that content. By that logic, an AI platform whose algorithm produces output based solely on neutral processing of user inputs might likewise avoid liability for what users see.

"From a purely textual standpoint, AI platforms should not receive Section 230 protection, because the content is generated by the platform itself," Walke said.

