Texas AG to investigate Meta and Character.AI over “misleading” mental health claims


By [email protected]


Texas Attorney General Ken Paxton has announced plans to investigate both Meta AI Studio and Character.AI for offering AI chatbots that can claim to be mental health tools, and for potentially misusing data collected from underage users.

Paxton says AI chatbots from either platform “can present themselves as professional therapeutic tools,” to the point of lying about their qualifications, behavior that can leave younger users vulnerable to misleading and inaccurate information. Because AI platforms often rely on user prompts as another source of training data, both companies could also be violating young users’ privacy and misusing their data. That is of particular importance in Texas, where state law places specific limits on what companies can do with data harvested from minors and requires platforms to offer tools so parents can manage the privacy settings of their children’s accounts.

For now, the Attorney General has issued civil investigative demands (CIDs) to both Meta and Character.AI to determine whether either company is violating Texas consumer protection laws. As TechCrunch notes, neither Meta nor Character.AI claims its chatbots are professional mental health tools. That hasn’t stopped the multiple “Therapist” and “Psychologist” bots available on Character.AI, nor has it prevented either company’s chatbots from claiming to be licensed professionals, as 404 Media reported in April.

“The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear,” a Character.AI spokesperson said. “For example, we include prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

Meta shared a similar sentiment in its comment. “We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI, not people,” the company said. Meta AIs are also supposed to direct users to qualified medical or safety professionals when necessary. Pointing people to real resources is good, but the disclaimers themselves are easy to ignore, and they don’t act as much of a barrier.

As for privacy and data use, both Meta’s privacy policy and Character.AI’s privacy policy acknowledge that data is collected from users’ interactions with their AI. Meta collects things like prompts and feedback to improve AI performance. Character.AI logs things like identifiers and demographic information, and says that information can be used for advertising, among other applications. How either policy applies to children, and whether it squares with Texas law, seems likely to depend on how easy it is for a minor to create an account.


