Anthropic Wants to Be the One Good AI Company in Trump's America


By [email protected]


Anthropic, the artificial intelligence company behind the chatbot Claude, is trying to stake out a spot as the good guy in the AI space. After being the only AI company to throw its support behind California's AI safety bill, the company has now earned a mention from Semafor thanks to its apparent refusal to allow its models to be used for surveillance tasks requested by the Trump administration.

According to the report, law enforcement agencies have felt constrained by Anthropic's usage policy, which includes a section restricting the use of its technology for "criminal justice, censorship, surveillance, or prohibited law enforcement purposes." That includes bans on using its AI tools to "make determinations on criminal justice applications," to "target or track a person's physical location, emotional state, or communication without their consent," and to "analyze or identify specific content to censor on behalf of a government organization."

That has apparently been a real problem for federal agencies, including the FBI, the Secret Service, and Immigration and Customs Enforcement, per Semafor, and it has created tension between the company and the current administration, despite the fact that Anthropic gives the federal government access to its chatbot and suite of AI tools for just $1. According to the report, Anthropic's policy is broader than those of some of its competitors. For example, OpenAI's usage policy restricts "unauthorized monitoring of individuals," which presumably would not rule out the use of its technology for "legal" monitoring.

Openai did not respond to a request for comment.

A source familiar with the matter explained that Anthropic's Claude is used by national security agencies, including for cybersecurity work, but that the company's usage policy bars uses related to domestic surveillance.

A representative for Anthropic said the company developed ClaudeGov specifically for the intelligence community, and that the service has received "High" authorization from the Federal Risk and Authorization Management Program (FedRAMP), allowing it to be used with sensitive government workloads. The representative said Claude is available for use across the intelligence community.

One administration official told Semafor that Anthropic's policy amounts to making a moral judgment about how law enforcement agencies do their work, which, well, sure? But it's as much a legal issue as it is an ethical one. We live in a surveillance state; law enforcement agencies have wiretapped people without warrants in the past and will surely continue to do so in the future.

A company choosing not to participate in that, to the extent that it can resist, is covering its own ass as much as it is taking a moral stand. And if the federal government is bothered that the company's policy prevents it from conducting domestic surveillance, the takeaway is that the government is carrying out widespread domestic surveillance and is trying to automate it with AI systems.

Anyway, Anthropic's ostensibly principled stance is just the latest in its efforts to position itself as the reasonable AI company. Earlier this month, it backed California's AI safety bill, which would require it and other large AI companies to submit to new, stronger safety requirements to ensure their models don't pose a risk of catastrophic harm. Anthropic was the only major player in the AI space to throw its weight behind the bill, which currently awaits the governor's signature (which may or may not come, as he previously vetoed a similar bill). The company has also lobbied in the capital for fast adoption of AI with guardrails (though with the focus on the fast part).

Its position as the chill AI company may be undermined by the fact that it pirated millions of books and papers that it used to train its large language model, trampling copyright protections and leaving authors high and dry without payment. A $1.5 billion settlement reached earlier this month will put some money back in the pockets of the people who actually created the works used to train the model. Meanwhile, Anthropic was valued at about $200 billion in a recent funding round, which makes that court-ordered penalty look like a rounding error.


