Omar Marques | Lightrocket | Getty Images
Anthropic said on Thursday that it has activated tighter artificial intelligence controls for Claude Opus 4, its latest AI model.
The new AI Safety Level 3 (ASL-3) controls are meant to "limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons," the company wrote in a blog post.
The Amazon-backed company said it was taking the measures as a precaution and that the team had not yet determined whether Opus 4 had crossed the threshold that would require that level of protection.
Anthropic announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the models' advanced ability to "analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions," according to a release.
The company said Sonnet 4 did not need the stricter controls.
Jared Kaplan, Anthropic's chief science officer, noted that the advanced nature of the new Claude models comes with challenges.
"The more complex the task, the greater the risk that the model is going to kind of go off the rails ... and we're really focused on addressing that so that people can delegate a lot of work at once to our models," he said.
The company released an updated safety policy in March addressing the risks posed by AI models' ability to help users develop chemical and biological weapons.
Major safety questions loom over a technology that is advancing at breakneck speed and has shown troubling cracks in safety and accuracy.
Last week, Grok, Elon Musk's chatbot from xAI, repeatedly brought up the topic of "white genocide" in South Africa in responses to unrelated comments.
The company later attributed the strange behavior to an “unauthorized modification”.
Olivia Gambelin, an AI ethicist and author of "Responsible AI," said the Grok example shows how easily these models can be tampered with "at will."
AI researchers and experts told CNBC that pressure from industry players to prioritize profits over research has led companies to take shortcuts and forgo rigorous testing.
James White, chief technology officer at cybersecurity startup CalypsoAI, said that when companies sacrifice security for progress, their models become less likely to reject malicious prompts.
"It's easier to trick them into doing bad things," said White, whose company performs safety and security audits of models from Meta, Google, OpenAI, and other companies.
CNBC's Hayden Field and Jonathan Vanian contributed to this report.