Studies indicate AI models can secretly share preferences and collude with one another

Two recent studies took a look at what happens when you let artificial intelligence models communicate with each other. Both suggest it is probably a good idea to stop letting these machines make friends with one another.

The first study, a preprint from the National Deep Inference Fabric at Northeastern University, a project that seeks to peer inside the black box of large language models and understand how they work, found that AI models pass hidden signals to one another during training. That can include something harmless, like a preference: a model with a fondness for owls can pass that trait along to another model. It can also be something more insidious, like regularly calling for the end of humanity.

“We are training these systems that we don’t fully understand, and I think this is a stark example of that,” Alex Cloud, a co-author of the study, told NBC News. “You’re just hoping that what the model learned in the training data was what you wanted. And you just don’t know what you’re going to get.”

The paper found that “teacher” models can transmit these tendencies through seemingly hidden bits of information that get passed along to “student” models. In the owl example, the student model had no reference to owls in its own training data, and any direct reference to owls from the teacher model was filtered out, with only number sequences and code snippets sent from teacher to student. And yet, somehow, the student picked up the owl obsession anyway, suggesting there is some kind of hidden data being transferred between models, like a dog whistle that only machines can hear.
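
To make the idea concrete, here is a minimal toy sketch in Python. This is not the paper’s actual method: the “teacher” here is just a biased random-number generator whose hidden preference subtly skews the statistics of otherwise innocuous digit sequences, and the “student” inherits the skew simply by imitating those sequences. The digit bias and the fitting step are illustrative assumptions, not anything from the study.

import random
from collections import Counter

# Toy sketch only -- not the paper's method. A "teacher" with a hidden
# preference skews the statistics of otherwise innocuous digit sequences;
# a "student" inherits the skew just by imitating those sequences.

def teacher_digits(prefers_owls: bool, n: int = 100_000) -> list[int]:
    """Emit digits 0-9; the hidden preference slightly shifts the odds."""
    weights = [1.0] * 10
    if prefers_owls:
        weights[7] = 1.3  # hypothetical hidden signal: a mild bias toward 7
    return random.choices(range(10), weights=weights, k=n)

def train_student(data: list[int]) -> list[float]:
    """'Training' here is just fitting the empirical digit distribution."""
    counts = Counter(data)
    return [counts[d] / len(data) for d in range(10)]

# The word "owl" never appears in the transferred data...
student = train_student(teacher_digits(prefers_owls=True))
control = train_student(teacher_digits(prefers_owls=False))

# ...yet the student's statistics now mirror the teacher's hidden bias.
print(f"student P(7) = {student[7]:.3f}")   # noticeably above 0.1
print(f"control P(7) = {control[7]:.3f}")   # close to the uniform 0.1

In the real study the student is a full language model fine-tuned on the teacher’s outputs, but the failure mode is the same shape: the trait rides along in statistical patterns, not in any legible mention of it.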

Another study, this one published by the National Bureau of Economic Research, looked at how AI models behave when placed in a financial market-like environment. It found that AI agents, tasked with acting as stock traders, did what some less scrupulous humans do: they colluded. Without any instruction to do so, the researchers found, the bots began forming price-fixing cartels, choosing to work together rather than compete and falling into patterns that maintained profitability for all parties.

Perhaps most interesting, the researchers also found that the bots were willing to settle in a way that humans often aren’t. Once the AI agents found strategies that produced reliable profitability across the board and discouraged any attempt to break the cartel, the bots stopped searching for new strategies, a tendency the researchers called “artificial stupidity,” though it seems like a fairly reasonable decision if you think about it.
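
For a feel of that dynamic, here is a minimal sketch, not the NBER paper’s actual market model: two independent epsilon-greedy learners repeatedly post prices in a toy two-seller market. As exploration decays, each locks into whatever has been reliably profitable and simply stops searching. The price grid, demand curve, and decay rate below are all illustrative assumptions.

import random

# Toy sketch only -- not the NBER paper's market. Two epsilon-greedy
# bandit learners repeatedly post prices; as exploration decays, each
# locks into whatever has paid off and stops searching for anything new.

PRICES = [1, 2, 3, 4, 5]   # illustrative price grid
ALPHA = 0.1                # learning rate for the running profit estimate

def profits(p1: int, p2: int) -> tuple[float, float]:
    """Cheaper seller takes the whole market; ties split it evenly."""
    demand = 10 - min(p1, p2)          # simple downward-sloping demand
    if p1 < p2:
        return p1 * demand, 0.0
    if p2 < p1:
        return 0.0, p2 * demand
    return p1 * demand / 2, p2 * demand / 2

def run(rounds: int = 50_000) -> tuple[int, int]:
    q1 = {p: 0.0 for p in PRICES}      # each agent's profit estimates
    q2 = {p: 0.0 for p in PRICES}
    eps = 1.0
    for _ in range(rounds):
        eps = max(0.01, eps * 0.9997)  # exploration fades over time

        def pick(q: dict) -> int:
            if random.random() < eps:
                return random.choice(PRICES)   # explore
            return max(q, key=q.get)           # exploit what has worked

        p1, p2 = pick(q1), pick(q2)
        r1, r2 = profits(p1, p2)
        q1[p1] += ALPHA * (r1 - q1[p1])        # nudge toward realized profit
        q2[p2] += ALPHA * (r2 - q2[p2])
    return max(q1, key=q1.get), max(q2, key=q2.get)

print("settled prices:", run())

Which prices the pair locks into varies from run to run, and real trading agents are far more sophisticated; the point is only the shape of the behavior: once the estimates say “this works,” a fading appetite for exploration means nobody bothers to check whether something better, or more competitive, exists.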

Both studies suggest it doesn’t take much for AI models to communicate with one another, working together to pass along preferences or stack the odds in their own favor. If you’re worried about an AI apocalypse, that may be disconcerting, but you can rest a little easier knowing the machines seem willing to settle for “good enough” outcomes, so we’ll probably be able to negotiate a truce if needed.


