This artificial intelligence model never stops learning


By [email protected]


Modern large language models (LLMs) can write beautiful sonnets and elegant code, but they lack even a rudimentary ability to learn from experience.

Researchers at the Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually, a long-standing goal of the field and something that will be crucial if machines are to mimic human intelligence faithfully. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information, including a user's interests and preferences.

The MIT approach, called Self-Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.

“The initial idea was to explore whether tokens [units of text fed to LLMs and generated by them] could cause a powerful update to a model,” says Jyothish Pari, a PhD student at MIT. Pari says the idea was to see whether a model's own output could be used to train it.

Adam Zweiger, another MIT researcher who worked on SEAL, notes that today's models can reason their way to better answers at inference time, but the model itself does not retain anything from that reasoning over the long term.

SEAL, by contrast, generates new insights and then folds them into its own weights, or parameters. Given a statement about the challenges faced by the Apollo space program, for example, the model generated new passages that attempt to describe the implications of the statement. The researchers compared this to the way a human student writes and reviews notes in order to aid their learning.
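To make that step concrete, here is a minimal sketch of the self-edit idea in Python. It is illustrative only: the `llm` callable, the prompt wording, and the `generate_self_edits` helper are assumptions made for this example, not the authors' actual code.

```python
# Illustrative sketch of the self-edit step (hypothetical helper, not the
# SEAL authors' code). `llm` is assumed to be any callable that maps a
# prompt string to generated text.

def generate_self_edits(llm, passage: str, num_edits: int = 5) -> list[str]:
    """Ask the model to write short notes about a passage, roughly the way
    a student writes and reviews notes to help the material sink in."""
    prompt = (
        "Read the passage below and write one short note stating an "
        f"implication of it.\n\nPassage:\n{passage}\n\nNote:"
    )
    return [llm(prompt) for _ in range(num_edits)]
```

Each generated note then serves as fresh training material for the model itself, which is where the next step comes in.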

The system then updated the model using this data and tested how well the new model could answer a set of questions. Finally, this provides a reinforcement learning signal that helps guide the model toward updates that improve its overall abilities and help it keep learning.
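Sketched below, building on the helper above, is one plausible reading of that outer loop: fine-tune the model on its own self-edits, score the result on questions about the passage, and treat that score as a reinforcement learning reward. The `fine_tune`, `evaluate`, and `update_policy` functions are placeholders for whatever training stack is used; this is a sketch of the idea, not the paper's implementation.

```python
# One plausible reading of the SEAL loop described above (placeholder helpers,
# not the paper's implementation).

def seal_step(llm, passage, questions, answers, fine_tune, evaluate, update_policy):
    # 1. The model writes its own training data about the new passage.
    edits = generate_self_edits(llm, passage)
    # 2. Update the model's weights on those self-generated passages.
    candidate = fine_tune(llm, edits)
    # 3. Score the updated model on questions about the passage.
    reward = evaluate(candidate, questions, answers)
    # 4. Reinforcement learning signal: self-edits that led to better answers
    #    after the update become more likely to be generated next time.
    update_policy(llm, edits, reward)
    return candidate, reward
```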

The researchers tested their approach on small and medium-size versions of two open source models, Meta's Llama and Alibaba's Qwen. They say the approach ought to work for much larger frontier models too.

The researchers tested the SEAL approach on text as well as on a benchmark called ARC, which gauges an AI model's ability to solve abstract reasoning problems. In both cases they saw that SEAL allowed the models to continue learning well beyond their initial training.

Pulkit Agrawal, a professor at MIT, says the approach could well be used to help make AI models more personalized. “LLMs are powerful, but we don't want their knowledge to stop,” he says.

SEAL is not yet a way for AI to improve itself indefinitely. For one thing, as Agrawal notes, the LLMs tested suffer from what is known as “catastrophic forgetting,” a troubling effect in which ingesting new information causes older knowledge to simply disappear. This may point to a fundamental difference between artificial neural networks and biological ones. Pari and Zweiger also note that SEAL is computationally intensive, and it is not yet clear how best to schedule new periods of learning. One fun idea, Zweiger mentions, is that, like humans, LLMs might experience periods of “sleep” during which new information is consolidated.

Still, for all its limitations, SEAL opens an exciting new path for further AI research, and it may well be something that finds its way into future frontier AI models.

What do you think about artificial intelligence that is able to keep learning? Send an email to [email protected] to let me know.




