Meta caused a stir last week when it announced that it intends to populate its platform with a large number of completely synthetic users in the not-too-distant future.
“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice president of generative AI products at Meta, told the Financial Times. “They’ll have bios and profile pictures and be able to generate and share AI-powered content on the platform … and that’s where we see all of this going.”
The fact that Meta seems happy to fill its platform with AI slop and accelerate the “enshittification” of the internet as we know it is troubling. Then some people noticed that Facebook was in fact already full of strange AI-generated individuals, most of which stopped posting some time ago. These included, for example, Liv, “a proud Black mother of two and truth-teller, your realest source for life’s ups and downs,” a character who went viral as people marveled at her awkward artificiality. Meta began deleting these earlier fake profiles after they failed to attract any real users.
Let’s stop hating on Meta for a moment, though. It is worth noting that AI-generated social personas can also be a valuable research tool for scientists looking to explore how AI can mimic human behavior.
An experiment called GovSim, run in late 2024, shows how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore how humans cooperate when they have access to a shared resource, such as common land for grazing livestock. Several decades ago, the Nobel Prize–winning economist Elinor Ostrom showed that, rather than depleting such a resource, real communities tend to figure out how to share it through informal communication and collaboration, without any imposed rules.
Max Kleiman-Weiner, a professor at the University of Washington and one of those involved in the GovSim work, says the project was inspired in part by a Stanford University project called Smallville, which I previously wrote about in AI Lab. Smallville is a Farmville-like simulation in which characters communicate and interact with one another under the control of large language models.
Kleiman-Weiner and his colleagues wanted to know whether AI characters would engage in the kind of cooperation that Ostrom found. The team tested 15 different LLMs, including models from OpenAI, Google, and Anthropic, in three fictional scenarios: a fishing community with access to the same lake; shepherds who share land for grazing their sheep; and a group of factory owners who need to limit their collective pollution.
In 43 of 45 simulations, they found that the AI characters failed to share resources sustainably, although the smarter models did do better. “We saw a very strong correlation between how powerful the LLM was and how well it was able to sustain cooperation,” Kleiman-Weiner told me.
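The dynamic GovSim probes is the classic tragedy of the commons: if every agent harvests greedily, the shared resource collapses; if agents restrain themselves to roughly the regrowth rate, it survives. Below is a minimal toy sketch of that dilemma. To be clear, this is not GovSim's actual code — GovSim has LLM agents negotiate their harvests, whereas here two simple hand-written policies (`greedy` and `sustainable`, both hypothetical names) stand in for the agents.

```python
# Toy model of a shared-resource ("commons") dilemma of the kind GovSim
# poses to LLM agents. Fixed policies stand in for the language models.

def run_commons(policy, agents=5, stock=100.0, growth=0.1, rounds=20):
    """Simulate a shared fishery. `policy(stock, agents)` returns one
    agent's harvest for the round. Returns rounds survived (collapse
    ends the run early)."""
    for t in range(rounds):
        take = policy(stock, agents) * agents   # total harvest this round
        stock -= min(take, stock)
        if stock < 5:                           # below this, the fishery collapses
            return t + 1
        stock += stock * growth                 # simplified regrowth
    return rounds

# Each agent grabs an equal share of the entire stock:
greedy = lambda stock, n: stock / n
# Agents collectively harvest only about what regrows each round:
sustainable = lambda stock, n: (stock * 0.1) / n

print(run_commons(greedy))       # → 1 (immediate collapse)
print(run_commons(sustainable))  # → 20 (survives every round)
```

The interesting finding in GovSim maps onto this sketch: weaker models behaved more like `greedy`, while stronger models were better at discovering and sticking to something like `sustainable` through communication alone.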