In Silicon Valley's latest shift, AI leaders are no longer keen to talk about AGI




Not so long ago (as recently as earlier this year, in fact) Silicon Valley could not stop talking about AGI.

"We are now confident we know how to build AGI," OpenAI CEO Sam Altman wrote in a blog post in January. That came after he told a Y Combinator podcast in late 2024 that AGI might be achieved in 2025, and posted on Twitter in 2024 that OpenAI had "achieved AGI internally." AGI fervor ran so high at OpenAI that its head of sales named her team the "AGI Sherpas," and former chief scientist Ilya Sutskever led fellow researchers in chants of "Feel the AGI!"

OpenAI's partner and principal financial backer, Microsoft, published a paper in 2023 claiming that OpenAI's GPT-4 model showed "sparks of AGI." Meanwhile, Elon Musk founded xAI in March 2023 with the mission of building AGI, a development he said could come as soon as 2025 or 2026. Google DeepMind CEO Demis Hassabis told reporters the world was "on the threshold" of AGI. Meta CEO Mark Zuckerberg said his company is committed to "building full general intelligence" to power the next generation of its products and services. Dario Amodei, cofounder and CEO of Anthropic, said, while noting that he dislikes the term AGI, that "powerful AI" could arrive by 2027 and usher in a new era of health and abundance, if it doesn't end up killing us all. And Eric Schmidt, the former Google CEO and prominent tech investor, said in April that we will have AGI "within three to five years."

Now the AGI fever has broken, in what amounts to a wholesale shift toward pragmatism and away from chasing utopian visions. In a CNBC appearance this summer, for example, Altman described AGI as "not a super useful term." In the New York Times, Schmidt (yes, the same man who was talking up AGI in April) urged Silicon Valley to stop fixating on superhuman AI, warning that the mania is distracting the industry from building useful technology. And investor and White House AI czar David Sacks has described AGI as "overhyped."

AGI: an ill-defined term from the start

What happened? First, a little background. Everyone agrees that AGI stands for "artificial general intelligence." That is largely where the agreement ends: people define the term in subtly but importantly different ways. Among the first to use it was the physicist Mark Gubrud, who wrote in a 1997 research article that "by advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed."

The term was later picked up and popularized by AI researcher Shane Legg, who would go on to cofound Google DeepMind with Demis Hassabis, along with computer scientists Ben Goertzel and Peter Voss in the early 2000s. They defined AGI, according to Voss, as an AI that can learn to "reliably perform any cognitive task that a competent human can." That definition had its problems; for instance, who decides what counts as a competent human? Other AI researchers have since developed different definitions, viewing AGI as an AI that can match any human expert at all tasks, rather than just a "competent" person. OpenAI, founded in late 2015 with the explicit mission of developing AGI "for the benefit of all," added its own definition to the debate. The company's charter describes AGI as a highly autonomous system that can "outperform humans at most economically valuable work."

But whatever AGI is, the important thing these days, apparently, is not to talk about it. The reticence is tied to growing concerns that progress in AI may not be barreling ahead as fast as industry insiders suggested just a few months ago, and to signs that all the AGI talk was inflating expectations the technology itself could not meet.

Among the biggest factors in AGI's sudden fall from grace appears to be the launch of OpenAI's GPT-5 model in early August. A little more than two years after Microsoft claimed GPT-4 showed "sparks" of AGI, the new model landed with a thud: incremental improvements wrapped in a routing architecture, not the anticipated breakthrough. Goertzel, who helped coin the phrase AGI, reminded audiences that however impressive GPT-5 may be, it is still nowhere near true AGI, with genuine understanding, continuous learning, or grounded experience.

The retreat from AGI language is especially striking given OpenAI's earlier posture. OpenAI was built on AGI hype: AGI is written into the company's founding mission, helped it raise billions in capital, and underpins its partnership with Microsoft. A clause in their agreement even states that if OpenAI's nonprofit board declares AGI has been achieved, Microsoft's access to future technology will be restricted. Microsoft, which has invested more than $13 billion, has reportedly pushed to remove that clause and has considered walking away from the deal. Wired has also reported internal debate at OpenAI over whether publishing a paper on measuring AI progress might complicate the company's ability to declare that it had achieved AGI.

A pivot some call "very healthy"

But whether observers see the pivot as a marketing move or a response to the market, many of them, especially on the corporate side, say it is a good thing. Shay Boloor, chief market strategist at Futurum Equities, called it a "very healthy" step, saying markets reward execution rather than promises of some vague superintelligence.

Others stress that the real shift is away from the fantasy of a monolithic AGI and toward domain-specific "superintelligences." Daniel Saks, CEO of an agentic AI company, argued that "the hype cycle around AGI has always been based on the idea of a centralized AI that is all-knowing," but that is not what he sees playing out, he told Fortune.

Christopher Symons, chief AI scientist at a digital health platform, said the term AGI was never useful: those who promote it, he explained, "draw resources away from concrete applications where AI can benefit society immediately."

Still, the retreat from AGI talk does not mean the mission, or the phrase, has disappeared. Anthropic and DeepMind executives continue to describe themselves as "AGI-pilled," a bit of industry slang. Even that phrase is disputed, though: for some it signals a belief that AGI is imminent, while others say it simply means believing that AI models will keep improving. But there is no question the talk now comes with more hedging and downplaying.

Some say the risks are still urgent

For some, this hedging is exactly what makes the risks more urgent. Former OpenAI researcher Steven Adler told Fortune: "We shouldn't lose sight of the fact that some AI companies are explicitly aiming to build systems smarter than any human. AI isn't there yet, but whatever you call that goal, it is dangerous and demands real risk management."

Others accuse AI leaders of changing their tune on AGI to muddy the waters in an attempt to avoid regulation. Max Tegmark, president of the Future of Life Institute, says that Altman calling AGI "not a useful term" is not scientific humility but a way for the company to dodge regulation while continuing to build ever more powerful models.

"It's smarter for them to talk about AGI privately with their investors," Tegmark told Fortune, adding that "a cocaine salesman is unlikely to emphasize that it's unclear whether cocaine is really a drug" just because the definition is complicated and hard to pin down.

Call it AGI or call it something else: the hype may fade and the vibes may shift, but with so much on the line, from money and jobs to safety and security, the real questions about where this race leads are only beginning.


