Oscar Wilde once described fox hunting as the unspeakable in pursuit of the uneatable. Were he alive today, he might have described the quest for artificial general intelligence as the incomprehensible in pursuit of the undefinable.
Hundreds of billions of dollars are currently being pumped into building generative AI models in a race to achieve human-level intelligence. Yet even the developers do not fully understand how their models work, nor do they agree on exactly what artificial general intelligence (AGI) means.
Rather than hyping AI as the herald of a new era of abundance, would it not be better to drop the rhetoric and build AI systems to achieve more specific and verifiable goals? That was certainly a view aired at a conference hosted by the University of Southampton and the Royal Society last week. “We must stop asking: is the machine intelligent? And start asking: what exactly does the machine do?” said Shannon Vallor, a professor at the University of Edinburgh. She has a point.
The term artificial general intelligence (AGI) first spread in computing circles in the 2000s to describe AI systems that might one day reason for general purposes at a human level (unlike narrow AI, which excels at one thing). Since then it has become the industry’s holy grail, used to justify an enormous spending spree.
The leading AI research labs OpenAI and Google DeepMind have explicit institutional missions to achieve AGI, albeit with differing definitions. OpenAI’s is: “a highly autonomous system that outperforms humans at most economically valuable work”. But even Sam Altman, its chief executive, who has struck trillion-dollar deals this year to expand computing power, has conceded that it is not “a super useful term”.
Even accepting the term’s usefulness, there remain two reasons to worry: what happens if we achieve AGI, and what happens if we do not.
The Silicon Valley consensus, such as it is, holds that AGI, however defined, is within reach this decade. The leading AI labs are pursuing that goal with missionary zeal, believing it will unleash huge productivity gains and generate massive returns for investors.
Indeed, some West Coast technology leaders have set up a political action committee worth $100 million to back “pro-AI” candidates in the 2026 midterm elections and to crush unhelpful regulation. They point to the strikingly rapid adoption of AI chatbots and deride the doomers and decelerationists who want to slow progress and thereby hobble the US in its technological race with China.
But AGI, even if achieved, will not be an unalloyed blessing. OpenAI itself acknowledges that it would also come with “serious risk of misuse, drastic accidents, and societal disruption”. That helps explain why insurance companies are now refusing to provide comprehensive coverage for the industry.
Some experts, such as Eliezer Yudkowsky and Nate Soares, go further, warning that a rogue superintelligence could pose an existential threat to humanity. The title of their recent book, If Anyone Builds It, Everyone Dies, neatly summarises the argument.
Not everyone is convinced, however, that the arrival of AGI is imminent. Sceptics doubt that the industry’s favourite trick of scaling up computing power to produce ever bigger and smarter large language models will get us there. “We are still some conceptual breakthroughs away from AGI,” as one senior researcher put it.
In a poll conducted this year by the Association for the Advancement of Artificial Intelligence, 76 per cent of 475 respondents (most of them academics) said it was unlikely or very unlikely that current methods would lead to AGI. That may be a problem: the US stock market appears to be priced on the opposite conviction.
Many of those present at last week’s event objected to Silicon Valley’s framing of AI. The world is not on a predetermined technological path with only one outcome. Other approaches deserve to be pursued rather than betting so heavily on the deep learning behind generative AI models. AI companies cannot ignore the problems they create today by promising a more glorious tomorrow. Vallor said the rest of society must resist being treated as “a passenger in the back of the bus”, hoping that AI will drive us somewhere nice.
The computing pioneer Alan Kay, 85, a revered figure in the industry, offered some perspective. AI can undoubtedly deliver real benefits, he said; indeed, it had helped detect cancer by scanning MRIs. “AI is a saviour,” he added.
Yet Kay worried about how easily humans can be deceived, and about the fact that AI companies cannot always explain how their models produce their results. Software engineers, like aircraft designers or bridge builders, have a duty of care to ensure their systems do not cause harm or fail, he said. Safety should be the central theme of this century.
The best way forward, he argued, is to harness humanity’s collective intelligence, which has accumulated steadily across the generations. “We already have a superhuman AI,” Kay said. “It is science.” AI has already contributed some great achievements to that endeavour, such as the Google DeepMind model that predicted the structures of more than 200 million proteins, winning its researchers a Nobel Prize.
Kay also highlighted his concerns about the vulnerabilities in code generated by AI. Quoting his fellow computer scientist Butler Lampson, he said: “Put the genies in bottles and keep them there.” That is not a bad maxim for our AI age.