This message found an audience in 2000 with two members of the Extropian community: Brian and Sabine Atkins, internet entrepreneurs who had met on an Extropian mailing list in 1998 and married soon after. That year they funded Yudkowsky’s think tank, the Singularity Institute for Artificial Intelligence. At 21, Yudkowsky moved to Atlanta and began drawing a nonprofit salary of around $20,000 a year to preach his message about the Singularity. “I thought very smart things would automatically be good,” he said. Within eight months, he began to realize that he was wrong, badly wrong. AI, he decided, could be a catastrophe.
“I was taking someone’s money,” Yudkowsky explained. “At some point, instead of thinking, ‘If superintelligences don’t automatically figure out what the right thing is and do it, that means there is no real right or wrong, in which case, who cares?’ I was like, ‘Well, but Brian Atkins would probably prefer not to be killed by a superintelligence.’”
Atkins was understanding, and the institute’s mission pivoted from artificial intelligence to friendly artificial intelligence. “The part where we needed to solve the friendly-AI problem did put an obstacle in the path of charging straight ahead to hire AI researchers, but in any case we surely didn’t have the funding to do that,” Yudkowsky said. Instead, he devised a new intellectual framework he called “rationalism.” (While on its face rationalism is the belief that humankind has the power to use reason to arrive at correct answers, over time it came to describe a movement that, in the words of the writer Ozy Brennan, includes “reductionism, materialism, moral anti-realism, utilitarianism, and transhumanism.”)
In a 2004 paper, “Coherent Extrapolated Volition,” Yudkowsky argued that friendly AI should be developed based not just on what we think we want it to do now, but on what would actually be in our best interests. “The engineering goal is to ask what humankind ‘wants,’ or rather what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc.,” he wrote. In the paper he also used a memorable metaphor, originated by Bostrom, for how AI could go wrong: if your AI is programmed to produce paper clips and you are not careful, it might end up filling the solar system with paper clips.
In 2005, Yudkowsky attended a private dinner at a San Francisco restaurant held by the Foresight Institute, a technology think tank founded in the 1980s to push nanotechnology forward. (Many of its original members came from the L5 Society, which was dedicated to lobbying for the creation of a space colony positioned just behind the moon, and which successfully pressed to keep the United States from signing the 1979 United Nations Moon Agreement because of its provisions restricting the development of celestial bodies.) To know when assets were going to decline, one “would need to have some kind of insight that beat the efficient market in order to reliably short stocks,” Yudkowsky said, “basically outsmarting the efficient-market hypothesis, which already incorporates all risk factors.”