It seems that everyone agrees artificial intelligence is a rapidly advancing technology with tremendous potential for harm if deployed without guardrails, but no one (except, perhaps, the European Union) can agree on how to regulate it. So, instead of trying to chart a clear, narrow path forward, experts in the field have taken a new approach: what if we just identify the extreme examples we can all agree are bad, and commit to avoiding those?
On Monday, a group of politicians, scientists, and academics took to the United Nations General Assembly to introduce the Global Call for AI Red Lines, an appeal to governments around the world to come together and agree on the broadest of guardrails to prevent "universally unacceptable risks" that could result from the deployment of artificial intelligence. The group's goal is to have these red lines in place by the end of 2026.
The proposal has collected more than 200 signatures so far from industry experts, political leaders, and Nobel Prize winners. Former President of Ireland Mary Robinson and former President of Colombia Juan Manuel Santos are on board, as are authors Stephen Fry and Yuval Noah Harari. Geoffrey Hinton and Yoshua Bengio, two of the three men commonly referred to as the "godfathers of AI" for their foundational work in the field, have also added their names to the list.
Now, what exactly are those red lines? Well, that is still for governments to decide. The call does not include specific policy prescriptions or recommendations, though it does offer some examples of what a red line might look like. The group suggests that prohibiting AI from launching nuclear weapons, or from being used in mass surveillance efforts, could be potential red lines on AI uses, while banning AI systems that cannot be shut down by human override could be a potential red line on AI behavior. But it is very clear that these are not set in stone; they are just examples, and governments can define their own rules.
The only substantive stipulation the group offers is that any global agreement should rest on three pillars: "a clear list of prohibitions; robust, auditable verification mechanisms; and the appointment of an independent body established by the Parties to oversee implementation."
The details, though, are left to governments. That is somewhat by design. The call recommends that countries host summits and working groups to hash everything out, but there will surely be plenty of competing motivations in those conversations.
The United States, for example, has already committed to the principle that AI should not be allowed to control nuclear weapons (a Biden administration agreement, so Lord knows whether that is still in effect). But recent reports indicate that parts of the Trump administration's intelligence community have been irked by the fact that some AI companies will not allow their tools to be used for domestic surveillance efforts. Would America sign on to this proposal? Perhaps we will find out by the end of 2026 ... if we make it that long.