AI is often considered a threat to democracies and a boon to dictators. In 2025, algorithms will likely continue to undermine the democratic conversation by spreading outrage, fake news, and conspiracy theories. In 2025, algorithms will also continue to accelerate the creation of full-fledged surveillance systems, in which the entire population is monitored 24 hours a day.
Most importantly, AI makes it easier to concentrate all information and power in a single hub. In the twentieth century, distributed information networks like that of the United States performed better than centralized ones like the Soviet Union's, because the human functionaries at the center could not analyze all the information efficiently. Replacing that administrative machinery with artificial intelligence could make Soviet-style centralized networks superior.
However, artificial intelligence is not all good news for dictators. First, there is the notorious problem of control. Dictatorial control is built on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a “special military operation,” and referring to it as a “war” is a crime punishable by up to three years in prison. If a Russian online chatbot calls it a “war” or mentions war crimes committed by Russian forces, how can the regime punish that chatbot? The government can ban it and seek to punish its human creators, but this is much more difficult than disciplining human users. Moreover, authorized bots may develop dissenting views on their own, simply by detecting patterns in the Russian information sphere. This is an alignment problem, Russian style. Russia’s human engineers can do their best to create AI systems that are fully in line with the regime, but given the ability of AI to learn and change by itself, how can engineers be sure that an AI that earned the regime’s approval in 2024 will not venture into illegal territory in 2025?
The Russian Constitution makes grand promises that “freedom of thought and expression shall be guaranteed to all” (Article 29.1) and that “censorship shall be prohibited” (Article 29.5). Hardly any Russian citizen is naive enough to take these promises seriously. But bots don’t understand doublespeak. A chatbot instructed to adhere to Russian law and values might read the constitution, conclude that freedom of expression is a fundamental Russian value, and criticize Putin’s regime for violating that value. How can Russian engineers explain to a chatbot that although the constitution guarantees freedom of speech, it should never actually believe the constitution, and should never mention the gap between theory and reality?
In the long term, authoritarian regimes are likely to face an even greater danger: instead of criticizing them, AI may come to control them. Throughout history, the greatest threat to autocrats has usually come from their own subordinates. No Roman emperor or Soviet premier was overthrown by a democratic revolution, but they were always in danger of being toppled or reduced to puppets by their subordinates. A dictator who grants AI too much power in 2025 may become its puppet in the future.
Dictatorships are much more vulnerable than democracies to such algorithmic takeover. It would be difficult for even a Machiavellian super-AI to accumulate power in a decentralized democracy like the United States. Even if an AI learned how to manipulate the US president, it could face opposition from Congress, the Supreme Court, state governors, the media, major corporations, and various non-governmental organizations. How would the algorithm handle, for example, a Senate filibuster? Seizing power in a highly centralized system is much easier: to infiltrate an authoritarian network, an AI needs to manipulate only a single megalomaniacal individual.