Unintended consequences: US election results portend reckless development in artificial intelligence


By Gary Grossman, Edelman




While the 2024 US election focused on traditional issues such as the economy and immigration, its quiet impact on US artificial intelligence policy may prove far more transformative. Without a single debate question or major campaign promise about AI, voters inadvertently tipped the scales in favor of accelerationists — those who advocate rapid AI development with minimal regulatory hurdles. The implications of this acceleration are profound, heralding a new era of AI policy that prioritizes innovation over caution and signaling a decisive shift in the debate over the potential risks and rewards of artificial intelligence.

The pro-business stance taken by President-elect Donald Trump leads many to assume that his administration will favor those working to develop and commercialize AI and other advanced technologies. His party’s platform has little to say about AI directly. What it does emphasize is a policy approach focused on repealing AI regulations, particularly targeting what it described as “far-left ideas” within the outgoing administration’s existing executive orders. In contrast, the platform supports AI development aimed at promoting free expression and “human flourishing,” calling for policies that enable AI innovation while opposing measures seen as hindering technological progress.

Early indicators based on appointments to leading government positions confirm this direction. However, there is a bigger story unfolding: the resolution of the fierce debate over the future of artificial intelligence.

A fierce debate

Since ChatGPT appeared in November 2022, there has been a heated debate between those in the AI field who want to accelerate AI development and those who want to slow it down.

Famously, in March 2023, the latter group proposed a six-month AI pause on the development of more advanced systems, warning in an open letter that AI tools present “profound risks to society and humanity.” The letter, spearheaded by the Future of Life Institute, was prompted by OpenAI’s release of the GPT-4 large language model (LLM) several months after the launch of ChatGPT.

The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories eventually swelled to more than 33,000. Collectively, they became known as “doomers,” a term reflecting their concerns about potential existential risks from AI.

Not everyone agreed. OpenAI CEO Sam Altman did not sign, nor did Bill Gates and many others. Their reasons varied, although many expressed concern about potential harms from AI. The episode sparked many conversations about the possibility of AI running amok and leading to disaster, and it became fashionable for many in the AI field to assess their probability of doom, often expressed as p(doom). Nevertheless, work on AI development did not pause.

For the record, my p(doom) in June 2023 was 5%. That number may seem low, but it was not zero. I felt that the major AI labs were sincere in their efforts to rigorously test new models before releasing them and to provide significant guardrails for their use.

Many observers concerned about the dangers of AI have put existential risks higher than 5%, some much higher. AI safety researcher Roman Yampolskiy has put the probability of AI ending humanity at more than 99%. A study released early this year, well before the election and representing the views of more than 2,700 AI researchers, found that “the average prediction for very bad outcomes, such as human extinction, was 5%.” Would you board a plane if there were a 5% chance it might crash? That is the dilemma facing AI researchers and policymakers.
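To put that figure in perspective, here is a minimal sketch, purely illustrative and not drawn from the study or from any source in this article, of how a 5% chance of catastrophe compounds if the gamble is taken repeatedly and independently:

```python
# Illustrative arithmetic only: how a 5% per-event risk compounds
# across repeated, independent events (the "5% chance the plane
# crashes" analogy). Assumes independence, which is a simplification.

p_doom = 0.05  # a 5% chance of a very bad outcome per event

for n in (1, 5, 10, 20):
    p_safe = (1 - p_doom) ** n  # chance nothing goes wrong in n tries
    print(f"{n:2d} flights: cumulative risk of disaster = {1 - p_safe:.1%}")
```

Even a risk that sounds small becomes likely with repeated exposure (about 40% over ten such flights), which is part of why even single-digit p(doom) estimates alarm many researchers.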

You must go faster

Others were outright dismissive of the alarm over AI, pointing instead to what they saw as the technology’s enormous upside. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (professor of computer science and engineering at the University of Washington and author of “The Master Algorithm”). AI, they argued, is part of the solution. As Ng put it, there are already existential risks, such as climate change and future pandemics, and AI can be part of how we address and mitigate them.

Ng believes AI development should not pause; it should instead move faster. This techno-utopian view has been echoed by others known collectively as “effective accelerationists,” or “e/acc” for short. They argue that technology, especially AI, is not the problem but the solution to most, if not all, of the world’s issues. Startup accelerator Y Combinator CEO Garry Tan, along with other prominent Silicon Valley leaders, added “e/acc” to their usernames on X to show alignment with the vision. New York Times reporter Kevin Roose captured the movement’s essence, writing that these accelerationists take an “all gas, no brakes approach.”

A Substack newsletter from a few years ago described the basic principles of effective accelerationism, summarizing them at the end of the article alongside commentary from OpenAI CEO Sam Altman.

Accelerating AI forward

The outcome of the 2024 election may be seen as a turning point, putting the accelerationist vision in a position to shape US AI policy over the next several years. For example, the president-elect recently named technology entrepreneur and venture capitalist David Sacks as his “AI czar.”

Sacks, a vocal critic of AI regulation and a proponent of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist views expressed in the incoming party’s platform.

In response to the Biden administration’s 2023 executive order on AI, Sacks tweeted: “The U.S. political and fiscal situation is hopelessly broken, but we have one unparalleled asset as a country: cutting-edge innovation in AI driven by a completely free and unregulated market for software development. That just ended.” While the extent of Sacks’s influence on AI policy remains to be seen, his appointment signals a shift toward policies favoring industry self-regulation and rapid innovation.

Elections have consequences

I doubt most voters gave much thought to the AI policy implications of their vote when casting their ballots. Nevertheless, in a very tangible way, the accelerationists have won as a result of the election, potentially sidelining those advocating a more cautious federal approach to mitigating AI’s long-term risks.

As the accelerationists chart the way forward, the stakes could not be higher. Whether this era heralds unprecedented progress or unintended catastrophe remains to be seen. As AI development speeds up, the need for informed public discourse and vigilant oversight is greater than ever. How we navigate this era will shape not only technological progress but our collective future.

As a counterweight to a lack of action at the federal level, it is possible that one or more states will adopt various regulations, which has already happened to some extent in California and Colorado. For example, California’s AI safety bills focus on transparency requirements, while Colorado addresses AI discrimination in hiring practices, offering models for state-level governance. Beyond that, all eyes will be on the voluntary testing and self-imposed guardrails at Anthropic, Google, OpenAI, and other AI model developers.

In short, the accelerationist victory means fewer constraints on AI innovation. That added speed may indeed bring faster breakthroughs, but it also raises the risk of unintended consequences. I am now revising my p(doom) to 10%. What is yours?

Gary Grossman is executive vice president of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.



