OpenAI on Monday published what it calls its “economic blueprint” for AI: a living document outlining the policies the company believes it can build on with the U.S. government and its allies.
The blueprint, which includes a foreword from Chris Lehane, OpenAI’s vice president of global affairs, emphasizes that the United States must act to attract billions in funding for the chips, data, energy, and talent needed to “win in AI.”
“Today, while some countries marginalize AI and its economic potential, the U.S. government is able to pave the way for its AI industry to continue the country’s global leadership in innovation while protecting national security,” Lehane wrote.
OpenAI has repeatedly called on the U.S. government to take more substantive action on AI and on the infrastructure needed to support the technology’s development. The federal government has largely left AI regulation to the states, a situation the blueprint describes as untenable.
In 2024 alone, state legislators introduced nearly 700 AI-related bills, some of them in conflict with one another. The Texas Responsible AI Governance Act, for example, imposes onerous liability requirements on developers of open source AI models.
Sam Altman, CEO of OpenAI, has also criticized existing federal laws such as the CHIPS Act, which aims to revitalize the U.S. semiconductor industry by attracting domestic investment from the world’s largest chipmakers. In a recent interview with Bloomberg, Altman said that the CHIPS Act “(wasn’t) as effective as any of us had hoped,” and that he believes there is a “real opportunity” for the Trump administration to “do something much better as a follow-up.”
“The thing I strongly agree with (Trump) on is that it’s surprising how difficult it has become to build things in the United States,” Altman said in the interview. “Power plants, data centers, any of that kind of stuff. I understand how bureaucracy piles up, but it’s not good for the country overall. It’s especially unhelpful when you consider what needs to happen for the United States to lead in AI. And the United States really needs to lead in AI.”
To fuel the data centers needed to develop and run AI, the OpenAI blueprint recommends “dramatically” increasing federal spending on power and data transmission, and purposefully building out “new energy sources,” such as solar, wind farms, and nuclear power. OpenAI, along with its AI rivals, has previously thrown its support behind nuclear power projects, arguing that they are necessary to meet the electricity demands of next-generation server farms.
Tech giants Meta and AWS have run into obstacles in their nuclear efforts, albeit for reasons that have nothing to do with nuclear power itself.
In the nearer term, the OpenAI blueprint suggests that the government “develop best practices” for deploying models to protect against misuse, “streamline” the AI industry’s engagement with national security agencies, and develop export controls that enable the sharing of models with allies while limiting exports to “hostile states.” The blueprint also encourages the government to share certain national security-related information, such as briefings on threats facing the AI industry, with vendors, and to help vendors secure resources to evaluate their models for risks.
“The federal government’s approach to model safety and security should streamline requirements,” the blueprint reads. “Responsibly exporting models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities that innovate with AI and distribute its benefits, while also building AI on American technology, not technology funded by the Chinese Communist Party.”
OpenAI already counts a few U.S. government departments as partners, and if its blueprint gains traction among policymakers, it stands to add more. The company has deals with the Pentagon for cybersecurity work and other related projects, and it has partnered with defense startup Anduril to supply AI technology to systems the U.S. military uses to counter drone attacks.
In its blueprint, OpenAI calls for the formulation of standards that are “recognized and respected” by other countries and international bodies on behalf of the U.S. private sector. But the company stops short of endorsing mandatory rules or edicts. “(The government could create) a defined, voluntary pathway for companies developing (AI) to work with the government to define model evaluations, test models, and share information to support the companies’ safeguards,” the blueprint said.
The Biden administration took a similar tack with its executive order on AI, which sought to enact several high-level, voluntary AI safety and security standards. The executive order established the U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems and partners with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden’s executive order, putting its codification, and the AISI, at risk of being undone.
The OpenAI blueprint also addresses copyright as it relates to AI, a hot-button topic. The company argues that AI developers should be able to use “publicly available information,” including copyrighted content, to develop models.
OpenAI, along with many other AI companies, trains its models on public data from across the web. The company has licensing agreements in place with a number of platforms and publishers, and it offers limited ways for creators to “opt out” of model development. But OpenAI has also said that it would be “impossible” to train AI models without using copyrighted materials, and a number of creators have sued the company for allegedly training on their works without permission.
“(Other) actors, including developers in other countries, make no effort to respect or engage with intellectual property rights holders,” the blueprint reads. “If the United States and like-minded countries do not address this imbalance through sensible measures that help advance AI over the long term, the same content will still be used for AI training elsewhere, but to the benefit of other economies. (The government should ensure) that AI has the ability to learn from publicly available global information, just as humans do, while also protecting creators from unauthorized digital copies.”
It remains to be seen which parts of OpenAI’s blueprint, if any, will influence legislation. But the proposals are a signal that OpenAI intends to remain a key player in the race to shape U.S. AI policy.
In the first half of last year, OpenAI more than tripled its lobbying expenditures, spending $800,000 versus $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including former Defense Department official Sasha Baker, former National Security Agency chief Paul Nakasone, and Aaron Chatterji, previously chief economist at the Commerce Department under President Joe Biden.
As it makes hires and expands its global affairs division, OpenAI has become more vocal about the AI laws and rules it favors. The company, for example, threw its weight behind Senate bills that would create a federal rule-making body for AI and provide federal scholarships for AI research and development. It has also opposed bills, particularly California’s SB 1047, arguing that the measure would stifle AI innovation and drive out talent.