The Fixer’s Dilemma: Chris Lehane and OpenAI’s Mission Impossible



Chris Lehane is one of the best in the business at spinning bad news. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels, Lehane knows how to spin. He’s now two years into what may be his most impossible assignment yet: as OpenAI’s vice president of global policy, his job is to convince the world that OpenAI genuinely cares about democratizing AI, even as the company increasingly behaves like every other tech giant that once claimed to be different.

I spent 20 minutes with him on stage at the Elevate conference in Toronto earlier this week, 20 minutes to get past the talking points and to the real contradictions undermining OpenAI’s carefully constructed image. It was neither easy nor entirely successful. Lehane is really good at his job. He’s likable. He seems reasonable. He acknowledges uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.

But good intentions mean little when your company is subpoenaing its critics, draining water and electricity from economically depressed towns, and resurrecting dead celebrities to cement its market dominance.

The company’s Sora problem is actually at the root of everything else. The video-generation tool launched last week with copyrighted material seemingly baked right in, a bold move for a company already being sued by The New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it worked: the invite-only app soared to the top of the App Store as people created digital versions of themselves; of OpenAI CEO Sam Altman; of characters like Pikachu, Mario, and Cartman from “South Park”; and of dead celebrities like Tupac Shakur.

When I asked why OpenAI chose to launch this newest version of Sora with these characters, Lehane gave me the standard pitch: Sora is a “general-purpose technology,” like electricity or the printing press, that democratizes creativity for people without talent or resources. Even he, a self-described “creative zero,” can now make videos, he said on stage.

What he glossed over is that OpenAI initially “allowed” rights holders to opt out of having their work appear in Sora, which is not how copyright law typically works. Then, after noticing how much people loved using copyrighted characters, the company “evolved” toward an opt-in model. That isn’t really iteration; it’s a test of how much you can get away with. (Incidentally, although the Motion Picture Association made some noise last week about legal action, OpenAI seems to have gotten away with quite a lot.)

Naturally, the situation brings to mind the mounting grievances of publishers, who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers being cut out of the economics, he invoked fair use, the American legal doctrine that is supposed to balance creators’ rights with the public’s access to knowledge. He called it the secret weapon of American technological dominance.


Maybe. But I had recently interviewed Al Gore, Lehane’s old boss, and realized that someone could simply ask ChatGPT about that conversation instead of reading my story on TechCrunch. “It’s derivative, but it’s also a replacement,” I said.

For the first time, Lehane dropped the script. “We’re all going to have to figure this out,” he said. “It’s really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will.” (We’re making it up as we go, in short.)

Then there’s the infrastructure question no one wants to answer honestly. OpenAI already operates a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened access to AI to the advent of electricity, saying those who got it last are still playing catch-up, yet OpenAI’s Stargate project appears to be targeting exactly those economically struggling places as sites for facilities with an enormous appetite for water and electricity.

When I asked during our sit-down whether these communities will benefit or merely foot the bill, Lehane pivoted to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought 450 gigawatts online last year, along with 33 nuclear facilities. If democracies want democratic AI, they have to compete. “The optimist in me says this will modernize our energy systems,” he said, painting a picture of a reindustrialized America with transformed power grids.

It was stirring. But it wasn’t an answer to whether people in Lordstown and Abilene will watch their utility bills climb while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (Video generation is the most energy-intensive form of AI there is.)

Which brings me to the most disturbing example. Zelda Williams had spent the day before our interview begging strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

When I asked how he reconciles this kind of intimate harm with OpenAI’s mission, Lehane responded by talking about process: responsible design, testing frameworks, and government partnerships. “There’s no playbook for this stuff, right?”

Lehane showed flashes of vulnerability, saying he wakes up at 3 a.m. every night worrying about democracy, geopolitics, and infrastructure. “There are enormous responsibilities that come with this.”

Whether those moments were staged for the audience or not, I believed him. In fact, I left Toronto thinking I’d watched a master class in political messaging: Lehane threading an impossible needle while deflecting questions about corporate decisions that, for all I know, he doesn’t even agree with. Then Friday happened.

Nathan Calvin, a lawyer who works on AI policy at a nonprofit called Encode AI, revealed that at the very moment I was speaking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his home in Washington, D.C., during dinner to serve him a subpoena. The company wanted his private messages with California lawmakers, college students, and former OpenAI employees.

Calvin accuses OpenAI of intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company used its legal battle with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. Calvin, in fact, fought OpenAI’s opposition to SB 53, an AI safety bill, and says that when he saw the company claim it had “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he went on to call Lehane, specifically, “the master of the dark political arts.”

In Washington, that might be a compliment. At a company whose mission is “to ensure that artificial general intelligence benefits all of humanity,” it sounds like an indictment.

What matters even more is that OpenAI’s own employees are conflicted about what the company is becoming.

As my colleague Max reported last week, after the release of Sora 2, a number of current and former employees took to social media to air their concerns, among them Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is “technically stunning but it is premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

Then on Friday, Joshua Achiam, OpenAI’s head of mission alignment, tweeted something even more striking about Calvin’s accusation. Prefacing his comments by acknowledging they were “possibly a risk to my whole career,” Achiam wrote of OpenAI: “We can’t be doing things that make us into a fearsome power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to deter us from pursuing that duty is remarkably high.”

This is . . . something. Here was an OpenAI executive publicly wondering whether his company is becoming a fearsome power rather than a virtuous one. Not a competitor taking shots, not a reporter asking questions: a person who chose to work at OpenAI, who believes in its mission, and who is now admitting to a crisis of conscience despite the professional risk.

It’s a clarifying moment. You can be the best political operative in tech, adept at navigating impossible situations, and still end up working for a company whose actions are increasingly at odds with its stated values, contradictions that will likely only intensify as OpenAI races toward artificial general intelligence.

It left me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others, including the people who work there, still believe it.


