Something big happened in the world of AI the other day: Sam Altman, co-founder and CEO of OpenAI, and probably the person most commonly regarded as the face of the industry, declared that the purpose of AI is not to take people’s jobs.

And he recently called AI CEOs “tone-deaf” for declaring that AI is going to take people’s jobs:

YouTube video

In fact, this shift represents more evolution than revolution. Years ago, Altman did seem to generally agree with the folk consensus that AI’s purpose is to make most or all humans obsolete. In 2014, he warned that we could be faced with “a new idle class”, and explored the idea of Universal Basic Income as a remedy. In 2021, he wrote that “The price of many kinds of labor…will fall toward zero.”

But in recent years, Altman has consistently stated that although AI will destroy many occupations, it will create new tasks for humans to do. In 2024, he wrote that “I have no fear that we’ll run out of things to do (even if they don’t look like ‘real jobs’ to us today)”, and in 2025 he declared that “We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.” He has reiterated that prediction in interviews.

OpenAI’s mission statement, meanwhile, continues to define the company’s goal as the creation of Artificial General Intelligence (AGI), which it defines as “highly autonomous systems that outperform humans at most economically valuable work.”

That “most” does leave some wiggle room. But perhaps more importantly, the company is talking about AGI less and less — its 2026 statement of principles mentions the term only twice, as compared with 12 times in the 2018 version. OpenAI also removed a clause about AGI from its agreement with Microsoft, meaning that the term no longer defines its contractual business obligations.

So although Altman has never been quite as doomer-ish as some of his colleagues when it comes to AI and jobs, you can definitely feel the winds shifting. In fact, there has always been a contingent of tech leaders who have been broadly optimistic about AI and jobs, and who are now speaking up more vociferously.

Nvidia’s Jensen Huang has consistently predicted that AI will create more jobs than it destroys, and recently he has harshly criticized AI CEOs who go around saying that their technology is a job-killer:

YouTube video

Venture capital titan Marc Andreessen, meanwhile, has come out swinging against the AI job loss narrative.

Cynical observers will see this all as just a messaging pivot, in response to the AI industry’s deteriorating popularity. Back in March, I wrote about how the AI industry’s sales pitch was basically “Our product’s purpose is to put you and your descendants on welfare forever, and it may also wipe out your whole species.”

That was a bad sales pitch, to put it mildly, and it’s not surprising that voters have reacted negatively. Basically every recent poll shows the American public turning very strongly against AI. Here’s a representative example from Pew:

Source: Pew

In fact, the anti-AI turn seems especially strong among Independents:

Source: Echelon Insights via Kristen Soltis Anderson

This raises the possibility that AI will become the focus of populist rage, and that politicians from both parties will compete to win swing voters over by promising to take action against the industry.

This may already be happening. Bernie Sanders has moved past traditional progressive concerns about data center water use and copyright infringement, and has instead been warning about catastrophic AI risk:

YouTube video

Meanwhile, Donald Trump is reportedly considering a policy of having the White House vet AI models before they’re released, due to concerns about new models’ cyber capabilities:

President Trump, who promoted a hands-off approach to artificial intelligence and gave Silicon Valley free rein to roll out the technology, is considering the introduction of government oversight over new A.I. models, according to U.S. officials and people briefed on the deliberations…The administration is discussing an executive order to create an A.I. working group that would bring together tech executives and government officials to examine potential oversight procedures…Among the potential plans is a formal government review process for new A.I. models…The discussions signal a stark reversal in the Trump administration’s approach to A.I…[Trump’s] noninterventionist policy began changing last month after the start-up Anthropic announced a new A.I. model called Mythos. Mythos is so powerful at identifying security vulnerabilities in software that it could lead to a cybersecurity “reckoning,” said Anthropic[.] [emphasis mine]

Neither Bernie’s concern nor Trump’s is explicitly about protecting jobs; both are about the risk of misuse. But it’s hard not to see the generally souring mood on AI, especially among Independents, as an invitation to populists like Trump and Bernie to make political hay by reining in the industry.

Meanwhile, some politicians and industry figures are starting to talk openly about the possibility of nationalizing the big AI labs. Matteo Wong and Lila Shroff report:

Washington is getting antsy about the power imbalance [between AI companies and the government]. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI…In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization…

The government could regulate AI companies like it does utilities…[S]hould AI models displace large swaths of the labor market, such that a handful of companies run most of the economy, “then some kind of nationalization becomes potentially imperative,” Samuel Hammond [of FAI] told us—to distribute wealth and simply ensure the proper functioning of society. Both Anthropic and OpenAI have already suggested possible versions of such redistributive measures…

Perhaps the most likely fate for American AI companies is a future of soft nationalization—a world in which the government doesn’t fully control AI labs and their models, but instead enacts an escalating series of policies and establishe[s] close partnerships with private companies to shape the technology.

Different figures in the industry want quasi-nationalization to different degrees. Jensen Huang, who has fought hard against export controls, is probably more anti-nationalization, as is Marc Andreessen, who makes his living from funding startups (and would thus probably not like to see government ties entrench the market position of incumbent players).

But even folks like Altman and Amodei who might be inclined to accept quasi-nationalization would certainly like to negotiate favorable terms for that partnership. To that end, it helps to have the government not view your industry as a dangerous job-killer.

So basically, it makes sense for leading figures in the industry to alter the basic sales pitch and reassure anxious humans that they’ll still have jobs.

In Altman’s case, there also might be some element of competitive positioning here. The loudest voice predicting human obsolescence has certainly been Anthropic co-founder and CEO Dario Amodei, who has been shouting from the rooftops about a coming job-pocalypse:

YouTube video

To a seasoned observer, Anthropic’s perspective here is pretty clear. They basically think that AI progress is inevitable, and that AGI is eventually going to put most human beings on the welfare rolls.

Thus, they see themselves as sounding the alarm — warning society to beef up its welfare state and its redistributionary mechanisms before the inevitable coming of job-annihilating AGI.

If you accept that AI progress is as inevitable as the tides, then this is an eminently reasonable position. But most people probably do not accept this. They probably see AI progress as something that we — human society — choose to do or not to do. And so to them, Dario isn’t sounding a warning — he’s making a threat.[1] 

The average person probably hears Dario as saying something along the lines of “Hi, my colleagues and I are working very hard to make sure you are never gainfully employed again.” And that probably makes them feel fairly negatively toward Anthropic.

It’s possible that Altman and OpenAI see an opening here. Anthropic has recently overtaken OpenAI in revenue and market valuation.[2] If OpenAI presents itself to the nation as the guys who are trying to create AI that augments your job, then maybe they can sell themselves as the human-friendly alternative to those scurrilous folks over at Anthropic who just want to replace you.

This is one theory I’ve seen thrown around, in any case.

But OK, saying “AI will increase the value of human labor” is one thing; providing a compelling explanation for this assertion is another. The notion that AI is fundamentally a human-remover is deeply ingrained in our national discourse — we’ve heard it so many times that it’s become not just the conventional wisdom, but an article of faith for many. It’ll be an uphill battle for pro-AI voices to dislodge and replace that notion.

So what arguments are they using?

One is the idea of task creation. So far, most technologies throughout history have created new kinds of work for humans to do. Some AI proponents assert that AI will be the same.

A second is the idea of induced demand, either from income effects (AI makes us richer so we buy more stuff) or from complementarities. This often goes by the name of Jevons’ Paradox.
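
To make the induced-demand logic concrete, here’s a quick back-of-the-envelope sketch of my own (hypothetical numbers, not anything from the people quoted in this post): assume a constant-elasticity demand curve for some task, and watch what happens to total spending on that task when AI makes it five times cheaper.

```python
# Toy illustration of induced demand (my own made-up numbers, not from the post):
# with a constant-elasticity demand curve q = scale * p^(-e), a price drop raises
# total spending on the task -- and demand for complementary human work -- whenever e > 1.

def total_spending(price: float, elasticity: float, scale: float = 100.0) -> float:
    quantity = scale * price ** (-elasticity)  # constant-elasticity demand curve
    return price * quantity                    # total spending = price * quantity

for elasticity in (0.5, 1.5):  # inelastic vs. elastic demand for the task
    before = total_spending(price=1.0, elasticity=elasticity)
    after = total_spending(price=0.2, elasticity=elasticity)  # AI makes the task 5x cheaper
    print(f"elasticity {elasticity}: total spending goes from {before:.0f} to {after:.0f}")
```

When demand is elastic, the cheaper task ends up attracting more total spending than before, which also means more demand for the humans who complement it. That’s the Jevons-style mechanism at work.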

Here’s Aaron Levie, CEO of Box, employing both ideas:

There are far more categories where AI agents making things more efficient will induce demand for that skill than spaces where agents eliminate the work. This is why the AI jobs predictions will not play out as advertised.

AI making it easy to produce more code will mean we start to apply code to far more parts of our businesses. We will build automation and software for things that wouldn’t have made sense before. Marketing automation, client onboarding, modernizing old systems, doing far more research on existing data, and more…Far more software will mean vastly more security risks. This will mean far more people thinking through system security, compliance, and governance…

AI will make it so more companies care about this (and maybe can do something about it), causing more security roles…Companies will now be doing 10X more with video and graphics, and will need people to manage that work. More media. We’re going to have a near unlimited set of legal challenges in a world of AI as AI helps write even more bespoke and complicated legal docs. More lawyers.

This is probably correct — at least for now. Technologies have always destroyed some occupations, but they’ve usually created more demand for human labor than they destroyed, and it seems clear that AI will behave similarly for a while.

But a lot of people have the intuitive sense that this solution works until it doesn’t. If AI becomes better than humans at all tasks, then humans’ only remaining value would come from comparative advantage — and as data centers proliferate and compete with humans for land and food and energy, the economic value of comparative advantage goes down and down.
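
To see the worry in the simplest possible terms, here’s another toy sketch of my own (all numbers hypothetical): even if AI is absolutely better at every task, a human can still be hired wherever they’re cheaper than the AI alternative, but that wage ceiling falls as compute gets cheaper, while the cost of keeping a human alive does not.

```python
# Crude toy model (my own hypothetical numbers): a human's wage ceiling is the cost
# of having AI do the same work instead. As compute gets cheaper, that ceiling falls;
# if it drops below the human's cost of living, comparative advantage stops paying the bills.

human_cost_of_living = 5.0  # hypothetical daily cost of food, housing, and energy

for ai_cost_per_day in (20.0, 8.0, 3.0):  # cost of AI doing a day's worth of the work, falling over time
    wage_ceiling = ai_cost_per_day  # no employer will pay a human more than the AI alternative costs
    employable = wage_ceiling > human_cost_of_living
    print(f"AI cost {ai_cost_per_day:>5}: human wage ceiling {wage_ceiling:>5}, still worth hiring a human: {employable}")
```

And if data-center buildout bids up the price of land and energy at the same time, the cost-of-living floor rises even as the wage ceiling falls, which is exactly the squeeze the comparative-advantage pessimists are worried about.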

So the pro-AI people naturally need to give the public some reassurance that even after the coming of AGI, humans will still be valued. The answer that more people are converging on is that humans will still pay for the human touch. Alex Imas has a good post about this at Ghosts of Electricity.

Imas writes:

If the model is right, the durable jobs of the future won’t be about monitoring AI systems or prompt engineering. Those are transitional roles in the automated sector. The durable jobs will be in the relational sector, where the human element is the product itself.

Some already exist and are growing: nurses, therapists, teachers, boutique fitness instructors, personal chefs, bespoke tailors, craft brewers, live performers, spiritual guides, childcare workers and many varieties of hospitality and care work.

Others are emerging: experience designers, human-AI collaboration artists, provenance certifiers, community curators. Many haven’t been invented yet, just as six out of ten jobs people hold today didn’t exist in 1940.

Ezra Klein recently wrote an article in the New York Times endorsing Imas’ idea.[3]

So this is shaping up to be the new AI sales pitch. In the short term, AI will give people more work to do, and in the long term we’ll still get paid just to be human to each other. And our real wages will go up and up, because of the abundance AI creates.

From a public relations perspective, this pitch is WORLDS better than the previous one. Shouting about replacing humanity might play well with corporate customers and investors salivating over the dream of eliminating labor costs, but eventually you get the torches and pitchforks, followed by some form of nationalization. Describing AI as a normal technology — a successor to the steam engine and the automobile and the computer — is much smarter politics.

The question is: Is it just politics and PR? Certainly, there are plenty of AI researchers and entrepreneurs who will keep quietly believing that AGI is going to make humans obsolete; they’ve heard (and repeated) this line for too many years to suddenly believe something else overnight.

But as they continue to repeat the line that “AI will augment humans” for the sake of their industry’s public image, I think there’s a chance that they’ll start to believe it — or at least to think about how they might be able to make it true.

Daron Acemoglu has written that society should try to steer AI development toward technologies that complement humans rather than replace them. I just don’t think that’s feasible — society simply can’t mandate the economic value of a technology before it exists.

But I do think it might be possible for AI researchers to concentrate their efforts on AI applications that give humans superpowers, rather than on trying to copy what humans already do. Once they stop thinking “This technology is a replacement for the human species”, and start thinking “This technology is a tool for humans to use”, their research programs might subtly evolve in a more labor-augmenting direction.

So yes, I’m happy with the new AI sales pitch, even if some of the people saying it don’t necessarily believe it yet.

Notes

1 Please note that I overused this type of sentence construction long before it became a notorious hallmark of “AI writing.”

2 Actually, there is some uncertainty about this, given that both of these figures are hard to pin down for closely held companies. But the trend line here is certainly clear. Anthropic is winning.

3 Personally, I’m a bit skeptical — I’ve already seen people pay Waymo a premium to avoid having to interact with a human Uber driver, and I suspect that future generations who grow up with AI tutors and chatbot companions will have less intrinsic desire for the human touch. I guess we’ll see.

This article was first published on Noah Smith’s Noahpinion Substack and is republished with kind permission. Become a Noahpinion subscriber here.