Scan a few headlines and you’ll find, broadly speaking, that there are two schools of thought around the implications of artificial intelligence: the Cassandras and the Pollyannas. But Reid Hoffman, a leading voice on the subject, wouldn’t classify himself as either. “There are dramatics,” he tells me, “on both sides.”
Hoffman, a cofounder of LinkedIn, has been deeply ensconced in the field of machine learning since 2015, when he became a founding investor in OpenAI, originally a nonprofit lab that burst into public consciousness when it hard-launched ChatGPT seven years later. Since then, AI, once consigned to the realm of science fiction, has become a subject of endless allure and agita.
AI fans divine that the technology will revolutionize industries like health care, retail, law, and manufacturing. Critics fear that it will pour fuel on society’s proverbial fires, from misinformation to privacy violations to economic disruption; some naysayers even worry that humanity itself will become obsolete. That’s also to say nothing of the AI arms race that’s simmering across the Pacific: US markets were rattled Monday by the latest from Chinese start-up DeepSeek, which now offers breakthrough AI technology at a fraction of the cost.
Perhaps the only thing we do know for certain about AI is that its future is uncertain. And that, Hoffman tells me, is the impetus of his forthcoming book, Superagency: What Could Possibly Go Right With Our AI Future. In it, Hoffman and his coauthor, Greg Beato, make a full-throated case for AI as “something that society explores and discovers collectively.” They encourage readers to engage with AI—rather than shy away from it—and contend that too much regulatory oversight will only entrench economic inequities and delay the inevitable march of technological progress. “Once set in motion, new technologies exert a gravity of their own,” Hoffman and Beato write. “That is precisely why prohibition or constraint alone are never enough: they offer stasis and resistance at the very moment we should be pushing forward in pursuit of the brightest possible future.”
In an interview that has been edited for length and clarity, Hoffman explains how AI will usher in a “cognitive industrial revolution,” opens up about the “painful parts” of the transition, and explains why he thinks Silicon Valley’s political proximity to Donald Trump is, in fact, in the public’s best interest. “I actually have a higher worry about governments that are so ignorant about technology,” he says, “that by the fact that they’re so separated, [they] basically miscall the play, including regulating in really bad ways.” Following our interview, Hoffman also spoke to this week’s concerns around DeepSeek, saying in a statement that the development “demonstrates how immediate and strong the competitive talent from China is and why it’s crucial for America to continue to be at the forefront of AI development.”
Vanity Fair: You were a founding investor of OpenAI, a company that was pretty much an unknown quantity to people outside Silicon Valley back when it was founded in 2015. Seven years later, it becomes a global phenomenon after rolling out ChatGPT to the public. What’s it been like watching society’s introduction to AI, something that you’ve long believed in but was thought by probably a lot of people to be in the realm of science fiction?
Reid Hoffman: I would say a little amusing on a couple of vectors. One is, part of the reason I wrote the book is because a lot of people are responding out of fear and uncertainty. I wrote the book to say, hey, we only get a really positive future by steering toward it and not by just trying to avoid the futures we don’t like.
Another one was that you’re constantly getting a combination of skepticism and, to some degree, frankly, overhype. And that doesn’t mean that I am not a massive believer, and that this is going to be the cognitive industrial revolution, and that it’s going to make a difference in individuals’ lives on the order of the industrial revolution. So I think it’s going to be very big. On the other hand, you end up getting in a lot of science fiction conversations, which is a little bemusing. There’s dramatics on both sides. There’s dramatics on, “Well, in three years AI will be inventing fusion for us and climate change will be solved!” And you’re like, “Well, I hope so. I don’t think so.” Or, “The killer robots are coming for us and we should be bombing all the AI development factories right now.” And so it’s like, “No, I don’t think that’s in the cards right now either.”
And then the other reason is that people talk a lot about it and don’t use it nearly as much as they should. Part of a party trick that I do these days is I’ll put my phone down on the table, launch an AI chatbot, and I’ll show some examples of using it to people so they kind of go, “Oh, right, I can start playing with it. I don’t need to be prepared. I don’t need to take the certification class. I can just start getting engaged and explore it.”
In your book, you actually talk about how mass engagement is its own kind of self-regulation and that AI will continue to improve and iterate on itself. How does that actually work in practice?
Well, a lot of technology governance is not actually that you have a regulatory agency that says, “Here are the features that you’re allowed to introduce to your software platform,” et cetera. That is actually very little of actual software technology governance. And yet governance has worked pretty well. And why is that? We have lots of interlocking networks of accountability and responsibility, and the most obvious one is customers—people using it. But then you’ve also got responsibilities where the press calls you out, and companies prefer better press—not worse. You’ve got inquiries from government. You have the employees and their families and the communities that you’re in.
It’s part of the reason why when you look at, for example, the chatbot agents today, the vast majority of them are—look, there’s errors, but they’re working really hard to avoid any massive errors and they’re working hard to fix minor errors. That’s why we have a form of iterative governance in the practice of how we’re developing already. And it doesn’t mean that we shouldn’t lean into it more and engage with that. I think it’s one of the reasons I really welcome critics and dialogue on this stuff.
I’m still a little fuzzy, though. It’s very easy for me to understand how you’re scaling trust with LinkedIn because you can see people’s mutuals, you can authenticate that they’re an actual person and that their pedigree is real, et cetera. There’s a ratings system with Yelp, another ratings system for Airbnb. So with AI products, what’s the equivalent to that?
Well, I don’t know if we—we’re very early days. I don’t know if we fully have them yet, but I think that the nearest equivalent is that we’ve got hundreds of millions of people who are exposed to it, who are exploring what things work, what things don’t work. There’s a lot of pressure testing going on. It’s one of the reasons why it’s pretty quickly reported when something bad happens, and then people start working on fixing it—like if you have an AI misidentifying the faces of people of color versus white people’s faces. Now I don’t mind when people say, “Hey, there’s a particular thing that is really bad, like bioterrorism or cybercrime. We should kind of step up the regulation more intensely.” Because a failure point there is so bad that whatever goods you might be eliminating by putting a firm regulation on it, that’s fine.
You mentioned terrorism, and it makes me wonder if you think there are any ethical no-nos when it comes to certain use cases of AI. I’m sure you’ve heard of Anduril and Palantir and all these companies that are ushering in a new era of AI-enabled warfare and national security and reconnaissance. What do you make of all that?
There are easy things—I participate in various safety groups and try to get [safety practices] spread to all of the model developers and try to get the government and the military all informed. Which is: Where are the areas where AI could have a massive impact on society? So obviously bioterrorism; one of the key things is to make sure that you’re not enabling people by having essentially a PhD assistant who can help you with your bioterrorism plans. Similarly, issues around cybercrime or cyberterrorism—both of those could end up doing really bad things. So how do you try to make the general tools of AI less effectively available, and how do you make sure the defenses are there?
Then when you get to the kind of questions around autonomous drones patrolling a border, you think, Well, actually, autonomous drones doing a lot of searching—that’s probably a good thing for border security. You probably wouldn’t want them to be armed with an independent loop of capability for violence, because the benefit versus the human cost is not in the right shape. But I’m not opposed to the use of technology for various forms of safety and security, including national security. So it’s an area you have to be more careful about, but it’s not like it’s an area I think we need to avoid altogether.
I want to change gears. There’s a lot of hysteria that labor might be automated away—that AI won’t improve workers’ agency but will actually undermine it. What would you say to people who are worried about AI taking their jobs?
Three things. The first is that this is the cognitive industrial revolution. So, like the industrial revolution, there will be a bunch of difficulties and transition periods. So I do think that there will be a bunch of current human jobs being replaced by “humans plus AIs,” and then there will also be some jobs that are just replaced by AIs. And I think that that transition period is a challenging thing and part of the reason why I wrote the book. We as human beings navigate these transition periods very badly. Before the printing press, you couldn’t have anything of modern society; you can’t have science, can’t have medicine, can’t have education, middle class, et cetera. Yet, when the printing press is introduced, we have nearly a century of religious war. So we as human beings respond poorly.
Two is the question of: Why is it important to do this? Those countries that embraced the industrial revolution had a massive flourishing of their societies—growth of the middle classes and wealth throughout the whole society. I think the same thing is going to be true of the cognitive industrial revolution. It’s one of the reasons why it’s important to be early, to kind of shape it for what kind of ways that we can improve our own industries, improve the lives of our citizens, et cetera.
The third, about AI in particular—although this is a general truth about technology—is that whenever you see a problem, can you have technology also be the solution? So you say, Well, the skill sets for a wide variety of jobs are going to be evolving and changing. Well, how do we learn those? AI could teach them. How do we get assistance in doing them in practice? Well, AI can be an assistant. So obviously part of the transition that I would most want to see happening is how we have AIs that are helping human beings make these transitions. There are ways that not only can we learn from the past and try to be better as a society, but I think there are ways we can deploy technology to make it much more graceful as well.
You’re saying that while AI might be disruptive, its use may also be able to offset some of that disruption?
Yes. And help with the transition. I mean, the disruption will be there. The painful parts of the transition are, “Oh my gosh, the nature of what this job I’m in has changed. I’m no longer qualified for this job.” What happens? Can you learn what the new qualifications are? Can you find a different job that you also need to learn, but AI can help you with either of them and help you find one?
When listening to you talk about it, it really comes down to letting yourself imagine what AI could be and what opportunities it could open up. Do you feel like there’s a failure of imagination when it comes to a lot of the hysteria?
Yeah, and you don’t tend to engage your imagination when you’re in a state of fear. You’re less willing to try something and take a risk, and that’s part of the fundamental thing I try to encourage people to do: Start using AI. Use it for real things—not just for the sonnet for your family member’s birthday party or your recipe for dinner tonight, but for things that matter. For example, for you as a journalist: Okay, what are the things I can use to help me do research assistance or pressure-test my ideas, or to say, here, this is a thesis I’m thinking about arguing—have other people argued it? What are the counterarguments? That kind of thing. And that enables you to be much, much more effective.
I think that’s what everyone should be doing in one way or the other. And by the way, we have more time than most of the technologists who are beating the “we’re moving fast” drum, because human societies do have some clock of change. One of the things I’ve mentioned for autonomous vehicles is to say, Look, if today every single manufacturing center in the world only started manufacturing autonomous trucks, in 10 years, we might get to 50% of the trucks on the road being autonomous.
I want to ask you about some of the more cultural aspects of Silicon Valley. One of the things that worries me about the people our readers think of when we talk about Silicon Valley—Elon Musk, Peter Thiel, Jeff Bezos, Mark Zuckerberg—is that these people have some pretty strange, downright scary beliefs or theories of power. Others have engaged in questionable business practices. How are we as a society expected to trust that AI will be developed safely and responsibly in their hands?
Well, I think one of the really important things is to make sure that we have multiple different efforts trying to build AI so that we can see contrasts and consumers can opt differently and the general discourse can be about which things matter and which things don’t seem to matter as much. Frequently, this dialogue is kind of like, “Just trust us.” Trust Sam Altman, trust Satya Nadella, trust Kevin Scott, trust Mustafa Suleyman, trust Dario Amodei. And it seems a little naive except that, by the way, these are organizations that employ a whole bunch of people. The consent of the governed includes networks of employees, their families, communities.
There are a number of these efforts that are, in fact, very well-intentioned—that are making an effort to learn from different constituencies. When your technology goes to hundreds of millions of people, there’s a lot of people who find that their particular concern is not addressed. This is true of every technology that’s going to hundreds of millions of people, because it’s a very broad-based way of doing it. But my sense is that a bunch of these people actually, in fact, care about humanity, care about society, are taking feedback or engaging. And I know that because I’m in a bunch of those dialogues.
You’re saying the people that I mentioned, some of these people truly do care, but maybe their messaging or the way that they’re going about developing AI might be wrongheaded? Is that what you’re saying? I don’t want to twist your words.
Well, I would say a number of the really important leading labs care and are good humanists who are working on it. And I can certainly say that’s true of any of the ones that I’m directly working with, whether it ranges from OpenAI to Microsoft to Inflection AI. But I also know that’s true at Google as well. Now, on your list, Elon is advancing a particular theory of AI where he thinks wokeness is the big issue. I don’t think that that’s actually that big of an issue. And then Peter Thiel is actually not doing that much within any of the circles that I’m involved in AI.
You’ve aligned yourself historically with the Democratic Party and have been critical of Trump in the past. What do you make of the rightward shift that outlets have been reporting on among some of the leading technologists in Silicon Valley—and the way that they have been cozying up to Trump?
I think it’s really important for the American government to be closely connected to the tech industry. I think the tech industry is what’s creating a whole bunch of opportunity in the future—it’s not just AI but also fusion and nuclear power and a bunch of other things. And so I think it’s the right function for business leaders to say, “Hey, we’re working with the government. We’re continuing to try to build our technological futures.” And I think it’s a good thing for any government, American or other, to be very techno-forward. So I see how, generally speaking, people want to make a big, dramatic case about it, but I actually think that companies are supposed to be responsive to their government. So, I mean, there’s obviously things in the government I agree and disagree with and all the rest, but I think the collective mission for us in 2025 is how do we build the future?
But does it not worry you that their proximity to Donald Trump undermines the government’s ability to regulate some of these leaders’ companies?
No, I actually have a higher worry about governments that are so ignorant about technology that, by the fact that they’re so separated, they basically miscall the play, including regulating in really bad ways. I think it’s much better to be deeply informed. There’s a general kind of governance framework where you say, “Well, you can’t have anybody from the industry that you’re trying to regulate come serve in government.” Actually, then you have no knowledge of what key things are part of the prosperity. You’ve got to select people who say, “Great, I’m putting on my governance hat and I’m navigating for the benefit of society, and I’m leveraging the fact that I’m deeply knowledgeable here.”