The Media Today

Q&A: Steven Levy on AI and the Evolving Relationship Between Tech Companies and the Press

“What you find when you get into a place is always more interesting than any preconception you have.”

May 14, 2025
Steven Levy. Credit: Michael Bulbenko.


All this week, CJR is running a series of pieces, on our website and in this newsletter, about how AI is transforming the news media ecosystem. Already, Mike Ananny and Matt Pearce canvassed industry leaders on how they’re using the technology, and Laura Preston reflected on experimenting with getting her news from ChatGPT. Today, Yona TR Golding explores whether tools designed to detect AI-generated content actually work. You can read the piece here. 

“There was a time when Mark Zuckerberg didn’t regard mainstream media as the enemy,” Steven Levy, a veteran tech journalist, wrote in a column for Wired earlier this year. Levy was referring to an invitation to Zuckerberg’s home in 2018, while he was writing a book about Facebook. That sort of access now feels like something from another era. Zuckerberg has shut out journalists almost entirely and has expressed open hostility toward the industry. In an appearance on Joe Rogan’s podcast in January, during which he talked about discontinuing fact-checking, Zuckerberg dismissed the legitimacy of journalistic outlets, equating them to misinformation. As Zuckerberg, Elon Musk, and many of Silicon Valley’s most influential leaders have fallen in behind President Trump, they’ve replicated his strategy of discrediting sources of reporting that they consider unfavorable. 

At the same time, many journalists fear that artificial intelligence products pose an existential threat to the profession. Major AI companies have generally not taken a hard-line stance against the news media—at least not yet. (“They want their stories told,” Levy said recently. “They’re involved in something pretty scary, and it’s in their interest to let a journalist in, somewhat.”) But AI software is starting to displace jobs in journalism and to weaken the media economy as a whole. The future does not look good.

As the relationship between the tech industry and the press deteriorates, I called up Levy to ask about his perspective as a journalist who’s covered tech for more than four decades. He has weathered many storms. How can journalists survive the ones to come? Our conversation has been edited for length and clarity.


CB: How would you say access to Silicon Valley folks as a whole has changed over the course of your career? 

SL: I started in the PC era. The first story I wrote about the technology world was about computer hackers, for Rolling Stone in 1982. These were small companies. The first time I went to Microsoft’s office in Seattle, in 1984, I just called up and said, “I’m coming,” and they said, “Okay.” I went in and talked to Steve Ballmer, then they put me in Bill Gates’s office. There was never a PR person in the room; they wouldn’t tape your interviews. I think they picked that up later from Washington, DC, where talking to a politician was always on background. But in technology it was very casual, it was on the record, and there were very few barriers between you and the CEOs. The first time I interviewed Steve Jobs, we just went out to dinner. That doesn’t happen too much these days. Now even the CEO of a four-person startup might have a PR company that wants to sit in on the interview.

Earlier this year, you wrote about Mark Zuckerberg’s comments on Joe Rogan that seemed to discredit or dismiss journalism. How have you witnessed tech companies becoming more hostile to the media?

Zuckerberg is on the extreme edge of what’s happening more generally. The traditional places that a CEO normally would want to appear are just not as important to him; they’re more troublesome for him than they once were. When I worked at Newsweek, everyone wanted to be on the cover. To be on the cover of Wired was a big deal, or even to get feature space. A lot of people still value that, but people like Mark, who are going to get questioned critically by a place like Wired, came to the conclusion that they don’t need it. It’s possible for him to not do those kinds of interviews without suffering from the lack of it. He gets his message across by talking to a podcaster who is not going to press him on tough issues or give him trouble. He recently did a podcast with someone who literally asked him where he went to college. And that was fine with Mark. He doesn’t need us anymore. I haven’t interviewed him since my book came out five years ago. I know he doesn’t have anything against me; we had some nice texts after the book. But afterward Facebook told me, You did a fair job, Steven, and no one will ever get this access again.


You wrote a profile of OpenAI a couple of years ago, and a recent profile of a different AI company, Anthropic. They seem more eager for exposure… 

They want their stories told. They’re involved in something pretty scary, and it’s in their interest to let a journalist in, somewhat. I sat in on some meetings at Anthropic and did multiple interviews with Dario Amodei, the CEO, and the company thought that was useful for them. I think I had the advantage of having a reputation for telling fair stories. I’m always transparent with them about where I’m going with the story. Last year I did a story about Microsoft. Until recently, people thought Microsoft was on a slow decline, just harvesting their dominance in Windows and Office. But then with their investment in OpenAI things went in the other direction, and Microsoft was, at least for a time, the most valuable company on earth. What happened there? I had a long history with Microsoft, so I was invited in, and I presented my case. On the whole the story was about their success, but there was stuff in there that they wouldn’t have chosen, not that they had a chance to choose. I asked for some interviews related to their anticompetitive behavior with Slack, for which they were under investigation by the European Union, and about other anticompetitive behavior that the Federal Trade Commission is looking into. The balance, I think, gives the whole package credibility. Maybe because they’re a traditional company, that kind of coverage mattered to them.

Some journalists would make the argument that any coverage of these companies that’s not explicitly critical or exposing wrongdoing ends up serving the company by promoting their own narrative. I hear this in particular with AI companies. There are a lot of concerns about overhyping AI by repeating the claims that the executives of these companies make. How do you navigate this? 

I actually don’t think that AI is overhyped. That doesn’t mean that everything about AI is good. That doesn’t mean there aren’t real dangers and worries about it. But I don’t need a PR person from Anthropic to tell me that large language models (LLMs) are a big deal; all you have to do is use them and you see this is unbelievable stuff. Computers could not do this a few years ago. I do feel that when I go into a company like Anthropic, I’m not going in there to trash them. If I get access, I’m going to give them a fair hearing; I’m going to let them tell their story. I’m also going to talk, obviously, to the people outside the company, and I’m going to talk to the critics. But I think that the point of a story, if you get access, is not to make it a giant editorial attacking the sector, but to provide the details of who they are and how they work so that critics can refer to that piece and say, Look, when Steven Levy went in there, they said this to him. It adds to the knowledge of these companies in a way that doesn’t necessarily argue for them or against them. Whatever side you’re on when it comes to the future of AI, you could use what you want from the story to bolster your argument. 

In your Anthropic story, I was interested to read about how the company was using Claude, its AI assistant, internally, and treating it sort of as another employee. That detail was memorable. 

I have an ironclad rule that what you find when you get into a place is always more interesting than any preconception you have going in. I didn’t know when I began the Anthropic story that Claude would become a theme of the story, but hanging out there, sitting in meetings, and watching them talk, Claude came up again and again. 

Who would you like to talk to in Silicon Valley that you haven’t been able to?

I used to talk to Marc Andreessen quite often, and he doesn’t like to talk to journalists anymore. He has his own podcast. He’s an interesting guy, and I think he’s taking a dark turn. I’d like to talk with him about some of the stances he’s taken in recent years. I ran into him at a play about AI a few months ago. We exchanged friendly greetings afterward. He told me he liked the play.

Are you worried about how the relationship between media and tech is deteriorating more broadly? Much of Silicon Valley is threatening the existence of journalistic media, either explicitly or implicitly, through the products they are building. 

This has been a constant for decades. I worked at Newsweek during the internet boom, and I saw this stuff coming. This has been a threatening trend since the Web started. I think it’s up to us in the media to figure out how to evolve to continue to serve a real need. I think the need for what we do will always be there—it just gets tougher to figure out how to deliver it and reach our audience and get paid for it. I think things are actually, in a way, better than they used to be. More people are paying for the actual content. The media industry has learned that it’s a fool’s errand to depend on these companies to deliver an audience—you’ve got to get your own audience. 

What do you think about media companies including The Atlantic and Vox Media making deals with AI companies to license their content and archives? 

I’m on the council of the Authors Guild, and we’re suing OpenAI and Microsoft. I didn’t vote when it came time to put our hands up, because I write about these places. But I think that ultimately there has to be money changing hands. Books are arguably the most valuable tranche of content that exists. I feel it’s to everyone’s advantage if there’s compensation for using this content for training input. AI companies are talking about spending hundreds of billions of dollars for data centers. Certainly there should be some billions lying around for the data. It’s up to them also to construct the way their systems work so that they don’t infringe on the output side.

Do you think the lawsuit will be successful? 

I think it’s a jump ball. The way that the Authors Guild suit against Google came out is that the judge ruled Google Books to be covered by the “fair use” provision of copyright law, and so the authors and publishers lost their bet. Ultimately, it’s really up to Congress to come up with an explicit interpretation of the rule that’s going to be fair for everyone, but that’s too much to ask for Congress these days.

Your career has spanned four decades. What do you think is coming in the next five to ten years?

I don’t like to put time frames on it. People are saying that artificial general intelligence [a term used to describe the idea of an AI system that can match or exceed humans at any cognitive task] will be here in two years, in five years, in ten years. Even if it’s twenty years, that’s not long. That’s a flash. If it happens in forty years, that’s still huge. There’s an argument over whether it will happen at all, but I think that in forty years, things will be more unrecognizable to us now than the current day would be to people in 1985.


Other notable stories:

  • In recent days, details have started to hit the media from an explosive new book, by Jake Tapper of CNN and Alex Thompson of Axios, about Joe Biden’s decline and disastrous reelection campaign. The US edition of The Guardian—which has a long-standing reputation for obtaining and reporting on newsy political books—ran a story leading with apparently on-the-record remarks in which David Plouffe, a former top aide to Kamala Harris’s eventual campaign, told the authors that Biden “totally fucked us” by staying in the race too long; then, The New Yorker ran an excerpt reporting on “significant issues” that complicated the latter phases of Biden’s presidency, and a fundraiser at which he appeared not to recognize George Clooney.
  • C-SPAN is also trying to navigate the changing TV landscape—Paul Farhi reports for Vanity Fair that the broadcaster’s revenue, which depends on fees from cable and satellite providers, has plummeted as viewers have migrated elsewhere, and that major multichannel streaming platforms including YouTube TV and Fubo have so far declined to host it. “C-SPAN has been unusually aggressive (for C-SPAN at least) in lobbying to get on the streaming services,” Farhi writes, including via a Web pop-up calling on viewers to pester streaming services to “Step Up to Make C-SPAN Great Again.”
  • And it’s not just US news organizations that have struck licensing deals with American AI companies—overseas ones have, too. Yesterday, the French newspaper Le Monde, which already reached an agreement with OpenAI, announced a deal with Perplexity that will see Le Monde’s journalism integrated into the company’s AI search tools and Le Monde gain access to technology for its own use. Unlike the paper’s deal with OpenAI, Le Monde content will not be used to train an LLM under the Perplexity agreement.

Check out more coverage from our AI issue and our campaign in collaboration with TBWA\Chiat\Day here.


Camille Bromley is a freelance writer and editor based in New York.