New York Times’ AI decision highlights publisher concerns with the tech
Content Strategies Blog | 25 March 2025
Eyebrows shot to chandeliers recently when it was revealed that the mighty New York Times is going “all-in on AI” for its newsroom and journalists.
A newspaper of any size using AI isn’t really a shock, but with The New York Times being the most prominent litigant against AI firms over copyright infringement, many a commenter pointed out the obvious dilemma the title might face in embracing AI tools.
Really, though, the challenge is no different from the one facing any media outlet.

Behind the headlines, though, is something a bit more revealing: how cautious and limited the company is choosing to be in its use of AI. This shows that, despite its stature, or perhaps even because of it, the paper is in a similar boat to other media organisations grappling with the question of how to use AI safely. Industry standards and reader perceptions still take precedence.
As the most read and (according to some surveys) most trusted newspaper in the world, the NYT might have been expected to launch into AI with a grand flourish of ground-breaking innovations.
In fact, it’s quite the opposite: caution wins the day. According to reports, the acceptable use of AI in the newsroom is remarkably low-stakes. The title is treading very carefully.
Humans in control
Semafor reported that guidelines for NYT editorial staff include experimenting with approved AI tools for things like headline options, article summaries, research, suggested interview questions, quizzes, social copy and FAQs, along with prompts such as:
- How many times was AI mentioned [in a podcast]?
- Can you revise this paragraph to make it tighter?
- Pretend you are posting this Times article to Facebook. How would you promote it?
- Can you summarise this play written by Shakespeare?
Reporters still bear responsibility for anything published and are also advised not to use AI “to draft or significantly revise an article, input third-party copyrighted materials (particularly confidential source information), use AI to circumvent a paywall, or publish machine-generated images or videos, except to demonstrate the technology and with proper labeling.”
Like so many other titles, including many we work with, the NYT has looked at the assortment of available AI tools and alighted on a similar level of AI use, ruling that caution is more important than boldness when it comes to using AI in content.
In the newsroom, guardrails are most prominent in the content creation and research phases. Plus, some uses of AI tools are banned outright, such as uploading confidential documents supplied by a whistleblower, with source protection the concern, according to reports.
As with countless other publications, AI content created and delivered direct to readers without human oversight is an absolute no for the time being. This is clearly influenced by the high trust readers place in NYT stories and the belief that AI content is still not as trustworthy as content created by people.
Perfectly timed to shed light on this was a new survey into public perceptions of generative AI in media and journalism by APO, the Australian policy and research group. It’s a good read and echoes similar surveys: public mood toward AI news isn’t changing much, and audiences remain very circumspect about AI-generated journalism and the use of AI in news.
A theme has emerged
Equally well timed, just a few days before news of the NYT’s AI adoption, an AWS event for publishers in New York, attended by Glide and a host of other media and publishing luminaries, saw senior industry figures echo the view that trust in AI remains the prime concern behind their cautious uptake of the technology.
An entire panel was dedicated to the question of maintaining quality, trust, and institutional authenticity amid the growth of AI content. It reinforced the view that publishers are now focused far more on predicting and managing the knock-on effects, positive and negative, the technology can bring than on simply using it for workflow tasks.
Figures from organisations as varied as specialist research database publishers, publishing standards authorities, news and book publishers, and specialist publishing law firms all weighed in on the issue of trust in media in an AI age:
- Todd Carpenter, executive director of the standards body NISO (National Information Standards Organization), explained that attribution and sourcing are critical to users’ belief that systems and content can be trusted.
- Manu Singh from News Corp described how journalists using the technology are among the first to spot where it goes awry and end up becoming the firebreak that prevents AI errors from going any further.
- Andrew Jones, director of AI and Machine Learning operations at Wiley, described how tracking and audit trails for AI content have become critically important, not just so that facts remain trustworthy but also for legal protection in the future.
- And publishing law specialist Ed Klaris spoke about the perils of trying to assert copyright on AI-produced work. Klaris also suggested using anti-piracy software to double-check that AI-generated work is not simply an inadvertent copy of someone else’s originals.
While The New York Times is a benchmark, it is telling that the title is not heading into uncharted waters for its AI play. I don’t think this has anything at all to do with its court cases against OpenAI and Microsoft, or that it will markedly change based on any judgment in those cases.
Reader trust remains part of the company’s DNA. For now, it is evident that whatever ambition there is within the title to leverage AI, it will not be pursued at the risk of its audience’s regard for its content.