Skepticism, legislation concerns are blocking AI adoption by news companies

By Richard Fairbairn

Glide Publishing Platform

London, United Kingdom


Pumping out AI content crammed with SEO keywords and burgled ideas is depressingly easy. And using it to game algorithms and make programmatic cash looks equally simple. So why aren’t more publishers doing it?

I had this very conversation recently with a few AI startups and technologists, all of whom are aware of the capability to use AI to conjure content from nothing and farm it for clicks. But why, they wondered, are publishers so against using it? Isn’t instant free content just what publishers want?

Image credit: Glenn Gabe

Setting aside the distaste for their idea of what AI content could be, I felt it was worth walking them through the challenges publishers face that are not shared by, say, social media firms. More than one insisted to me, for example, that the part of U.S. Section 230 that absolves social media firms of liability for content published on their platforms was relevant to publishers in Europe.

What blocks publishers from using AI has nothing to do with a lack of capability or understanding of the tech.

It comes down to things at the core of what being a publisher is. And, arguably, what marks the line between being a serious media organisation and an opportunistic site letting AI loose on its audiences without being aware of the risks.

Reader reaction still the main barrier

In meetings with publishers and media owners, a top concern they cite as preventing the use of AI for anything more than workflow efficiencies is readership perception of AI-generated content as low quality and low trust.

Let’s be honest, it’s not just a perception: Right now, it is low quality, and it cannot be trusted. And, the first people to spot weaknesses are those whose job it is to write content in the first place.

While the public may be optimistic about AI in areas such as healthcare and sciences, it has very low opinions of AI-generated content in journalism.

There are now a raft of studies on AI in journalism, the largest being Richard Fletcher and Rasmus Kleis Nielsen’s work for the Reuters Institute/Oxford University, which surveyed more than 12,000 people in six countries on the question of AI in news. I thoroughly recommend studying it; the relevant takeaway for publishers is that AI in news is not trusted by readers — especially not to generate news. That opinion cuts right to the heart of audience trust.

Even if AI content were not prone to fiction, as long as readers think it is ick, why would any publisher unilaterally take up the mantle of challenging that view?

The tide has turned in search

Skepticism around AI content has risen further since publisher data specialists studied the turbulent journey of start-up sites using machine-generated content to game the search systems and gain meteoric visibility.

SEO expert Glenn Gabe of G-Squared Interactive, presenting at the recent NESS News and SEO Summit for news media, explained his study of opportunistic sites which leaned on AI content: “Publishing tons of AI content without human involvement will work — until it doesn’t,” Gabe said.

Now Google has started enforcing its scaled content abuse policy and penalising sites stuffing pages with AI content. Sending AI content to the front-end is seen as a major risk by publishers that have spent years building visibility and authority.

Ridicule vs. credibility

Media loves to read about itself, and media scandals outpace most others for being whispered about and shared by members of the sector.

There have been multiple embarrassing moments for outlets appearing to lean on AI to write terrible articles or to use fake headshots of non-existent authors. These have been leapt upon by the public and the industry alike, and proved highly damaging.

Titles like CNET and Sports Illustrated found out their AI content isn’t a good look, and Google drew scorn and laughter for wildly inaccurate facts generated by its AI tools. It’s one of the main reasons our own AI tools do not work without human oversight.

Personal and corporate liability

In a presentation I recently made to publishers in Spain, the opening slide was titled “AIs don’t go to jail … humans do.” This is a concept that had not figured much at all in the thinking of the AI firms we spoke with.

Tech platforms pay little attention to content liability. Rightly or wrongly, most believe they work under the umbrella of Section 230 of the U.S. Communications Decency Act and thus don’t need to worry too much about what their machines generate, and they tend not to consider your liability for content at all.

This may change under proposed amendments to U.S. law, both to Section 230 with regard to “publishing” content and to the liability of AI firms for outcomes their systems enable. In the European Union, the change comes in the form of the EU AI Act, which is the first serious legislative package to bring AI decisions on content into play, and in the United Kingdom it will come in forthcoming legislation.

For now, your off-the-shelf AI tools will happily generate content that is libellous or in contempt of court, in breach of laws in jurisdictions worldwide. And, it is you or your colleagues who will be on the hook for it.

Funnily enough, publishers aren’t too comfortable with that.

Intellectual property (IP) ownership and data transfer

Publishers quickly realised their content is an asset that AI tools should not use without reward. The nature of that reward is for publishers to decide, by legal challenge or negotiation, but turning up with an AI toolset and telling publishers their content will be sent to a random AI tool is already becoming a hard-no scenario.

Aside from the transfer of their own assets, there are often vast troves of private and proprietary data within workflows which publishers may not have the right to send to an AI. This includes customer-related content, supplied material such as images or data, and other content provided under separately negotiated rights.

Any AI tool should default to being a closed model that does not pass any data back to the “mothership” for training. That’s how we deploy it within our tools for publishers — with data being escrowed. Additionally, it can be turned off fully without hindering workflows.

Private large language models (LLMs) are being explored by a rapidly growing segment of media and publishing companies. This is happening in the short term as a solution to IP leakage and transfer concerns, and in the longer term to prepare for a time when ownership of their own AI becomes more common and a potential revenue stream in its own right.
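
To make the pattern concrete, here is a minimal sketch (not a description of any specific vendor's setup): a standard client pointed at a privately hosted, OpenAI-compatible endpoint, so article text is processed inside the publisher's own infrastructure rather than shipped to a third-party service. The endpoint URL, key, and model name below are placeholders.

```python
# A minimal sketch of keeping generation inside publisher infrastructure:
# the client talks to a privately hosted model exposed via an
# OpenAI-compatible endpoint (self-hosted servers such as vLLM or Ollama
# expose this style of API). The URL, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # self-hosted endpoint
    api_key="not-needed-for-internal-deployments",   # often unused internally
)

response = client.chat.completions.create(
    model="private-newsroom-model",  # whatever model the private server hosts
    messages=[
        {"role": "system", "content": "Summarise the supplied copy in two sentences."},
        {"role": "user", "content": "Article text that must not leave the building."},
    ],
)
print(response.choices[0].message.content)
```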

Fear of being trapped by one provider

The dominant AI supplier behind many off-the-shelf connections is OpenAI. For now, it is far ahead in adoption compared with any other single LLM provider.

While this can create its own problems (OpenAI is hardly the most popular figure in the world of publishing), even setting the brand aside, the scenario is ringing alarm bells among technologists who do not want to repeat the “Google experience.” In other words, there is concern about one tech giant holding all the reins. This is at the core of why we offer LLM choice — 21 at present.

Not all of today’s AI providers will still be here in a year or two, so building in flexibility is required.
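
One common way to build in that flexibility is a thin abstraction layer: every model sits behind the same small interface, so a provider can be swapped or retired without rewriting editorial workflows. The sketch below is purely illustrative, with hypothetical class and provider names, and is not a description of how any particular platform implements its LLM choice.

```python
# An illustrative provider-abstraction layer: every model sits behind the
# same interface, so swapping or retiring a provider is a one-line change.
# Class and provider names are hypothetical.
from typing import Protocol


class TextGenerator(Protocol):
    def generate(self, prompt: str) -> str: ...


class ProviderA:
    """Stub for one hosted LLM; a real adapter would call that vendor's SDK."""
    def generate(self, prompt: str) -> str:
        return f"[provider-a draft] {prompt}"


class ProviderB:
    """Stub for a second, interchangeable LLM."""
    def generate(self, prompt: str) -> str:
        return f"[provider-b draft] {prompt}"


def draft_standfirst(generator: TextGenerator, article_intro: str) -> str:
    # Editorial code depends only on the interface, never on a vendor.
    return generator.generate(f"Write a one-line standfirst for: {article_intro}")


if __name__ == "__main__":
    active: TextGenerator = ProviderA()  # swap to ProviderB() with no other changes
    print(draft_standfirst(active, "Publishers weigh the risks of AI content."))
```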

Legislation

Up until now, the barrier talked about the least (but racing up the priority list and perhaps destined to become the biggest barrier of all) is the requirement for AI companies and those using the tech to adhere to new and fast-evolving legislation.

Our advice is to prepare in a manner that treats AI regulation as being as impactful to your business as the General Data Protection Regulation (GDPR). You need to understand what your obligations are and what your exposure is, and be prepared to amend your workflows and technology usage. You will need to look at everything from your content deals to employment contracts.

The primary rule set you can refer to already is the EU AI Act, which comes into force over the next 21 months and will heap new requirements on firms using AI. The United Kingdom is also considering new AI rules, as are U.S. federal legislators — and there may be some state-specific U.S. regulations too.

The top concern for publishers is the suggestion they will need to signal to readers any instance of AI content, or any use of AI in content creation. That could have a huge technology impact if it becomes the standard.

The key word is “could.” Why don’t we know definitively? Because the text of the Act still has a way to go in defining what is classified as a qualifying AI system, where the lines will be drawn, and what will be amended between now and enforcement of the Act in mid-2026.

One crucial paragraph concerning “transparency obligations for providers and deployers of certain AI systems” says: “Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video, or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.”

Also noteworthy in the text: “Deployers of an AI system that generates or manipulates image, audio, or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated.”

There are carve-outs for AI tools being used in the editing of content versus its creation, but much of the Act’s text is still quite woolly and presumes industry-specific codes of conduct will do much of the heavy lifting by detailing the exact interpretation of the Act at implementation level.

However you look at it, and judging by how GDPR panned out, one should assume “protection of the individual” will trump all. That would mean an obligation to be able to tell readers whether they are reading AI content or not.
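
Nobody yet knows exactly what “marked in a machine-readable format” will mean in practice, but conceptually it is just structured disclosure metadata travelling with each article. The sketch below is purely illustrative; the field names are assumptions, not a schema mandated by the Act or any industry code.

```python
# A purely illustrative sketch of machine-readable AI-disclosure metadata
# attached to an article. Field names are hypothetical; no schema is
# prescribed by the Act yet, and industry codes of conduct may define one.
import json
from dataclasses import dataclass, asdict


@dataclass
class AIDisclosure:
    ai_generated: bool          # was any body text machine-generated?
    ai_assisted_editing: bool   # was AI used only for editing or summarising?
    tools_used: list[str]       # which systems were involved
    human_reviewed: bool        # did an editor sign it off?


article_metadata = {
    "headline": "Example headline",
    "ai_disclosure": asdict(
        AIDisclosure(
            ai_generated=False,
            ai_assisted_editing=True,
            tools_used=["internal-private-llm"],
            human_reviewed=True,
        )
    ),
}

# Emitted alongside the article (for example in a meta tag or API response)
# so both readers' tools and crawlers can detect how AI was involved.
print(json.dumps(article_metadata, indent=2))
```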
