Readership, leadership are the top priorities when it comes to AI content

By Rich Fairbairn

Glide Publishing Platform

London, United Kingdom


Few workplaces have greater reason to be torn about the rise of AI than newsrooms and publishing businesses.

If one were writing a story about an industry conflicted over the technology’s ability to assist and accelerate, or to disrupt and diminish, a newsroom could be the perfect setting.

As much as the industry is in a rush to embrace AI, it’s often paralysed with indecision about how. Add to that fears over intellectual property (IP) theft and over-reliance on a single provider, and you can see why progress is fragmented.

The desire is for AI to lighten the logistical load so that humans can focus on creativity and the tasks they are best suited for.

Here are two parameters to consider when contemplating how to use AI: readership and leadership. Will your readership recoil if they learn you use AI? And will you risk the jobs, or worse, of your leadership?

All other concerns are solvable, including IP protection, AI providers going out of business, future legislation around AI usage, and other specifics like attribution.

Readers: unconvinced by AI

While journalists are quite open-minded about AI tools, audiences at large are still distinctly cold on AI content.

Surveys by Reuters, the BBC, the EBU, and UK survey body YouGov show that, overwhelmingly, the public dislikes even the idea of AI-generated news and content. AI can have a place in production — and even that is contentious for many — but the further away from what’s published, the better.

Publishers as diverse as Sports Illustrated, CNET, and some Gannett titles were all put on blast by readers and journalists when AI-generated content was sent directly to audiences, intentionally or not. As bad as being “caught in the act” was the quality of what was published. Once you’ve been laughed at, it’s hard to win back credibility.

For now, the debate seems settled: Even the suggestion that content has been written by AI can be damaging. So, whatever your use of AI, it seems clear you must be able to say a human created the actual story.

Leadership: still on the hook for what’s published

Aside from the potential damage to brand reputation and revenues, what other risks are there in using AI to create content? Well, jail time and financial ruin are attention grabbers.

News businesses, in particular, are aware of the risk of libel and defamation, as well as more serious charges of contempt of court, Parliament, or Congress. Plus, there are less clear-cut rules, such as accidentally inciting hatred or becoming its target.

There are countless examples where publishing even a provable fact could get you in hot water. Leaving aside lies or errors, which have their own forms of redress, statements of truth that interfere with judicial processes, breach an order, or offend the dignity of a person or institution of power can also land you in court or worse.

When a media company publishes a story, ultimately the writer, editor, and leaders (including the owners) all know they could face civil or criminal prosecution, or other threats, for what is published. It’s their choice to do so.

This is why journalism training includes large sections on law and why any news organisation of note has lawyers on permanent call to vet the most contentious articles and images.

It’s not infallible, but the risks are known and processes are in place to mitigate them. It’s one of the main reasons workflows in news organisations are the way they are: Be careful about sweeping them all away in the name of efficiency.

It is also why using AI to create and publish content directly to audiences without human oversight is a recipe for disaster. Leadership must ask what it is prepared to risk over something written, illustrated, or published by an AI tool.

Humans are vital to the loop

So, the question is how we can safely use AI in a way that puts readers at ease and puts no one at risk of ridicule, arrest, or worse.

At Glide, we have a mantra we have used for nearly a decade: “Augmenting human creativity.” It’s more relevant now than ever. For us, AI is about augmenting intelligence, not Artificial Intelligence.

It echoes author Joanna Maciejewska’s much-cited comment that she wants AI to do the boring stuff so she has more time for creative work, not to do the creative work so she is left with more time for the boring stuff.

Keeping in mind that AI is just another tool and not a goal in itself, we advise clients that whatever the tool is, it exists to help the humans get on with making great content by decluttering automatable drudgery and adding some insight.

Our view of how AI can be safely and consistently implemented, which serves as the backbone of how we built the Glide AI Assistant (GAIA), has been driven by these key factors derived from industry feedback and demand:

  • Ensure every workflow has people deciding what’s published. AI content pushed directly to audiences puts your reputation (and more) on the line. Expect things like browser plugins or search engine ratings to flag what they think is AI-generated content, and have a plan in place to reject that assertion if it is wrong. Never give your audience — especially a paying audience — reason to question their investment.
  • AI should not rely on a handover of IP, and especially not audience data. Check terms of service (ToS) and any suggested implementation. This is particularly relevant to publishers, many of whom are suing large-language model (LLM) providers for content theft.
  • Be able to easily swap AI tools as they advance. If it takes lots of effort to implement, picture the effort to roll back and unpick what you did (a minimal sketch of one way to keep tools swappable follows this list).
  • LLM providers are in a war, and not all will survive. Avoid getting locked into one. One reason GAIA has more than 20 LLMs is that publishers want choice. Remember, Italy temporarily banned ChatGPT on a whim.
  • Beware the bespoke: It’s expensive and likely out of date by the time it lands.
  • Swaths of AI legislation will arrive eventually. Be realistic about what it will likely demand, such as the flagging of AI content, exposure of ToS, audit trails, attribution, and user opt-outs. Think about the sort of detail and options the GDPR cookie declaration requires, but for AI. This particularly applies to your use of AI on the frontend of your sites and apps.
  • Ensure you have AI-free content workflows. That exclusive book extract you negotiated for? Or your superstar columnist? Expect some to reject any AI extraction of their work. Being able to guarantee being AI-free could be the leverage that gets the deal done quicker.
  • Be able to track your AI usage in things like imagery. It’s reasonable to assume legislation will demand it, and it could save you headaches against real or spurious copyright claims. Too many services lean on other companies’ ToS, and if one domino falls, it could be a lawyer’s dream. Our work with the International Press Telecommunications Council (IPTC) means we go well beyond the current requirement for marking up how AI was used (an illustrative provenance record follows this list).
  • Humans aren’t going out of fashion. As AI tools gravitate toward similar cadences and styles, having actual people write in their own voice will become a signal in its own right.
  • Humans still pay the bills.
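
To make the swappability points above concrete, here is a minimal sketch of the kind of thin abstraction layer that keeps an LLM provider replaceable. It is illustrative only: the names (LlmProvider, ExampleProvider, suggestSummaryFor) are hypothetical and do not describe GAIA’s or any vendor’s actual API, and the human review step is assumed to happen in the surrounding editorial workflow.

```typescript
// Hypothetical sketch: editorial tooling talks to this interface, never to a
// specific vendor's SDK, so changing provider means changing one registration.
interface LlmProvider {
  name: string;
  // Returns a suggested summary for a human editor to review; nothing is published by this code.
  suggestSummary(articleText: string): Promise<string>;
}

class ExampleProvider implements LlmProvider {
  name = "example-vendor";

  async suggestSummary(articleText: string): Promise<string> {
    // A real implementation would call the vendor's API here; the rest of the
    // newsroom tooling never needs to know those details.
    return `SUGGESTED SUMMARY (for human review): ${articleText.slice(0, 120)}...`;
  }
}

const providers = new Map<string, LlmProvider>();

function registerProvider(p: LlmProvider): void {
  providers.set(p.name, p);
}

async function suggestSummaryFor(providerName: string, articleText: string): Promise<string> {
  const provider = providers.get(providerName);
  if (!provider) throw new Error(`No provider registered under "${providerName}"`);
  return provider.suggestSummary(articleText);
}

// Swapping vendors is a matter of registering a different implementation,
// not rewriting workflows or rolling back a deep integration.
registerProvider(new ExampleProvider());
```

The design choice is simply that the effort to adopt, and the effort to unpick, both stay small.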
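
Similarly, to illustrate the point about tracking AI usage in imagery, below is one possible shape for a provenance record a publisher might keep alongside each image. The record structure and field names are invented for illustration; the Digital Source Type term URIs are taken from the IPTC NewsCodes vocabulary (for example, trainedAlgorithmicMedia for fully AI-generated imagery), and a real system would also embed the term in the image’s metadata and an audit log.

```typescript
// Hypothetical provenance record; only the IPTC term URIs are standard values.
type DigitalSourceType =
  | "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia" // fully AI-generated
  | "http://cv.iptc.org/newscodes/digitalsourcetype/digitalCapture";         // original digital photograph

interface ImageProvenance {
  imageId: string;
  digitalSourceType: DigitalSourceType;
  toolUsed?: string;   // the generative model or editing tool, if any
  approvedBy: string;  // the human who signed off on publication
  approvedAt: string;  // ISO 8601 timestamp, useful for an audit trail
}

const record: ImageProvenance = {
  imageId: "img-2024-000123",
  digitalSourceType:
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
  toolUsed: "image-generation-model",
  approvedBy: "picture.editor@example.com",
  approvedAt: new Date().toISOString(),
};

console.log(JSON.stringify(record, null, 2));
```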

