AI shows promise, but news companies should be wary of overreliance

By Marcus Billingham-Yuen

News Corp Australia

Sydney, Australia


I have avoided writing about Artificial Intelligence (AI) because the headlines are saturated with constant updates and I lacked a concrete observation to share.

The trade press is still heavy on AI, but now I do have a view to share. It’s not a very original one, but it’s an important one when it comes to media: AI shows promise, but we need to be wary of overreliance.

Artificial Intelligence has come a long way in just a short period of time, as evidenced by this warped AI version of Will Smith eating spaghetti.

The achievements ChatGPT, Gemini, Perplexity, and the other players have made in the last 12 months will continue to be surpassed in the next 12 months.

What we expected to take years has happened in months. Consider video generation: A year ago, the warped AI version of Will Smith eating spaghetti was laughable. Now take a look at Sora, with its capability to create realistic and imaginative scenes from a prompt.

It’s accepted AI will supercharge the way newsrooms do things

It will help us reach a good output, faster, and at scale. The obvious benefits in text generation, grammatical proofing, videography, and concept creation are already present and available right now.

We are seeing a democratisation of creative tools, empowering the average person with skills we had only previously seen reserved for expert specialists.

Even more valuable are the research, ideation, and refinement capabilities they can provide to a commercial team when it comes to being out in the market and producing strategy, solutions, and products that solve client problems.

But getting from good to great is still a human job, because the models are only as good as the data they are trained on (which is mostly created by humans in the first place).

There are also perils in the AI-generated content for malicious uses

Take deepfakes, for example. The concept has been around for years, but its impact has intensified; the fallout from the South Korea elections earlier this year epitomised this.

The public was inundated with false information through deepfakes of senior politicians endorsing people they never endorsed and spouting policies that were never verified. The content spread across social networks like wildfire. While it was eventually pulled down and debunked, the damage to the politicians’ reputations was already done.

It might appear funny to have a popular figure saying things they shouldn’t, but it fundamentally infringes upon their image, voice rights, and dignity.

It gets harder to verify what is true versus what is fiction

From articles and business proposals to mockups and cover photos, how we set a precedent in using and declaring AI as news businesses will be the example many will look up to across industries, because we are the most exposed and the most seen.

TikTok has outlined rules for labelling AI-generated content.

Whether it’s through a citation, an acknowledgement, or an outright declaration of AI use, we need to remain conscious of how we convey this. Otherwise we risk misinforming audiences, or setting unrealistic standards for creation and output both in newsrooms and commercially.

So, before you jump on the tools and get excited about your magic AI-powered genie that answers every wish, consider the intent of why and how you’re using these tools. And, consider what level of transparency you will offer readers and clients about its role in your process.

