AI’s “human factor” complicates its capabilities, realities

By Stijn Vercamer

Mediahuis Belgium

Antwerp, Belgium


My main takeaway from the recent INMA Media Innovation Week in Helsinki is that we should not forget the human factor in both AI and the digital transformation as a whole. While AI is rapidly transforming the media industry (and other industries), it is critical to keep humans in the loop.

I spoke about the need for human oversight in change and AI-driven processes with my Mediahuis colleagues Heino Schaght, group innovation manager, and Sarah Owsik, business partner in AI.

Media companies should not forget that AI is not the same as a human, and, as such, any human-like tendencies of AI need to be carefully considered.

We spoke about the fact that the success of digital transformation (including AI) depends on both technological advancement and empowering people to work with these innovations.

Here are some insights I got from this conversation.

AI tools

Owsik made a solid point on the adoption of AI tools: “This involves more than just mastering a learning curve regarding the tool itself — something we can manage with training,” she said.

Context is also important: “It can impact your way of working, processes, and the skills required.” A broader approach to change management requires bringing people on board, she said.

I couldn’t agree more. It is exactly why I chose to start a large change track for the big transition to a new platform at Mediahuis Belgium – something I detailed in an earlier blog post co-authored with Sarah Faict.

Managing expectations

Owsik also pointed to the importance of managing expectations.

“Certainly, with AI, perfection shouldn’t be expected. People make mistakes, and AI can make different types of errors that may feel less intuitive than human mistakes,” she said. “But, it’s a matter of continuing to iterate and searching for the boundaries of the technology.”

She noted that, especially in regard to generative AI, she likes to keep a quote from a workshop at Google in mind: “It’s a language model, not a truth model.”

“That helps to understand why the results are not always as accurate as you expect,” Owsik said, “which also brings me to a crucial question for the media industry: Where do we draw the line? When is it good enough? How will we keep building trust — one of the big challenges we have as an industry?”

Product considerations

This is all interesting, but how do we use these insights when building our products?

It’s a great question for Schaght, who offered the development of the internal tool, Scribe, as an example. Scribe converts audio to text and is used extensively by our journalists to transcribe recorded interviews, saving a lot of time.

“We work with direct feedback from our editorial teams. In that way, Scribe was developed with short releases, gathering user feedback, and then adjusting or removing features accordingly,” Schaght said. “Additionally, we adopt a collaborative approach with constant feedback loops. These guide the development to ensure the technology remains relevant, effective, and aligned with the needs of the users and the editorial objectives.”

I can add that, for our future editorial AI products, we also work closely with newsrooms. But to keep it manageable, we mostly put one or two newsrooms in the lead. This means we can have a proof-of-concept before we roll it out on new group platforms.

Tech doesn’t think

This conversation brought us to a broader discussion that I don’t want to keep from you.

First, we delved into the environmental impact of AI. Its high energy consumption contributes to increasing carbon footprints, which is a growing concern for sustainable practices. As AI technology continues to advance and expand, the need for sustainable solutions becomes even more pressing. By focusing on energy-efficient algorithms and integrating renewable energy sources, we aim to mitigate future environmental impacts.

This is the path we are committed to as we move forward and implement AI tools not only in our internal operations but also in the products we offer to our customers.

Even more intriguing was the point Owsik raised about how we often attribute human characteristics to technology: “The human brain is naturally inclined to attribute human traits to objects, and big tech capitalises on this. This applies not only to avatars that closely resemble human faces and expressions but also to language.

“For instance, when you type a question into ChatGPT, you’ll often see words like thinking or engaging appear. Our language also mirrors this tendency, like when we say ‘AI hallucinates’ when models generate inaccurate or fabricated responses, creating information that wasn’t in the original data. I hope everybody understands that there is no way AI can actually hallucinate.”

It makes me think of another conversation I had with one of our analysts. He told me he does not understand why people thank ChatGPT or add “please” after a prompt. He does not see the use.

And, indeed, there might not be any use, but it’s just human to do so. I am still not sure if this is something good or something bad. It seems to me that if we attribute human characteristics to AI, we start to see this technology as something it’s not — and maybe never should be.

