As AI becomes more prevalent, knowing how to use it beyond small-scale experiments will help newsrooms separate themselves from competitors. Lyn-Yi Chung, deputy chief editor at MediaCorp’s CNA based in Singapore, shared her learnings from the company’s AI initiative.
The company launched its first major newsroom-wide AI project in 2018, and Chung said AI can be a game-changer for newsrooms.
During the recent INMA Asia/Pacific News Media Summit, Chung clarified that she is not part of the newsroom AI team but rather is part of the growth team to help the newsroom automate. She has three obsessions:
Making it easier for MediaCorp publishers to publish stories and videos faster.
Building systems that don’t depend on just a few people.
Innovation: “Mainly because it makes the job fun. It frees you up to do meaningful things,” she said.
MediaCorp began working on live video transcription in 2018 and rolled it out the following year. Last year, it began training AI to clip out news video reports from its TV news bulletins. It also has done text-to-speech — also known as robo reading — trained on the voices of the company’s presenters and daily news roundups summarised by AI.
“But whatever it is we do, there’s a human in the loop, and everything that we produce with AI is vetted and checked before it’s published,” she said.
Presently, MediaCorp is exploring using AI tools for practices like verification, such as doing a network analysis to detect inauthentic behaviour, DeepFakes, manipulation of photos and videos, etc. It is also looking at various automated video editing solutions to use on short-form videos found on TikTok, YouTube, Instagram, etc.
“We want to be able to take rushes and have them put together automatically in a rough cut and then we just slap on captions,” Chung explained. “And because we need to produce more explainers fronted by journalists on camera, we’re working on a solution where we can activate a green screen anywhere at the click of a button.”
Taking on large-scale projects
Its first major customised AI project in the newsroom was transcribing live streams for big events.
“We needed to timestamp the speech so we could produce stories and videos at record speed with breaking news,” she said. “The deadline is always five minutes ago, so this is essential.”
The AI tool needed voice recognition to identify newsmakers by their voices and had to be able to automatically detect speech, among other things. Existing transcription tools have problems with accuracy in certain live situations, so MediaCorp created its own.
The AI needed to be trained — in particular, it needed to understand Singaporean English — so MediaCorp worked with a partner and fed 150 hours of pure speech from its archives into the AI. Then it explained to the system integrator how it wanted to be able to copy lines quickly or play back a video to check the transcript. The vendor made it operational and built in collaborative editing, which Chung explained is “kind of like what you see in Google documents.”
This live video transcription tool is now used by many people in the newsroom and offers automatic speech recognition, voice recognition, facial recognition, and natural language processing.
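The core requirement Chung describes — timestamped, speaker-attributed speech that editors can copy quickly — can be illustrated with a toy sketch. The segment format and function below are illustrative assumptions, not MediaCorp’s actual system, which combines ASR, speaker identification, and collaborative editing.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds into the live stream
    end: float
    speaker: str   # label from speaker identification
    text: str      # output from automatic speech recognition

def format_transcript(segments: list[Segment]) -> str:
    """Render timestamped, speaker-attributed lines an editor can copy quickly."""
    lines = []
    for seg in segments:
        mm, ss = divmod(int(seg.start), 60)
        lines.append(f"[{mm:02d}:{ss:02d}] {seg.speaker}: {seg.text}")
    return "\n".join(lines)

# Invented example data for illustration only.
segments = [
    Segment(0.0, 4.2, "Presenter", "Good evening, here are tonight's headlines."),
    Segment(4.2, 9.8, "Minister", "We are announcing new measures today."),
]
print(format_transcript(segments))
```

Because every line carries its timestamp, a journalist racing a breaking-news deadline can jump straight from a quote in the transcript to the matching moment in the video.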
Making the cut
Its second big AI project, AI Smart Cut, provides a way to generate more digital clips from TV news. This allows for better reach and monetisation by creating more video inventory that can run ads.
“At the heart of it, I believe that TV news is ephemeral and we need to reinvent it,” Chung said. “We need to reinvent it as ultimately a digital product. We want to extend the shelf life and footprint of our TV journalist work by putting it online quickly.”
MediaCorp doesn’t divide itself along the lines of TV and digital; instead, it sees itself as a cohesive newsroom in which digital is the end product.
With help from a vendor, it trained AI to recognise the voices of its presenters and reporters. The system can now cut interviews or process video with 80% accuracy and has about a 70% to 75% accuracy rate on video packages with voiceovers. Clips can be uploaded to the CMS or YouTube within minutes of being broadcast.
“This is a huge improvement from 30% accuracy when we first started,” she said.
They also had to train the AI not to be sexist: “When we started a year ago, it thought all women sounded the same,” Chung said. “Instead of one presenter introduction and one video news report, it basically strung together one long clip of a few news stories just because they were female voices. So the fact that we managed to improve it is a huge achievement on our end.”
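The clipping step Chung describes rests on one idea: split the bulletin wherever the speaker changes, so each story becomes its own clip. The sketch below shows that grouping logic with invented segment data; it is a simplified illustration, not AI Smart Cut itself, and it also shows why the early bias mattered — if two different reporters are labelled as the same speaker, their stories merge into one long clip.

```python
def split_into_clips(segments):
    """Split a bulletin at speaker changes so each story becomes its own clip.

    Each segment is (start_sec, end_sec, speaker_label). Consecutive segments
    with the same label stay in one clip; a change of speaker starts a new one.
    """
    clips = []
    for start, end, speaker in segments:
        if clips and clips[-1]["speaker"] == speaker:
            clips[-1]["end"] = end  # same speaker: extend the current clip
        else:
            clips.append({"speaker": speaker, "start": start, "end": end})
    return clips

# Invented example: presenter intros alternating with reported packages.
bulletin = [
    (0, 15, "Presenter A"),    # intro to story 1
    (15, 95, "Reporter B"),    # voiced-over package
    (95, 110, "Presenter A"),  # intro to story 2
    (110, 180, "Reporter C"),
]
for clip in split_into_clips(bulletin):
    print(clip)
```

If the speaker model had wrongly labelled Reporter B and Presenter A as one voice, the first three segments would collapse into a single clip — exactly the failure mode Chung recounts.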
Lessons and takeaways
These two projects have yielded many lessons, and Chung shared her top seven tips for publishers looking to take on wide-scale AI projects.
- Find your pain points and scope the problem correctly. “Your AI idea has to be very rooted in reality. You want to be able to make sure that you have a very strong use case and it’s really worth the effort.”
- Decide if you should buy off-the-shelf and customise it or build your own.
- Find people to validate the data. “You just can’t buy a solution or bring in a solution and then have nobody to test it,” she said. “Emulating editorial judgment is super hard for machines. So finding the right people is important.”
- Agree on an acceptable success rate. It might be 80% accuracy or 85%, but anything above that is “almost impossible.”
- Socialise the idea and create buy-in. “You don’t want [employees] to feel threatened or nervous about whatever you are experimenting with. For example, if you are cloning someone’s voice or image, you need to know that person is on board with what an avatar is doing in their name and in their image.” Not getting that buy-in upfront could lead to having to pull the plug on a project at the last minute.
- Deploy it with the next phases in mind. “When you are ready to deploy the solution to a wider pool, be very clear on what’s going to be in phase two and three and so on. Because for a true iterative learning system, you need to improve it. People overlook that.”
- Make innovation part of your company’s DNA. “AI cannot be a side project or hobby to us. You need a pipeline of ideas for people to be excited about using it in general.” She suggested incentivising people to bring out the best ideas and make sure they’re part of the solution whenever possible.
Moving forward, Chung is excited to see how AI can be leveraged to make journalism jobs more attractive to people and help retain journalists, content creators, and data and product people.
“The news industry is a punishing one,” she said. “So to us, AI is not just about efficiency and productivity, it is also about business viability and visibility.”