AI intersects the newsroom in many positive, challenging ways
Conference Blog | 31 July 2023
Singaporean tech journalist Eileen Yu discussed some of the ways AI has already influenced journalism — and what impact these constantly evolving tools may have on news publishers — during the recent INMA Asia/Pacific News Media Summit.
While AI has been around for a while and has taken on different forms, there is a sudden fascination with the technology in all facets of work and life, Yu said. This is due, in part, to the advent of generative AI (GenAI), “deep-learning models that focus on human languages and are able to generate human-like responses with context.”
What’s driving AI
OpenAI, the creator of the popular AI tool ChatGPT, is also “starting to analyse images and videos” to go beyond text, so the tool will advance even more in the coming months. This will make it even easier for users to produce professional-looking graphics or written content, even if they’re not professionally trained in that craft.
But Yu said this alone wouldn’t have been enough to turn ChatGPT into a household word overnight. Credit for the surge in popularity, she said, should go to the tool’s simplicity and accessibility.
“Now, you can ask a simple question and get answers that actually make sense,” she said. “And because companies like OpenAI have made these tools accessible to everyone via the cloud, anyone can use it.”
What’s driving the newsroom
Regardless of the platform, Yu said, many of the things that motivate newsrooms remain consistent.
Newsrooms want to be the first to get news out, and “love exclusive stories you can’t read anywhere else.” But much of what drives journalists and editors is about informing readers in the best possible ways. Creating a historical record by reporting on what’s happening in the world is important, as is giving readers the big picture.
“It’s not enough to just say that a new prime minister was elected,” she said. “We want to put it in context — talk about why it’s important and what impact the election has on people.”
A day in the life of a journalist
There are essentially four tasks under the journalism umbrella, and the first three “take up 80-90% of our time,” Yu said:
- Tracking the news: “We can’t cover every news story, so we want to make sure the stories we’re covering are important for our readers.”
- Talking to people: “We join briefings and conferences, and we interview lots of people, so we can find out what’s happening on the ground.”
- Uncovering the truth: “We want to believe we’re truth seekers, that we can find out if someone is hiding something, to make sure we’re serving our readers by finding the truth behind every press statement.”
- Writing the story.
How AI helps
Since the bulk of a journalist’s time is spent on work that isn’t actually writing the story, Yu said there are several ways GenAI tools can be useful.
Transcribing audio or video to text using AI is a huge time-saver, she said. AI allows journalists to “quickly generate images, graphics, and videos” themselves, even when they’re not graphic designers. There are also AI tools for tracking and analysing chatter on social media, helping journalists “cut through the noise to find trends on what people are talking about.”
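To make the transcription point concrete, a minimal sketch of that kind of AI-assisted step might look like the one below. It uses the open-source Whisper speech-to-text model purely as an illustration; the model size and file name are assumptions, not tools Yu named.

```python
# Minimal sketch of AI transcription with the open-source Whisper model
# (assumes the openai-whisper package is installed and an interview
# recording exists at the path below; both are illustrative choices).
import whisper

model = whisper.load_model("base")           # small, general-purpose model
result = model.transcribe("interview.mp3")   # returns text plus timestamped segments

print(result["text"])                        # full transcript, ready to quote and verify
for segment in result["segments"]:
    print(f"[{segment['start']:.0f}s] {segment['text']}")
```

The transcript still needs a human pass, since names, jargon, and accented speech are exactly where such models tend to slip.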
Some places have been using AI tools to write articles, Yu said, but these tend to be “short pieces with information that’s pretty standard and formulaic, like earnings reports.” Using AI for stories like these means journalists don’t have to spend a lot of time trying to read and decipher earnings reports.
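Yu didn’t describe how those formulaic pieces are assembled, but one common approach is template-driven generation from structured data rather than a chatbot. The sketch below is hypothetical; the field names and figures are invented for illustration.

```python
# Hypothetical sketch of a template-driven earnings brief built from
# structured data (all field names and numbers are invented examples).
earnings = {
    "company": "Example Corp",
    "quarter": "Q2 2023",
    "revenue_m": 412.0,        # revenue in millions
    "prior_revenue_m": 389.5,  # previous quarter, in millions
    "eps": 1.12,               # earnings per share
}

change = (earnings["revenue_m"] - earnings["prior_revenue_m"]) / earnings["prior_revenue_m"] * 100
direction = "rose" if change >= 0 else "fell"

story = (
    f"{earnings['company']} reported {earnings['quarter']} revenue of "
    f"${earnings['revenue_m']:.1f} million, which {direction} {abs(change):.1f}% "
    f"from the prior quarter, on earnings of ${earnings['eps']:.2f} per share."
)
print(story)
```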
But there are some caveats
Because ChatGPT scrapes and analyses existing data, repurposing and repackaging it rather than creating completely new content, Yu said there are copyright infringement issues to contend with.
Journalists should be concerned about this, too, she continued, “because when we use ChatGPT to, for instance, transcribe or summarise an interview, that information can be used as training material for OpenAI — including personal information” of the journalist or the interview subject, and “we don’t know how that information will be used or regenerated.”
And while “there’s a lot of talk about how AI may get smarter than humans, we should also be focused on whether AI is actually smart enough to know when something is fake.” Right now, she said, it “doesn’t seem to be able to differentiate between truth and lies, legitimate news and fake news — and if it’s not smart enough to do that, it won’t be able to distinguish between real and fake news when it’s analysing social media chatter,” thereby skewing analytics and misinforming the journalists tracking them.
The good news
A journalist’s productivity can be “sky high” by incorporating some AI tools into the workflow, Yu said. “We spend a lot of time summarising all the information we collect,” and AI can make quick work of something that would otherwise occupy a large part of the day.
AI tools can also act as a silent brainstorm partner by generating a list of potential questions for an interview subject, she said, or suggesting headlines.
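As a rough illustration of that brainstorming role, the sketch below asks a chat model for candidate interview questions via the OpenAI Python SDK. The model name, prompt wording, and topic are assumptions made for the example, and the output is a starting point for the journalist rather than a finished list.

```python
# Minimal sketch of using a chat model as a brainstorming aid
# (assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name, prompt, and topic are illustrative choices, not Yu's).
from openai import OpenAI

client = OpenAI()

topic = "a city council vote on new flood-protection funding"
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You suggest interview questions for journalists."},
        {"role": "user", "content": f"Suggest five questions to ask an official about {topic}."},
    ],
)

# Output is a draft to react to, not a script to read out.
print(response.choices[0].message.content)
```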
And, although you need to be good at prompting tools like ChatGPT, “anyone can use it.”
The bad news
The bad news? “Anyone can use it,” Yu said.
“It raises a lot of questions about what, then, is the value of a journalist,” she continued, though it remains to be seen “how this plays out, whether AI can really replace what a journalist does.”
Since, as mentioned earlier, GenAI tools can’t always tell what’s real and what’s fake, “you run the risk of publishing misinformation” if you’re using AI tools — and “accuracy is incredibly important for journalists.”
Additionally, because AI isn’t creating new content, merely repurposing pieces of existing work, “when we use AI to generate stories or headlines, we don’t know if it’s already been used. We should be very concerned about this,” she said.
This doesn’t mean journalists shouldn’t use AI at all, she added: “It means we should mitigate the risks. We need to know where the risks are and try to put measures in place to mitigate their impact.”
Use cases (not just chatbots)
Yu acknowledged ChatGPT is the “flavour of the month right now,” but the ways in which AI tools can be put to work in the newsroom go beyond chatbots. At a basic level, AI can “power up the newsroom,” doing “pull, produce, push” work of gathering, producing, and distributing the news.
AI tools can also help keep up with social chatter, so “we know where to send journalists to chase a story.” There are analytics tools that allow journalists to keep tabs on what people are talking about in real-time, giving them insights into what matters to their audience.
Still other analytics tools help newsrooms measure what matters based on pageviews, shares, and views. Sometimes, journalists “think a story will get a lot of eyeballs, but it doesn’t always translate,” so AI “helps us better identify in the future the news that matters to our readers.”
AI tools also let readers easily curate their experience and see content tailored specifically to their interests — like the “for you” pages on social media.
Some publications have been vocal about what they’re doing with AI, though they’ve had varying degrees of success. BuzzFeed, for instance, has utilised some of the user information gathered from its popular quizzes to generate headlines and SEO content. The Associated Press uses AI to automate sports stories and to analyse social feeds, though it has said it won’t use GenAI to write stories.
CNET was open about using AI to write some of its articles and then shared some of what it learned after publishing an AI-generated article with multiple factual errors and plagiarised phrases. Yu said CNET brought up the importance of human oversight, and that “if you’re going to use AI to generate stories, you need to tell readers it’s an AI-generated story.” It also said that AI tools “need to be built with citations so users know that the material isn’t original.”
And although it wasn’t a news publisher, Yu said the story of the lawyers who used ChatGPT to produce a legal brief — a brief that cited several completely made-up cases — is further evidence for the need for human oversight.
Regulations and legal issues
The skyrocketing rate of GenAI usage has led several nations to begin work on AI regulations. There are “regulations in draft mode” in the United States, EU, UK, and Singapore, as well as at the UN, “about how data and content can be used to train AI models, how companies can use AI, and how it should be deployed.” China has already begun implementing some regulations about the use of GenAI.
Even without regulations currently in place, there are ongoing lawsuits about data scraping, Yu said, with authors suing OpenAI for scraping their material and using it to generate AI articles — not to mention writers and actors on strike right now in Hollywood because they “want to make sure AI isn’t going to be used to replace them.”
Despite increasing calls for regulation and oversight (or perhaps because of them), she said “there are a handful of tech giants continuing with their GenAI plans, at high speed — possibly to complete work before regulations kick in.”
Transparency, meaning full disclosure of the use of AI tools, is the best way forward. “AI isn’t inherently bad,” Yu said, “as long as people know how you’re using it and what you use in training models.” But without regulation, she added, there remains a “severe lack of disclosure in the market.”
What AI doesn’t do well (yet)
Even with all the impressive things AI can already do, there are still several areas where it falls short — at least for the time being.
Adapting to situations in real-time isn’t something AI can handle. “I moderate panels, and someone asked me why I don’t use ChatGPT to generate my questions,” Yu said. “But those conversations and my questions aren’t static. I react to what’s been said, and my questions will sometimes change. AI can’t do that.”
There’s also a “lack of training for non-English data” in AI. It “doesn’t always understand non-English speech, or accented English speech, and it can’t translate English into another language very well.” That’s a big barrier for many potential users around the world, Yu said.
Some research has already been done into how much people trust AI-generated content, and the results aren’t great for AI. “When someone reads a piece that they know is AI-generated, they trust it less,” Yu said.
Because of this, she continued, “there’s always cause for human oversight. We can’t just let AI run on its own — we want to know that the human behind it is adhering to policies and regulations, to make sure that it won’t harm the public.”
Yu believes “journalism is a craft that needs practice.” If journalists are “using AI tools to replace writing, for instance, you’re going to get out of the practice of writing and potentially not be as good at it in the future. If we don’t continue to practise, our craft will continue to degenerate over time.”