Rappler goes from early AI adopter to preparing for a responsible future
Conference Blog | 04 September 2024
As AI moves into news media organisations, companies are looking at how it fits in with their approach and what it can do to improve operations. At the Philippine online news site Rappler, that means considering not just how AI can serve as a tool for storytelling but also how it fits into the company’s mission and DNA.
During the recent INMA Asia/Pacific News Media Summit, Gemma Mendoza, head of digital services and lead researcher for disinformation and platforms at Rappler, explained how the company developed its policy to govern AI — and how it is continuing to explore the ways AI can benefit the organisation.
Experimenting with AI
To move into the future, she said, it is important to understand Rappler’s foundation: “A significant part of our DNA is these three pillars: journalism, technology, and the wisdom of the crowd. It’s all about innovative experiments — not just about the stories but also about innovative experiments around the intersections of journalism, tech, and community engagement.”
Much of what the company focuses on is how to get people involved, who it can help, and how it can create change in society.
“So naturally when AI came to town and became part of the mainstream, we jumped into the fray,” Mendoza said. OpenAI was looking for “innovative experiments around deliberative technologies” on how to set up democratic processes for deciding rules governing AI systems. Of the 1,000 or so entries OpenAI received, Rappler was one of just 10 companies selected to participate.
“One of the things that was really interesting is the capacity of AI to synthesise [information] and to, based on that, generate questions,” she said. “We started with this because this is a little bit of a low-hanging fruit.”
As part of this project, Rappler built an AI dialogue tool called Rai that uses prompt engineering to develop conversations around policy ideas. But, Mendoza pointed out, this is not the first time the company has experimented with AI.
During the 2022 elections, the company used AI to generate profiles of local officials and candidates running for various public positions. This was a significant undertaking, with almost 50,000 profiles needed.
“Those are things that bore journalists. I mean, if I ask a journalist to create these profiles for us, they’ll kill me,” Mendoza said.
Using ChatGPT, the team was able to input data on all the candidates and use AI to generate the profiles.
“We just needed to make sure that we’re disclosing that [we were] using AI,” she said, adding that the company created a process for validating the information: “Our research team vetted the data and was making sure that the content is accurate.”
The disinformation dilemma
More recently, Rappler has been conducting extensive research around AI deepfakes and disinformation. Mendoza said that over the past few months, deepfakes generated through AI technologies have gone mainstream and are becoming increasingly believable.
“What is very significant about fakes now is how believable they are,” she said, playing an example of a deepfake that was circulated featuring Rappler’s CEO, Maria Ressa. “The audio is really uncanny. It really makes people think it’s the same person’s voice. This is something we’ve never, never seen before.”
Deepfakes are also being used to mislead the public about military operations. The Philippines is in a territorial dispute with China, and deepfakes have been used to falsely show the president ordering a military attack.
“So we see how this technology could really have an impact on the information space,” she said.
Preparing for the future
As AI use becomes commonplace, Mendoza said it is important to make sure the company takes responsibility for its use of AI. Last year, it published its AI guidelines and pledged to remain transparent about its usage.
“We’ll make sure that it’s used with rigorous tests and the premium is on the supremacy of human critical thinking and judgments,” she said. “And we will never replace our journalists with AI. The AI is there to support and make the journalists’ work easier. That’s the whole idea.”
Rappler is also experimenting with using AI to transform text into video, enabling it to reach audiences on platforms like TikTok and YouTube. It is also looking at how to solve one of the biggest challenges facing newsrooms: shrinking distribution.
In the past, Rappler used Facebook to engage with users, but in 2023 it rolled out a community-based app that uses a “man and machine” approach to moderation. The app ensures secure communications — something Facebook could not provide — and has clearly established community guidelines that are enforced by AI and overseen by humans.
Rappler is also testing a conversational chatbot that can surface information buried in its archive. With more than 300,000 stories already published, the chatbot will help users find specific information from that body of work.
“The foundation here is that we are only using content that Rappler journalists and researchers have already gathered and vetted, essentially making sure that along the way we’re adhering to our ethical standards,” Mendoza explained. “And again, that combined man and machine approach to using AI is making sure that we maximise its use while minimising risks.”