What ethical questions will generative AI bring up for media?

By Ariane Bernard


New York, Paris


Wrapping up my discussion of the three tentpoles of this year’s INMA Smart Data Initiative (testing, growing the data team, and AI/machine learning), I come to the last on the list.

The first two priorities are a bit more like homework, and this one is the recess. And look, yes, I’ve fallen into the hype pot of breathless headlines about all the new fancy AI tools.

But if I can defend this, it is precisely because of the breathless headlines that it is worth taking on: “Will GPT-3 save journalism?” (Back at ONA 2018, the peerless Robert Hernandez made us play “[Blank] will save journalism” — and, of course, AI was already heavily featured for comedic appeal.)

As robots become more like humans, the ethical and democratic challenges continue to grow.

More soberly, yes, the number of software libraries that can be used to build machine-learned products continues to grow. And we already see them at work in tools like auto-moderated comments, smarter paywalls, product and content recommendations, and automated content generation.

These libraries and frameworks get better, and more applications are built around them, so a greater number of organisations can leverage them. But INMA is a business organisation, not a research university, and I want to make sure we don’t lose track of what we need to look at:

What are production-grade applications for these technologies? 

While I enjoy reading about the latest attempt at creating Thanksgiving recipes using AI (sorry, fact check, I did not enjoy this — the stuffing recipe in this video should go to jail), this isn’t where our media industry is going to solidify its business, reach new audiences, scale itself up, etc. 

Production-grade means two things: Taking on core challenges of our industry, but also doing this in a manner that truly adds value. So while fun experiments make for good headlines, we need to identify the things we can build that are there to stay.

As the robots become more human-like, there are ethical and democratic challenges that could only get bigger

Later this year, GPT-4 is going to be released, and this may mark a new turn for generative AI. Already, some recommendations and outright legislation are pushing for AI transparency and the ability to audit algorithms. These trends at the intersection of AI technology and legislation have to be on our radar, as much as the new capabilities that new libraries will tout.

In particular, because our business is rooted in the distribution of verified (and therefore verifiable) information, our industry should take a particular interest in how AI-powered technologies that produce content can augment what we do — but also in how they may create confusion among our readers and users.

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.

