AI, tech literacy for journalists must be a priority
Smart Data Initiative Newsletter Blog | 05 October 2023
Hi everyone.
Now it’s really fall because the conference circuit is back on. So, with just days before I turn my full attention to hosting our Smart Data Master Class series here at INMA, I first loaded up on the insights of other folks at Newsgeist — both at Lake Maggiore in Italy and in Phoenix for the U.S. edition. But I’m keeping the latter for another newsletter.
Also, and this has absolutely nothing to do with anything: Can I say that I had figured out weeks ago who the killer was in the season finale of Only Murders in the Building? I never get this stuff right. I fall for all the red herrings. But I got it this time, and I just need to lord this over everyone.
Swerving right back into my lane: a final reminder, in case our e-mail blasts got redirected to the wrong filters, that the Smart Data Master Class is just starting (today, October 5), and you can still join. The replay will be available to you even if you join after our first module.
Speak soon, Ariane
AI literacy for journalists
I just attended Newsgeist Europe, held this year on the beautiful shores of Lake Maggiore in Italy (yeah, it was a hardship). As usual, the unconference — which is supported by Google and gathers journalists and technologists to discuss the state of the intersection of media and technology — offered plenty of insightful perspectives from some excellent folks.
Because the event is held under the Chatham House Rule, I will, of course, respect this part of the deal. But there is still much to share with you, even without naming names.
In pretty much every one of the eight programming time slots, there was at least one session addressing the various angles of generative AI entering the news industry (or making further inroads there, as the case may be). And yours truly attended as many as she could. Appreciate my sacrifice here because I had a lot of FOMO about the other sessions, but I soldiered on.
But the anxiety is palpable, and it really expressed itself whether we discussed copyrights, ethics, applications of AI, or developing an AI strategy.
To be clear: I get it. The flood of inflationary headlines on AI — about the threats, the power, the possibly sinister outcomes and misuses — is just very difficult to keep in check. In fact, the sheer volume of headlines is itself part of this. Journalists report on the news, but they cannot pretend the news that concerns them doesn’t also affect them.
But some of these anxieties come from our industry’s history of taking on technological change, more so than from actual complexity or perceptible danger. And among those anxieties, I’d like us to look at what we are talking about when we talk about the need to “train journalists for AI.”
In the newly released report from the London School of Economics Journalism AI project, we hear that “almost 43% of responses emphasised the importance of training journalists and other personnel in AI literacy and other nascent skills like prompt engineering.”
By the way, I casually mention this report like it’s just another report, but it’s truly a gem. The team at JournalismAI, the AI-focused project of Polis at the London School of Economics, released an important report on our industry’s use and embrace of AI — generative and otherwise. It’s a deep, useful read into several of the facets of the growing use and usefulness of the technologies that make up AI and I highly recommend you read it.
So I’d like to pick up on this sense of urgency, expressed both at Newsgeist and in the JournalismAI report, and look at the topic of AI literacy, as well as applied AI skills like prompt engineering.
AI literacy is absolutely a necessity for journalists — as it would, really, be for any other productive member of society at this point. But in one of the sessions I attended, a Newsgeist participant remarked that “we’re journalists, we’re not very good at math” as an explanation for why they foresaw that training journalists on AI literacy would be an uphill climb.
I have given many a briefing to C-suite folks on AI in the past few months, and I can tell you this: It’s not a story about math. It is, often, a story about logic — but that’s not math. And, crucially, you often have to explain other parts of the Internet or computers to be able to explain certain aspects of AI.
In this respect, AI literacy is really just a chapter of technical literacy.
I certainly agree with the general concern that we may have an issue training journalists for AI literacy because I think we have done a poor job of integrating technical literacy as part of our job requirements and on-the-job training for journalists.
And this is by no means new. I can tell you my old team at a large publisher used to regularly complain about how much dumb work we had to take on simply because it seemed we couldn’t get buy-in on making sure the non-technical users of the tools we were building would have even the most minimal technical knowledge.
I’m not talking about anything crazy here: I personally was on the phone once for 20 minutes with a person who couldn’t carry out the very simple step I was describing because they didn’t know what the “address bar” was in their Internet browser.
And the problem, to be clear, wasn’t that this person didn’t understand what I was describing: It was that there was no remedial pathway that we could enforce where this person just had to level up on what I would then have called extremely basic technical literacy. As an institution, the thinking was that journalists were journalists and their expertise lay in creating good content.
The fact that this doesn’t happen in a vacuum but rather happens through the use of various tools, some of which happen to be technical, is why being a journalist does have a requirement of some technical literacy.
So in many ways, I am happy to hear that 43% of respondents to JournalismAI’s survey see the need for AI literacy as an important one.
Sidebar: If you want to hear in detail from the respondents of the JournalismAI survey on issues of training, you can turn to pages 30, 33, 52, 53, which all feature responses to this specific challenge.
I would only stress that, overall, this is a component of technical literacy — and that understanding how search works, or how content recommendations are built, is also very important. I am not talking about the details here — that is, to respond to my fellow Newsgeist participant, “not the math bit” — because, indeed, a journalist has no need for those and won’t be building such systems, which is where math would come into play.
But understanding the system intelligence of our technical world is a very large part — not just of making good decisions when it comes to how or when to use AI, but, in general, how to take on our future in a technical society with our heads screwed on, keeping in check the chaos of change.
Another Newsgeist participant from the DACH region observed that journalists wanted to be involved in the technical strategy of their organisation. The fact that AI is touching content creation is probably a large part of the interest here. I certainly wouldn’t assume that journalists want to be involved in designing a paywall, for example. But it means this is a great opportunity to level up our organisation’s training programmes and overall technical literacy.
Writing just last week in a column published by the Reynolds Journalism Institute, Paul Cheung, a technologist and CEO of the Center for Public Integrity, suggested this about approaching AI education and training for journalists:
“Emphasising comprehensive AI education and training for journalists, we need to examine the frameworks required for training and advocate for transparent collaborations between newsrooms and tech companies. Overcoming skepticism and fear of AI is also vital, as is addressing ethical considerations in journalism.”
Cheung offered this as a way to get started in pursuing this goal: “Compile a list of essential AI knowledge that all members of your organisation, from editorial to business, should currently possess.”
I certainly would agree with Cheung that this is a helpful, constructive way to get going, but let the words of the Newsgeist participant also contextualise how much this should be part of a broader effort to understand AI within the frame of overall technical literacy. We think of tech as “math” and, in newsrooms at least, push it far away. This makes the work of technical teams considerably harder — in the products we build and in our conversations across the organisation, because we don’t start from a solid common base of understanding.
As it were, it’s hard enough to know where we may be taking AI into our organisations. Poor technical literacy, and then, yes, AI literacy, will only compound the chaos.
Further afield on the wide, wide Web
Some good reads from the wider world of data:
- There were lots and lots of new developments from the tech platforms last week. ChatGPT can now search with Bing (for paid users), Meta launched its AI chatbot (her name is Meta AI), ChatGPT can now interrogate photos or videos, and, in a separate new feature, it can now speak.
- Amazon announced it was bringing generative AI to its Echo devices, although I hope it won’t be an excuse to abandon the core features here because all I’m asking her at this point is to turn off my ceiling lamp and she’s not always into it. (Reuters, Digiday)
- In Platformer, Casey Newton explains how Google’s Bard chatbot is trying to improve AI’s issue with hallucinations (when chatbots make up stuff because it is in their literary nature to generate content that “sounds” good). Google, of course, has access to that search engine thing they built, and Bard essentially checks its own work by asking Google about it. This approach, which makes use of different agents whose methods have little to do with each other, is often how technical progress happens. (Platformer)
- Closer to us, Poynter is reporting on the current state of collective bargaining at news organisations as AI seems more likely to “displace” (a rather gentle euphemism) news workers. It’s worth noting that there is always some ambiguity in job reductions where automation is invoked as a reason: “We don’t need this many people to do this because we have machines” is sometimes just “we wanted to lay off people to make more money, but we couldn’t just drop this without a reason, and the robots are as good a reason as any.” (Poynter)
- Now, there may be other avenues for journalists who’ve been displaced by AI … . Ok, sorry, I was joking, but it sounds like the AI companies may be in the market for high-quality writers. Rest of the World is reporting on the companies that provide training data for AI, which have been hiring writers, or poets, to improve the quality of the output in the final product. This is, essentially, a kind of reinforcement learning — where programmers of deep-learned algorithms observe a shortcoming, or want to improve an outcome, by providing carefully selected data to reshape the algorithm toward a different outcome. (Rest of the World)
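For the curious, the Bard “check your own work” idea mentioned above can be sketched in a few lines. This is a toy illustration only: the `search` function, the tiny index, and the word-overlap check are all hypothetical stand-ins, not Google’s actual method, which pairs a generative model with its real search engine.

```python
# Toy sketch of the "check your own work" pattern: one system generates
# claims, and a second, independent system (a stand-in for a search
# engine here) is queried to corroborate each claim. Everything below
# is a hypothetical illustration, not Google's implementation.

def search(query, index):
    """Stand-in for a search engine: return documents sharing any term with the query."""
    terms = set(query.lower().split())
    return [doc for doc in index if terms & set(doc.lower().split())]

def corroborate(claim, index, threshold=0.9):
    """Treat a claim as supported if nearly all its words appear in one retrieved document."""
    words = set(claim.lower().split())
    for doc in search(claim, index):
        overlap = len(words & set(doc.lower().split())) / len(words)
        if overlap >= threshold:
            return True
    return False

# A hypothetical mini-corpus standing in for search results.
INDEX = [
    "the eiffel tower is located in paris france",
    "mount everest is the tallest mountain on earth",
]

for claim in [
    "the eiffel tower is in paris",   # corroborated by the corpus
    "the eiffel tower is in berlin",  # fails the check: a likely hallucination
]:
    status = "supported" if corroborate(claim, INDEX) else "unverified"
    print(f"{claim!r}: {status}")
```

The point isn’t the (deliberately crude) word-matching: it’s the architecture, where a generator is double-checked by a retriever whose methods have nothing to do with how the generator works.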
About this newsletter
Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company, Helio.cloud.
This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at Ariane.Bernard@inma.org with thoughts, suggestions, and questions. Also, sign up to our Slack channel.