The implications of Artificial Intelligence — especially the new breed of “generative” AI tools with their spooky ability to give us answers that approximate journalism — reach across the newsroom and all departments in publishing.
There are fantastic opportunities to create better, more accurate information, more efficiently — yet these tools also carry real risks.
This is just the start, as I detailed in a recent blog. Some of it will feel familiar, but it may expose gaps in the ways journalists, newsrooms, and publishers deal with rapid technological change, and in how quickly and imaginatively we may have to respond to this fundamental upheaval in search.
But let’s start at the beginning:
What does “generative Artificial Intelligence” mean?
Generative AI systems fall under machine learning, through which practitioners develop models that can "learn" patterns from data without explicit human direction. These models then generate answers, having digested an enormous corpus of information to build the statistical relationships of a Large Language Model (in ChatGPT's case, said to be 45 terabytes of text data). (Sources: McKinsey.com/Wikipedia.)
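The idea of "learning patterns from data" can be illustrated, in a deliberately tiny way, with a bigram model: count which word tends to follow which in a corpus, then "generate" text by repeatedly emitting the most frequent successor. This is a minimal sketch for intuition only; the corpus and function names are invented for illustration, and a real Large Language Model is vastly larger and more sophisticated, but the underlying principle of statistical relationships learned from text rather than hand-written rules is the same.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each next word follows it."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def generate(following: dict, start: str, length: int = 5) -> str:
    """Greedily emit the most frequent successor at each step."""
    out = [start]
    for _ in range(length - 1):
        successors = following.get(out[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

# A toy "training corpus" (hypothetical, for illustration only).
corpus = (
    "the reporter filed the story and the editor read the story "
    "and the editor published the story"
)
model = train_bigrams(corpus)
print(generate(model, "the"))  # → the story and the story
```

The toy model's output is fluent-looking but content-free, which is a useful caution in miniature: statistical generation reproduces the shape of its training text, not verified facts.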
What is ChatGPT?
ChatGPT (the GPT stands for "generative pre-trained transformer") is a chatbot interface to a model created by OpenAI to showcase the potential of generative AI to produce coherent answers to complex questions or tasks — from a piece of journalism to a piece of software code. OpenAI has also released DALL-E, a tool that produces images with generative AI. There are others available. (Sources: McKinsey.com/Wikipedia/OpenAI.)
Who owns OpenAI and what exactly is it?
OpenAI is a San Francisco-based Artificial Intelligence research and product-creation organisation under the umbrella of a non-profit foundation with a for-profit company to monetise what it develops. It was founded in 2015 by a group including Sam Altman (now CEO of OpenAI and formerly President at Y Combinator), Reid Hoffman (founder of LinkedIn), Jessica Livingston (a founder of Y Combinator), Elon Musk (PayPal, Tesla, SpaceX, Twitter), Ilya Sutskever (computer scientist and chief scientist at OpenAI), and Peter Thiel (PayPal, Palantir Technologies, and Founders Fund). Microsoft has invested several billion dollars in OpenAI. (Sources: Wikipedia/OpenAI/CNBC.)
Should I be worried about using ChatGPT in journalism or about my journalists using it?
Transparency may be the key, especially if you are publishing anything derived from ChatGPT or another large language model purporting to act as AI-driven search. It might be smart to insist that journalists disclose to editors and readers when they are using it, even on an experimental basis. The best answer may lie in the warning from Microsoft in the Bing FAQ: “AI can make mistakes, and third-party content on the Internet may not always be accurate or reliable. Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate. Use your own judgment and double check (sic) the facts…” CNET and Bankrate may not have been transparent.
We already use Artificial Intelligence in some of our reporting. Should we stop?
Artificial Intelligence has already proven immensely valuable in data journalism and in some forms of rote journalism (like sports results and stock market reports), as well as in analysing enormous data sets for investigative journalism or health reporting, for example. It is a well-established process in many newsrooms and should not be confused with the evolving "instant answers" and emulations of journalism produced by these early generative AI applications. That said, generative AI as it develops may be able to assist with, or replace, some reporting and coding tasks.
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.