In 2019, Artificial Intelligence (AI) will begin selecting the news stories you encounter on major digital news brands. This development marks the humble beginning of what will become a revolution in news publishing. It will have major consequences, which can end up being good or bad, depending on how we as an industry decide to use the technology.
What is AI today?
The current breakthrough in AI’s application is driven by two factors: the supporting technologies are becoming more readily available, and AI has been repeatedly shown to yield concrete results.
Today, AI is already selecting much of the content you see on Facebook, Google, or Netflix. This content is not selected by self-governing robots but by algorithms that teach themselves via machine learning. They shape the mix of content in Facebook’s News Feed, Google’s search results, and Netflix’s content recommendations based on factors like your personal behaviour, your friends’ preferences, the time of day, and your geographical location. In a few years, this same type of self-learning algorithm will drive your car, diagnose your X-rays, and recognise your voice when you speak to Alexa, Siri, or Google Home via your phone, TV, or living room speaker.
The algorithmic news media editor
At first, you probably won’t notice much difference, just like you didn’t when using Google, Facebook, or Netflix. When using your favourite news brand in 2019, you will continue to get a broad selection of news stories. Some of them will be chosen by an editor while others will be selected by a self-learning algorithm.
However, the news you get from the self-learning algorithm will be selected specifically for you based on all the data about you, your interests, and your behaviour the algorithm has access to.
When self-learning algorithms are given power over what news stories you are served, the news brand is likely to become more relevant, alive, and efficient. You are also likely to experience it as better, just as you have probably experienced Netflix, Google, Alexa, and Siri becoming better. At the same time, new things will happen when self-learning algorithms start editing news media.
Most basically, neither news users nor media will any longer know with certainty why you are served the mix of news articles you are served. The reason is that the self-learning algorithm decides by itself what news stories to show you, based on data about you and on the optimisation criteria the publisher has told the algorithm to optimise toward. Consequently, it is the self-learning algorithm, rather than the editor, that edits the news brand. And even though we might be able to identify the direct consequences of this change (for users’ reading behaviour, for example), we will have a hard time identifying the potential side effects.
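To make this mechanism concrete, here is a deliberately simplified sketch of how a publisher-chosen optimisation criterion determines which stories rise to the top. All titles, probabilities, and numbers are invented for illustration; real systems learn such predictions from behavioural data, but the point stands: the mix follows the criterion, not an editor's judgment.

```python
# Illustrative sketch: the publisher sets a criterion, the algorithm ranks.
# All names and numbers are hypothetical, not from any real system.

articles = [
    {"title": "Celebrity scandal", "p_click": 0.30, "exp_read_secs": 20},
    {"title": "Budget analysis",   "p_click": 0.08, "exp_read_secs": 240},
    {"title": "Local election",    "p_click": 0.12, "exp_read_secs": 90},
]

def rank(articles, criterion):
    """Order articles by a single optimisation criterion, best first."""
    return sorted(articles, key=criterion, reverse=True)

# Optimising for clicks favours the scandal...
by_clicks = rank(articles, lambda a: a["p_click"])
# ...while optimising for expected attention favours the analysis.
by_attention = rank(articles, lambda a: a["p_click"] * a["exp_read_secs"])

print(by_clicks[0]["title"])      # Celebrity scandal
print(by_attention[0]["title"])   # Budget analysis
```

Change one line (the criterion), and a different front page emerges for the same reader. Why any given story appeared becomes a question about learned scores and the chosen criterion, not about an editor's reasoning.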
The future is not given
That AI will find its way to news media is inevitable, and front-running publishers such as The Washington Post have already been experimenting for some time. The reason is both the quality and efficiency improvements it will offer and the competitive pressure the news media is exposed to — including from Google and Facebook.
However, the way in which AI is applied is not given in advance. We should also be able to avoid some of the pitfalls spotted in the initial years of self-learning algorithms steering companies like Google and Facebook. These pitfalls include the reinforcement of filter bubbles, biased decisions about which news articles to serve (because of biased historical training data), and the serving of increasingly extreme content (because self-learning algorithms are sometimes given optimisation criteria that are too simplistic).
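The filter-bubble pitfall is, at its core, a feedback loop: what the algorithm serves shapes what the user clicks, and those clicks feed the next round of serving. A toy simulation (topics and policy are invented; real recommenders are far more nuanced) shows how quickly a simplistic policy narrows the mix:

```python
# Toy feedback loop: always serve the user's most-clicked topic.
# Purely illustrative; no real recommender works this crudely.
from collections import Counter

clicks = Counter({"politics": 1, "sport": 1, "culture": 1})

def recommend(clicks, n=10):
    # A deliberately simplistic policy: serve only the top topic.
    top_topic = clicks.most_common(1)[0][0]
    return [top_topic] * n

for _ in range(3):
    served = recommend(clicks)
    clicks.update(served)  # the user's clicks reinforce what was served

print(clicks)  # politics dominates; sport and culture never resurface
```

After three rounds, one topic accounts for nearly all recorded interest, even though the user started with no preference at all. Avoiding this requires deliberate design choices, such as injecting diversity into the served mix, rather than hoping the loop corrects itself.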
The ways in which AI has been used so far among the international giants can rightly raise a discussion about whether we are risking a new and collective form of artificial stupidity rather than intelligence. However, with awareness about pros, cons, and potential pitfalls at the right levels of decision making, we are able to make informed decisions about how self-learning algorithms should be used to create the news brands and support the public debate we desire.
Will 2019 be the year we begin discussing AI ethics in news publishing?
Given the potential consequences of the introduction of AI, it is easy to extrapolate both Zuckerbergian utopias and Muskian dystopias for the years following 2019. However, even though self-learning algorithms will make their first inroads into the news media, they will not (yet) replace human and journalistic editing among major mainstream news publishers.
Most likely, many major news publishers in 2019 will continue to have humans editing the front pages of their Web sites and apps, where most traffic exists and newsworthiness plays a large role. Self-learning algorithms will be allowed to edit the flows of news stories further down the front page and on article pages. Accordingly, in 2019 we are still a long way from self-governing editors, robot overlords, and Kurzweil’s singularity.
However, even with the singularity still far off, it is important that we begin discussing the use of AI in news media. This discussion is relevant both because news publishers are beginning to apply self-learning algorithms and because the use of AI in news publishing is likely to expand into news content production in the years to come. At least as importantly, the discussion is relevant because giants such as Facebook and Google have been using self-learning algorithms to serve news content to you for years.