Editorial teams can relax about personalisation, and here’s why
Smart Data Initiative Blog | 07 November 2022
I worked for several years as a homepage producer at The New York Times. A lot of how I think about the opportunity of personalisation is informed by the challenges of making a product on such a large one-to-many basis, trying to serve very different goals.
When your entire pool of content is good and the number of visitors is so large, you wish you had a way to create many more different homepages that allowed a better fit for subsets of your users. Those users could still be “well informed” by the judgment of an editor, but the relative ranking of items might differ, and some items might appear in the rankings for one group or individual user that would not be there for a different group or person.
My interest in personalisation started from both frustration at what I couldn’t do and the knowledge that we could do so much better, for the user and for the news organisation alike.
Several years later at The New York Times (I worked on the CMS team then), I had the chance to work on a long project that explored novel ways to seed personalisation. I can’t tell you how much I loved that project, and my interest in personalisation was so keen that I decided I only wanted to work on data- and personalisation-related projects from then on.
This is why I left The Times and joined Taboola, which knows a few things about massive data and personalisation at scale.
But in working across different projects that touched on the question of scalable and efficient content discovery, I also often wondered about the nature of the misgivings of other people in the room. It feels entirely self-evident to me that personalising the news and the news experience is a huge opportunity, and that the sharp edges of this endeavour can be efficiently polished.
Yet there are misgivings, of course, and they are worth thinking about. Now, the thoughts below are very much my opinion, and I’ll be happy to hear opposing views.
Filter bubbles and bad algorithms
Arguments against personalisation in our editorial products that lean on the risk of filter bubbles or bad algorithms are, in my view, straw men.
Filter bubbles occur naturally anyway. We self-select our friends and the circles we run in within our communities. We filter our own media diets. The echo chamber is something we already organise for ourselves (I’ll refer you to Axel Bruns’s excellent book, Are Filter Bubbles Real?).
And the argument about bad algorithms assumes far more unbounded datasets than what news publishers work with. The entire universe of YouTube video is very different from the production of a news publisher, which by definition vets every article in the set. In addition, personalisation strategies can range from the truly free-range (deep learning) to the far more tame and managed (rules-based).
So, where is the anxiety about personalising the news truly anchored?
In personalised news, the role of the editor is to judge articles on their own, but rankings and selections are derivative.
Selection, judgment, ranking
One argument I have heard many times is far more fragile than the others, but it is, I think, the heart of the anxiety: “The editor knows better.”
In three words (selection, judgment, ranking), it captures the perception and the fear that a role long at the heart of our industry and profession is being challenged at the highest level of its authority: judgment and knowledge.
In this challenge lies a question about changing roles, and therefore changing identities, for the people in editing positions closest to selection, judgment, and ranking.
When we look at personalising news, we are shaking some long-standing assumptions about the way journalism is crafted and distributed. We are not necessarily changing journalism itself. But change management and transformation are never easy, and personalisation is not a trivial amount of change to manage in people’s skills, careers, and, therefore, identities.
Personalisation asks a question about the role of editors in ranking content. If the set of articles that is delivered to user A is different from the set of articles for user B, we know rankings are not absolute. And ranking has been part of the role of the editor since the beginning of modern news making.
Seeing the role of the editor as the “ranker” of editorial content means that introducing a new automated function that takes ranking away creates an existential crisis.
I think a key reason for the unease of many journalists around personalisation is a deep-seated anxiety that their job is being made obsolete by a robot. While many journalists may in fact say that they are not worried about this, I believe this anxiety is latent. Editors will say “I know better,” “personalisation can make mistakes.” They will point to filter bubbles.
And I don’t mean to suggest these aren’t genuine objections. But I believe the main issue is really the perception that personalisation changes their job and puts it at risk.
But an intellectual equivalence has been incorrectly drawn here between judgment and ranking, and this is where journalists need not worry.
Rankings are the derivative expression of a judgment (which can be absolute) filtered through the matrix of a reader’s personal preferences. When a news editor in New York tells her senior editor, “This is a key piece to put on our front page in the New York edition, but it should go inside for our out-of-state edition,” she is doing a form of personalised ranking. Her judgment is the same (the piece is great), but her ranking is shaped by the circumstances and the people who will receive it.
Professional journalists already separate judgment from ranking. Personalisation is not about doing without the value judgments of editors: the dimensions by which we structurally assess the qualities of our content, such as scores (of which I am a big proponent), “best by” reading dates, and sentiment analysis.
The level of granularity of what editors see and judge is far more refined than what robots can do. Good personalisation algorithms will be good precisely if they lean into leveraging value judgments made by humans (and stored in such a way that the algorithm can read them). Companies that are leading the way in this space are building personalisation algorithms that do this.
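To make the separation between judgment and ranking concrete, here is a minimal, purely illustrative sketch of the rules-based end of the spectrum. All names, scores, and weights below are my own assumptions, not any publisher’s actual system: the editor’s absolute judgment is stored once as a score, and only the derived ranking varies per reader.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topic: str
    editorial_score: float  # the editor's absolute judgment (0 to 1), set once

def personalised_ranking(articles, topic_affinity, weight=0.5):
    """Blend the editor's fixed judgment with one reader's topic affinity.

    The editorial_score never changes per reader; only the derived
    ordering does. `weight` controls how much the reader's preferences
    are allowed to bend the editor's ranking.
    """
    def blended(article):
        affinity = topic_affinity.get(article.topic, 0.0)
        return (1 - weight) * article.editorial_score + weight * affinity
    return sorted(articles, key=blended, reverse=True)

articles = [
    Article("Council budget vote", "local", 0.9),
    Article("Transfer window roundup", "sports", 0.7),
    Article("New climate report", "science", 0.8),
]

# Same editorial judgments, two different readers, two different rankings.
sports_fan = personalised_ranking(articles, {"sports": 1.0})
local_news_reader = personalised_ranking(articles, {"local": 1.0})
```

With these made-up numbers, the sports fan sees the transfer roundup first while the local-news reader sees the budget vote first, yet the editor’s underlying scores are untouched; a reader with no stated affinities simply gets the editor’s order back.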
Because, importantly, one place where editorial judgment will remain unmatched (for some time, at least) is anticipation: anticipation of what the user needs or wants, of large-scale questions in society, of “playing out this scandal three steps ahead.” This is the kind of learning computers are nowhere near able to do.
Which is why the role of an editor as a judge is quite safe.
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.