AI should be understood for the operational — not magical — gifts it brings
Smart Data Initiative Newsletter Blog | 14 December 2023
Hi everyone.
We’re coming to the final newsletter for the INMA Smart Data Initiative. The project is wrapping up at the end of the year, and my contract is ending as well, so this is also my final newsletter to you all.
Not to worry, INMA will continue to look at issues in the AI space under a different umbrella and with a new consultant. As for me, it’s been a terrific two years at the helm of this project, and I’m sure I’ll continue intersecting with many of you through my consulting work in the product and data space. You can find me at ariane@upsideslabs.com.
And so, in the spirit of this end-of-the-year, final bookend to the project, I’m leaving you with a few thoughts and ideas on some of the big topics we’ve encountered in the past two years.
Thank you so much for coming along on this journey with me, and I wish you all holidays filled with light and joy.
All my best, Ariane
The “what if” of AI
There are lots of ways for good ideas to fail. There are lots of ways for useful technologies to be used badly — and I don’t mean “used for evil.” I mean applied to the wrong issues or applied in poor ways to issues.
We then look at these experiments and think of them as failures, but a post-mortem would actually show that our disappointment comes from unchecked expectations, from mismanaged egos and political stakes that skewed both what got built and how it was perceived.
This new age of AI introduces whole new families of tools and capabilities to our industries. Beyond specific tools and capabilities, it opens us up to so many “what if” questions that we wouldn’t have thought to ask just a few years ago.
How might we:
… personalise our news reports to a far greater extent while keeping the ethos of an editor’s guidance in our mix?
… make our content accessible to users who are low-information, have a disability, prefer certain specific formats, or speak languages we don’t use?
… make much of the processing around the work of journalism easier and cheaper, or even automate certain tasks entirely?
… create companion features that are always available with everything we publish: timelines, transcripts, summaries, catch-up features? (A minimal sketch of one such feature follows this list.)
… build genuinely useful archives that leverage the depth of our organisations and the context we can add to issues?
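To make one of these “what ifs” concrete, here is a minimal sketch of a companion-feature generator that produces a short summary and a dated timeline for a single article. It assumes the OpenAI Python client (v1) purely for illustration; the model choice, the prompt, and the build_companions helper are my own inventions for this example, and any hosted or self-hosted LLM could slot in the same way.

```python
# Minimal sketch: generate "companion features" (summary + timeline)
# for one article with an LLM. Model choice and prompt wording are
# illustrative assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_companions(article_text: str, model: str = "gpt-3.5-turbo") -> dict:
    """Return a short summary and a dated timeline for an article."""
    prompt = (
        "You are an editorial assistant. For the article below, produce:\n"
        "1. A three-sentence summary.\n"
        "2. A bullet-point timeline of key events, with dates.\n\n"
        f"Article:\n{article_text}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep the output close to the source text
    )
    text = response.choices[0].message.content
    # Naive split on the second numbered section; a real pipeline would
    # validate the output (and keep an editor in the loop) before publishing.
    summary, _, timeline = text.partition("2.")
    return {"summary": summary.strip(), "timeline": timeline.strip()}
```

The ten-odd lines are the easy part. The overhead around them (evaluation, editorial review, monitoring) is where the real work of these tactical features lives, and it is exactly why they add up to better products rather than a paradigm shift.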
These days, there is a bit of a cacophony in many organisations, with the C-suite asking themselves, “What are the opportunities and risks of AI?” and with folks closer to operations looking for applications where AI may move the needle. I’m going to sound like a real naysayer for a bit — bear with me — because, more than a technological hopeful, I am a technological realist.
So I’d like to use this last newsletter to contextualise the perception that our world is about to change because of AI. I fear so much disappointment from some of the narratives I read: the perception that an experiment failed or disappointed, that a technology didn’t help, when we hoped for too much or hoped for something it wasn’t particularly suited to deliver in the first place.
Ask yourself how you feel about the metaverse or the blockchain. To be sure, these are narrower technological trends than AI, but I would wager that a lot of why you feel underwhelmed is because of how they were sold to you in the media in the first place.
For AI, I want you to be excited for what may come but sober in terms of how much we can expect to change and how quickly.
Broadly, generative AI is too fledgling to be an organisation changer in the way I have seen some C-suites imagine. That is, at a three- to five-year horizon, it’s not going to significantly change the work and staffing of entire departments. It’s not going to allow anyone to wield an axe around and save a bunch of costs.
The place where AI actually lives is very tactical. But from the small changes and improvements we can make to our workflows, and from the better features and improved user experiences we can put before our users, we can collectively build stronger organisations with better products.
This doesn’t mean the C-suite shouldn’t take an interest in how generative AI is going to enter the organisation, because these tactical changes all require investment and require that their goals and outcomes be evaluated. Most companies cannot just sign blank checks to any department asking for extra funds to try out all the new tools, and making judicious bets as to where we want to experiment is an important responsibility.
Most of the everyday story about AI entering our organisations really plays out at the operations level. The reality of the changes we can and should hope to see as we’re able to shift and automate, as we’re able to scale ourselves in new directions, is about productivity and output — not a paradigm shift.
Remember, AI has a catchy name, but it is, in essence, a synonym for machine learning; without the catchy name and the promise of something that flatters us (intelligent, i.e., made in our image!), it would draw far less breathless attention. I hope the C-suite teams that bring up AI as a talkable topic will soon let it breathe in the space where it should be breathing — with operations teams and their leaders rather than keeping it overly top-managed because it’s the fashionable thing to chat about.
If this stays too much of a fancy thing to talk about, I’m afraid it will inevitably fall short of the lofty expectations we peg to things when they remain abstractions and the stuff of strategic white papers.
I say this because much as I have enjoyed the past year of fawning headlines selling the “New Age of AI,” I really want to stress how much this is a narrative that comes from the parts of Silicon Valley that are crucially dependent on hype to bring new investors (and billions of dollars) into the space.
I am not saying that we are not headed, eventually, into a new age of AI, but these headlines have a tendency to imply that this brave new world (good and bad) is just around the corner and that’s simply not true. Even as new technology matures and the speed at which it matures accelerates, its actual age of maturity is years from the moment when we could already imagine it — to say nothing of what we cannot even imagine of it yet.
Right now, each new version of someone’s LLM appears to make visible progress toward this bright new future. But when you start to use it, you quickly see the very large blind spots that make the technology useful only in such specific cases, and with such high overhead, that what you end up using it for is much more modest than what these bright headlines seemed to suggest.
I have a bit of a personal history when it comes to very deep R&D:
My dad led R&D for the French national railroad. In the 1980s and 1990s, he architected what eventually became the European Rail Traffic Management System (ERTMS). Astrée, its French ancestor, was kicked off in 1985. The first French train with ETCS (the first step of ERTMS) came in 2006 (if I remember correctly, the Germans actually rolled it out earlier on their network).
Before you tell me that mechanical engineering doesn’t have the same rollout as software engineering, I’ll tell you that the significant parts of this project that were not about wrangling Europeans around a shared system were actually all telecom- and software-based. Such a system is, primarily, an expert system — aka, rule-based AI. Extremely complex, but still: it took 30 years, and it’s not yet fully realised in 2023.
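For readers newer to the term: an expert system encodes human expertise as explicit, hand-written rules rather than learned model weights. Here is a toy sketch of what “rule-based” means; it is in no way actual ERTMS/ETCS logic, just the shape of such logic:

```python
# Toy illustration of a rule-based ("expert system") decision, in the
# spirit of signalling logic. Purely hypothetical, not real ETCS rules.
def max_permitted_speed_kmh(track_limit_kmh: float,
                            distance_to_stop_m: float,
                            braking_decel_ms2: float) -> float:
    """Apply explicit, human-authored rules to compute a speed ceiling."""
    if distance_to_stop_m <= 0:
        return 0.0  # rule: never move into an occupied block
    # rule: speed must allow a full stop within the available distance;
    # from v^2 = 2ad, v = sqrt(2 * a * d) in m/s, times 3.6 for km/h
    braking_limit_kmh = (2 * braking_decel_ms2 * distance_to_stop_m) ** 0.5 * 3.6
    return min(track_limit_kmh, braking_limit_kmh)
```

Every behaviour in such a system is a rule someone wrote, reviewed, and certified, which is a large part of why bringing one to maturity takes decades.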
My dad enjoyed explaining things to his 10-year-old daughter (and clearly the 10-year-old enjoyed systems even then), so there are various learnings I have decanted from those years, even if, of course, a lot of it was well beyond my understanding at the time:
The first is that while we talk about our vision and goals, and they feel very real, the timeline for anything of very high complexity is much more significant than it usually sounds. In part because specialists just understand this implied reality. They know it all takes a lot of time to bring very complex systems to maturity or production.
They don’t spend all their time rehashing that bit, though. And, importantly, they understand that lay folks like ourselves need to be sold on the vision and have it explained, while its implementation details, including all the gory work to make everything ship-shape, are just too granular and minute to bring to non-specialists.
I vividly remember asking my dad, maybe when I was 5 or 6, if he was working on the TGV, France’s famous high-speed train. He looked at me like I was nuts and told me, “Of course not. The people in my department who were working on the TGV were doing this 20 years ago.”
The gap between my question and my dad’s answer — I didn’t understand it then, of course — is that to someone who understands the complexity of something, it is readily obvious how much time the full roll-out of that complex technology actually requires.
This long-winded point is to say:
Today, we are all lil’ 5-year-old Ariane, who imagined that to work on something means we’re going to see that something very soon. But in fact — and while we may eventually see a version of what we could so vividly explain and plan for from a very early stage of the vision — the actual deployment of large-scale, highly complex engineering runs on a far longer timeline than what it sounded like on paper.
Let me close this little side trip into my personal history and get back to our more immediate industry problems:
Just because the terminal vision for something is a ways away doesn’t mean, of course, that the way stations don’t have something to offer. I can put my technological-hopeful hat back on. And that’s where it is so exciting to see publishers look for these way stations.
They are the “how might we’s” I started this post with. None of them are paradigm-shifting, but each of them potentially adds value and, importantly, helps us mature our understanding of how AI is going to progressively help us do more, better, and, hopefully, make what we do more relevant, usable, and valuable.
I am not forgetting the myriad ways AI represents a challenge (at best) or a threat to both our industry and society at large. But if the timeline is slower than what we may imagine, it also gives us time to strengthen what we do and a chance to see in what ways, more precisely, AI will make our world more challenging.
But as we actually mature with a technology, we sharpen and become more realistic and efficient in our understanding of both the upsides and the risks of what we are working on. We go from “broadly hoping and broadly fearing” to being more specific in our vision and hopes, and to having a more specific, better-calibrated understanding of the threats we face and the work we need to deploy to limit them.
This, too, is part of the hype story and why so many voices will be throwing themselves at the topic, looking for the optics of being part of the conversation.
Don’t let yourself be swallowed by doomsday AI.
The folks who agitate about this want your LinkedIn likes (best-case scenario), to sell you something (middle scenario), or to wag the dog away from too much or too little legislation (worst-case scenario).
Treat generative AI — in fact, all of AI — like the technologies they are: conduits to build better tools and, piecemeal, improve this or that, or serve a user a bit better.
Let a thousand AI experiment flowers bloom — in the newsroom, in the marketing team, in the product team, in the subscription team, and, of course, in the data team. Let these leaders run their own experiments, and don’t manage them too much if you’re the person holding the checkbook. Don’t worry about them too much either. Just put a money limit on how far experiments are allowed to run before they need to show ROI.
The rest of it is a very long journey, much longer than the headlines suggest.
Further afield on the wide, wide Web
Some good reads from the wider world of data:
- The European Union last week announced a deal on a draft of the EU AI Act, which proposes to regulate various parts of AI as applied or developed in the EU. I naturally looked for papers that could boil down that ocean a bit (the draft law, as of my writing this on Monday evening, is not yet available). And I come to you with an efficient at-a-glance summary, via Yann LeCun on LinkedIn (LinkedIn).
- A lay person’s explanation of news feeds and the personalisation that lives on top of them. Some really clear explanations of what the graph is, too, from a data engineer formerly at Twitter and LinkedIn. Send this feature to the non-data person in your life if they like nerdy things but still don’t want to join the dark side you’re on (Mike Cvet via Medium).
- Just a few weeks late on this, but this one comes from Laura Ellis, the head of technology forecasting at the BBC, and it’s a good one: Project Origin looks at media provenance, looking for ways to mark, authenticate, and otherwise protect the content produced by white-hat organisations such as ours. Origin is a consortium, so you should reach out to Laura if you want to add to the group, which already includes Microsoft, the CBC, and The New York Times (BBC).
- Digiday reports on how the Trusted Media Brands group went from a dedicated task force on AI to turning the responsibility to experiment into a shared goal across teams. Not to be too self-referential here, but I’d say that definitely fits in with the “let a thousand AI flowers bloom” spirit I was talking about in my newsletter this week (Digiday).
- Felix M. Simon, a researcher on media at Oxford University, published a new paper on the intersection of platforms and publishers in the age of AI. News publishers see the dependency they may build by leveraging the AI systems built by large platforms, but they also don’t necessarily see alternatives: the investment necessary to have anything viable puts such systems out of the reach of most, and the news-specific mission is a place where publishers uniquely have to focus (Digital Journalism).
- It’s a new season for Harvard’s Nieman Lab annual Predictions for Journalism feature. This year, a few entries deal with AI, but in general, it’s always a good read from a range of good people. Here’s “The Rise of the AI Class” as an entry point into the series (Nieman Foundation at Harvard University).
… And that’s a wrap for this final newsletter! Thanks for traveling the Internets with me. Find me there — Ariane
About this newsletter
Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company, Helio.cloud.
This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at Ariane.Bernard@inma.org with thoughts, suggestions, and questions. Also, sign up to our Slack channel.