It’s a big business, the analytics industry. It seems that, in this day and age, you’ll find a company offering “Big Data” analysis under every bush, which shouldn’t be surprising given the importance most modern companies place on crunching the numbers. However, based on my experience working with organisations that have succumbed to the urge to explore their data, I’d argue that very few know what to do with it once it has been mined. If they haven’t already splashed out on a “data scientist” (who, more often than not, can be found sat in a corner waiting for questions that nobody really knows how to ask), then ‘analytics’ is often something the youngest office Millennial gets dumped with, the idea being that they probably “learnt this Google crap at school”.
“Ever wondered what a race to the bottom looks like? Plonk an editor or a journalist in front of a screen of real-time analytics and watch their reaction”
In the world of editorial, a general confusion around data, analytics and their uses seems to be particularly prevalent. Newsrooms hoist aloft massive screens showing real-time analytics, primed to chase pageviews and other single metrics down any rabbit hole they might spot. Editors find themselves studying spreadsheets, desperate to spot something – anything – that looks like a clue to hitting their targets. If you’ve ever wondered what a race to the bottom looks like, plonk an editor or a journalist in front of a screen of real-time analytics and watch their reaction.
There are at least two problems here: firstly, that editors are fundamentally storytellers rather than data analysts, and secondly, that – until recently – the analytics packages on offer were never intended for editorial use, operating instead on single metrics (pageviews, dwell time, and other figures that only point to isolated parts of the overall story) that were developed first and foremost for marketers. The frustrating scenario we’re left with finds a time-starved editor poring over a Google Analytics screen that was created with marketing in mind.
“Editors are fundamentally storytellers rather than data analysts”
Having spent several years working initially as one of those frustrated editors, and then with one of the first companies to serve up an analytics package truly focused on editorial, I have often wondered: how did we get here, and why is our industry lagging so far behind? So I did a little exploring.
The three eras of analytics
While you could argue that people have been measuring and analysing things ever since Fred Flintstone and Barney Rubble competed over the speed of their ankle-powered vehicles, data analysis with regard to modern business began to evolve in the early part of the 20th Century. Thomas Davenport of the International Institute for Analytics recognises three distinct eras, and as we’ll see, you have to wonder whether he had any idea that editorial analytics was even a thing. While he sees most analytics-related businesses bounding off into a happy future, at least some of you will recognise that the world of editorial is partially stuck some way in the past.
1.0: Traditional analytics
Davenport believes that the first of these became recognisable in the mid 1950s and faded in the early part of the new millennium. To many people working in editorial in 2017, some of what he describes from the early days will sound alarmingly familiar. He explains that, “analysts were kept in ‘back rooms’ segregated from business people and decisions”, that, “few organizations ‘competed on analytics’ – analytics were marginal to strategy”, and that, “decisions were made based on experience and intuition”. Of course, editorial decisions still need to be made with experience and ‘gut instinct’ to the fore, but it’s clear that, for many publishers, analytics still play little more than a “marginal role” in strategising, often because analysts remain – mentally, perhaps, rather than physically – “in back rooms”.
Davenport continues, explaining that in the early days, “the majority of analytical activity was descriptive analytics, or reporting”. Editors of a certain age will know that, until very recently, this was very much the case in our industry. Editorial analytics have always existed in the most basic sense – you won’t find an editor in history who didn’t have a reasonable idea of what their sales figures were. And if sales figures were the pageviews of their day (by which I mean data that lacks detail and is somewhat misleading), at least we had focus groups and letters to the editor to give us some sense of how we were being received. Some of us still do. Indeed, one might argue that editorial analytics actually preceded the analytical endeavours that Davenport is observing here, since evidence exists of focus groups in editorial use as far back as the 1920s, first as a form of market research and later as a way of developing propaganda techniques during the Second World War.
The 1.0 ethos was apparently, “internal, painstaking, backwards-looking and slow”. In this, I’m not sure we’re much further on. Internal? Check. Painstaking? Double-check. Slow? Certainly, making sense of editorial analytics is a lot slower than it ought to be. Only “backwards-looking” feels out of place. We have a tendency nowadays to leap on anything that claims to be “real-time”, which I suspect is, in part, because we like having our egos massaged (“look how many people are reading my article”) and, in part, because it appears to absolve us of enquiring much further (“I got half a million pageviews! What more do you want?”). Of course, this lands us in repetitive circles. We end up in the much-maligned echo chamber when we rinse and repeat without asking ourselves what it is that we’re doing. All of that said, in editorial analytics, being backwards-looking is not much of a crime if you take the term literally: looking back over what you’ve done is a good thing; learn from the trends your editorial analytics display rather than chasing real-time one-upmanship.
“We have a tendency nowadays to leap on anything that claims to be “real-time”, which I suspect is, in part, because we like having our egos massaged”
Other points in Davenport’s description of 1.0 that still ring true include, “Take your time – nobody’s that interested in your results anyway”, and, “if possible, spend much more time getting data ready for analysis than actually analysing it”. I’ve lost count of the number of times I’ve seen a company spend good money on a data analyst, only to find that their employment is an example of keeping up with the Joneses. A lot of editors tell me, mostly (and understandably) off the record, that they’re not really sure what questions to ask of their data, they don’t really understand what they’re being shown, and they’re not entirely sure how to action the results anyway. So the analyst toils away, wondering what it’s all for, just as they did 50 or 60 years ago. For an industry that prides itself on being first to break the news, we’re way behind the times in so many areas.
2.0: Big Data
While the term “Big Data” was only coined around 2010, the interest, for many industries, in what could be digitally harvested and how that data could be used goes a bit further back. As an example, Google Analytics began life as Urchin, the code for which was written around 1997. It became Google Analytics in 2005, and as of 2015 it was being used by just over half of the world’s websites (taking an 83.4% share of the analytics market). Again, in my experience, those who use GA most effectively do tend to be those with something tangible to sell – a sale at the end of the funnel that can be predicted, tracked and reported – and, at the risk of repeating myself, this is what the tool was set up for. However, with its plethora of single-metric bells and whistles, it’s extremely misleading to the modern editor, offering very little that can point to genuine reader engagement or behaviour. And yet, there it is – the most commonly-used (and misunderstood) analytics tool in the industry.
This period also coincides with the explosion of social media – platforms that were built on top of algorithms that required data in order for them to work effectively. This was a pivotal point in the evolution of analytics for a number of reasons. Firstly, because we were increasingly making use of systems that fed off instant data rather than human-analysed spreadsheets, and secondly because it made data ‘sexy’. On social media, the data surrounding your own profile had a direct effect on how you felt. It’s where we see the beginnings of that self-gratifying, worrying real-time trend I mentioned earlier, and where you start to see the rise of things like Klout and other celebrations of follower figures. It gave rise to an alternative universe that few of us could have imagined, in which bizarro situations and an obsession with the self became the norm.
For most ‘normal’ people, the sudden realisation that you could somehow measure your own popularity (however dubiously) marked the first time that statistics and analytics had entered their lives in an exciting way. Your teachers always told you that one day maths would be useful, but they never so much as hinted that it could have a positive effect on your dopamine levels. And businesses realised this, too. As Davenport points out, Big Data, analytics and the way the consumer could trigger and influence them, “not only informed internal decisions in those organisations, but also formed the basis for customer-facing products, services and features.”
“Your teachers always told you that one day maths would be useful, but they never so much as hinted that it could have a positive effect on your dopamine levels”
Again, it took some time for this to filter down into the editorial industry, and when it did, it initially came in the form of “most popular” article widgets – feeding, once again, on that most dastardly of metrics – pageviews – rather than anything that could actually be called a genuine indication of successful engagement. And, to some extent, that’s where we’re stuck. While it would be unfair to say we haven’t made anything useful on the back of the data we use (I’m thinking specifically about personalisation algorithms and the ability to feed the reader what we think they’ll most appreciate), to most journalists on the newsroom floor, data still starts and ends with their pageviews, real-time or otherwise.
Meanwhile, other data-reliant industries raced ahead. “There was a high degree of impatience,” says Davenport. “One Big Data startup CEO said, ‘We tried agile [development methods], but it was too slow.’ The Big Data industry was viewed as a ‘land grab’ and companies sought to acquire customers and capabilities very quickly.” The data geeks were allowed out of their back rooms, given pay rises and hailed as ‘data scientists’. The word ‘algorithm’ entered the language of the everyman. These were heady days.
Not to be left entirely behind, in the editorial industry, too, things began to look like the Wild West. Cloud-based data startups popped up claiming to have the answers we were all searching for. But, again, we had the questions all wrong. The tools on offer relied hugely on the same old single metrics, or alternatively encouraged us to navel gaze at our social media performance. Huge importance was placed upon the ubiquitous ‘like’, with few people stopping to wonder what that meant. You got 567 likes? Great! It sure looks like approval, but does it look anything like genuine engagement? What does that mean? What happened next? Did anyone actually read anything? Twitter began touting audience reach and page impressions and everyone got positively high on the sheer size of their numbers. Two million page impressions? Score! But what’s a page impression? Do you have any idea what you’re actually talking about…?
Somewhat extraordinarily, it took until 2016 before anyone dug up any serious data on how Facebook shares invariably lead to zero reader engagement. By this point, however, a few of us had begun to suspect that something was afoot; that the analytics we were using were creating bad habits while revealing nothing of value.
3.0: Fast impact for the data economy
The third of Davenport’s data eras is where we find ourselves now, only it’s easy to see at the briefest of glances that the editorial industry – as capable of implementing Big Data models as anybody else – has a ton of catching up to do.
Davenport talks of rapid and integral use of data at the highest levels, powering decision making within organisations, yet across newsrooms the world over we see editors struggling to find time to do the jobs they were trained for, let alone dive into an analytics report. And, as we’ve already seen, there are many who are unsure how they ought to implement their reports even when they manage to find five minutes to study them.
He talks about multiple data types often combined, but here most of us are still enslaved to analytical tools that encourage dependency on misleading information. (Crazily, many people in the industry appear to be aware of this, but do very little to change that behaviour. Isn’t one definition of madness doing the same thing over and over and expecting different results?)
Until very recently, few people had given much thought to the possibility of separating out data points and creating something that compares and analyses how they relate to each other. At a WAN-IFRA conference in November 2016, Kritsanarat Khunkham of Die Welt talked about the development of an ‘Article Score’ that went some way towards doing this, and only last week Marc Pritchard of Procter & Gamble called for the ad world to “grow up” and start using “industry-standard verification metrics”. Both are great steps in the right direction (especially for members of the ad industry, who have been pulling the wool over each other’s eyes for too long), and they coincide with the coming of age of CPI, the heart of a groundbreaking indication tool developed by Content Insights, which delivers precisely what these people are beginning to notice they lack, and is rapidly being taken up by some of Europe’s leading publishers.
All of this is not to say that we ought to give up and abandon ship. However, while Thomas Davenport depicts the world of business ticking along to the sound of data being successfully farmed, the Boston Globe editor, Brian McGrory, hit the nail on the head in a recent email when he said, “We are swimming in metrics. The goal now is to refine, interpret, and apply them.” Finding a way to manage that is the Holy Grail for most people at his level of management.
So, where are we now, and what’s next for editorial analytics?
While this article may read a little like the report of an errant schoolchild (“must do better!”), we’re actually in a reasonably promising spot. As I pointed out earlier, key figures in the industry are starting to call for a more mature understanding that brings a standard to the way we approach analytics as an industry. It has taken us a while to reach this point, and some of us have been calling for an improvement for several years, but we’re getting there. More and more of us are beginning to understand what’s needed, and at Content Insights we’re already some way along that road.
- It’s time we recognised that many of the tools we are using to measure our content performance were developed for marketers. If we’re going to take editorial analytics seriously, we need to recalibrate how we approach our data and make use of tools that are set up with editorial in mind.
- The editorial industry needs to fully subscribe to the idea that single metrics alone aren’t going to tell us enough. We need to look instead to scored metrics that understand how the strands of data that we gather relate to one another.
- Editors need to admit more readily that having access to data is not enough. We need to know what questions to ask of the data we collect, better understand what it means, and learn how to implement it to improve performance; otherwise, what little time we have to spare is wasted.
- Just as we need to move away from single metrics, we need to recognise when our dopamine levels are being played. Real-time data is a wonderful thing for parts of the newsroom – specifically the parts that deal with real-time frontpage layout and squeezing every last click out of your potential readership – but it doesn’t tell you anything about how your audience reads your articles, or indeed how to build on your success.
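To make the idea of a scored metric concrete, here is a deliberately toy sketch in Python. It is purely illustrative – the metric names, weights and caps are invented for this example and do not describe CPI or any real product’s methodology – but it shows the principle: normalise several single metrics so they can be compared, then weight them into one composite score in which raw reach alone cannot dominate.

```python
# Illustrative only: a toy composite "article score" built from three
# single metrics. All weights and normalisation caps are invented for
# this example, not taken from any real analytics product.

def article_score(pageviews, avg_read_depth, avg_dwell_seconds,
                  max_pageviews=100_000, max_dwell=300):
    """Combine three single metrics into one 0-100 score.

    avg_read_depth is the average fraction of the article read (0-1).
    """
    # Normalise each metric to the 0-1 range, capping outliers so one
    # runaway number can't swamp the others.
    reach = min(pageviews / max_pageviews, 1.0)
    depth = min(max(avg_read_depth, 0.0), 1.0)
    attention = min(avg_dwell_seconds / max_dwell, 1.0)

    # Weight engagement (depth, attention) above raw reach: the whole
    # point of a scored metric is that pageviews alone mislead.
    weights = {"reach": 0.2, "depth": 0.4, "attention": 0.4}
    score = (weights["reach"] * reach
             + weights["depth"] * depth
             + weights["attention"] * attention)
    return round(score * 100, 1)

# A heavily shared piece that nobody finishes scores below a modest
# piece that readers actually read to the end.
viral = article_score(pageviews=90_000, avg_read_depth=0.15, avg_dwell_seconds=20)
loyal = article_score(pageviews=8_000, avg_read_depth=0.85, avg_dwell_seconds=180)
```

Under these invented weights, the “viral” article lands in the mid-20s while the well-read one lands near 60 – exactly the reversal a pageview counter would never show you. A real scored metric would be far more sophisticated, but the shape of the calculation is the point.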
Many of us have known all of this for long enough. Isn’t it about time we acted together rather than sat stroking our chins? Until we start looking at these things as an industry, it seems to me that our own Era 3.0 remains some way off.
First published on the blog of Content Insights, specialists in editorial analytics.