A recently published Reuters Report helped to further the conversation around the notion of a separate set of analytics for online editorial. While there are freely available ways of measuring the success of your articles or blog posts, not least Google Analytics, there’s a growing feeling that these tools are first and foremost built for marketing departments, and that they simply aren’t capable of recording user behaviour in a way that is useful to editors.
Several publishing houses have begun investing in their own bespoke reporting platforms, most prominently The Guardian with their ‘Ophan’. However, the cost of doing so is prohibitive for many media companies and independent bloggers. So I caught up with Dejan Nikolic, the CEO of Content Insights, who has spent the best part of the last two years looking at ways to improve editorial analytics and how editors can make better use of them.
You’re not an advocate of single metrics in any way, but I know that recently you’ve been looking at loyalty and engagement in particular. First of all, do you think it’s really possible to measure something like a sense of loyalty through analytics?
I do think there is a place for single metrics – they are essential for the marketing and sales part of our jobs, especially when put in context. But for editorial purposes, they are misleading. Even fantastic new metrics like ‘attention time’ or ‘article reads’, when taken alone, only tell you a small part of the story.
Ultimately, I think it’s ridiculous even to think that you can describe such complex behaviours as loyalty or engagement with such limited resources. What we are seeing from a lot of analysts is ‘returning visitors’ being equated with loyalty, and ‘attention time’ or, even worse, ‘page views per visit’ being described as engagement. Let’s not forget, these are metrics. Metrics, by definition, measure something and give you an output. We might say that attention time indicates some kind of engagement, but it sure isn’t measuring it.
The closest these analytics get to measuring something like loyalty is in showing that there are certain reader behaviours that, if repeated, hint at that emotion. Take ‘dwell time’, for example. That alone says very little. What if the person has left their browser open while they head off to make a cup of tea, and then closes it when they get back? Taken alone, it’s an extremely misleading metric.
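The ‘tea break’ problem can be sketched in a few lines of Python. Everything here is illustrative – the activity pings and the 30-second idle threshold are invented for this example, not how any particular analytics tool actually works:

```python
# Toy illustration: raw dwell time vs activity-filtered dwell time.
# Timestamps are seconds since page load. The 30-second idle
# threshold is an invented assumption, not a standard value.

IDLE_THRESHOLD = 30  # gaps longer than this are treated as idle time

def filtered_dwell(activity_pings):
    """Sum only the gaps between consecutive activity pings that are
    short enough to plausibly represent active reading."""
    total = 0
    for prev, curr in zip(activity_pings, activity_pings[1:]):
        gap = curr - prev
        if gap <= IDLE_THRESHOLD:
            total += gap
    return total

# A reader scrolls for a minute, leaves the tab open for ten minutes
# while making tea, then returns briefly and closes the browser.
pings = [0, 10, 25, 40, 60, 660, 665]

raw_dwell = pings[-1] - pings[0]      # 665 seconds "on the page"
active_dwell = filtered_dwell(pings)  # 65 seconds of actual activity
print(raw_dwell, active_dwell)
```

The point is not the particular threshold but that raw dwell time and activity-filtered dwell time can differ by an order of magnitude for the same visit.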
So, taken together – let’s say ‘married’ rather than ‘single metrics’ – do you think they form a better picture of something like engagement?
More or less, yes. For every platform, you may have a different definition of desired reader behaviour, so you try to see the patterns that in most cases lead to those desired behaviours. You put all these measures in a table and try to figure out the context. It’s complex maths from there on in, if you are honest about it. But that’s how (and why) we developed the CPI measurements that we use in Content Insights. You can use them to look at CPI engagement or CPI loyalty. You don’t need a degree in applied mathematics to understand them – the maths has already been done for you.
Let’s go back a step and talk about attention metrics.
Well, attention time is a beautiful metric because it measures the reader’s actions rather than the browser’s or the session’s, so it’s the first step towards human analytics. But it has to be taken in context. Is the reader attentive to the content, the ads, or perhaps just lost in crappy UX, trying to figure out how to get where they want to go?
It’s only when we combine the attention time with article reads that we get a sense of engagement with content. And the more you’re able to observe such metrics within their contexts, the closer you get to being able to say you can see a pattern of behaviour that leads to an engaged reader.
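As a sketch of what combining attention time with article reads might look like – the thresholds, field names and reading-speed assumption are all invented for illustration, not Content Insights’ actual logic:

```python
# Toy classifier: neither attention time nor reaching the end of the
# article counts as engagement on its own - only the combination does.
# The 250 wpm reading speed and the 0.5 factor are invented assumptions.

def looks_engaged(attention_seconds, scrolled_to_end, word_count):
    expected = word_count / 250 * 60  # rough expected reading time in seconds
    # An "article read": reached the end AND spent a plausible
    # fraction of the expected reading time getting there.
    return scrolled_to_end and attention_seconds >= 0.5 * expected

print(looks_engaged(240, True, 1000))   # read to the end at a human pace
print(looks_engaged(240, False, 1000))  # attentive, but never finished
print(looks_engaged(10, True, 1000))    # finished suspiciously fast
```

High attention without a read might mean ads or confusing navigation; a ‘read’ with implausibly little attention might mean a skim or a scroll-to-bottom. Only the combination, in context, suggests engagement.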
How are you able to see if the reader is ‘lost in navigation’?
You can’t, but we can see that they are engaged with the content. That would, with a fair amount of certainty, indicate that they’re not lost. So you discard what you know they are not doing.
If it’s possible to know this, and the digital publishing and marketing industries pride themselves on being so cutting edge and forward-thinking, why is everyone content to rely on unreliable metrics? Why isn’t everyone shouting loudly about this? Why hasn’t ‘The Attention Movement’ had more of an effect?
Unfortunately, I think it’s to do with ingrained business models as well as inertia. The content-producing part of the industry only mastered page views a few years ago. In the grand scheme of things, they’ve only recently got to grips with the idea of performance being measured. Whether they’re comfortable with it yet is another question entirely.
In short, analytics came from marketing and were set up to service sales optimisation. Conversion rates are essential to that side of the industry, and single metrics are great for marketers. But it’s not the way forward for the editorial side, and the fact that they use these marketing-oriented tools… well, it’s no wonder there’s confusion.
Do you think the digital publishing world is becoming more aware of the problem?
Yes, absolutely, but they work with what they have and what they have been taught. To cry bullshit you need to get more into that alien body and really understand it. And there are those that do cry out – rare lifeforms that understand both sides of the situation!
You were disparaging of the Reuters Report on Editorial Analytics recently. What disappointed you about it?
In truth, it was an eyewitness report – not analysis or thesis – so I guess there was no place for disappointment on my part. And, of course, the reporters were only doing their jobs, so I don’t want to criticise them too heavily. However, the conclusions were disconcerting, presenting the single metric fallacy as a valid measurement for loyalty or engagement.
And that fallacy is…?
At the end of the report, there was a table with recommended metrics for editors to follow, and it clearly stated that returning visitors are equal to loyalty. That’s frustrating.
Why do you think they keep repeating this mistake?
I think that ‘finding’ comes from the analytics community claiming to be editorial, not to mention the fact that there isn’t another adequate metric around. Add to that the fact that, as we’ve just said, analytics were set up for marketing and sales purposes…
Let me elaborate on that last point a little. In marketing and sales, the most important attribute of a metric should be that it is actionable, by which I mean that if a metric tells you something you need to be able to act upon it. Single metrics, when put in context, do just that. For example, if you see disproportionate traffic coming from Twitter, you act upon it by investing in Twitter advertising.
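That kind of ‘actionable single metric in context’ is easy to sketch. The referrer numbers and the 40 percent threshold below are made up purely for illustration:

```python
from collections import Counter

# Hypothetical referrer log for one day's sessions - the numbers
# and the 40% threshold are invented for this example.
referrers = (["twitter"] * 540 + ["google"] * 300 +
             ["facebook"] * 110 + ["direct"] * 50)

counts = Counter(referrers)
total = sum(counts.values())

# Flag any source sending a disproportionate share of traffic -
# a signal you can act on directly (e.g. invest in Twitter ads).
for source, n in counts.most_common():
    share = n / total
    if share > 0.4:
        print(f"{source}: {share:.0%} of traffic")
```

The metric tells you something, and there is an obvious action attached to it – which is exactly what marketing wants from a metric.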
Sure. I get that.
Conversely, compounded metrics, or ‘married metrics’ as you put it earlier – i.e. multiple metrics taken as a calculation together – aren’t actionable in the same way. You might find that likes + page views + unique visitors = the sum of how awesome your website is, but what can you actually do about it? The whole industry is framed in such a way that it always looks for actionable metrics. That makes a tonne of sense, but there is also a lot of bullshit in there that you can fob off as being useful when it comes to marketing reports. There’s no formula for ‘website awesomeness’, by the way. Don’t expect to find one!
So, in terms of content marketing – slightly off-topic, I know – what metrics would you be looking to analyse?
Honestly, it totally depends on what your client wants their content to achieve, which is tough because they usually have no idea. More often than not, they resort to the same old thing – page views, likes, impressions.
Yes – that’s the first thing I usually talk to my clients about. ‘What do you actually want this content to do for you? Is it brand awareness? Do you expect conversions?’
They can’t say, ‘we don’t know’, can they? So they repeat what they’ve heard at conferences and read on blogs.
A case in point – someone wrote to me an hour ago and said, ‘I want to start blogging for my brand website. Can you show me how?’ I asked them what they hoped to achieve and they had no idea. They actually said, ‘because I think I should be blogging.’
Exactly! 90 percent of cases are like that.
To go back to the Reuters Report, were you expecting something less… shall we say… naive?
I wasn’t expecting anything, to be honest. I got upset because it underlines where we are now – choking on analytics bullshit. Predominantly, my emotional reactions to this kind of thing come from the editor in me, because that’s who I am really. Business-wise, all of this just confirms the world needs something like Content Insights, which I couldn’t be happier about. But it will get harder and harder to shift people’s perceptions away from the fallacy if they get too entrenched, like they did with page views in advertising.
Obviously the likes of Chartbeat are also keen to change the way people think about this. Are there any others out there that you think are flying the right flag?
Not to my knowledge. At least, not in editorial. In sales, a lot of them are trying to move past page views – Mixpanel, Kissmetrics, Metrillo…
How about the publications themselves? The Guardian have built their own tool, haven’t they?
That seems to be the trend with bigger publishers, trying to build analytics around what they find important. But when you look at Ophan, the tool that The Guardian built, it has some fantastic features but it’s essentially just an old song sung more beautifully. The main difference is that all the journalists can access it and see their results in real time, but in terms of metrics it’s the same old story. These companies keep building beautiful castles on the same rotten soil.
I think it’s admirable what the FT are doing in trying to charge advertisers for attention rather than page views, but it does seem to be down to the advertisers ultimately. They hold the purse strings. To be honest, I can’t really understand why they’re not jumping on the attention thing. Wouldn’t you prefer to know that people actually spent a bit of time in the vicinity of your advert, rather than just that they clicked on the same page and may or may not have been anywhere near it?
I think it’s simply that they think they have something that works, and there’s no appetite to change. That would mean learning something new and changing the way reporting is done. It seems like too much work.
So, ‘too much work’ trumps ‘useful data’? Isn’t that ridiculous?
Welcome to digital!
So engagement and loyalty are clearly important to these industries in different ways. For editorial, how do you propose the problem gets solved? What is the answer to the problem of single metrics?
Single metrics like page views, unique visitors and even attention time – widely accepted as the editorial metrics for measuring readers’ behaviour – are easy to understand because they describe only one event.
To sum all of this up, ‘page views’ describe how many times the analytics tracking code was loaded into a browser, while ‘unique visitors’ shows how many unique sessions were opened by browsers in a given period of time. ‘Attention time’ is the only one that actually describes the reader’s behaviour, because it shows how long a visitor has been engaged with the page.
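Those three definitions can be made concrete with a toy event log – the schema and the numbers are invented for illustration:

```python
# Toy event log: (session_id, event_type, value). The schema is an
# invented simplification of what a real tracker records.
events = [
    ("s1", "pageview", None),
    ("s1", "attention", 45),   # seconds of active engagement
    ("s2", "pageview", None),
    ("s2", "pageview", None),  # the same session reloads the page
    ("s2", "attention", 5),
    ("s3", "pageview", None),
    ("s3", "attention", 120),
]

# 'Page views': how many times the tracking code was loaded.
page_views = sum(1 for _, etype, _ in events if etype == "pageview")

# 'Unique visitors': distinct sessions in the period.
unique_visitors = len({sid for sid, etype, _ in events if etype == "pageview"})

# 'Attention time': the only one describing reader behaviour.
attention_time = sum(v for _, etype, v in events if etype == "attention")

print(page_views, unique_visitors, attention_time)  # 4 3 170
```

Note how each number answers a different, narrow question: one about the tracking code, one about sessions, and only the last about what a reader actually did.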
In no way can we accept that those metrics describe behaviours like loyalty, engagement or even exposure (by which I mean the reach of content or ads, and whether or not people are actually exposed to them in any meaningful way), because those behaviours are much more complex than that. Loyalty is so much more than what the returning visitor metric describes, and yet that metric is widely accepted as a measure of loyalty. I think that’s crazy. If a visitor is returning to your website by clicking on promoted links in her newsfeed on Facebook, is she loyal or just skillfully click-baited? This type of fallacy just needs to stop.
In one of his essays against single metrics, the media critic Jeff Jarvis said that the true metrics for an article must be about the impact of the story. Did it achieve its goals? Is the issue it covers understood, and is there an action as a result – something in the real world, rather than a Google Analytics goal being triggered?
The ultimate goal of editorial analytics is to be able to get as close to that as possible – to show the impact of every piece of published content on any platform. While ‘married metrics’ were always a no-no in analytics because they are not actionable, in editorial we don’t have that issue. Our drivers should be the quality of content, engagement of readers and building the audience. Those are complex behaviours, and any tool claiming to show that this kind of behaviour is actually taking place has to look at multiple metrics, relations between metrics and types of behaviour in order to be able to say, however approximately, ‘Ok, there is engagement here’, or ‘there are a lot of signals towards building loyalty there’. And then you need to be able to present it in a comparable and understandable way.
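As a deliberately over-simplified illustration of blending multiple metrics into one comparable score – this is not the actual CPI formula, and the weights, inputs and normalisation below are all invented:

```python
# Toy composite "engagement signal": several behavioural metrics are
# normalised and blended onto a comparable 0-100 scale. The weights
# and inputs are invented assumptions; this is not the CPI formula.

def composite_engagement(article_reads, total_views, active_seconds,
                         expected_seconds, return_reads, total_reads):
    read_rate = article_reads / total_views if total_views else 0
    attention = min(active_seconds / expected_seconds, 1.0) if expected_seconds else 0
    return_rate = return_reads / total_reads if total_reads else 0
    score = 100 * (0.5 * read_rate + 0.3 * attention + 0.2 * return_rate)
    return round(score, 1)

# A piece that is read to the end, holds attention and brings readers
# back scores far higher than one that is merely clicked and skimmed.
strong = composite_engagement(600, 1000, 180, 240, 90, 600)
weak = composite_engagement(50, 1000, 20, 240, 2, 50)
print(strong, weak)
```

The value of this kind of score is not that any one weight is right, but that the output is comparable across articles and platforms without the reader needing to do the maths themselves.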
And, you know what? Those tools are coming online as we speak. It’s how and why we came up with the Content Performance Indicator (CPI) – the engine that runs Content Insights.
Thanks for your time, Dejan.
Head to the Content Insights website to find out more about the editorial analytics tool created by editors, for editors.