Making Reports

Sometimes the forest is so overwhelming that you don’t see the trees. Or the other way round.

I was discussing DTM and Analytics with a non-analytics colleague yesterday. He is a developer who works on AEM, Adobe’s experience management framework.

In our discussion about data that he’d like to analyse (and he was actually thinking about analysis, not reporting. Kudos!), there was a light bulb moment at some point when we both understood that we had fundamentally different views of what Adobe Analytics would or could do.

He was looking at it from the point of view of a developer who is used to verbose debug logging — throw in tons of data, then when something interesting happens, mine that data until you find clues.

Myself, I was looking at it from the Analytics point of view, which I thought was obvious and easy: an event becomes a metric in a report, and if you want dimensions for your report, send data into eVars.

Those two views are highly incompatible, of course, and I should’ve seen that incompatibility much earlier. The good thing: I realised that I had never actually spelled it out explicitly here, so I shall do that today.

It all starts with the question “how do I make a report?”

Define “State”

In the app that my colleague Ondrej builds, there are certain situations that he’d like to track. So we quickly settled that he should send Success Events whenever those situations arose. They are his KPIs, they’re what the application is built for.

It was more difficult for me to explain why he needed eVars. The example with the shop and the stamps works really well with people who know what a marketing campaign is and why you should track it, but it is totally meaningless for a developer who wants to learn new things about his software.
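To make the event/eVar split concrete, here is a minimal sketch of how a tracking call puts the two together. The `buildBeacon` helper is hypothetical (not part of any Adobe library), and the mappings of `event1`, `eVar1` and `eVar2` are assumptions you would configure in your report suite; the point is simply that the event becomes the metric and the eVars become the dimensions it can be broken down by.

```javascript
// Hypothetical helper (not an Adobe API): assemble a tracking payload
// from a KPI and the dimensions we want to break that KPI down by.
function buildBeacon(successEvents, dimensions) {
  var beacon = { events: successEvents }; // success events become metrics
  Object.keys(dimensions).forEach(function (name) {
    beacon[name] = dimensions[name];      // eVars become report dimensions
  });
  return beacon;
}

// "Sync completed" is the KPI (mapped to event1, by assumption); the
// eVars let the resulting metric be sliced by user tier and app version.
var beacon = buildBeacon("event1", {
  eVar1: "premium", // user tier (assumed mapping)
  eVar2: "2.3.1"    // app version (assumed mapping)
});
// beacon = { events: "event1", eVar1: "premium", eVar2: "2.3.1" }
```

Without the eVars, the event would still count, but there would be nothing to break the report down by.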

So in his mind, Ondrej was probably thinking that Analytics would give him at least partial access to the state of his app for the situations he wants to track, sort of a limited stack trace or something like that.

(There is a tool that can do that to a certain extent in the Adobe world: the Data Workbench (fka Insight), a part of Analytics Premium. It is meant for cross-channel analytics, though, so there would be a lot of tweaking involved.)

Adobe Analytics (the standard version, fka SiteCatalyst) works differently.

And, while we’re at it, for those of you who tag apps, I can understand if you think like Ondrej. Especially when the method you use to track a View or an Activity is called trackState!

The important bit to know is that you have to send everything that defines the state explicitly.
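In sketch form (this `trackState` wrapper and the context-data keys are purely illustrative, not the actual Adobe Mobile SDK), the only “state” a report will ever show is what the call carries explicitly:

```javascript
// Illustrative stand-in for an SDK trackState call: the hit contains
// exactly the state you pass in as context data, and nothing more.
function trackState(stateName, contextData) {
  var hit = { pageName: stateName };
  Object.keys(contextData || {}).forEach(function (key) {
    hit[key] = contextData[key]; // context data -> mapped to eVars/props
  });
  return hit; // a real SDK would send this to the collection server
}

// Every call spells out the state it wants reported on:
var hit = trackState("settings:notifications", {
  "app.version": "2.3.1",         // illustrative context-data key
  "user.loginStatus": "logged-in" // illustrative context-data key
});
// Anything not passed in here simply never reaches a report.
```

There is no verbose log to mine afterwards; if a piece of state wasn’t in the call, it doesn’t exist as far as Analytics is concerned.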

“… and then we start building reports”

It’s not just you developers, of course.

People on the business side often know about or have used BI tools in the past. They think in terms of databases, maybe OLAP cubes, and reports that must be built on top of those.

The truth is that Adobe Analytics is much simpler than that. (And I never thought I’d use the word “simple” in this context, so this makes me slightly happy. Geek.) It is almost but not quite as simple as Google Analytics, and both tools follow the same principle: the definition of a report is not negotiable. You send data, and this is what it’ll look like. Period.

Why does that even work?

It works because in web analytics, there are not that many different ways of looking at data. Most reports look very much the same: a couple of metrics broken down by some dimensions. On top of that, you have a few specialised reports like funnels and pathing, maybe some fancy representations like the donuts in Mobile Services that make the data more obvious. And you have segmentation for comparing different slices of your data.

But that is essentially it.

And the only decisions that have to be made are: which metrics do we need (define them very precisely!), and which dimensions should go with them?

For developers, that means: the reports are defined at implementation time. The “state” that you track is explicitly defined by the data that you send into the system. It doesn’t do much magic beyond that.

Yes, that means that you will likely change your implementation all the time. With new insight come new questions and the need for more dimensions. Absolutely.

For the business, it means: the reports are defined at implementation time. No building of reports later. And again: the need for new insight will lead to changes in the analytics implementation all the time, and that’s a good thing.

Now you also know why tag management is such a good idea…

There is a fine line that you and your friendly marketer are going to walk, the line between needs and complexity.

On one hand, everybody wants as much data as possible, maybe just in case. Given that what you can report is 100% defined by what you implement, there is a great drive to implement as much as possible. I can understand that.

On the other hand, complexity is your enemy, as it always is.

People will turn away from the tool if the choice of data is overwhelming. People will not be able to trust a system that is too complex. The maintenance of a complex implementation is a proper nightmare.

That’s why, instead of implementing everything at once, most people will tell you to work in small chunks: implement, clean up after yourself, then tackle the next chunk. Be nimble, agile. The implementation defines what your marketer can see, but the implementation phase is not a one-off milestone. It is just something that happens again and again. You will be making new reports all the time, and throwing away those that are no longer useful or used.

I hope this article made sense and maybe your light bulb went off. That would be great!

2 thoughts on “Making Reports”

  1. Lots of great food for thought here, and definitely makes a compelling case for tag management and *aiming* for iterative tagging. I do think we struggle to “throw away reports that are no longer useful” — especially if we have them automated! But, that’s really an aside relative to the core of the post!
