Testing is like an Onion

I have tried to justify my approach to testing for digital marketing a couple of times. Given that tools like ObservePoint or HubScan are available, the fundamental question is: do you need my approach at all?

My personal answer to that, obviously, is yes, you do!

If I have already convinced you, feel free to stop reading. Thank you for your trust, and why not leave a comment about what testing has done for you?

For everyone else: keep reading, this time I’ll use the omnipotent onion-metaphor! And a nut!

Goals & Layers

We are talking about testing in our business in the context of data quality.

Whether data quality is about acceptance and trust (the business being willing to see or act on your analysis, or not), or about action (some tool using your data to optimise your site, like Adobe Target or Media Optimizer), it is the fundamental aspect of Analytics.

Your friendly marketer’s goal, therefore, is to make sure data quality stays high. Always.

For development (you), the goal is different: make a site that works, and make sure it’s not too expensive to build and maintain. Nothing in there about data quality.

Since the data that your friendly marketer needs is partly coming from you, we have a potential problem. Which is why I am suggesting automated testing.

I have written about this before, probably way too often by now. Today, I want to actually peel the onion and show what I mean using a very specific example.

Let’s start with a product page on a retail web site.

Peeling the Product Page

If you run a retail site, your friendly marketer needs some data on product pages, most likely including (but not limited to):

  • product data — name, SKU or ID, price, availability, …
  • page data — page name, language, load time, …
  • user data — whether the visitor is a first-timer or not, a good customer versus a casual customer, their lifetime value, …

Let’s go for the bare minimum here and say there must be a tracking call on each product page view, and it must contain the page name, the product name, the product ID, and the visitor’s visit number.

This is something that tools like ObservePoint or HubScan can very easily test. It makes sense to automate those tests, and ideally follow Craig Scribner’s suggestion and make the test score (“data accuracy: 97%”) available on any dashboard your friendly marketer uses.
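For illustration, a bare-bones version of such a check could look like this. The parameter names pageName and products follow Adobe Analytics conventions, but v1 for the visit number and the beacon URL itself are made up for this sketch:

```javascript
// Minimal sketch: given a captured tracking-call URL, verify that the
// bare-minimum parameters are present. pageName and products are Adobe
// Analytics conventions; "v1" as the visit number slot is an assumption.
function checkTrackingCall(beaconUrl) {
  const params = new URL(beaconUrl).searchParams;
  const errors = [];
  if (!params.get('pageName')) errors.push('missing page name');
  if (!params.get('products')) errors.push('missing products string');
  if (!params.get('v1')) errors.push('missing visit number');
  return errors;
}

// Example: a (simplified, made-up) beacon from a product page view
const beacon =
  'https://example.sc.omtrdc.net/b/ss/rsid/1?pageName=Product%20Details' +
  '&products=%3B12345&v1=3';
console.log(checkTrackingCall(beacon)); // → []
```

A real tool would of course capture the beacon from network traffic rather than take it as a string, but the shape of the check is the same.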

So we’re done for the outside of the onion. Tools exist, no need to reinvent the wheel.

I’m more interested in what lies below the skin, anyway.

One Layer Down

I guess we’re using DTM here, right?

In order for the tracking call to happen, there will be a Page Load Rule, probably specific for product pages. In that rule, somebody will have defined how page name, product name, product ID, and visit number are to be determined, likely using Data Elements.

So, we could go one level into the onion and check the following things:

  • is there a PLR for product pages?
  • does that PLR fire on product pages?
  • is there a DE for page name?
  • does the DE for page name contain “Product Details”?
  • is there a DE for product ID?
  • does the product ID DE contain the right ID?
  • and so on
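As a sketch of what those checks could look like in code: _satellite.getVar() is DTM’s actual call for resolving a Data Element, but the DE names, the expected values, and the stubbed _satellite object below are assumptions for this example.

```javascript
// One layer down: check the Data Elements themselves.
// satellite.getVar() mirrors DTM's _satellite.getVar(); the DE names
// "Page Name" and "Product ID" are example names, not gospel.
function checkDataElements(satellite, expectedId) {
  const errors = [];
  const pageName = satellite.getVar('Page Name');
  if (pageName == null) errors.push('no DE for page name');
  else if (pageName !== 'Product Details')
    errors.push(`page name DE is "${pageName}", not "Product Details"`);
  const productId = satellite.getVar('Product ID');
  if (productId == null) errors.push('no DE for product ID');
  else if (productId !== expectedId)
    errors.push(`product ID DE is "${productId}", not "${expectedId}"`);
  return errors;
}

// Stub standing in for the real _satellite object DTM puts on the page
const _satellite = {
  vars: { 'Page Name': 'Product Details', 'Product ID': '12345' },
  getVar(name) { return this.vars[name]; },
};
console.log(checkDataElements(_satellite, '12345')); // → []
```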

Looking at it at this level, we have taken a step back from any specific Analytics tool. Instead, we are looking at the mechanism that tracks, and at the data that it uses.

I think it is useful to look at this level, but I’m not entirely happy about it. For one thing, the PLR and the DEs are managed by the friendly marketer, meaning they will likely change frequently. On the other hand, they depend on the page, which is in your (the developer’s) hands. We still have a communication gap to fill here.

More peeling is needed.

Two Layers Down

The second check above was “does the PLR fire on product pages?” This check raises a question: how does DTM know whether it needs to fire the PLR on a given page?

You know how in DTM, you have conditions for PLRs. It could be that the product page PLR fires only if the URL contains “product.html”. Or maybe only if the %Page Name% DE is “Product Details”. Maybe it only fires if the page contains a specific element, say <div class="prod_details">.

One layer down, I would test that condition, rather than the PLR firing. I would test, for example, that the URL does indeed contain “product.html”. Or that the page does indeed have that div.prod_details element.

The same goes for the DEs. They usually get their data from somewhere (JavaScript variables, cookies, DOM elements, custom code, and so on), and we could test that!

So instead of checking that the DE %Page Name% is “Product Details”, we go a level down and check whether digitalData.page.pageInfo.pageName is “Product Details”.

So our checks look quite different now:

  • does the page URL contain “product.html”?
  • is there a data layer variable called digitalData?
  • is there a data layer element called digitalData.page.pageInfo.pageName?
  • does it have the value “Product Details”?
  • is there a data layer element called digitalData.product?
  • is it an array and does it have at least one element?
  • does that (or one of the) element(s) contain a digitalData.product[n].productInfo.productID element?
  • does that element contain the correct product ID?
  • and so on
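In code, that checklist might look roughly like this. The digitalData structure follows the W3C Customer Experience Digital Data Layer draft; the URL pattern, the expected values, and the stubbed window object are assumptions for this sketch:

```javascript
// Tool-agnostic checks, straight against the data layer.
function checkDataLayer(win, expectedId) {
  const errors = [];
  if (!/product\.html/.test(win.location.href))
    errors.push('URL does not contain "product.html"');
  const dd = win.digitalData;
  if (!dd) return errors.concat('no digitalData object');
  const pageName = dd.page && dd.page.pageInfo && dd.page.pageInfo.pageName;
  if (pageName !== 'Product Details')
    errors.push('pageName is not "Product Details"');
  if (!Array.isArray(dd.product) || dd.product.length === 0) {
    errors.push('digitalData.product is not a non-empty array');
  } else {
    const ids = dd.product.map(p => p.productInfo && p.productInfo.productID);
    if (!ids.includes(expectedId))
      errors.push('no product with the expected ID');
  }
  return errors;
}

// Stubbed "window" for illustration; in a browser you would pass window
const win = {
  location: { href: 'https://shop.example.com/product.html?id=12345' },
  digitalData: {
    page: { pageInfo: { pageName: 'Product Details' } },
    product: [{ productInfo: { productID: '12345' } }],
  },
};
console.log(checkDataLayer(win, '12345')); // → []
```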

There are more checks to do, but you see where this is going. We’re stripping away one more level of tools, in this case DTM, and looking at the underlying structure, instead.

If a DE reads from the data layer, we’ll check the data layer directly. If it gets information from a cookie, we’ll check the cookie directly. And so on.
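For the cookie case, a tiny helper could read the cookie directly. The cookie name s_vnum and the sample cookie string are assumptions for this sketch; in a browser you would pass document.cookie:

```javascript
// Read one cookie value out of a document.cookie-style string.
function getCookie(cookieString, name) {
  const match = cookieString
    .split('; ')
    .find(part => part.startsWith(name + '='));
  return match ? decodeURIComponent(match.split('=')[1]) : null;
}

// Sample cookie string; "s_vnum" as the visit-number cookie is an example
const cookies = 's_vnum=3; other=abc';
console.log(getCookie(cookies, 's_vnum')); // → 3
```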

Two levels down, we’re now completely tool-agnostic.

Now here’s what’s great: at this level, we can expect a certain stability. Marketing and analytics do not work at this level; only development makes changes here. Which means we can automate those tests and use them for regression testing before releases.

Nuts

I am really bad at metaphors. I usually try to explain almost everything with some sort of car-metaphor. I don’t even particularly like cars!

This time, though, I have done even worse! Hear this:

The automated regression testing two layers into the onion is done by development to ensure there are no regressions. This helps your friendly marketer. There will be no bad surprises. In that regard, I think it is important to do it, and not only rely on black-box, end-to-end tools.

Think of a nut.

It is easy to check whether there is a nut, just look at it. Yup, clearly there.

But in order to tell whether you’re actually going to get to eat a nut, you’ll have to open it. It might be empty, after all. Only when you open it can you really see whether the nut contains the stuff you actually want.

Look below the surface.

And imagine if you could have automated regression tests: no more bad surprises when opening nuts! Awesome!
