Adam Greco recently wrote three articles about how you can embed business requirements into Adobe Analytics Workspaces (“Adobe Analytics Requirements and SDR in Workspace” I, II, and III) in order to help data consumers understand the data they are looking at.
His method goes all the way from “this is why we added eVarXY” to “78% of requirements are currently tracked correctly”, and it shows all of that in the best possible place — right next to the data.
I like both the approach and its result.
Next step: let’s turn it up all the way to 11!
There have been discussions all around for years about aspects of this. I have been part of these, coming from the angle of quality assurance, testing, and adoption, but other people have been looking at the same thing from other angles — documentation, compliance, even automatic code-generation.
I think we essentially all agree that Analytics is still much too much of a manual thing, a craft (“an activity involving skill in making things by hand”), and that we would really like to automate more of it. Parts of our job are mind-numbing (which is why no one wants to do QA, for example), parts are misunderstood, and we still have to fight an uphill battle most days.
So, we dream.
We dream of “automagic”, of all those things the computer can do for us. The computer, abstract, or even our computer, the thing we have in front of us 8+ hours a day, and which has, so far, utterly let us down.
We dream of “self-deployments”, some process that takes business requirements and turns them into reliable deployments and therefore trustworthy data.
We dream of QA that is generated by a “make” command, and that periodically comes over, assures us all is fine, then sits down with us and listens to our tales of nuggets we found in the data while we share a drink.
We dream of the power of copy&paste, especially when we set up something for the 100th time, manually. Or, while we’re at it, we dream of inheritance, of “computer, just make it like that other site we did the other day!”
We also dream of self-documentation, of data that sort of explains itself while stakeholders query it (with ease, of course).
When we meet, we tell each other about our dreams. We get excited by the beauty of them, and they inspire us to dream more, higher, further.
We often come away from those meetings full of plans. Summits are beautiful for the sheer number of people and ideas that we expose ourselves to. We take them all in, and we resolve to do something.
Then two things happen: daily tasks take up our time, and, more importantly, we start thinking about the details of our ideas, and the complexity overwhelms us.
Some of us want to build a marvelous beast of a tool, thinking that splitting the work will help. I tend to agree. What is missing in those plans is some common goal, or even just a tangible one.
For some of us, automation is about documentation, enablement, about making sure that people can understand the data and therefore use it properly.
A few people would like to see automation help with data quality (yours truly), because, frankly, no one is ever going to want to do it manually, and data quality is a common issue. Like water used to be in old times. Not safe to drink, but what can you do?
The interesting part here is that, as far as I can see, we all think of the business requirements as an integral part of our particular flavour of automation.
Which leads me to think that we should start with the business requirements, sort of as the skeleton of our tool, our beast.
Everyone can build off of this, a bit like Extensions build off of Launch.
Here’s my call to action: let’s define a data model that includes business requirements and everything needed for them to be fulfilled.
In my head, this would be a (potentially fiendishly) complex data structure, including abstract goals, KPIs, Rules, Data Elements, all the way down to specific eVars and events.
I think it would look like a mad spider web if you drew it, but I’m no architect, and I’m happy if you prove me wrong.
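To make the idea a little more concrete, here is a minimal sketch of what such a data structure might look like, in TypeScript. Every name in it (`BusinessRequirement`, `Kpi`, `DataElement`, `TrackingItem`, the example `REQ-042`) is invented for illustration; a real model would need far more detail, especially around Rules and deployment metadata.

```typescript
// Hypothetical sketch: one path through the web, from abstract goal
// down to concrete eVars and events. All names are illustrative.

interface TrackingItem {
  kind: "eVar" | "prop" | "event";
  id: string;            // e.g. "eVar12" or "event5"
  description: string;
}

interface DataElement {
  name: string;
  source: string;        // where the value comes from, e.g. a data layer path
  feeds: TrackingItem[]; // variables and events populated from this element
}

interface Kpi {
  name: string;
  definition: string;    // how the KPI is calculated
  dataElements: DataElement[];
}

interface BusinessRequirement {
  id: string;
  goal: string;          // the abstract business goal this requirement serves
  kpis: Kpi[];
}

// Example: one requirement traced all the way down to an eVar and an event.
const req: BusinessRequirement = {
  id: "REQ-042",
  goal: "Understand which campaigns drive newsletter signups",
  kpis: [{
    name: "Signup conversion rate",
    definition: "signups / campaign clickthroughs",
    dataElements: [{
      name: "campaignId",
      source: "digitalData.campaign.id",
      feeds: [
        { kind: "eVar", id: "eVar12", description: "Campaign ID" },
        { kind: "event", id: "event5", description: "Newsletter signup" },
      ],
    }],
  }],
};

// Walk the structure: which concrete variables serve this requirement?
const trackedIds = req.kpis
  .flatMap(k => k.dataElements)
  .flatMap(d => d.feeds)
  .map(f => f.id);
console.log(trackedIds); // the eVars and events behind REQ-042
```

In a real model the edges would go in both directions (an eVar can serve several requirements), which is exactly why it would look like a spider web rather than a neat tree.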
New tools sometimes make us happy. Launch is, for us, like a particularly sharp and beautiful knife to a fishmonger. We marvel at it, weigh it in our hands, play with it, and it makes our work easier, and better.
Workspace has come and transformed the way people work with the data in a spectacular fashion and pretty rapidly.
So why not build a tool? Why a data structure?
Because some of us earn money building tools. Some of us build tools for clients, and some build them for themselves. I think we can work together on a data structure, then individually build tools around it much more easily than we could build one tool together.
I would also hope that our spec would be more successful than — say — the CEDDL data layer spec, and that we would actually use it.
There are a lot of opportunities in there that we probably don’t even see now!
I’m looking forward to working more with Launch, because I believe Launch will allow me to stick to standards and out-of-the-box functionality much more than DTM does. I hope that customization in Launch will almost exclusively happen within Data Elements.
If that is the case (experience will tell), then automating Launch setup will be possible, and it will seamlessly lead us to overall simpler deployments, closer to some yet unknown standards, which in turn can only help with agility, flexibility, and adoption.
So, my fellow tinkers, tailors, soldiers, and spies, think about this. Mull it over. Version 1 at Summit 2019, ok?