Do you remember SAINT Classifications at all?
When I wrote that article over a year ago, they were useful but slightly cumbersome. Manual upload of meta data was the norm. Only a few customers had opted for “SAINT Bernard”.
In the meantime, product management and engineering released two features:
- the Classifications Rule Builder and
- the SAINT API.
Since I have mentioned the Classification Rule Builder in the post about Saving Variables, I’ll concentrate on the SAINT API today. Let me just reiterate that the Classification Rule Builder is an incredibly useful tool for data that can be broken apart by rules.
With the SAINT API, you can do everything you need around classifications programmatically.
You can check whether a variable is classified. You can create a classification. You can import and export meta data.
I think it is fair to say that for an introductory article, messing around with creating classifications is a step or two too far. Let’s concentrate on the meta data import and export.
I am German (world champion!) and I like consistency, so I shall use the example from the first article as a base. Here it is:
We will look at how we can use the SAINT API to upload this meta data rather than managing a spreadsheet and uploading it manually.
While we’re doing that, I thought I’d also introduce the API Explorer, a tool that allows you to test APIs without writing any code.
Once you have copied your API Username and Shared Secret, you can select an API from the drop-down, then check the different methods and run them.
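Behind the scenes, the Username and Shared Secret are used to build a WSSE authentication header on every request. The Explorer does this for you, but if you want to call the API from your own code, a minimal sketch (in Python rather than the promised PHP, for brevity) could look like this. The header layout follows the standard WSSE UsernameToken scheme; treat the details as my understanding of it rather than gospel.

```python
import base64
import hashlib
import secrets
from datetime import datetime, timezone

def wsse_header(username, shared_secret, nonce=None, created=None):
    """Build an X-WSSE header value for the Adobe API.
    PasswordDigest = Base64(SHA1(nonce + created + shared_secret)).
    Passing a fixed nonce/created is only useful for testing."""
    nonce = nonce or secrets.token_hex(16)
    created = created or datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    digest = base64.b64encode(
        hashlib.sha1((nonce + created + shared_secret).encode()).digest()
    ).decode()
    encoded_nonce = base64.b64encode(nonce.encode()).decode()
    return (f'UsernameToken Username="{username}", PasswordDigest="{digest}", '
            f'Nonce="{encoded_nonce}", Created="{created}"')

# You would then send this as an HTTP header on every call:
#   X-WSSE: UsernameToken Username="...", PasswordDigest="...", ...
```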
Let’s start with a simple method. Note how I added my report suite ID into the Request field? Now I hit the GetResponse button, and soon I see this:
If I do the same for the CreateExport method, I need to edit a bunch of parameters (but not all of them! They are not all mandatory. Turns out I need 5 of them). As a result, I get a job ID.
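What the Explorer form builds for you is just a JSON request body. A rough Python sketch of assembling it yourself follows; the field names (element, rsid_list, email_address, and so on) are my recollection of the Classifications API and should be verified against the method’s schema in the API Explorer.

```python
import json

# 1.4 REST endpoint (method name goes in the query string):
API = "https://api.omniture.com/admin/1.4/rest/"

def create_export_body(rsid, element, email):
    """Assemble a CreateExport request body.
    Field names are illustrative -- check them in the API Explorer."""
    return json.dumps({
        "element": element,        # the variable whose classifications you want
        "rsid_list": [rsid],       # your report suite ID(s)
        "email_address": email,    # the system can notify you about the job
        "encoding": "UTF-8",
        "all_rows": True,
    })

# An actual call would roughly be an authenticated POST:
#   POST {API}?method=Classifications.CreateExport
#   X-WSSE: <your WSSE header>
#   <body as built above>
# and the response contains the job ID.
```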
Then I use the GetStatus method to see whether the job has completed, like so:
Funnily enough, while I was taking the screenshot, Outlook popped up an email and I happened to notice that the export job had actually failed! Turns out I hadn’t set up any classifications here. D’uh.
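Since a job can fail (as mine just did), any real script should poll GetStatus and stop on failure, too. A minimal polling sketch, with the HTTP call injected as a plain callable so you can swap in your own transport; the status strings ("Completed", "Failed") are illustrative, not a guaranteed list.

```python
import time

def wait_for_export(get_status, job_id, interval=2.0, max_tries=30):
    """Poll until the export job finishes one way or the other.
    `get_status` is any callable mapping a job ID to a status string;
    in real code it would call the GetStatus method over HTTP."""
    for attempt in range(max_tries):
        status = get_status(job_id)
        if status == "Completed":
            return True
        if status == "Failed":  # e.g. no classifications set up, like mine!
            return False
        if attempt < max_tries - 1:
            time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish in time")
```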
GetStatus returns a file_id, which you can then use when you call GetExport, like so:
Note how there are two results! One is the job ID, the other is a file ID. Also note that you might have more than a single page, depending on how much meta data your query unearthed. Which means that you might have to call GetExport more than once!
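That per-page loop is easy to get wrong, so here is a sketch of stitching the pages together. Again the HTTP call is injected; I’m assuming each page comes back as a structure with a "header" list and a "data" list of rows, and that an empty "data" list signals you’ve run past the last page — adjust to what GetStatus actually tells you about the page count.

```python
def fetch_all_rows(get_page, file_id):
    """Call GetExport once per page and stitch the rows together.
    `get_page(file_id, page)` stands in for the real HTTP call and
    is assumed to return {"header": [...], "data": [...]}."""
    rows, page, header = [], 1, None
    while True:
        chunk = get_page(file_id, page)
        if header is None:
            header = chunk.get("header", [])
        if not chunk.get("data"):
            break  # no more rows: we have paged past the end
        rows.extend(chunk["data"])
        page += 1
    return header, rows
```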
Here’s my result:
Not bad, hm? After doing this you essentially know how to program a SAINT export and what kind of input you need to do it. You (or rather I) have also encountered an error, which is always good: now you know that you need to program around it.
One more thing: Note the “header” structure in the result?
SAINT Classifications are identified by the “header”. If you upload a file manually, the “header” is literally the header of a column in your CSV file.
The interesting thing is: you can omit whole columns or re-order them. All SAINT cares about is that it knows under which “header” a piece of meta data belongs.
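To make that concrete, here is a made-up fragment of two payloads that SAINT would treat as addressing the same classifications. The column values are invented; the point is only that each column is matched to a classification by its “header” name, so a missing column is simply left alone.

```python
# Upload touching both meta data columns:
full_upload = {
    "header": ["Key", "Campaign Name", "Channel"],
    "data": [
        ["abc123", "Spring Sale 2014", "email"],
        ["def456", "World Cup Promo", "sem"],
    ],
}

# Upload touching only the "Channel" column -- the "Campaign Name"
# classification is untouched because its header is simply absent:
channel_only = {
    "header": ["Key", "Channel"],
    "data": [
        ["abc123", "email"],
        ["def456", "sem"],
    ],
}

# Every row must line up with the header, column for column:
assert all(len(r) == len(full_upload["header"]) for r in full_upload["data"])
assert all(len(r) == len(channel_only["header"]) for r in channel_only["data"])
```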
Right. Let’s import data now!
And since Germany is now world champion for the fourth time, let’s do it in PHP and name all variables and functions in German! Ja!! So machen wir es!!! GÖTZE!!!! Uh, oops, sorry, got carried away there.
We start with CreateImport. This method will set up the import. You tell the system which variable this data is for, what to do on conflicts (uploading meta data for a key that already has some), and some other parameters.
If that succeeds, the system gives you a job ID.
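A sketch of assembling that CreateImport request body in Python. The field names (element, rsid_list, header, overwrite_conflicts, description) are my best guess at the method’s parameters — confirm them in the API Explorer before relying on any of them.

```python
import json

def create_import_body(element, rsids, header, overwrite=False):
    """Assemble a CreateImport request body.
    Field names are illustrative -- verify in the API Explorer."""
    return json.dumps({
        "element": element,               # which variable this meta data is for
        "rsid_list": rsids,
        "header": header,                 # e.g. ["Key", "Campaign Name", "Channel"]
        "overwrite_conflicts": overwrite, # what to do if a key already has meta data
        "description": "SAINT import via API",
    })
```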
Now you use PopulateImport to add actual meta data to the job. If you want to upload more than 25,000 rows of data, you need to make multiple calls to PopulateImport, each with up to 25k rows.
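Splitting your rows into batches that respect the 25k limit is pure bookkeeping, so here is a small helper sketch. The pairing of each chunk with a page number is my own convention for keeping the calls ordered, not something the API mandates.

```python
def populate_batches(rows, limit=25000):
    """Split rows so that each PopulateImport call stays within the
    25k-row limit. Returns a list of (page_number, chunk) pairs."""
    return [(i // limit + 1, rows[i:i + limit])
            for i in range(0, len(rows), limit)]
```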
Once all rows have been sent, you call CommitImport, again with the same job ID, to tell the system to apply the meta data.
And that’s it! Pretty simple, no?
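The three steps above can be sketched as one small driver. As before, the HTTP transport is injected as a callable and the method and field names are illustrative; the shape of the dance — create, populate per chunk, commit — is the part that matters.

```python
def run_import(call, element, rsids, header, rows, limit=25000):
    """CreateImport, then PopulateImport per chunk of up to `limit`
    rows, then CommitImport. `call(method, params)` stands in for one
    authenticated HTTP request and returns the API's response."""
    job_id = call("Classifications.CreateImport",
                  {"element": element, "rsid_list": rsids, "header": header})
    for page, start in enumerate(range(0, len(rows), limit), start=1):
        call("Classifications.PopulateImport",
             {"job_id": job_id, "page": page,
              "rows": rows[start:start + limit]})
    call("Classifications.CommitImport", {"job_id": job_id})
    return job_id
```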
The trick is to specify the right “header(s)” when you create the import, and place the data into rows accordingly when you populate it.
From our example: if you upload both meta data columns, your header would be
["Key", "Campaign Name", "Channel"]
(Note that my screenshot says “Channel” while the original article said “Campaign Type”. I’ve used “Channel” here so as not to confuse you.)
But you could just as well upload only the “Channel”, by specifying the header like this:
In this case, every row of data that you upload must contain the key and the channel, but nothing else. Example:
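The original screenshot is gone, but a made-up fragment of such a channel-only upload could look like this (keys and channels invented):

```python
# Header with only the key and the "Channel" classification:
header = ["Key", "Channel"]

# Every row supplies exactly those two fields, nothing else:
rows = [
    ["abc123", "email"],
    ["def456", "sem"],
    ["ghi789", "display"],
]

# Sanity check: each row lines up with the header.
assert all(len(row) == len(header) for row in rows)
```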
That is all you need to know, really.
For those of you who need code samples and a tutorial (like I do), you can find one here.
Let me finish off with a call to arms.
Your friendly marketer and you work together to collect a lot of data in Adobe Analytics. If you enhance even some of that data with meta data, your friendly marketer gains flexibility: she can build reports that match the level of detail each stakeholder requires.
My hunch (or maybe experience) is that there is a lot of meta data flying about in the other systems your company uses. And chances are you (yes: you!) could write a piece of software that could automatically transport that data into Adobe Analytics.
That’s what APIs are for, isn’t it? Integration!
So go ahead, integrate! Go break down some silos and make your friendly marketer happy!