Bear with me here. This sounds like an experiment with no real-life value or application. But like the makers of ‘Lost’, I have a plan. Well, unlike them, I actually do. It’ll become clear over the next months.
My gut reaction is: sure we can!
But I want to know, and for that, I need to do it. Whilst I am doing it, why not document it as well? I am a blogger, after all, am I not?
- We need to be able to authenticate — we need Base64 encoding
- We will make a ReportDescription and send a request — we need JSON
- We get some data back and display or otherwise process it
Because we’re doing it in the browser, there’s actually one more requirement:
- We must handle the complexity of making cross-site (cross-origin) requests from the browser
The library handles all of the requirements above beautifully, except the processing and displaying of the results, of course.
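To give you an idea of what the library saves you from: the Reporting API authenticates with a WSSE header, and that is where the Base64 (plus some SHA-1 hashing) comes in. The following is only a rough sketch of what building that header involves, assuming the browser's Web Crypto API is available; the library has its own code for this, and your username and shared secret go in where the placeholders are.

```javascript
// Sketch: build a WSSE UsernameToken for the Reporting API.
// The resulting string is sent with the request (the API expects it
// in an X-WSSE header); the library does all of this for you.
async function wsseHeader(username, secret) {
  const nonce = crypto.getRandomValues(new Uint8Array(16));
  const created = new Date().toISOString();

  // PasswordDigest = Base64(SHA-1(nonce + created timestamp + shared secret))
  const toHash = new Uint8Array([
    ...nonce,
    ...new TextEncoder().encode(created + secret)
  ]);
  const digest = new Uint8Array(await crypto.subtle.digest('SHA-1', toHash));
  const b64 = (bytes) => btoa(String.fromCharCode(...bytes));

  return 'UsernameToken Username="' + username + '", ' +
         'PasswordDigest="' + b64(digest) + '", ' +
         'Nonce="' + b64(nonce) + '", ' +
         'Created="' + created + '"';
}
```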
Check out the example code that comes with the library; it is ridiculously easy to use.
The next step is to use it for something slightly more complex than reading all report suites. We want to be able to add parameters to the call and get some actual data back!
I can select an “API” and then a “method”. The UI will reflect that choice and tell me what the “method” variable in my code has to be.
Example: I select “Report” as the API and “Run” as the method. My variable has to be “Report.Run”. Yes, yes, you don’t need an API Explorer to tell you that there’ll be a “.” between those two, I know, I know.
But the API Explorer will also generate a list of all possible parameters, and that means you don’t have to look at the documentation.
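To give you an idea of what those parameters look like, here is a hypothetical reportDescription for a simple “top pages with pageviews and visits” report. The field names follow the Reporting API documentation, but the report suite ID and the dates are made up for this example:

```javascript
// A hypothetical set of parameters: report suite ID and dates are placeholders.
var params = {
  reportDescription: {
    reportSuiteID: "myrsid",
    dateFrom: "2015-01-01",
    dateTo: "2015-01-31",
    metrics: [{ id: "pageviews" }, { id: "visits" }],
    elements: [{ id: "page", top: 10 }]
  }
};
```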
Let me show you a complete example here. My page is simple enough to be understandable, I hope.
When I load it, it queues a report. If the queueing is successful, it enables the “Get Result” button, which in turn, when clicked, retrieves the data and inserts it into a formerly empty element on the page.
This is what it looks like when the page has loaded: the script has queued the request. If I press the button, the report data shows up on the page.

And here is the complete code. Want me to walk you through that? Sure.
Lines 7–14 take care of setup. I’m loading jQuery and the two libraries I got from GitHub. Then I set variables for credentials and endpoints because we need those in every call to the library.
Lines 15–19 are the HTML scaffold for the page. Yes, that is pretty lean.
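For illustration, a scaffold along those lines could be as lean as this. The script file names and element IDs are placeholders, not the actual names from the GitHub repositories:

```html
<!-- Minimal page scaffold (sketch): jQuery, the two libraries, a button
     and an empty container that will later hold the results table. -->
<script src="jquery.min.js"></script>
<script src="wsse.js"></script>            <!-- placeholder file name -->
<script src="analytics-api.js"></script>   <!-- placeholder file name -->
<button id="getresult" disabled>Get Result</button>
<div id="result"></div>
```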
The script on lines 22–32 is executed when the page has loaded. On line 22 I set the method I want to call and on line 23 I define the parameters. Line 24 contains the actual call.
Lines 25–31 define a callback function that the library executes when the API has replied. It spits out some debug information, extracts the report ID (line 28), attaches it to the button as a data attribute (line 30), and enables the button (line 31).
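In outline, that part of the script does something like the following. Note that api.request is a placeholder for whatever call the library actually exposes, not its real API; the reportID in the response is what Report.Queue hands back.

```javascript
// Sketch of the "queue the report on page load" part.
// api.request() stands in for the library's actual call.
$(function () {
  var method = "Report.Queue";
  var params = { reportDescription: { /* as shown above */ } };

  api.request(method, params, function (response) {
    console.log("queued:", response);           // debug output
    var reportID = response.reportID;           // Report.Queue returns a report ID
    $("#getresult").data("reportID", reportID)  // remember it on the button
                   .prop("disabled", false);    // and enable the button
  });
});
```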
All of this happens when you load the page, without any further interaction. But now the button is enabled and you can press it! So on lines 34–81, I attach a click handler to the button.
Same structure here: a click on the button triggers a call to the API via the library (lines 35–39), plus a callback function that handles the result.
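Again as a placeholder sketch with the same made-up api.request: Report.Get with the stored report ID is how the API returns the actual data, and renderReport is a helper I will sketch further down.

```javascript
// Sketch of the click handler: fetch the queued report and hand the
// result to a rendering function (see the sketch below).
$("#getresult").on("click", function () {
  var reportID = $(this).data("reportID");
  api.request("Report.Get", { reportID: reportID }, function (response) {
    renderReport(response.report);  // response.report holds metrics, data, totals
  });
});
```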
Lines 53–57 deal with the table header, looping over the metrics and getting their names (lines 55–57).
Similarly, I am looping over the data set (lines 59–66) to get all numbers for each metric.
To put some icing on the cake, I do a final loop over the totals (lines 69–71), then add everything to a <table> element, which I insert into the page.
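Here is a sketch of that rendering helper. It assumes the usual response shape of the Reporting API, where the report object carries metrics (names for the header), data (one entry per item, each with a counts array), and totals; the renderReport name and the #result container are my own placeholders.

```javascript
// Build a results table from a report object (sketch).
function renderReport(report) {
  var $table = $("<table>");

  // Header row: item name plus one column per metric.
  var $head = $("<tr>").append($("<th>").text("Item"));
  $.each(report.metrics, function (i, metric) {
    $head.append($("<th>").text(metric.name));
  });
  $table.append($head);

  // One row per data item, one cell per metric count.
  $.each(report.data, function (i, row) {
    var $tr = $("<tr>").append($("<td>").text(row.name));
    $.each(row.counts, function (j, count) {
      $tr.append($("<td>").text(count));
    });
    $table.append($tr);
  });

  // Icing on the cake: the totals row.
  var $totals = $("<tr>").append($("<td>").text("Total"));
  $.each(report.totals, function (i, total) {
    $totals.append($("<td>").text(total));
  });
  $table.append($totals);

  // Put the finished table into the formerly empty container.
  $("#result").empty().append($table);
}
```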
And that’s all, really.
You would also likely not use a button and user interaction to retrieve the data, but rather a timer that repeats the call and keeps the data up to date.
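A minimal version of that idea, reusing the made-up pieces from the sketches above:

```javascript
// Sketch: refresh the data every 5 minutes instead of waiting for a click.
setInterval(function () {
  api.request("Report.Queue", params, function (queued) {
    // In practice the report may not be ready immediately, so a real
    // version would retry Report.Get until it succeeds.
    api.request("Report.Get", { reportID: queued.reportID }, function (response) {
      renderReport(response.report);
    });
  });
}, 5 * 60 * 1000);
```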
The result can be a very pretty and compelling visualisation on a big screen somewhere in your office. The more people look at it and discuss it, the better!