Oracle Open World 2016 Recap

As I mentioned a week or so ago, I made a last-minute appearance at Oracle Open World this year. It was my first time attending and presenting at OOW. I didn't actually catch too much of the conference, as I flew in on Wednesday and flew out on Thursday. It was a bit of a whirlwind experience, but a very good one. While I hadn't planned on it (I'm more of a Kscope guy), I am now looking forward to attending Open World next year.

As for the presentation I was part of, I think it went pretty well. Many thanks to Gabby Rubin of Oracle for coming up with the idea for the presentation and facilitating it. The presentation was on “Essbase Tools and Toys” and was meant to highlight, at a high level, some of the interesting things that folks such as myself are doing that involve the Essbase APIs or otherwise work with Essbase. The presentation discussed items created by me, Tim Tow, and Harry Gates. Additionally, Kumar Ramaiyer (also from Oracle) talked a bit about what’s coming with Essbase Cloud Service (EssCS).

Camshaft (Essbase MDX query tool) 1.0.2 released

Apparently I'm having quite the productive Friday, what with showing how easy it is to set up drill-through with Dodeca and announcing that I'm heading to Oracle Open World 2016 to contribute to a presentation on cool Essbase tools.

To these articles I’ll add that I just released a Camshaft point release. This release has a couple of fixes and enhancements. Thanks to André Märki and others for providing feedback.

This version of Camshaft fixes an issue where data with many digits after the decimal point would be rendered in scientific notation. Along with this fix I have added a new command-line switch, --maximum-fraction-digits (used on the command line as --maximum-fraction-digits=2, for example), to set the maximum number of digits to render after the decimal point.

Additionally, a bug with running a query from a file has been fixed. You can now specify something like --query=somefile.mdx and Camshaft will look for the given file. If found, Camshaft reads its entire contents as an MDX query and executes it. This option can make command invocations with big gnarly MDX queries a little easier to manage.
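To put those together, a complete invocation might look something like this (the jar file name here is a placeholder, and I've left out the connection arguments; only the two switches are from this release):

	java -jar camshaft.jar --query=somefile.mdx --maximum-fraction-digits=2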

Please keep that feedback coming and I’ll add enhancements/fixes to the best of my ability. I have some interesting Camshaft news coming in the near future that some people will really like!

As always, the latest Camshaft documentation and download can be found linked from the Camshaft page.

Camshaft MDX tool updated and available

Some of you may recall a tool I released quite some time ago (seemingly into beta-testing purgatory) called Camshaft. Camshaft is a simple Java utility that executes a given MDX query against an Essbase cube and outputs the results. The original version of Camshaft came out around two years ago. This version is built on the same framework but includes various updates and new options. In the interim, the output abilities of the MaxL interpreter have improved a bit, and with the right incantation it can now output pretty usable data.

The name Camshaft is actually a portmanteau of the name of the person the tool is named for and the feeling he gets when writing a load rule (especially one loading in MDX data). It's not every day that a tool is named after a tool, but I digress (I kid, I kid!).

Anyway, Camshaft offers a fairly wide array of options for customizing the output of an MDX query. You can suppress headers, choose your column delimiter, control how #Missing/#NoAccess cells are formatted, and more. There's even an output option to generate an HTML table if you want.

You could run this query, for example:

SELECT
        CROSSJOIN({[Jan], [Feb], [Mar]}, {[Curr Year], [Prev Year]}) ON COLUMNS,
        {[Measures].Levels(0).members} ON ROWS

And you might get this output (depending on options):

	                        Jan, Curr Year          Jan, Prev Year          
	Original Price          #Missing                #Missing                
	Price Paid              #Missing                #Missing                
	Returns                 #Missing                #Missing                
	Units                   #Missing                #Missing

Of course, maybe you want Jan, Curr Year to be on multiple lines. Just pass in the --line-per-header command-line argument to get this output:

	                        Jan                     Jan                     
	                        Curr Year               Prev Year               
	Original Price          #Missing                #Missing                
	Price Paid              #Missing                #Missing                
	Returns                 #Missing                #Missing                
	Units                   #Missing                #Missing  

It's fairly flexible: you can output to the console or to a given text file, suppress the whole header if you want, and more. The latest version of the Camshaft documentation is online (and will be updated from time to time as refinements are added), as well as included inside the Camshaft downloadable file. The Camshaft download site is here (it's also linked from the small Camshaft info page).

Camshaft is a free utility offered with no support or warranty (although feature ideas are welcome). It is closed source for now, although sometime in the future I may open the source code up so that some intrepid developers can do what they want with it.

Vess + Dodeca for Substitution Variable Management

I'm gonna go a little crazy today and combine two worlds, just for fun: the Vess "virtual" Essbase JDBC driver and, of course, Dodeca. I've written about Vess before, and even talked about it for a bit at Kscope16 earlier this year during a presentation with Tim Tow and Harry Gates on various interesting things we're doing with Java and the Essbase Java API.

As a quick crash course on Vess: it's a highly experimental Java JDBC driver that models an Essbase server's applications/cubes/properties as various relational tables. At the moment this includes cube outline data, cube data, substitution variables, miscellaneous properties, and more. For example, when you connect Vess to, say, Sample/Basic, one of the tables you'll get is SAMPLE.BASIC_VARS, and it'll contain four columns: the application, cube, variable name, and variable value. You might think you wouldn't need to know the application and cube for this table, but due to a nuance with Essbase variables (you can have the same variable name at the cube, application, and server levels) it's actually needed.

In any case, not only can you read values from these tables using any SQL you want, but you can also perform operations on them that in turn affect the Essbase server. So you can issue an UPDATE to change a variable's value, or a DELETE to remove a variable.
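To make that concrete, here's a minimal sketch of what talking to Vess from plain JDBC code might look like. The table and column names come from the description above, but the JDBC URL shown is an assumption for illustration (the real format appears in the Dodeca connection screenshot below):

	import java.sql.Connection;
	import java.sql.DriverManager;
	import java.sql.PreparedStatement;
	import java.sql.ResultSet;
	import java.sql.SQLException;
	import java.sql.Statement;

	public class VessVariablesSketch {
	    public static void main(String[] args) throws SQLException {
	        // URL shape is an assumption; embedded mode against Sample/Basic
	        String url = "jdbc:vess:embedded://essbase-server:1423/Sample.Basic";
	        try (Connection conn = DriverManager.getConnection(url, "admin", "password")) {
	            // Read all of the variables applicable to Sample/Basic
	            try (Statement stmt = conn.createStatement();
	                 ResultSet rs = stmt.executeQuery(
	                         "SELECT APPLICATION, CUBE, NAME, VALUE FROM SAMPLE.BASIC_VARS")) {
	                while (rs.next()) {
	                    System.out.printf("%s/%s: %s = %s%n", rs.getString("APPLICATION"),
	                            rs.getString("CUBE"), rs.getString("NAME"), rs.getString("VALUE"));
	                }
	            }
	            // Change a variable's value; under the hood, Vess translates this
	            // into the corresponding variable update call on the Essbase Java API
	            try (PreparedStatement ps = conn.prepareStatement(
	                         "UPDATE SAMPLE.BASIC_VARS SET VALUE = ? WHERE NAME = ?")) {
	                ps.setString(1, "FY10");
	                ps.setString(2, "PrevYear");
	                ps.executeUpdate();
	            }
	        }
	    }
	}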

With that in mind, I thought to myself: you know what might be interesting? What if we added a Vess driver to Dodeca (since Dodeca supports third-party database drivers) and wired up a simple view that can edit the variables? So that's exactly what I did, and I thought it'd be fun to share.

Adding Vess to Dodeca

The first thing to do is add the Vess library, plus a couple of other Java libraries it leans on, to the Dodeca servlet. Typically you'd want to add these to your Dodeca WAR file when you build it with the "Click Once Prep Utility", but since this is just for testing purposes, I can simply add the JAR files to the already-deployed servlet. I wouldn't want to do it this way in production, because when I went to deploy a new WAR file, I'd lose my Vess drivers. Here are the drivers added to the /dodeca servlet:

Vess Java JAR files added to Dodeca (dodeca) servlet

For good measure I restarted the servlet container (in this case, Tomcat 7, using sudo service tomcat7 restart on this little Ubuntu VM). Then we can log in to Dodeca and create a new SQL connection:

A Vess connection is created inside of Dodeca

There's not a lot to see here other than to "show off" that Vess is indeed just a normal JDBC driver as far as other software (in this case, Dodeca) is concerned. As you can see, Vess introduces its own JDBC URL format. Vess can connect in embedded mode (indicated, in this case, in the scheme of the URL). The rest is fairly standard: the address of the server (Vess assumes the default port of 1423 if none is specified), and in this case, a particular app/cube to connect to. Other than the URL, the driver class is specified. As with Oracle/SQL Server/MySQL, the class is just the Java class implementing the JDBC Driver interface. These are typically things like com.mysql.jdbc.Driver, and Vess is no different in this regard. Lastly, for purposes of the Dodeca connection, a username and password are specified. These should be the credentials for an Essbase user, since internally Vess will use them to connect.

With the SQL connection mapped in, I can create the SQL Passthrough DataSet that will contain my SELECT queries, and optionally, parameterized INSERT/UPDATE/DELETE statements if I want to have support for those (which I will).

Configuring the SQL Passthrough DataSet for Vess variables

You can see that unlike some of the other SQL Passthrough DataSet examples I have shown lately, this one has two queries. It's worth noting, briefly, that a SQLPTDS isn't an object that contains just one query or otherwise concerns itself with a single dataset; it can contain an arbitrary number of [usually related] queries. In this case I have two: one for server-wide substitution variables, and one for variables applicable just to Sample/Basic (these actually overlap, as I'll show in a moment).

The definition for the “server variables” query is very straightforward and only contains a SelectSQL configuration:

On the Dodeca query editor, looking at the first query for pulling out global variables from the Essbase server

As noted earlier, Vess creates a table in the schema VESS_SCHEMA called VARS that contains the names and values of server-wide substitution variables. Over on the Sample/Basic variables configuration, there’s a little more to it:

The second query is modeled on a specific table for the Sample/Basic database

Here there are queries that model the DELETE, INSERT, UPDATE, and of course SELECT operations. Not pictured (it’s collapsed on the config screen) is that I defined the primary key for this table as the combination of APPLICATION, CUBE, and NAME columns (while the final column, VALUE, is not part of the primary key).

To get a flavor for what the various queries look like, here’s the UpdateSQL configuration:

Dodeca UpdateSQL query for updating a variable's value

You can see that the particular variable is identified by three column values (the primary key values), and that the value gets updated by this operation. There are four tokens in play, which will come from the row being edited in the view. There's no primary key value being generated on the server side (some of my previous examples had an integer that was generated server-side), so there's no need for a post-insert select statement.
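In spirit, the UpdateSQL statement boils down to something like the following (the @Column placeholders here are just stand-ins for Dodeca's actual token syntax, which the screenshot above shows properly):

	UPDATE SAMPLE.BASIC_VARS
	SET VALUE = @VALUE
	WHERE APPLICATION = @APPLICATION AND CUBE = @CUBE AND NAME = @NAME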

With all of the SQL Passthrough DataSet configuration out of the way (but a little more to come on the view configuration), I can now build a simple view template for showing the data:

Creating a template to display the variables from the SQL Passthrough DataSet

If you've followed some of my other examples, this should seem pretty basic by now. There are two data ranges on this sheet, but I'm just showing one in the preceding screenshot. The dataset has four columns, so there are four columns in the range. That's actually all there is to the view template itself. The rest of the configuration is on the view, to set its data range and wire it up to the SQL Passthrough DataSet:

The configuration of the View that will display/edit the variables

Nothing too special here. You can see that I turned off RowAndColumnHeadersVisible to clean up the final appearance of the view a bit, and I have my one DataSet range defined. Over in the DataSet range definition:

The DataSet Range definition for the view

There are two DataTable ranges defined (again, one for the server variables and one for the Sample/Basic variables). Now let's open up the configuration for the SampleBasicVarsData range (I'll skip the details of the server variables range since it's pretty simple):

DataTable Range Editor for the Sample/Basic data set

I've turned on the ability to add, delete, and modify rows (INSERT, DELETE, UPDATE). This is a really nice bit of granularity to have in Dodeca, since there's a very legitimate use case where I might want a user to be able to change a variable's value but not delete it or add a new variable. Other than that bit of configuration, I've specified the corresponding range name on the sheet/template and turned on InsertCells and NoColumnHeaders, which is fairly standard for me with data sets like this.

Okay, the SQL Passthrough DataSet is set up, the template is set up, and the view configuration is set up. Let's build this and see what happens:

Built Substitution Variable view

It looks just like I thought it would! I can see my two server-wide substitution variables, and over in the table for Sample/Basic, I can see all of the variables that I have there. You'll note that the server-wide variables seem to "repeat" in the Sample/Basic table. You simply have to think of the variables for a single database in terms of which variables are applicable to that database, and server-wide variables are applicable. Of course, if there's a more specific variable (one specified for the database), it'll trump the server-wide variable.

If you're particularly astute with your screenshot-reading skills, you may notice that in the preceding shot the cursor is on a cell in the Sample/Basic variables table, and therefore the row editing buttons on the toolbar are active (insert, delete, save). So I can change a variable's value, hit the save data button, and Dodeca will perform the proper query from the SQLPTDS. Let's do that and see what happens:

View after updating a variable value

Well, it's certainly less dramatic in screenshots; you'll have to take my word for it that the PrevYear variable did indeed update on the server from FY11 to FY10. Under the hood, Dodeca fired off the properly filled-in UpdateSQL statement, which was handed to the Vess driver, and in turn Vess translated the call and invoked the appropriate variable-updating logic in the Essbase Java API (magic!).

Summary / Vess Availability & Download

I hope you enjoyed this somewhat unique (or totally unique, I suppose) combination of a couple of different technologies. Vess is a bit of a unique take on things in the Essbase world, whereas Dodeca provides the peas to Essbase’s carrots. And yet, combining the two results in something wildly “interesting”.

I’m not saying that organizations should manage substitution variables this way (and again, the substitution variable aspect of Vess is just one of its facets, but it’s a nice simple one to play with), but this certainly makes it quite possible.

I know of many organizations that specifically, or rather begrudgingly, give EAS to a handful of finance power users that need to be able to tweak variables. Sometimes instead of EAS you'll see one-off MaxL scripts where the update procedure is to tweak the script or a text file and run it. All too often this also involves plain-text credentials, the hassle of installing the MaxL runtime on a 'regular' desktop machine, and more. So in this particular case, while the Vess driver actually does a lot more than just substitution variables, it can be leveraged for an innovative solution that is "cleaner" than many alternatives.
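For reference, the kind of one-off MaxL script I'm talking about usually boils down to a few lines like these, with the credentials sitting right there in plain text:

	login admin password on essbase-server;
	alter database Sample.Basic set variable 'PrevYear' 'FY10';
	logout;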

As an alternative (and still using Dodeca), we could have shelled out to launch a MaxL script and passed along the variable value, achieving much the same effect. This could work, but it would obviously require much more configuration. And to the extent possible, I don't like to create solutions that are ostensibly web-based but need to "drop down to the file system", since in my experience that usually introduces a somewhat fragile or 'sensitive' element to the system that seems to act up or break relatively often.

Vess is still “highly experimental”, which I guess is a nice way of saying “a lot of things can go wrong, there’s no warranty, but it works… mostly. Asterisk.” Anyway, Vess isn’t available as a public download, but if you’d like to play with it, please feel free to contact me and I can provide the file and some basic instructions.

 

PBJ PBCS Client GitHub repository

For those of you wanting to play with PBJ, the full code-base is available over on GitHub in the PBJ repository. Just to be clear, this is for people that want or need to edit, view, or make contributions back to the code. You don’t need to do anything with Git/GitHub if you just want to consume the library out of Maven or otherwise include the JAR file in your projects.

That said, there are a few things missing from the library that are available in the PBCS REST API, such as some items having to do with Planning Units, and some new REST API methods that are going to be available soon (having to do with data slices). But in the meantime, enjoy!

Advanced integration with PBJ Java PBCS REST API library

This week’s blog posts are all about the upcoming Kscope16 conference and relate to the presentations I’m part of. This year I am co-presenting with Cameron Lackpour on on-premise Planning versus PBCS and talking about some different use cases. My particular focus for this presentation is how you might use the PBCS REST API with Java.

Over the last year I have put together a Java library that works with the PBCS REST API. It has the following characteristics:

  • Open Source (Apache Software License version 2.0)
  • Doesn’t depend on any Oracle libraries or code
  • High-quality, readable, fluent API

In my opinion, all of these goals have been met. Additionally, as of today, PBJ (the PBCS Java Client) is available in Maven Central. What this means is that if you or your team programs in Java and uses Maven for dependency management, you are just a few clicks away from being able to use this library.

For an overview of why you might want to use PBJ and how it compares to scripting languages you might use instead (such as Groovy, Python, and more), come check out the presentation!

Today, however, I want to show how easy it is to incorporate PBJ into a Java program and do something quasi-practical. So the rest of this article will be oriented towards programmers, but for those of you that have employees or teammates that would be more likely to do the programming aspects of things, keep them in mind and send them a link.

First, let's assume we've already created a new, empty Maven project in Eclipse. Your experience will vary if you use IntelliJ IDEA or some other IDE. In this example I happen to be using SpringSource Tool Suite (STS), which is pretty much the same as Eclipse.

pbj-up-and-running-01-empty-project

Next we need to open up our pom.xml (project definition file) to add a new dependency. You can do this manually by editing the XML file itself, but there’s a nice enough GUI in Eclipse that makes things even easier:

pbj-up-and-running-02-pom-editor

Next we need to go find the PBJ library. As I mentioned earlier, PBJ is available in the global Maven Central repository. If you have Maven set to update its index periodically or upon startup of your IDE, then you should be up to date and can find the PBJ library. As of right now the version of PBJ is just 1.0.1.

pbj-up-and-running-03-pbj-maven-dependency

Select the library, click OK, and then save the file. The PBJ library and its dependencies are added to your project.

pbj-up-and-running-04-pbj-added
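If you'd rather edit the pom.xml by hand instead of using the GUI, the dependency entry has this general shape. Note that the group and artifact IDs below are placeholders, so search Maven Central for PBJ's actual coordinates (the version, per the above, is currently 1.0.1):

	<dependency>
	    <!-- placeholder coordinates; search Maven Central for the real ones -->
	    <groupId>com.example</groupId>
	    <artifactId>pbj-pbcs-client</artifactId>
	    <version>1.0.1</version>
	</dependency>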

Now we can create a new main class to test things out. At this point it’s just life as normal for the Java developer:

pbj-up-and-running-05-new-main-class

Just for good measure (and tradition!), let's put in some simple code to print out "Hello world" and run it. Note the output in the bottom middle pane:

pbj-up-and-running-06-hello-world

At this point we have the project set up, we have all of the necessary PBJ files (and some additional transitive dependencies), and we are ready to write some Java code that uses methods in the PBJ library. The code to write is shown below:

pbj-up-and-running-07-refresh-cube

In the code you can see the following happen (a rough text version of the code follows the list):

  1. A connection details object is created
  2. A connection to PBCS is made
  3. The list of available apps is requested
  4. The list of available apps is printed out
  5. A reference to a particular app (“Vision”) is obtained
  6. The refreshCube method is called on the app reference to refresh the cube
  7. A message is printed after the cube refresh
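
Since the code only appears as a screenshot above, here's a rough textual sketch of those seven steps. Apart from refreshCube, which is named above, the PBJ class and method names here are assumptions on my part, so treat this as illustrative rather than copy-paste ready:

	import java.util.List;

	// PBJ imports omitted; the PBJ class names below are assumed
	public class PbjRefreshSketch {
	    public static void main(String[] args) {
	        // 1. Create a connection details object (class name assumed)
	        PbcsConnection connection = new PbcsConnection(
	                "https://your-instance.pbcs.oraclecloud.com",
	                "identityDomain", "user@example.com", "password");

	        // 2. Make a connection to PBCS (client class name assumed)
	        PbcsClient client = new PbcsClientImpl(connection);

	        // 3. Ask for the list of available apps
	        List<PbcsApplication> apps = client.getApplications();

	        // 4. Print out the list of available apps
	        for (PbcsApplication app : apps) {
	            System.out.println(app.getName());
	        }

	        // 5. Get a reference to a particular app ("Vision")
	        PbcsApplication vision = client.getApplication("Vision");

	        // 6. Refresh the cube (refreshCube is the method named above)
	        vision.refreshCube();

	        // 7. Print a message after the cube refresh
	        System.out.println("Cube refresh complete");
	    }
	}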

It's hard to imagine this code being much simpler. It might look like Greek if you're not familiar with programming or Java, but to a Java programmer, this will be readily comprehensible and its intent obvious. PBJ supports most of the REST API: importing data, metadata, business rules, and more.

To see a specific use-case and hear wry commentary from myself and Cameron, please swing by our presentation. We’ll cover an example of PBJ (don’t worry, it’s higher level than this!) but more generally some facets of administration of on-prem versus PBCS will be discussed as well. I think the presentation will really appeal to many different user groups.

Kscope16 is almost here!

Here we are again, just less than a week before Kscope16. I say this every year, but this year's Kscope promises to be the best yet, and I'm starting to get really pumped. There are a lot of exciting things going on in the greater Essbase world right now, and even my own little corner of it is quite exciting (to me at least), including such fun things as Dodeca, Drillbridge, the Outline Extractor, and the PBJ library.

All of these items will be present at the conference, in one form or another. Dodeca will be featured in several presentations this year as customers talk about their implementation successes; Drillbridge has its own presentation again this year, given by yours truly; the Outline Extractor has a session put on by Tim; and the PBJ library will make up about half of a presentation with Cameron on on-prem versus PBCS.

That all said, this week I’m going to blog daily about some of the cool things in the world of Dodeca, Drillbridge, the Outline Extractor, and more. Stay tuned for a busy week!

Drillable Columns in Drillbridge

One of the more frequent requests from Drillbridge users is the ability to have drillable columns. That is, they'd like to drill from a column in a Drillbridge report by clicking on the column value and pulling up another report, using the point of view (POV) of the original cell. This is most often used in a few different situations:

  • Drill into yet more detail for a given transaction
  • Drill to a PDF or image related to the current item
  • Drill to a custom URL, such as a SharePoint URL

Drillbridge has supported this feature for quite some time, and it's pretty easy to set up. Today I'm going to walk through the steps for setting up drillable columns in the latest version of Drillbridge Enterprise.

First of all, let's take a look at a generic Drillbridge report that converts the member from the Years dimension from something like "FY12" into the value "2012" and then executes a query:

drillbridge-drillable-columns-source-query

Note that for readability I have broken the token up across several lines. This works just fine in the latest versions too and is a nice way to increase readability. Let’s see what this query generates by going over to the Test screen:

drillbridge-drillable-columns-testing-source-query

And building the report:

drillbridge-drillable-columns-source-query-data

Everything looks good so far. Note that the drillable column hasn't been set up yet, so we shouldn't expect any special links to appear.

Now let's create a new report definition. This new report will be a special report type known as a "Forwarding Link Report". If you think about a normal Drillbridge report, it builds a SQL query by taking the POV and plugging it into a template. A Forwarding Link Report works in much the same way, except that instead of building a SQL query, it builds a URL and forwards the browser to it. In this case I'm just going to build a simple link over to Wikipedia, since Wikipedia has a nice simple URL structure that is good for illustrative purposes.

Here’s our link definition:

drillbridge-drillable-columns-link-query

Note the label above the text box for editing the link definition. Instead of saying it's a SQL query, it refers to it specifically as the Link Template. This is because the report type itself has been changed (which is needed to make this type of report work). To change the report type, we go to the Advanced menu and then click on Class, bringing up this editor:

drillbridge-drillable-columns-target-class-type

Note that in this case, the types available are JDBC drill-through report (the typical report type), Forwarding Link Report, and MDX report (yes, Drillbridge can execute MDX queries too!). The Drillbridge report system is completely modular, and developers can add their own report types using a simple Java plugin.

Custom report plugins can also easily "advertise" their available options to Drillbridge so that Drillbridge can provide a nice UI for configuring them. For example, in the case of the Forwarding Link Report, there is just one option available (as opposed to the 20 or so options available on a typical report), which lets us choose the HTTP "verb" to use:

drillbridge-drillable-columns-link-options

Let’s head over to the Test tab for this new link report and see what happens:

drillbridge-drillable-columns-testing-link

Let’s build it and see what happens:

drillbridge-drillable-columns-link-wikipedia

As expected, the given input was used to build a link and our browser was redirected there immediately. This is one of the nice things about doing drill-through in a web browser: you can do pretty much anything you want.

Now, what we'd like to do is make one of the columns in our first report drill over to the results of this new link report we just set up. To do that, let's go back to the original report, then go to the Advanced menu, then Drillable Columns:

drillbridge-drillable-columns-drillable-columns-list

Then click on New Drillable Column. The editor will be brought up:

drillbridge-drillable-columns-editing-column

Here we have a chance to give this drillable column a name, tell the report which column should be drillable, and then choose the target report. We can put in anything we want for the name. What's important is the column index and the target. The column indices start at 0 for the first column and count up from there. In this case we want the third column to be a link, so this is column index 2 (0 = first column, 1 = second column, 2 = third column, etc.). Then we select our new report as the target report, which I named "Wikipedia".

Let's go test this source report again and see that the third column now has links:

drillbridge-drillable-columns-viewing-drillable-column-links

And sure enough, clicking on one of the links invokes the Wikipedia report. It doesn't actually bring up a report at all, per se; it just executes the report, which uses the POV to build a URL and automatically redirects the web browser. That's it!

Report POV vs. row POV

It's important to consider what the POV handed to our target report will be. The POV is actually, out of necessity, the combination of two things: the POV of the source report merged with the POV of the row itself. Whereas the normal report POV might be something like Scenario = Actual, Year = FY12, Time = June, the POV handed to the target report will be all of those things plus the keys and values from the current row. For example, if the original report only considered a given account but showed details for transactions, and there was a column called TRANSACTION_NUM, then the POV would also include a key named TRANSACTION_NUM with the value from that row. This is critical for users that want to link to something or execute another query using data from the row.

Drillable Columns is one of my favorite Drillbridge features because it's so easy yet powerful. More importantly, it saves people time, and that's what Drillbridge is all about.

Essbasepy updated for Python 3

A huge number of updates and cleanups were recently put into the Essbase Python wrapper (essbasepy). These were all made by GitHub user Kevin ('nurse'), who updated many sections of code, modernized a number of things, cleaned up comments, and more. The updates look substantial and, if you are using Python 3 with this module, are absolutely recommended. Check out the essbasepy GitHub page for more.

Drillbridge Community Edition 1.5.5 available

Hot on the heels of the announcement that Applied OLAP has acquired Drillbridge, there is a small update to the latest free edition of Drillbridge. Drillbridge 1.5.4 has been bumped up to version 1.5.5 and includes the following three very specific updates:

  1. Applied OLAP software license
  2. Fix for sporadic report deletion issue
  3. Change to default temp/working directory

To go into further detail: the license shipping with the previous version of Drillbridge has been replaced with a standard Applied OLAP license that is more or less the same as the one used for the free Outline Extractor. The switch to this license serves mostly to give uniformity and consistency to Applied OLAP's software offerings rather than to materially affect the usage of the software.

Second, a sporadic issue with the ability to delete reports has been fixed. How this would manifest: sometimes the Delete button for a report would simply not show up, preventing the user from deleting it. The workaround was/is to go into the Drillbridge repository and manually delete the report entry.

Lastly, the default working directory has been changed. Some combinations of Java 1.7/1.8, Windows Server 2008/2012, and various group/security policies would have problems launching Drillbridge. This was generally due to the temporary file directory used by Java not being writable by the service running Drillbridge. By default, Drillbridge will now try to locate its temp folder inside the Drillbridge installation folder itself, which should prevent most issues of this type. As always, this location can be changed via a config file if needed.

As a bit of an aside, Drillbridge Enterprise 3.0 is still under development, with a release planned for later this year. This Community Edition release is currently slated to be the last release in the 1.5.x series (unless further issues are discovered). Future versions will likely be based on the 3.0 codebase.

Please let me know if you have any questions! As usual, Drillbridge can be downloaded from the Saxifrage Systems LLC downloads section. Note that in the hopefully not too distant future, this will be migrated over to the Applied OLAP downloads section but will remain where it is for now.