Oracle OpenWorld 2016 Presentation: Essbase Community Toys and Tools

I am very pleased to mention that I will be making a last-minute appearance at Oracle OpenWorld this year! I will be presenting with Gabby Rubin and Tim Tow about various free tools that exist in the Hyperion/Essbase ecosystem. The presentation is on Thursday, September 22nd, 2016, at 12:00pm at Moscone West. The presentation ID is CON6489.

For my part of the presentation, I am going to talk about a handful of tools I have personally created over the years. This will likely include Cubedata (a tool for generating large quantities of data to help test Essbase performance), Camshaft (a tool for running MDX queries and outputting the results to a text file), and Vess (a highly experimental/innovative JDBC driver for Essbase that provides a functional facade over Essbase servers/cubes, including access to outline data, cube data, data loads, substitution variables, and more). The goal of the presentation is to quickly inform intrepid Essbase/Hyperion administrators and developers about some of the interesting third-party functionality they might find useful for improving their productivity.

 

Vess + Dodeca for Substitution Variable Management

I’m gonna go a little crazy today and combine two worlds, just for fun: the Vess “virtual” Essbase JDBC driver, and of course, Dodeca. I’ve written about Vess before, and even talked about it a bit at Kscope16 earlier this year during a presentation with Tim Tow and Harry Gates on various interesting things we’re doing with Java and the Essbase Java API.

As a quick crash course on Vess: it’s a highly experimental Java JDBC driver that models an Essbase server’s applications/cubes/properties as virtual relational tables. At the moment this includes cube outline data, cube data, substitution variables, miscellaneous properties, and more. For example, when you connect Vess to, say, Sample/Basic, one of the tables you’ll get is SAMPLE.BASIC_VARS, and it’ll contain four columns: the application, cube, variable name, and variable value. You might think you wouldn’t need the application and cube in this table, but due to a nuance with Essbase variables (you can have the same variable name at the cube, application, and server levels), they’re actually needed.
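To make this concrete, here’s the sort of query you can run against that table (a sketch; the column names follow the APPLICATION/CUBE/NAME/VALUE layout that shows up again in the Dodeca configuration later in this post):

SELECT
    APPLICATION,
    CUBE,
    NAME,
    VALUE
FROM
    SAMPLE.BASIC_VARS;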

In any case, not only can you read values from these tables using any SQL you want, but you can also perform operations on them that in turn affect the Essbase server. So you can do an UPDATE or DELETE and it’ll change a variable’s value, or delete the variable.
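For instance, something along these lines updates one variable and removes another (the PrevYear update mirrors the example later in this post; OldVar is a made-up name):

-- Change a variable's value
UPDATE SAMPLE.BASIC_VARS
SET VALUE = 'FY10'
WHERE APPLICATION = 'Sample' AND CUBE = 'Basic' AND NAME = 'PrevYear';

-- Remove a variable entirely (OldVar is hypothetical)
DELETE FROM SAMPLE.BASIC_VARS
WHERE APPLICATION = 'Sample' AND CUBE = 'Basic' AND NAME = 'OldVar';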

With that in mind, I thought to myself: you know what might be interesting? What if we added a Vess driver to Dodeca (since Dodeca supports third-party database drivers) and wired up a simple view that can edit the variables? So that’s exactly what I did, and I thought it’d be fun to share.

Adding Vess to Dodeca

The first thing to do is add the Vess library, along with a couple of other Java libraries it leans on, to the Dodeca servlet. Typically you’d add these to your Dodeca WAR file when you build it with the “Click Once Prep Utility”, but since this is just for testing purposes, I can add the JAR files directly to the already-deployed servlet. I wouldn’t want to do it this way in production because when I went to deploy a new WAR file, I’d lose my Vess drivers. Here are the drivers added to the /dodeca servlet:

Vess Java JAR files added to Dodeca (dodeca) servlet

For good measure, I restarted the servlet container (in this case, restarting Tomcat 7 using sudo service tomcat7 restart on this little Ubuntu VM). Then we can log in to Dodeca and create a new SQL connection:

A Vess connection is created inside of Dodeca

There’s not a lot to see here other than to “show off” that Vess is indeed just a normal JDBC driver as far as other software is concerned – in this case, Dodeca. As you can see, Vess introduces its own JDBC URL format. Vess can connect in embedded mode (indicated here in the scheme of the URL). The rest is fairly standard: the address of the server (Vess assumes the default port of 1423 if none is specified), and in this case, a particular app/cube to connect to. Other than the URL, the driver class is specified. As with Oracle/SQL Server/MySQL, the class is just the Java class implementing the JDBC Driver interface. These are typically things like com.mysql.jdbc.Driver or similar, and Vess is no different in this regard. Lastly, for purposes of the Dodeca connection, a username and password are specified. These should be the credentials for an Essbase user, since internally Vess will use them to connect.

With the SQL connection mapped in, I can create the SQL Passthrough DataSet that will contain my SELECT queries, and optionally, parameterized INSERT/UPDATE/DELETE statements if I want to have support for those (which I will).

Configuring the SQL Passthrough DataSet for Vess variables

You can see that unlike some of the other SQL Passthrough DataSet examples I have shown lately, this one has two queries. It’s worth noting, briefly, that a SQLPTDS isn’t an object that just contains one query or otherwise concerns itself with one dataset. It can contain an arbitrary number of [usually related] queries. In this case I have two: one for server-wide substitution variables, and one for variables applicable just to Sample/Basic (these actually overlap, as I’ll show in a moment).

The definition for the “server variables” query is very straightforward and only contains a SelectSQL configuration:

On the Dodeca query editor, looking at the first query for pulling out global variables from the Essbase server

As noted earlier, Vess creates a table in the schema VESS_SCHEMA called VARS that contains the names and values of server-wide substitution variables. Over on the Sample/Basic variables configuration, there’s a little more to it:
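In other words, the SelectSQL here boils down to something like this (a sketch; I’m assuming NAME/VALUE columns for the server-level table):

SELECT
    NAME,
    VALUE
FROM
    VESS_SCHEMA.VARS;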

The second query is modeled on a specific table for the Sample/Basic database

Here there are queries that model the DELETE, INSERT, UPDATE, and of course SELECT operations. Not pictured (it’s collapsed on the config screen) is that I defined the primary key for this table as the combination of APPLICATION, CUBE, and NAME columns (while the final column, VALUE, is not part of the primary key).

To get a flavor for what the various queries look like, here’s the UpdateSQL configuration:

Dodeca UpdateSQL query for updating a variable’s value

You can see that the particular variable is identified by three column values (the primary key values), and that the value gets updated for this operation. There are four tokens in play, which will come from the row being edited in the view. There’s no primary key value being generated on the server-side (some of my previous examples had an integer that was generated server-side), so there’s no need for a post-insert select statement.
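As a sketch, the UpdateSQL statement is shaped roughly like this, with @-style placeholders standing in for Dodeca’s actual token syntax (the placeholders are illustrative, not the literal Dodeca notation):

UPDATE SAMPLE.BASIC_VARS
SET VALUE = @VALUE
WHERE
    APPLICATION = @APPLICATION AND
    CUBE = @CUBE AND
    NAME = @NAME;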

With all of the SQL Passthrough DataSet configuration out of the way (but a little more to come on the view configuration), I can now build a simple view template for showing the data:

Creating a template to display the variables from the SQL Passthrough DataSet

If you’ve followed some of my other examples, this should seem pretty basic by now. There are two data ranges on this sheet but I’m just showing one in the preceding screenshot. The dataset has four columns, and so there are four columns on the range. That’s actually all there is to the view template itself. The rest of the configuration is on the view to set its data range and wire it up to the SQL Passthrough DataSet:

The configuration of the View that will display/edit the variables

Nothing too special here. You can see that I turned off RowAndColumnHeadersVisible to clean up the final appearance of the view a bit, and I have my one DataSet range defined. Over in the DataSet range definition:

The DataSet Range definition for the view

There are two DataTable ranges defined (again, one for server variables and one for the Sample/Basic variables). Now opening up the configuration for the SampleBasicVarsData range (I’ll skip showing the details on the server variables range since it’s pretty simple):

DataTable Range Editor for the Sample/Basic data set

I’ve turned on the ability to add, delete, and modify rows (INSERT, DELETE, UPDATE). This is a really nice bit of granularity to have in Dodeca, since there’s a very legitimate use case where I’d want a user to only be able to change a variable’s value, but not delete it or add a new variable. Other than that bit of configuration, I’ve specified the corresponding range name on the sheet/template, and turned on InsertCells and NoColumnHeaders, which is fairly standard for me with data sets like this.

Okay, the SQL Passthrough DataSet is set up, the template is set up, and the view configuration is set up. Let’s build this and see what happens:

Built Substitution Variable view

It looks just like I thought it would! I can see my two server-wide substitution variables, and over on the table for Sample/Basic, I can see all of the variables that I have there. You’ll note that the server-wide variables seem to “repeat” in the Sample/Basic table. Think of the variables for a single database in terms of which variables are applicable to that database, and server-wide variables are applicable. Of course, if there’s a more specific variable (one defined for the database), it trumps the server-wide variable.

If you’re particularly astute with your screenshot-reading skills, you may notice that in the preceding shot the cursor is on a cell in the Sample/Basic variables table, and therefore the row editing buttons on the toolbar are active (insert, delete, save). So I can change a value on a variable, hit the save data button, and Dodeca will perform the proper query from the SQLPTDS. Let’s do that and see what happens:

View after updating a variable value

Well, it’s certainly less dramatic in screenshots; you’ll have to take my word for it that the PrevYear variable did indeed update on the server from FY11 to FY10. Under the hood, Dodeca fired off the properly filled-in UpdateSQL statement, which of course was handed to the Vess driver, and in turn, Vess translated the call and invoked the appropriate variable-updating logic in the Essbase Java API (magic!).

Summary / Vess Availability & Download

I hope you enjoyed this somewhat unique (or totally unique, I suppose) combination of a couple of different technologies. Vess is a bit of a unique take on things in the Essbase world, whereas Dodeca provides the peas to Essbase’s carrots. And yet, combining the two results in something wildly “interesting”.

I’m not saying that organizations should manage substitution variables this way (and again, the substitution variable aspect of Vess is just one of its facets, but it’s a nice simple one to play with), but this certainly makes it quite possible.

I know of many organizations that specifically, or rather begrudgingly, give EAS to a handful of finance power users that need to be able to tweak variables. Sometimes instead of EAS you’ll see one-off MaxL scripts where the update procedure is to tweak the script or a text file and run it. All too often this also involves plain-text credentials, hassling with installing the MaxL runtime on a ‘regular’ desktop machine, and more. So in this particular case, while the Vess driver actually does a lot more than just substitution variables, it can be leveraged for an innovative solution that is “cleaner” than many alternatives.

As an alternative (and still using Dodeca), we could have shelled out to launch a MaxL script and passed along the variable value, achieving much the same effect. This could work, but it would obviously require much more configuration. And to the extent possible, I don’t like to create solutions that are ostensibly web-based but need to “drop down to the file system”, since in my experience that introduces a somewhat fragile or ‘sensitive’ element to the system that seems to act up or break relatively often.

Vess is still “highly experimental”, which I guess is a nice way of saying “a lot of things can go wrong, there’s no warranty, but it works… mostly. Asterisk.” Anyway, Vess isn’t available as a public download, but if you’d like to play with it, please feel free to contact me and I can provide the file and some basic instructions.

 

A primer on relational data views in Dodeca

I’m going to take a small detour from my series on data input in Dodeca so that I can lay the foundation for the next article. Lately I’ve talked about how we can get user input in Dodeca, how users can add comments to their input in Dodeca, and how we can audit the input data by tapping in to the Dodeca audit log tables. As a small preview of where the data input series is going, in the near future I’m going to look at how we can input data to a relational database from within Dodeca.

Prior to that, of course, I’m going to do a brief introduction to relational data in Dodeca. There are a handful of configuration items to set up. There’s a little more to it than just dropping in a SQL SELECT statement, but as you’ll see, there is a lot of power and flexibility available to us with just a few clicks.

Define the SQL Connection

The first thing we need is to tell Dodeca about our SQL connection. This is about as standard as it sounds. It’s worth noting that Dodeca allows for an arbitrary number of SQL connections and supports a wide variety of databases, owing to the fact that the Dodeca middle-tier is written completely in Java. This means that, as with software such as Drillbridge, anything with a JDBC driver is fair game – including Oracle, Microsoft SQL Server, DB2, MySQL, and many others.

As before in the Dodeca data input series, I am using a MySQL schema, since I like running my development instances on a lean and mean Linux VM:

Viewing SQL connections in Dodeca client

Note that SQL connections only need to be set up once and can then be used over and over again. You don’t need to redefine them every time you create a new view. Most organizations will have anywhere from one to a dozen or so different connections, many times to quite a variety of data sources that they are pulling together.

Create the SQL Passthru DataSet

Given a SQL connection that we want to query, we need to create a SQL Passthru DataSet (SQLPTDS). Think of this as a collection of SQL queries defined along with several configuration options. In other words, we’re not just dumping a SELECT statement into our view or system somewhere and ending up with an unmaintainable mess. For this simple example, when I create the SQL Passthrough DataSet, I’m configuring which SQL connection (defined earlier) to use, and defining one or more queries associated with the data set. Note that in this example I just have the one query I care about:

Dodeca SQL Passthrough DataSets editor

Add the Query to the DataSet

Now that I have my SQL Passthru DataSet created, I will add a query to it. The following editor is used to do this:

Query Editor window for a SQL Passthrough DataSet query

The main thing I am doing on this screen, clearly, is defining the query itself, which is accomplished by editing the definition for the SelectSQL property:

Editing actual query text in the query editor

This query is from my previous post on tapping in to the Dodeca audit log tables. Here’s the query for reference:

SELECT
    AUDITLOG.SERVER,
    AUDITLOG.APPLICATION,
    AUDITLOG.CUBE,
    AUDITLOG.USER_ID,
    AUDITLOG.CREATED_DATE,
    DP.MEMBER,
    DP.ALIAS,
    IFNULL(ITEMS.OLD_VALUE, '#Missing') AS OLD_VALUE,
    ITEMS.NEW_VALUE
FROM
    DATA_AUDIT_LOG_DATAPOINTS DP,
    DATA_AUDIT_LOG_ITEMS ITEMS,
    DATA_AUDIT_LOG AUDITLOG
WHERE
    DP.AUDIT_LOG_ITEM_NUMBER = ITEMS.AUDIT_LOG_ITEM_NUMBER AND
    ITEMS.AUDIT_LOG_RECORD_NUMBER = AUDITLOG.AUDIT_LOG_RECORD_NUMBER;

Also note that there are a handful of configuration options relating to the primary key and columns. For this simple example I’m going to stay away from defining those since I don’t need them. In a future post I will go into what those options are and how they can be useful. The important thing to consider for now is that for the most part, Dodeca chooses sensible defaults for me and I can grab the functionality I need without having to worry about setting a million options first.

Create the View

Now I have my SQL connection, a SQL Passthru DataSet, and a query defined. This effectively takes care of all of the non-view specific functionality that I need. Put another way, nothing I defined so far was specific to the view that I’ll be creating in a moment. The objects created so far are all things that can and likely will be reused on other views, saving myself development effort down the road.

Now I want to create a simple view to show the data that I’ve modeled. Note that I’ve created a SQLExcel view, as opposed to the Essbase-focused views I’ve shown earlier in this series (don’t worry, it’s possible to put Essbase and relational data on the same view – stay tuned for a future post on that).

For my SQL Excel view, I’m just going to define labels on my top row, apply some very light formatting (bold text), and then freeze the panes so that when I scroll down, my headers will be retained. I have also defined a named range that is as wide as the number of columns I have and is two rows tall. This named range is important because in a moment I am going to configure the view so that it knows to put the SQL data it retrieves there.

Dodeca SQLView Excel template

With the view template saved, I can now go over to the view editor and configure a few things to “glue” this view (so to speak) to the SQL data I defined earlier. The main property to consider is the SQLPassthroughDataSet Ranges category, which contains one item, DataSetRanges:

dodeca-relational-data-primer-06-sqlview-properties

Upon editing it, I am presented with the DataSet Range Editor. All I have to do here is define my SQLPassthroughDataSetID to point to the dataset I defined earlier (helpfully, they are presented in a dropdown box so I just select it from a list), and then define a DataTableRange.

Dodeca DataSet Range Editor from Edit View screen

A Quick Note on Solution Architectures

Before going further, I want to step back for a moment and try to alleviate any qualms you might have in terms of the configuration we’ve done so far. If you’re feeling overwhelmed with all of these objects – SQL connections, SQL passthrough data sets, SQL queries, SQL data ranges – I can understand. You might be thinking “Why can’t I just drop in a SQL query and be done with it?”

Well, for a simple SQL SELECT example, that might seem simpler. But our solution is going to grow. And before long we’re going to want multiple SQL connections, queries, the ability to update rows, delete rows, sort data, group data, and more. And we’re going to have some absolutely incredible power and flexibility in our hands – and it’ll be maintainable. We don’t want impenetrable walls of SQL code that break all the time, and this way of modeling things with connections/data sets/data ranges has been crafted very carefully to offer performance, maintainability, and flexibility (just trust me).

Create the DataTableRange

In a lot of ways, the DataTableRange is where the magic happens. This is the last item we need to define before we can build our view. I don’t actually have to define much here in order to get things to work. I have to tell it where the data from the SQL query should go (my DataSheetRangeName, which corresponds to the defined name on the spreadsheet template), and a couple of other options. By default, the headers from the SQL query would come back along with the data, but I don’t want or need those in this case, because I put in my own “nice” headers on the template, so I can turn those off. This is the SetDataFlags option of NoColumnHeaders. Easy enough. You know what else I want? How about Filtering options that I know and love from Excel? Let’s turn that on with the click of a button by simply setting AutoFilteringEnabled to True.

Didn’t I just tell you that we would have some absolutely incredible power available to us with just a few clicks? That’s a prime example. No funky SQL code to write, no magic in the spreadsheet – just turn on that option and now I’ve got all of Excel’s powerful filtering abilities on any data set that comes back.

That’s all I want to configure for this data range for now. In total my options look like this:

Dodeca DataTable Range Editor screen

Build the View

We made it – we have our SQL connection, data set, and data range definition. Future views that use this data will be able to shortcut and jump right into the view definition, since we’ll be able to reuse the objects we set up previously, saving development effort. Time to go build the view:

A Dodeca SQLExcel view built with data from the internal Dodeca audit log tables

The data in this view should look familiar from the previous post on playing around with the Dodeca audit log. And again, note the filtering boxes in each header row, where I can, say, filter on the Member column to see only the modified rows that involve Cola.

 

 

Data Input with Dodeca, part 3 – Data Audit Log

Welcome back to the Data Input with Dodeca blog series! We’ve covered a good bit of ground already. To start things off, we looked at basic data input to an Essbase cube using Dodeca, then we looked at how to let users provide commentary on their Essbase data input. These are both incredibly useful features, but perhaps more importantly, they form the cornerstone of many typical Dodeca applications.

Today I want to dive under the hood a bit and look at the Dodeca data audit log. Whenever data is input by a user, it’s logged. This is one of the important legs of the data input stool and greatly complements data comments. Whereas data cell commentary might be thought of as useful in a business context, the data audit log is probably more useful in an IT and SOX context. The rest of this article focuses on the technical details of the data audit log, while subsequent posts in this series will look at putting a friendlier face on it.

The Dodeca data audit log comprises three main tables that reside in the Dodeca repository itself. So there’s no additional setup to worry about – these tables exist out of the box.

Data Audit Log Tables

These tables are DATA_AUDIT_LOG, DATA_AUDIT_LOG_ITEMS, and DATA_AUDIT_LOG_DATAPOINTS.

DATA_AUDIT_LOG

The DATA_AUDIT_LOG table contains records of the overall data input activities. A single data input operation may affect multiple cells of data; all of the cells modified in a particular user action are grouped together under one record. This table contains the audit log number (an integer primary key), the Dodeca tenant, the Essbase server/application/cube, the user, and the date the data was modified.
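Since this is just a table, a quick query (using the column names that appear in the joins later in this post) can answer questions like “who has been inputting data, and how often?”:

SELECT
    USER_ID,
    COUNT(*) AS INPUT_COUNT
FROM
    DATA_AUDIT_LOG
GROUP BY
    USER_ID
ORDER BY
    INPUT_COUNT DESC;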

DATA_AUDIT_LOG_ITEMS

There are one or more audit log items associated with a single audit log record. In other words, if the data audit log contains a list of transactions, then the audit log items are the list of cells (however many that may be) that were edited in that transaction. This table contains a unique ID (primary key), an association (foreign key) to the audit log table, the old value of the cell, and the new value of the cell. Note that it does not contain the members from the dimensions (that’s coming up next).

DATA_AUDIT_LOG_DATAPOINTS

The DATA_AUDIT_LOG_DATAPOINTS table contains the member names of cells that were modified. For example, consider our friend Sample/Basic. A sample intersection that was modified might be Sales, Budget, Jan, 100-10 (Cola), Washington. Each one of these would be represented as a single row in the data points table.

All Together, Now

The fully normalized format for storing modified data points tells us absolutely everything we want and need to know about data that is modified. We know the who (user), what (old value, new value), where (Dodeca app, Essbase app/cube, intersection), and when (created time). As for the why – that’s more of a commentary thing.

Given that we have all of this information, and given that it’s stored in a nice normalized form in a standard SQL database, we can query it and view/answer all manner of questions about the data. This opens up some very cool possibilities:

  • Query the database directly to see what changed, if anything
  • Set up an ETL process (ODI!) to provide a regular report of modified data (extra useful during the forecast cycle)
  • Build a view in Dodeca itself that will allow us to query the modified data using standard Dodeca selectors (coming to a future blog post)

For now, let’s take a look at some example queries to get an idea of what we’re working with. The Dodeca repository I’m working with at the moment is a MySQL schema. MySQL is one of the many relational database technologies that Dodeca works with; the most common are Oracle, Microsoft SQL Server, and DB2. But I like my Dodeca servers on a nice compact Linux server, and MySQL fits the bill quite nicely. I’ve tried to write the SQL in the most generic way possible so that if you want to borrow it for your own repository it shouldn’t need any major modifications.

To start things off, let’s say we just want a list of all of the modified data, by user, by modification time, with all data points (this could potentially bring back a lot of data in a large repository, by the way):

SELECT
    AUDITLOG.SERVER,
    AUDITLOG.APPLICATION,
    AUDITLOG.CUBE,
    AUDITLOG.USER_ID,
    AUDITLOG.CREATED_DATE,
    DATAPOINTS.MEMBER,
    DATAPOINTS.ALIAS,
    IFNULL(ITEMS.OLD_VALUE, '#Missing') AS OLD_VALUE,
    ITEMS.NEW_VALUE
FROM
    DATA_AUDIT_LOG_DATAPOINTS DATAPOINTS,
    DATA_AUDIT_LOG_ITEMS ITEMS,
    DATA_AUDIT_LOG AUDITLOG
WHERE
    DATAPOINTS.AUDIT_LOG_ITEM_NUMBER = ITEMS.AUDIT_LOG_ITEM_NUMBER AND
    ITEMS.AUDIT_LOG_RECORD_NUMBER = AUDITLOG.AUDIT_LOG_RECORD_NUMBER
ORDER BY
    AUDITLOG.CREATED_DATE;

Note a few things:

  1. Data is sorted by date, oldest to newest
  2. There’s an inner join between the three tables; make sure that all tables are joined together
  3. Data that was or became #Missing will be null in the table. For niceness I have used IFNULL here to convert it to #Missing; Oracle’s equivalent is NVL, and SQL Server uses COALESCE (see the snippet just after this list)
  4. This table will contain one row per dimension per modified data point (as opposed to one row per modified cell).
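Regarding point 3, the equivalent null-handling expressions across the common dialects look like this:

IFNULL(ITEMS.OLD_VALUE, '#Missing')    -- MySQL
NVL(ITEMS.OLD_VALUE, '#Missing')       -- Oracle
COALESCE(ITEMS.OLD_VALUE, '#Missing')  -- SQL Server (COALESCE is ANSI standard and works elsewhere too)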

Okay, that’s all well and good. How about we filter things a bit so that we only see data points modified in the last 30 days? Just add a simple predicate:

SELECT
    AUDITLOG.SERVER,
    AUDITLOG.APPLICATION,
    AUDITLOG.CUBE,
    AUDITLOG.USER_ID,
    AUDITLOG.CREATED_DATE,
    DATAPOINTS.MEMBER,
    DATAPOINTS.ALIAS,
    IFNULL(ITEMS.OLD_VALUE, '#Missing') AS OLD_VALUE,
    ITEMS.NEW_VALUE
FROM
    DATA_AUDIT_LOG_DATAPOINTS DATAPOINTS,
    DATA_AUDIT_LOG_ITEMS ITEMS,
    DATA_AUDIT_LOG AUDITLOG
WHERE
    DATAPOINTS.AUDIT_LOG_ITEM_NUMBER = ITEMS.AUDIT_LOG_ITEM_NUMBER AND
    ITEMS.AUDIT_LOG_RECORD_NUMBER = AUDITLOG.AUDIT_LOG_RECORD_NUMBER AND
    AUDITLOG.CREATED_DATE BETWEEN CURDATE() - INTERVAL 30 DAY AND CURDATE()
ORDER BY
    AUDITLOG.CREATED_DATE;

Please note that SQL dialects differ wildly in their date math. The Oracle analogue here is relatively similar, but SQL Server’s is a fair bit different.
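For reference, here’s roughly how that last-30-days predicate translates to the other common dialects (sketches; double-check against your own platform):

-- Oracle
AUDITLOG.CREATED_DATE BETWEEN SYSDATE - 30 AND SYSDATE

-- SQL Server
AUDITLOG.CREATED_DATE BETWEEN DATEADD(DAY, -30, GETDATE()) AND GETDATE()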

Okay, how about if we’re only interested in a particular product being modified? Let’s filter on the member name/alias:

SELECT
    AUDITLOG.SERVER,
    AUDITLOG.APPLICATION,
    AUDITLOG.CUBE,
    AUDITLOG.USER_ID,
    AUDITLOG.CREATED_DATE,
    DATAPOINTS.MEMBER,
    DATAPOINTS.ALIAS,
    IFNULL(ITEMS.OLD_VALUE, '#Missing') AS OLD_VALUE,
    ITEMS.NEW_VALUE
FROM
    DATA_AUDIT_LOG_DATAPOINTS DATAPOINTS,
    DATA_AUDIT_LOG_ITEMS ITEMS,
    DATA_AUDIT_LOG AUDITLOG
WHERE
    DATAPOINTS.AUDIT_LOG_ITEM_NUMBER = ITEMS.AUDIT_LOG_ITEM_NUMBER AND
    ITEMS.AUDIT_LOG_RECORD_NUMBER = AUDITLOG.AUDIT_LOG_RECORD_NUMBER AND
    AUDITLOG.CREATED_DATE BETWEEN CURDATE() - INTERVAL 30 DAY AND CURDATE() AND
    (MEMBER IN ('Cola') OR ALIAS IN ('Cola'))
ORDER BY
    AUDITLOG.CREATED_DATE;

Just to drive this home a bit, here’s a screenshot of the data that comes back from my local server, using one of my favorite SQL tools, RazorSQL:

Sample query on the Dodeca repository data audit log tables

As I mentioned earlier, one of the really interesting things we can do is build a view in Dodeca itself that will allow us to easily filter and see what’s going on with the data, by tapping into Dodeca’s own repository. In the meantime, I hope you found this article helpful and saw some of the possibilities that are afforded to you. Invariably when I discuss this tool with people, there is a conversational progression of yes answers that leads to data audit logging:

Does it handle data input?

Does it handle data input comments?

Is there an audit log showing me which data was modified so that I can make my IT Risk/Compliance/SOX department happy, please say yes, please say yes?

Yes!

Data Input with Dodeca, part 2 – Comments

Yesterday, I kicked off my data input mini-series with Data Input with Dodeca, part 1. I’m going to take that example a small step further and add comments that a user can edit as they input data. I also mentioned yesterday that in terms of data input to Essbase, you have several options, including rolling your own in-house solution, such as with VBA (for the record, I recommend against rolling your own solution). It’s a lot of work.

But maybe you’re thinking: “You know what? Locking and sending isn’t so bad, we have a sheet we use…”. Fair enough. What about comments on the data? This question of comments and commentary comes up again and again – for good reason. It’s incredibly useful in the finance world to provide context to a data point, particularly when that data point appears out of the norm somehow.

Comments are a tentpole feature in Dodeca, and probably one of the biggest features that shows off the product’s philosophy: being a best-of-breed tool for planning (with a lowercase P!), reporting, and spreadsheets, built on the best OLAP engine on the planet. Dodeca has extensive support for allowing commentary on any given cell. Today I’m going to talk about one of the simpler use cases for comments. I’ll do this by extending my example from yesterday so that in addition to inputting budget values for a given market, the user can now provide comments as well.

Setting up Comments in a View

The first thing we need to do is edit our Excel template to add cells for the comments themselves. You can see this in the following screenshot where I have enhanced the data input view from the previous article:

Dodeca input template comment range

Note that I have given the comments range a name, in this case Comments.Range.1. This will come into play in a moment when we configure the comments in the view. The next thing I need to do is define key/value pairs for each comment. Essentially, the key/value pairs use a particular cell to define a unique string of text that identifies each comment. As with so many other things in Dodeca, we define this in the cells/workbook itself. The simplest way to achieve this is with a formula that references cells containing members from the point of view (POV).

Excel formula showing the key/value associations for a comment

Check out the formula for the comment for the Sales item:

="Measure=" & B9 & ";" & "Market=" & C$5 & ";" & "Time=" & C$7 & ";" & "Product=" & C$6

This is just a normal Excel formula. What I want to end up with is a semi-colon delimited list of items in the format Dimension=Member. So for the first cell, the resulting intersection is this:

Measure=Sales;Market=[T.Market];Time=Jan;Product=Cola

Because it’s just a normal Excel formula, when I fill down, the item for Measure will update based on the current row (after Sales will be COGS). Also note that in this case we just see the token [T.Market]. Remember that with Dodeca templates we often need to think a bit temporally, which is to say that we need to keep in mind that when the view is built by Dodeca, the token will be filled in with the user’s current selection for the Market dimension, and thus the formula and in turn the POV for the comment will be updated dynamically. Also note the absolute cell references in my formula. I want to make sure that when I fill down the correct cell references are maintained.

Before moving on, note just one more thing regarding the POV for our comments: we don’t need to match up with the Essbase dimensions. We typically will match up to some extent, but you don’t have to slavishly represent each dimension. If, for example, there’s a dimension that has no bearing on the comments, we don’t need to bother representing it.

Since the comment key/value range is only meant for Dodeca to be able to determine what intersection the comments belong to, we don’t really want or need to show it to the user, so we simply hide that column on our sheet, giving us the following template:

The comment key/value associations are hidden so that users aren’t bothered with it

Now let’s go over to the view properties and tell Dodeca about the comment range in our view, so that it knows how to update and populate them. Under the options for our view, there is a Comments category with several options. In this simple case, we don’t really need to change any of them, except to go in to the CommentRanges item and define a specific comment range (Dodeca allows multiple comment ranges but for now we are just concerned with our one range).

Comments options in Essbase Excel view

Let’s take a look at the configuration needed for the comment range that we have been setting up in the template:

Main comment range configuration for Dodeca input template

Dodeca offers an incredible number of variations on the user comment experience and we can control most of that experience. For the moment, only consider the options in bold that I have specifically changed in order to make comments work on this sheet:

  • AllowDeleteString: True. I have specifically told Dodeca that I want to allow users to blank out a comment cell if they so choose, thus erasing the comment
  • InCellDisplayPolicy: MostRecent. Dodeca can track the comments for a given data point over time. In this simple case, I just want to show the most current comment
  • EditPolicy: EditInCell. Dodeca has a more featured comment explorer feature that I will get into in the future. For now we just want to edit the comments themselves in the cell
  • ThreadPolicy: OneCommentOnly. Again, there is quite a bit more enhanced functionality available here but I want to keep it simple
  • Address: Comments.Range.1. This address matches the defined name I gave the comments range on the sheet
  • KeyItemsString: =OFFSET(@ACell(), 0, 1). This is probably the “trickiest” element to this entire configuration. In a nutshell, for a given comment range, we need to tell Dodeca about the cells that will contain the comments, and the cells that contain the POV for each comment individually. The formula in this cell represents a combination of an Excel formula along with a special Dodeca function @ACell(). The @ACell function returns the address of the current cell. Using the Excel OFFSET function, we can pass an address and a relative offset. In this case we are saying to offset by zero rows, and offset the column by 1. So this returns the value of the neighbor cell. If for whatever reason our comment POV cells were further to the right (such as one more column over), then I would need to increase this value to match.

Lastly, let’s run the view and see what happens:

The Market Input template as built by the user, with our new comment range

Now let’s enter some text to explain the value for Colas:

Entering a comment to a cell

Given my input policy, the comment is sent up to the database right away. I can close and open this sheet and the comment will be loaded and shown. I can even develop other views, and if I plug in the proper comment POV, I can show the comments on a totally different view. The comments are stored in the Dodeca relational repository (not as LROs or otherwise directly in the cube), which gives us fast access to them (and also explains why we don’t need to map every dimension from the cube if we don’t want to).

I hope this brief introduction to the comments functionality in Dodeca was useful and educational. Invariably when people (such as at the Kscope booth) ask about data input, the next question is whether they can get comments too. And the answer is yes; in fact, Dodeca makes it downright easy.

Cascading Report Summary Sheets in Dodeca

Earlier this week I looked at setting up report cascading with Dodeca. With that foundation in mind, today I’d like to elaborate on it a little bit by adding a summary sheet using Dodeca’s built-in summary sheet functionality.

What is a Summary Sheet?

A summary sheet is a sheet in our workbook that is derived from the other sheets generated by a report cascade. For example, let’s say that in my earlier cascade example, I chose two products and two markets: Colas and Diet Colas as my products, and Washington and Oregon as my markets. This would result in four sheets/tabs being generated. What Dodeca lets me do pretty easily is turn on a summary sheet, which will automatically generate an additional tab (a fifth tab in this example) containing formulas I designate that sum up the sheets generated by the cascade.

This is really useful for a couple of reasons. Now, in a perfect world, you’d be able to get summary data from the outline itself, for example, a parent rollup in the Product, Market, and Time dimensions. That’s the whole point of Essbase, right? But sometimes we need arbitrary groupings of things. Frequently this is accomplished with alternate hierarchies. But what if we want the user to be able to create a report with a summary value that is neither in the primary hierarchy nor any alternate hierarchies?

One way we can provide this functionality to our users is to turn on a summary sheet. Here’s a look at the summary sheet options; they are located under the Cascade option group on our Essbase Excel view:

dodeca-summary-sheet-01-options

In particular, note that SummarySheetAdd is set to True. Additionally, I have specified a name for the sheet (SummarySheetName = Summary), specified a SummarySheetPosition (the summary can be the first or the last sheet), and a range of cells to summarize (SummarySheetSummaryRangeAddress). Per best practices, I have created a defined name in my workbook so that I can refer to it easily. Additionally, if I adjust the rows and columns in my template, I don’t need to come back to these options and worry about updating a hardcoded range address.

Let’s go take a look at the simple source template:

dodeca-summary-sheet-02-template

As you can see, this is a fairly bread-and-butter Dodeca Excel template, where I have tokenized the Year, Product, and Market, meaning that users will be able to make choices from the selectors and build the report with whatever data they want.

Remember the summary range I mentioned earlier? Here it is, highlighted, showing the cell ranges that will be added up on the summary tab:

dodeca-summary-sheet-03-defined-name

Just for completeness, check out the retrieval range that will be refreshed when the view is built:

dodeca-summary-sheet-04-retrieve-range

Remember, Dodeca can do multiple retrievals in a single sheet and from multiple, disparate data sources. In this case we just have a simple retrieve from the classic Sample/Basic app, but this could just as easily be relational data from SQL Server, Oracle, or all of the above.

Lastly, having made selections from multiple dimensions, let’s build the view and check out our new summary sheet. Note the Summary tab that appears first, just as we wanted:

dodeca-summary-sheet-05-built-summary

There we have it: with just a few additional options to configure, I have a nice, dynamic summary sheet in this book that users can easily use. What’s even more interesting is that I gave the user the ability to dynamically sum up data that doesn’t have a member or alternate hierarchy to do the math for me. In this case, it’s the arbitrary sum of Colas and Root Beer for January and February.

Saving Everyone Time

Not to belabor the point, but I think this example shows off some really interesting ways that we provided flexibility to the user and saved both the user and the administrator time. Without such dynamic summing capabilities, our user might have resorted to pulling data down to Excel and summing things up by hand or with formulas, or otherwise going out of their way to get to data that wasn’t directly in the cube. As a one-off it’s not so bad, but if it’s a common way of seeing the data, then you are potentially saving a minute here and a minute there on a repeated basis.

Our administrator saved time because this might represent an alternate member/group that would be nice to have in the outline but not necessary to have. And in today’s complex metadata management world, that might just represent time not spent marshaling a member request through a byzantine approval process.

In Summary (Pun Intended)

Summary sheets are a very easy to utilize feature in the Dodeca Spreadsheet Management System and can offer your users a convenient way of looking at data that might otherwise be inconvenient (or worse, error prone!) to look at. As with so many other Dodeca features, they can quickly add value to an existing solution and provide your users with that little bit of extra utility that takes your overall solution from good to great.

Towards Spreadsheet Management

I follow quite a few Hyperion related blogs. Years ago there weren’t too many of them but now I’m quite pleased to be able to follow dozens of them with my RSS reader of choice. The other day I read an article posted by Edward Roske titled “7 Signs Your EPM Is Lagging Behind Your Competition”. Edward has been working with Hyperion for a long time and runs one of the more well known consulting firms, so he has seen quite a bit. And his thoughts are insightful. While I was reading through the article I couldn’t help but read it with a “Dodeca colored lens”, if you will.

For example, one of the signs is that strategy is planned verbally or in spreadsheets. The interesting thing here is that the spreadsheet modeling paradigm itself is robust and essential. Where things go sideways is in how the files themselves are managed (and mismanaged).

For example, consider a typical analyst or power user that creates a spreadsheet: some pulls from Essbase, perhaps some relational data pasted in, some formulas, multiple tabs, formatting, and all that fun stuff. So far so good (well, not really, but let’s say it is). Now they email it out. Some feedback comes in from the CFO. Now the sheet Profitability Q3 2016 becomes something like Profitability Q3 2016 – Revised. More feedback comes in. More meetings. Now it’s Profitability Q3 2016 (2). Teresa down the hall needs a copy, so the analyst copies it to the LAN. Teresa makes some changes, but the analyst has a lock on the file, so she saves Profitability Q3 2016 (2) – Teresa. I’ve seen some pretty heinous file management in my time.

This is an all too common scenario in the world of spreadsheets. Things get ugly quickly. And this is to say nothing of links across tabs and sheets and a host of other bad practices that make the situation even more error prone and hard to manage.

Another sign from Edward’s blog is that Excel is the key enabling technology in your FP&A department. I agree completely with the sentiment here. Note that Excel is specifically mentioned – not the spreadsheet paradigm itself. Excel as the enabler of FP&A is yesterday; the flexibility of spreadsheets combined with the power of Essbase is tomorrow.

Some of Edward’s other points, such as “there is only one version of the budget” and “budgets favor precision over timeliness”, are also spot on.

Enter Dodeca

To reiterate from the beginning of this post, I couldn’t help but read this blog article while thinking about Dodeca. That’s because Dodeca takes all of the best things about spreadsheets (their power, their expressiveness, their familiarity to so many finance users), leverages the power of Essbase and of data from relational databases, and marries it all up in one cohesive interface. It’s saving numerous people and companies a lot of time: time that is not spent laboriously refreshing report decks, time that is not spent copying files around, time that is not spent emailing files, time that is not spent posting things to SharePoint or the LAN, and even a fair bit of time on end-user training.

I’m really looking forward to this year’s Kscope in Chicago. There are multiple presentations by Applied OLAP customers on how they built solutions with Dodeca while increasing productivity and reducing risk. And of course, we at Applied OLAP will have a booth. Please swing by and say hello!

 

 

Drillbridge acquired by Applied OLAP

The last few months have been rather hectic (as evidenced by the lack of blog posts, unfortunately), but I am happy to report some exciting news! There are two main things of note:

First, I am now a full-time employee of Applied OLAP. Applied OLAP is one of the biggest and best names in the EPM space, having worked with Essbase for many, many years. Applied OLAP develops and supports the Dodeca Spreadsheet Management System (much more to come on this!). Back in my Kroger days I helped roll out one of the earliest Dodeca implementations to what became a very enthusiastic user base and incredibly happy executives. Many of you are just as, if not more, familiar with the Outline Extractor – software that is also maintained by Applied OLAP as a free tool for the benefit of the community.

Second, Applied OLAP has acquired Drillbridge in its entirety. The free edition of Drillbridge will now be available alongside the Essbase Outline Extractor and other tools that many people are so familiar with. Enterprise licensees will now have their software supported by Applied OLAP. Drillbridge Enterprise has an exciting roadmap and will continue to exist as a standalone product.

That’s the short version for now, many more exciting things to come in the days ahead!

Book Review: Developing Essbase Applications – Hybrid Techniques and Practices

I am very pleased to be able to review Developing Essbase Applications: Hybrid Techniques and Practices for you today. I can’t believe it has already been over three years since my review of the first Developing Essbase Applications book. As with before, this book is a collection of in-depth chapters on various subjects, written by some of the best and brightest out there in the Essbase community. The author list is pared down slightly from before and contains some new, but familiar names.

Again leading the cat-herding editorial efforts is the venerable Cameron Lackpour, along with John Booth, Tim German, William Hodges, Mike Nader, Martin Neuliep, and Glenn Schwartzberg. Those of you who frequent OTN, the Network54 Essbase Forum, or conferences such as Kscope will recognize these names quite easily. I am quite honored to consider these authors my friends and colleagues.

The DEA reader will be treated to several compelling chapters on incredibly contemporary Essbase topics, including the following:

  1. Exalytics
  2. Hybrid Essbase
  3. Young Person’s Guide to Essbase Cube Design
  4. Essbase Performance and Load Testing
  5. Utilizing SQL to Enhance Essbase
  6. Integrating OBIEE and Essbase
  7. Managing Spreadsheets with Dodeca
  8. Smart View

Essentially, the book takes a divide-and-conquer approach where each author tackles one or more of the chapters. Now, when it comes to programming books, of which I am a fairly avid reader, I am not a fan of the “cookbook” approach, wherein the book is a collection of disparate topics. In the programming book realm these books tend to read like a bunch of polished-up blog posts. For this Essbase book, though, I think the approach works wonderfully. None of the chapters feels like a glorified blog post. They are densely packed with useful information, and perhaps more usefully, with an incredible amount of insight and pointers derived from countless hours of effort on the part of the respective authors.

I remember talking to Cameron when the first book came out and I told him that it was a good book, but I wasn’t sure that it was a great book (in retrospect I think I was being overly critical). And I remember that he told me that if you can learn just one or two things from the book, it’d be worth it. And that notion really resonated with me.

Along those lines, the chapters on Hybrid Essbase, Performance and Load Testing, and Smart View are worth the price of admission on their own. The other chapters are also quite compelling, but your particular skill set will affect how much you get out of each. For instance, I have done my fair share of SQL work, so while I found Glenn’s chapter on Utilizing SQL to be an interesting treatise on Essbase and SQL, I didn’t personally get a lot out of it. That said, while I take familiarity with SQL for granted, I now know that SQL isn’t in every Essbase developer’s wheelhouse (but it should be!).

So, does this book pass the “did you learn one or two things from it?” test?

Absolutely.

If you are at all serious about being an Essbase developer, I can easily and whole-heartedly recommend that you add this book to your collection. This is a no-brainer if you are a developer or a consultant. This is content from people at the top of their games that have put in countless hours to compile and write this content on incredibly useful topics. If you gain one insight on Exalytics, Hybrid Essbase, Smart View, OBIEE, SQL, cube design, performance testing, or learn something about the awesome Dodeca software, it will have been easily worth it.

If it were up to me, I might have spiced this book up by adding a Jason Jones chapter on ODI and Hyperion or some other interesting things, but I can’t fault them for that (next time, Lackpour, next time).

Nice job, guys. Now go buy this book.

Database stats and outline viewing with Vess

I had a little bit of time this weekend and dusted off Vess for some updates. I was able to greatly enhance the functionality and, perhaps more importantly, widen the range of use cases it can address.

I already wrote about one use case previously: using SQL to create a substitution variable. Vess supports fully symmetric substitution variable updates. That is to say, you can create, read, update, and delete substitution variables on your Essbase server just by changing the data in the proper table. So just to elaborate on my thoughts from the previous post, I think there are some nice use cases here. Of course, there’s nothing wrong with MaxL, but if you have an instance where you are dumping substitution variables via MaxL and scraping the ASCII art off of them in order to get to the real variable values, Vess offers a cleaner approach.
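As a quick sketch of the “create” side of that CRUD (the variable name and value here are made up), creating a variable is just an INSERT:

INSERT INTO SAMPLE.BASIC_VARS (APPLICATION, CUBE, NAME, VALUE)
VALUES ('Sample', 'Basic', 'CurMonth', 'Jan');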

As of now, Vess also provides a window into a whole slew of properties on various Essbase objects: server, application, and database. Here’s a view of some database properties (as shown in a generic JDBC GUI with Vess configured as a driver):

Using Vess to view Essbase database properties for Sample.Basic

There’s a use case here too. I know of more than a few people that are dumping database stats using MaxL (such as to monitor database/server stats over time). So instead of running a script, dumping to a file, and parsing that out (or whatever), you could slam the data directly into SQL (using ODI or just some tool that can execute SQL statements).

Next up, Vess models tables for the various dimensions in a cube. This is not unlike how ODI would create a model of a cube: one table per dimension. Here’s a view of some information from Sample/Basic:

Using Vess to view outline information

Outline functionality in Vess is not symmetric: you can read, but updates are disabled for now. The use case here is that you could pull outline information without needing to use ODI (or something else), or possibly as an alternative to the Essbase Outline Extractor in your automation. Since Vess is already a fully functional SQL database, you could pretty easily do some interesting things, like grabbing all of the level-0 members in an outline (“SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0”); see the sketches below.
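A couple of quick sketches of that idea (the LEVEL column comes from the example above; anything beyond that is assumed):

-- All level-0 members in the Market dimension
SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0;

-- How many members sit at each level of the dimension
SELECT LEVEL, COUNT(*) AS MEMBER_COUNT
FROM SAMPLE.BASIC_DIM_MARKET
GROUP BY LEVEL;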

Vess also supports data loads to a cube (look mom, no load rules!). I wrote about this before too. Data reads are turned off for now (as in, you can INSERT, but SELECT always returns nothing); mapping cube data into a relational model raises some interesting design questions.

Lastly, I started writing up the ODI technology for Vess. I think this is an absolute win for Vess because it would mean that you could build a nice clean technology/Knowledge Module that doesn’t have to use a bunch of Jython to glue things together: you can treat everything as a vanilla JDBC database model. No idea when this could be ready but you can rest assured that I’ll blog about it.