Camshaft (Essbase MDX query tool) 1.0.2 released

Apparently I’m having quite the productive Friday, what with showing how easy it is to set up drill-through with Dodeca and announcing that I’m heading to Oracle OpenWorld 2016 to contribute to a presentation on cool Essbase tools.

To these articles I’ll add that I just released a Camshaft point release. This release has a couple of fixes and enhancements. Thanks to André Märki and others for providing feedback.

This version of Camshaft fixes an issue where data with many digits after the decimal could be rendered in scientific notation. Along with this fix I have added a new command-line switch, --maximum-fraction-digits (used like --maximum-fraction-digits=2), to set the maximum number of digits to render after the decimal point.

Additionally, there was a bug with running a query from a file that is now fixed. You can now specify something like --query=somefile.mdx and Camshaft will look for the given file. If found, it’ll read its entire contents for an MDX query, then execute that. This option can help make command invocations with big gnarly MDX queries a little easier to manage.
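
For illustration, a hypothetical invocation combining the two new options might look something like the following (the launcher name and any connection-related arguments are placeholders here and will vary by install; check the Camshaft documentation for the exact syntax):

	java -jar camshaft.jar --query=somefile.mdx --maximum-fraction-digits=2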

Please keep that feedback coming and I’ll add enhancements/fixes to the best of my ability. I have some interesting Camshaft news coming in the near future that some people will really like!

As always the latest Camshaft documentation and download can be found linked from the Camshaft page.

Oracle OpenWorld 2016 Presentation: Essbase Community Toys and Tools

I am very pleased to mention that I will be making a last-minute appearance at Oracle OpenWorld this year! I am going to be presenting with Gabby Rubin and Tim Tow about various free tools that exist in the Hyperion/Essbase ecosystem. The presentation is on Thursday, September 22nd, 2016, at 12:00pm at Moscone West. The presentation ID is CON6489.

For my part of the presentation, I am going to talk about a handful of tools I have personally created over the years. This will likely include Cubedata (a tool for generating large quantities of data to help test Essbase performance), Camshaft (a tool for running MDX queries and outputting the results to a text file), and Vess, a highly experimental/innovative JDBC driver for Essbase that provides a functional facade for Essbase servers/cubes, including access to outline data, cube data, data loads, substitution variables, and more. The goal of the presentation is to quickly inform intrepid Essbase/Hyperion administrators and developers about some of the interesting third-party functionality that might prove useful and improve their productivity.

 

Simple Drill-through in Dodeca

Dodeca has robust support for drill-through. You can drill from Essbase data to relational data, from Essbase to Essbase, and from SQL to SQL. You can have multiple drill-through definitions in a single view, so that a user can choose one of many drill destinations. Today I want to look at the simplest form of drill-through in Dodeca, which is to simply enable a couple of the Data Drillthrough options on a source view, tell it what the target view is, and be done with it. I call this the “simple” version of drill-through because it just gives us the ability to double-click on a data cell and drill from it.

The less simple, or rather, more elaborate, version of drill-through can be configured with custom context menus, multiple drill targets, and more configuration options than you can shake a stick at. I’ll be looking at an example of that in an upcoming article. But for now, here’s how “simple” drill-through can be quickly and easily configured in Dodeca.

The example I’m going to look at today is one where we’ll let the user drill from one Essbase-based view to another. While many people think of Essbase drill-through in terms of going from OLAP/Essbase/consolidated data back to the original source OLTP/relational/transactional data, drill-through between Essbase views is an incredibly useful feature as well. It lets a user pull up data they are interested in, such as by time period and location, then very quickly jump to a different or expanded view of data based on those same intersections. Given the fluidity and seamlessness we can achieve in terms of going between different views with any data on them, drill-through becomes even more powerful. Instead of swimming upstream to more granular data, we can think of drill-through more as “intelligent navigation” – and drilling to details is just one type.

Continue Reading…

Camshaft MDX tool updated and available

Some of you may recall a tool I released quite some time ago (seemingly to beta-testing purgatory) called Camshaft. Camshaft is a simple Java utility that executes a given MDX query against an Essbase cube and outputs the results. The original version of Camshaft came out around two years ago. This version is built on the same framework but includes various updates and new options. In the interim, the output abilities of the MaxL interpreter have been improved a bit, and with the right incantation it can now output pretty usable data.

The name Camshaft is actually a portmanteau of the name of the person the tool is named for and the feeling he gets when writing a load rule (especially one loading in MDX data). It’s not every day that a tool is named after a tool, but I digress (I kid, I kid!).

Anyway, Camshaft offers a fairly wide array of options to customize the output from an MDX query. You can suppress headers, choose your column delimiter, control how #Missing/#NoAccess cells are formatted, and more. There’s even an output option to generate an HTML table if you want.

You could run this query, for example:

SELECT
        CROSSJOIN({[Jan], [Feb], [Mar]}, {[Curr Year], [Prev Year]}) ON COLUMNS,
        {[Measures].Levels(0).members} ON ROWS

And you might get this output (depending on options):

	                        Jan, Curr Year          Jan, Prev Year          
	Original Price          #Missing                #Missing                
	Price Paid              #Missing                #Missing                
	Returns                 #Missing                #Missing                
	Units                   #Missing                #Missing

Of course, maybe you want Jan, Curr Year to be on multiple lines. Just pass in the --line-per-header command-line argument and get that output:

	                        Jan                     Jan                     
	                        Curr Year               Prev Year               
	Original Price          #Missing                #Missing                
	Price Paid              #Missing                #Missing                
	Returns                 #Missing                #Missing                
	Units                   #Missing                #Missing  

It’s fairly flexible. You can output to the console or a given text file, and more. You can suppress the whole header if you want. The latest version of the documentation for Camshaft is online (and will be updated from time to time as refinements are added), as well as inside of the Camshaft downloadable file. The Camshaft download site is here (also available on the small Camshaft info page).

Camshaft is a free utility offered with no support or warranty (although feature ideas are welcome), and is closed source for now; sometime in the future I may open the source code up so that intrepid developers can do what they want with it.

Vess + Dodeca for Substitution Variable Management

I’m gonna go a little crazy today and combine two worlds, just for fun: the Vess “virtual” Essbase JDBC driver and, of course, Dodeca. I’ve written about Vess before, and even talked about it a bit at Kscope16 earlier this year during a presentation with Tim Tow and Harry Gates on various interesting things we’re doing with Java and the Essbase Java API.

As a quick crash course on Vess: it’s a highly experimental Java JDBC driver that models an Essbase server’s applications/cubes/properties into various relational tables (I’ve written about Vess a few times before). At the moment this includes cube outline data, cube data, substitution variables, miscellaneous properties, and more. For example, when you connect Vess to, say, Sample/Basic, one of the tables you’ll get is SAMPLE.BASIC_VARS, and it’ll contain four columns: the application, cube, variable name, and variable value. You might think you wouldn’t need to know the application and cube for this table, but due to a nuance with Essbase variables (you can have the same variable name at the cube, application, and server levels), it’s actually needed.

In any case, not only can you read values from these columns using any SQL you want, but you can also perform operations on the table that in turn affect the Essbase server. So you can do an UPDATE or DELETE and it’ll change the variable’s value or delete the variable, respectively.
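
For example, using the APPLICATION, CUBE, NAME, and VALUE column names that show up later in the Dodeca configuration, a rough sketch of the kind of SQL you could throw at that table might look like this (the variable names and values here are made up purely for illustration):

-- Read every variable applicable to Sample/Basic
SELECT APPLICATION, CUBE, NAME, VALUE FROM SAMPLE.BASIC_VARS;

-- Change a variable's value (Vess pushes the change to the Essbase server)
UPDATE SAMPLE.BASIC_VARS SET VALUE = 'FY16' WHERE NAME = 'CurrYear';

-- Remove a variable entirely
DELETE FROM SAMPLE.BASIC_VARS WHERE NAME = 'SomeOldVar';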

With that in mind, I thought to myself: you know what might be interesting? What if we added a Vess driver to Dodeca (since Dodeca supports third-party database drivers) and wired up a simple view that can edit the variables? So that’s exactly what I did, and I thought it’d be fun to share.

Adding Vess to Dodeca

The first thing to do is add the Vess library, plus a couple of other Java libraries that it leans on, to the Dodeca servlet. Typically you’d want to add these to your Dodeca WAR file when you build it with the “Click Once Prep Utility”, but since this is just for testing purposes, I can just add the JAR files to the already-deployed servlet. I wouldn’t want to do it this way in production because when I went to deploy a new WAR file, I’d lose my Vess drivers. Here are the drivers added to the /dodeca servlet:

Vess Java JAR files added to Dodeca (dodeca) servlet

For good measure, I restarted the servlet container (in this case, Tomcat 7, using sudo service tomcat7 restart on this little Ubuntu VM). Then we can log in to Dodeca and create a new SQL connection:

A Vess connection is created inside of Dodeca

There’s not a lot to see here other than to “show off” that Vess is indeed just a normal JDBC driver as far as other software is concerned – in this case, Dodeca. As you can see, Vess introduces its own JDBC URL format. Vess can connect in embedded mode (indicated, in this case, by the scheme of the URL). The rest is fairly standard: the address of the server (Vess assumes the default port of 1423 if none is specified), and in this case, a particular app/cube to connect to. Other than the URL, the driver class is specified. As with Oracle/SQL Server/MySQL, the class is just the Java class implementing the JDBC Driver interface. These are typically things like com.mysql.jdbc.Driver or something similar, and Vess is no different in this regard. Lastly, for purposes of the Dodeca connection, a username and password are specified. These should be the credentials for an Essbase user, since internally Vess will use them to connect.

With the SQL connection mapped in, I can create the SQL Passthrough DataSet that will contain my SELECT queries, and optionally, parameterized INSERT/UPDATE/DELETE statements if I want to have support for those (which I will).

Configuring the SQL Passthrough DataSet for Vess variables

You can see that unlike some of the other SQL Passthrough DataSet examples I have shown lately, this one has two queries. It’s worth noting, briefly, that a SQLPTDS isn’t an object that just contains one query or otherwise concerns itself with one dataset; it can contain an arbitrary number of [usually related] queries. In this case I have two: one for server-wide substitution variables, and one for variables just applicable to Sample/Basic (these actually overlap a bit, as I’ll show in a moment).

The definition for the “server variables” query is very straightforward and only contains a SelectSQL configuration:

On the Dodeca query editor, looking at the first query for pulling out global variables from the Essbase server

As noted earlier, Vess creates a table in the schema VESS_SCHEMA called VARS that contains the names and values of server-wide substitution variables. Over on the Sample/Basic variables configuration, there’s a little more to it:

The second query is modeled on a specific table for the Sample/Basic database

Here there are queries that model the DELETE, INSERT, UPDATE, and of course SELECT operations. Not pictured (it’s collapsed on the config screen) is that I defined the primary key for this table as the combination of APPLICATION, CUBE, and NAME columns (while the final column, VALUE, is not part of the primary key).

To get a flavor for what the various queries look like, here’s the UpdateSQL configuration:

Dodeca UpdateSQL query for updating a variable's value

You can see that the particular variable is identified by three column values (the primary key values), and that the VALUE column is what gets updated by this operation. There are four tokens in play, which will come from the row being edited in the view. There’s no primary key value being generated on the server side (some of my previous examples had an integer key that was generated server-side), so there’s no need for a post-insert select statement.
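
For reference, reconstructed from the description above (the screenshot has the exact text), the UpdateSQL statement would look roughly like this, using the same @-style tokens Dodeca uses elsewhere for parameterized SQL:

UPDATE SAMPLE.BASIC_VARS
SET VALUE = @VALUE
WHERE APPLICATION = @APPLICATION AND CUBE = @CUBE AND NAME = @NAME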

With all of the SQL Passthrough DataSet configuration out of the way (but a little more to come on the view configuration), I can now build a simple view template for showing the data:

Creating a template to display the variables from the SQL Passthrough DataSet

If you’ve followed some of my other examples, this should seem pretty basic by now. There are two data ranges on this sheet but I’m just showing one in the preceding screenshot. The dataset has four columns, and so there are four columns on the range. That’s actually all there is to the view template itself. The rest of the configuration is on the view to set its data range and wire it up to the SQL Passthrough DataSet:

The configuration of the View that will display/edit the variables

Nothing too special here. You can see that I turned off RowAndColumnHeadersVisible to clean up the final appearance of the view a bit, and I have my one DataSet range defined. Over in the DataSet range definition:

The DataSet Range definition for the view

There are two DataTable ranges defined (again, one for server variables and one for the Sample/Basic variables). Now opening up the configuration for the SampleBasicVarsData range (I’ll skip showing the details on the server variables range since it’s pretty simple):

DataTable Range Editor for the Sample/Basic data set

I’ve turned on the ability to add, delete, and modify rows (INSERT, DELETE, UPDATE). This is a really nice bit of granularity to have in Dodeca, since in this case there’s a very legitimate use case where I’d perhaps want a user to only be able to change a variable’s value, but not delete it or add a new variable. Other than that bit of configuration, I’ve specified the corresponding range name on the sheet/template and turned on InsertCells and NoColumnHeaders, which is fairly standard for me with data sets like this.

Okay, the SQL Passthrough DataSet is set up, the template is set up, and the view configuration is set up. Let’s build this and see what happens:

Built Substitution Variable view

It looks just like I thought it would! I can see my two server-wide substitution variables, and over on the table for Sample/Basic, I can see all of the variables that I have there. You’ll note that the server-wide variables seem to “repeat” in the Sample/Basic table. You simply have to think of the variables for a single database in terms of which variables are applicable to that database, and server-wide variables are applicable. Of course, if there’s a more specific variable (one specified for the database), it’ll trump the server-wide variable.

If you’re particularly astute with your screenshot-reading skills, you may notice that in the preceding shot the cursor is on a cell in the Sample/Basic variables table, and therefore the row editing buttons on the toolbar are active (insert, delete, save). So I can change a value on a variable, hit the save data button, and Dodeca will perform the proper query from the SQLPTDS. Let’s do that and see what happens:

View after updating a variable value

Well, it’s certainly less dramatic with screenshots, so you’ll have to take my word for it that the PrevYear variable did indeed update on the server from FY11 to FY10. Under the hood, Dodeca fired off the properly filled-in UpdateSQL statement, which was handed to the Vess driver, and in turn Vess translated the call and invoked the appropriate variable-updating logic in the Essbase Java API (magic!).

Summary / Vess Availability & Download

I hope you enjoyed this somewhat unique (or totally unique, I suppose) combination of a couple of different technologies. Vess is a bit of a unique take on things in the Essbase world, whereas Dodeca provides the peas to Essbase’s carrots. And yet, combining the two results in something wildly “interesting”.

I’m not saying that organizations should manage substitution variables this way (and again, the substitution variable aspect of Vess is just one of its facets, but it’s a nice simple one to play with), but this certainly makes it quite possible.

I know of many organizations that specifically, or rather begrudgingly, give EAS to a handful of finance power users who need to be able to tweak variables. Sometimes instead of EAS you’ll see one-off MaxL scripts where the update procedure is to tweak the script or a text file and run it. All too often this also involves plain-text credentials, the hassle of installing the MaxL runtime on a ‘regular’ desktop machine, and more. So in this particular case, while the Vess driver actually does a lot more than just substitution variables, it can be leveraged for an innovative solution that is “cleaner” than many alternatives.

As an alternative (and still using Dodeca), we could have shelled out to launch a MaxL script and passed along the variable value, achieving much the same effect. This could work, but it would obviously require much more configuration. And to the extent possible, I don’t like to create solutions that are ostensibly web-based but need to “drop down to the file system”, since in my experience that introduces a somewhat fragile or ‘sensitive’ element to the system that seems to act up or break relatively often.

Vess is still “highly experimental”, which I guess is a nice way of saying “a lot of things can go wrong, there’s no warranty, but it works… mostly. Asterisk.” Anyway, Vess isn’t available as a public download, but if you’d like to play with it, please feel free to contact me and I can provide the file and some basic instructions.

 

Dodeca Dynamic Grouping with Relational Data

I am very pleased today to write about an incredibly awesome Dodeca capability: dynamically built groups based on relational data. This capability is interesting and useful for a variety of reasons. Using Dodeca’s spreadsheet/data/magic build paradigm, we can organize plain relational data into beautifully formatted, insightful, and dynamic views. Just to forecast where I’m headed with this, what we’re going to do is transform this plain relational data:

Some raw forecast related data

Into this dynamic, grouped, and formatted view:

Dodeca dynamic grouping opened up in Excel

And further, we’re going to do it without writing a single line of code (save for a simple SQL SELECT statement). This post assumes that you’re already up and running with Dodeca’s SQL Passthrough DataSets, which I have written about before, so head over there for a refresher if you need it. Also, I’ll be recycling a simple SQL table with forecast data by employee that I used in an earlier Dodeca relational database input article, so you can read that if you want to know more about the data in play and how it relates to Sample/Basic. Continue Reading…

Data Input with Dodeca, part 6 – SQL and Essbase Hybrid Input in one View

Dodeca Spreadsheet Management System Logo

The last article on relational data input with Dodeca was a bit epic – I was planning on something a little shorter and sweeter for this next article, but it’s going to be another long (but awesome!) one that combines everything we’ve seen so far in this data input series, and more. To recap, the series so far has consisted of the following articles:

Let’s get crazy today with a soup-to-nuts implementation where we’ll input relational data and then load it to Essbase automatically so that the data ties out. You might call this “home-brew hybrid”. As before, it’ll be based on our favorite database in the whole wide world, Sample/Basic.

Consider the Sample/Basic dimensionality: Year (time periods), Scenario, Market, Measures, and Product. The use case that I’m going to look at today covers the scenario where we want to prepare a budget by product, by time period, and by region – but also by employee. This dimension doesn’t exist in the cube – no problem! Let’s further stipulate that for architectural, performance, or other reasons, we absolutely cannot (or do not want to) put in an Employee dimension. So what we’re going to do is have Dodeca facilitate inputting data by employee and feed that into a relational database; then we’re going to use some simple Dodeca automation (workbook scripts) to take the sum of the data we input (for the given time period, market, and so forth), send it up to Essbase, do a focused calculation on the cube, and then retrieve the updated data to show on the exact same sheet that we’re already on. Continue Reading…

Data Input with Dodeca, part 5 – Relational Database input

Dodeca Spreadsheet Management System Logo

I have been really, really looking forward to writing this continuation in the Dodeca Data Input series, for a couple of reasons. For one, it’s a genuinely useful feature that Dodeca implements very well. But secondly, and perhaps more important, the ability to get and store this data from users is just an absolutely missing piece of functionality in the traditional Hyperion toolbox. So this is going to be a bit of a long article but will cover how relational data input in Dodeca works and why it’s so important.

As a quick recap, up to this point I’ve covered basic Essbase data input, cell/variance commentary, going under the hood to look at the audit log tables, and focused calc scripts that run after Essbase input. To this we will now add SQL/relational data input. To put it in context, relational database input is one of the tentpole Dodeca features, and stands next to other heavy hitter features such as Essbase input, comments, drill-through, and cascading reports. Now, all of the individual features of Dodeca are useful and interesting. And yet, I see relational data input as a feature that almost singlehandedly makes Dodeca greater than the sum of its parts.

Relational Data as Part of the Hyperion Toolbox

Before jumping into the technical implementation of relational data input in Dodeca, I want to wax philosophical a bit on how important I think this feature is. It has the power to be a game changer for a lot of organizations.

My own experience with Hyperion/Essbase is from all angles: as a full-time Hyperion developer for multiple companies, as a consultant working with multiple companies on dozens of projects, and as an independent software vendor with a Hyperion product. Further, my computer science degree minor was in relational database algebra (yes, I’m a nerd). I wrote the innovative Drillbridge software that bridges the gap between Smart View, Planning, and Financial Reporting and relational data. I created an absolutely free version of the Drillbridge software that is fully functional and is downloaded daily and regularly put into production with zero assistance from myself.

So to say that relational data is near and dear or otherwise useful to me is an understatement. As with Dodeca’s robust and battle-tested middle tier component (the secret sauce/glue between the Dodeca client application and all Essbase/relational database servers), Drillbridge is written in 100% Java and contains a web interface for managing its configuration.

All that background is basically my long-winded way of saying that I’ve worked with Hyperion a lot, and if anyone should be qualified to find a way to get user input into a relational database, it’s the guy that programs in Java, writes CDFs for fun, and has created systems that literally take input from a user and put it into a relational database.

And yet, even with all of this experience, getting relational data from Hyperion users has traditionally been this absolute missing link. The situation with pure Essbase data has been a little better: you had lock and send or submit data with the classic add-in/Smart View. Of course, lock and send is not without its issues. It’s more of a power user thing, although as I’ve explored in the past, Dodeca can quite nicely provide some structure to the Essbase input process that makes things much more user friendly.

Essbase Relational Data Input Anti-Patterns

I seem to harp on this notion of anti-patterns a bit. An anti-pattern occurs when something is, essentially, designed incorrectly. This happens a lot in the Hyperion consulting world, for instance. A client might be having an issue with their system or its performance, and they come to a consultant looking for assistance with that one particular symptom. Unfortunately, all too often, the performance or technical problem is essentially predicated upon a series of unfortunate business, technical, and design decisions (usually ones that can’t be easily or cheaply rectified). Or the company has otherwise accumulated a lot of technical debt – band-aids have been put on the system to keep it hobbling along, without addressing the underlying design problem.

Armed with only Smart View or the classic add-in, but needing to get relational input from users, an intrepid (or masochistic) Hyperion developer might choose a few different routes to try to satisfy this need, all of which are less than ideal for various reasons:

  • Dummy members/dimensionality for pseudo relational input
  • Text measures
  • Enter supporting details/data into an Excel spreadsheet, and email it to an admin or store it on a shared folder/drive
  • Custom VBA program/functionality to upload supplemental detail
  • Custom software/web service for user to input data

All of these approaches have issues. Adding dummy members or attributes to a cube is less than ideal and “pollutes” the cube. Some additional functionality might be needed to pull that data out of the cube and marshal it into a relational database. Sending emails and saving spreadsheets off to the local share drive is a disaster waiting to happen. I’ve railed on VBA solutions before. They are a mixed bag. Speaking as a consultant, they all too often turn into spaghetti-code maintenance nightmares, fraught with glitches, security issues, and more. Lastly, a custom web service or software package might fit the bill, but it takes time and money.

Dodeca Does It (#dodecadoesit)

Let’s explore some what-ifs:

  • What if users could input data using the same interface they are already using for reporting and analysis?
  • What if we didn’t need to make a single change to our cube dimensionality and could still get relational input from the user?
  • What if it didn’t require any custom programming, save for the SQL statements themselves?
  • What if we could work with almost any major relational database technology on the planet?
  • What if this functionality was a first-class citizen in our software and worked out of the box?
  • What if we could format the data to our heart’s content using a spreadsheet model that we already work with day in and day out?

Here’s the thing: Essbase ostensibly started its life not really caring at all about SQL/relational data. As has been wistfully recalled time and again, Essbase was the secret weapon sitting under your desk. The classic Excel add-in could magically slice and dice data. Over the years, Essbase – and its users, whether they realized it or not – grew to have an increasingly important relationship with relational data.

Even the most experienced of Hyperion developers is often at a loss when it comes to providing their users a cohesive solution that can seamlessly work with relational and multi-dimensional data (or OLTP/OLAP if you prefer). And yet, this is a bread and butter feature of Dodeca. It feels almost hyperbolic to say, but I just can’t stress this enough.

Okay, enough with the abstract and architectural. Now let’s move on to an actual implementation inside of Dodeca that writes back to a SQL table of our choosing.

Implementing Relational Data Input With Dodeca

For the remainder of this exercise, we’re going to work with a table called EMPLOYEES. It’s a very simple table that contains an employee ID, first name, last name, and a comment about a given employee. The employee ID must be unique (it’s the primary key). The other fields are just text. The MySQL table definition looks like this:

CREATE TABLE `EMPLOYEES` (
    `EMPLOYEE_ID` int(11) NOT NULL AUTO_INCREMENT,
    `FIRST_NAME` varchar(25) NOT NULL,
    `LAST_NAME` varchar(25) NOT NULL,
    `COMMENTS` varchar(255) DEFAULT NULL,
    PRIMARY KEY (`EMPLOYEE_ID`)
)

Also note that the EMPLOYEE_ID field is an AUTO_INCREMENT column. This is MySQL’s equivalent of a SQL Server identity column, or of using a sequence in an Oracle table to generate the next unique value. Essentially, this means that the database engine itself will take care of creating new key values for us, so we don’t (in fact, we don’t want to) insert them ourselves. However, we will be interested in the value that the database engine assigns to the rows we insert. You’ll see later how this is accomplished.
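
For reference, a rough sketch of how the same auto-generated key would be declared on the other two platforms I just mentioned (exact syntax varies by version, and older Oracle releases would use a sequence plus a trigger instead):

-- SQL Server
EMPLOYEE_ID INT IDENTITY(1,1) NOT NULL,

-- Oracle 12c and later (identity column)
EMPLOYEE_ID NUMBER GENERATED ALWAYS AS IDENTITY,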

I went ahead and put in a couple of rows using a generic database tool and built a very simple Dodeca view that pulls back the data. Here’s a preview of that:

A basic Dodeca view with relational data

One of the blog posts leading up to this one was a quick crash course in how to put relational data into a Dodeca view, so if you’re fuzzy on that, then I suggest you take a look at that. But in a nutshell, here’s what is going on with respect to the template:

Basic template for relational view

Things to note:

  • The range that will be populated with data from the relational query is named EmployeeComments and contains four columns (one for each column we retrieve with the query)
  • I turned on the option to return the headers from the query; those will be populated into the first row of the range. This can be turned off and custom headers can be supplied, but in this case I want to just use them
  • I’ve applied some light formatting to spruce things up a bit: row 2 (the first row of the range) is grey with white text, and I added a spacer row/column to offset the table a little bit. I’m going to set the options on this view to not show row/column headers or the different tabs (again, just settings that I can easily update)

Note the Grid Properties settings that I’ve updated for the view, in order to enhance the visual appearance of the rendered view for the user. In particular, I’ve turned off grid lines (GridLinesVisible = False), headers for the cells won’t be displayed (RowAndColumnHeadersVisible = False), and tab names won’t be displayed (TabsVisible = False).

Updated Grid Properties for relational Dodeca view

Next we need to use the DataTable Range Editor to tell Dodeca a little about how the SQL Passthrough DataSet we defined earlier is rendered into the named range on our sheet. In the previous post looking at this functionality, we got to leave many of the configuration values at their defaults. This time we need to set a few more things in order to allow user input in addition to viewing the data.

DataTable Range Editor associated with the SQL passthrough dataset on our view

Of particular note in this editor:

  • The DataSheetRangeName is set to the named range from our Excel template (EmployeeComments)
  • The SetDataFlags configuration value includes a value of InsertCells (note that the SetDataFlags parameter can accept multiple values; in this case we are setting just one of them).

Now let’s cut over to the Query Editor associated with our SQL Passthrough DataSet and take a look at the configuration there:

Query Editor editing the SQL passthrough dataset for employees

Now, it looks like there is a lot going on here, but it’s not too bad. Let’s walk through all of the things that are set in this query. Also remember that this query configuration is associated with the SQL Passthrough DataSet itself. In other words, this is the type of logic that we only need to configure in one place and can then reuse across multiple views if we want (as opposed to having to reinvent this configuration/logic for each individual view).

The important aspects of this query configuration are the SQLConnectionID, DataTableInfo/Columns, and the values in the SQL configuration (InsertSQL, SelectSQL, and UpdateSQL):

  • SQLConnectionID: this is the simplest item to configure. We simply use a dropdown box to choose from our list of SQL connections that have been mapped in previously. We set this regardless of whether we’re writing data back to SQL or not (we need to set it even if we’re just reading data from SQL, obviously)
  • DataTableInfo/Columns: often these don’t even need to be set, because Dodeca can figure them out dynamically; it depends on the JDBC driver in play. I went ahead and created mappings for the columns just to make sure that there would be no issues with reading the column names and types out. The editor for creating these is straightforward and is purely just a literal column name and a column type (int, varchar, datetime, etc.). Additionally, I explicitly told Dodeca what the primary key for the table is (EMPLOYEE_ID).
  • The SelectSQL configuration is exactly the same as before (when we were just reading data out of SQL), so nothing new to see there – a sketch of it follows this list for reference. What’s new is the configuration for the InsertSQL statement.
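
For reference, that SelectSQL is just a plain query over the four columns in the table, something along these lines:

SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, COMMENTS FROM EMPLOYEES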

Let’s take a closer look at the exact configuration of the InsertSQL parameter, as it’s possibly one of the more interesting nuances in this whole configuration. The InsertSQL setting is ostensibly just the parameterized SQL code to insert a new row into the table; however, in this case we actually have two statements (one per line in the following screenshot):

The InsertSQL value for the employees query on the employee SQL passthrough dataset

The first statement is the parameterized INSERT. The full statement is INSERT INTO EMPLOYEES (FIRST_NAME, LAST_NAME, COMMENTS) VALUES (@FIRST_NAME, @LAST_NAME, @COMMENTS). I want to draw your attention to the fact that I am explicitly not mentioning the primary key (EMPLOYEE_ID) here. Recall that this is the primary key but also an AUTO_INCREMENT (similar to IDENTITY/sequence in SQL Server/Oracle respectively). I’m basically telling the relational database engine “Hey, I’m going to explicitly give you these three things, but you’re smart enough to figure out how to automatically generate the key for me, so please do that.”

Inside of the VALUES section of our insert statement, you’ll see that we have tokens starting with an @ symbol: @FIRST_NAME, @LAST_NAME, and @COMMENTS. When Dodeca goes to do the insert, it’ll dynamically place the values from the row into these placeholders and then execute the query. So to be clear, these aren’t part of the native SQL syntax. For instance, if I am inserting my own name and comments into the row and then have Dodeca save it, the resulting SQL statement that Dodeca generates and then hands off to the database for processing might look like this:

INSERT INTO EMPLOYEES (FIRST_NAME, LAST_NAME, COMMENTS) VALUES ('Jason', 'Jones', 'Awesome employee')

The next statement, and one that’s incredibly useful to our user experience, is the “post insert SQL” command. The code for this post insert command in this case and for this technology is the following:

SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, COMMENTS FROM EMPLOYEES WHERE EMPLOYEE_ID = LAST_INSERT_ID()

Take special note that there is a semicolon at the end of the first line, separating the insert command from our special post-insert command. The post-insert command contains no special tokens, but it is specific to MySQL in this case. In particular, the LAST_INSERT_ID() function is a special function that returns the generated ID for the row that was just inserted by the previous statement. Effectively what I’m telling Dodeca is this: “After you insert the first name, last name, and comments to the relational database table, a primary key will have been generated. Here’s how you can use that generated primary key to fetch all of the details for that row, so that you can populate the key on my spreadsheet.”
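
The same idea carries over to other database platforms, just with a different mechanism for grabbing the generated key. As a rough, untested sketch (your column list would match your own table, and the Oracle version assumes the key comes from a sequence named EMPLOYEES_SEQ):

-- SQL Server: SCOPE_IDENTITY() if both statements run in the same batch (otherwise @@IDENTITY)
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, COMMENTS FROM EMPLOYEES WHERE EMPLOYEE_ID = SCOPE_IDENTITY()

-- Oracle: CURRVAL returns the last value the current session pulled from the sequence
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, COMMENTS FROM EMPLOYEES WHERE EMPLOYEE_ID = EMPLOYEES_SEQ.CURRVAL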

Let’s go ahead and take a look at how this looks on the spreadsheet and the user experience. With my view all configured, let’s run it and take a look:

The Insert Row button on the toolbar is enabled when the cursor is inside a value input range

I apparently have two employees in this absolutely fictitious company. As you can see, there’s myself, and then there’s Cameron Lackpour. Apparently Cameron likes CALC ALL;. He also really likes load rules, low block density, and inputting to upper level members. But that’s neither here nor there. Anyway.

You can see in the spreadsheet that my cursor is located somewhere within the data table. Because of this, the Insert Row button is active. Take a look at the button toolbar, and about in the middle you can see there are some table row-related icons. The Insert Row button is the third button to the left of the “100%” zoom indicator. I simply click on that to insert a new row into the table:

Finished entering data to be sent to the SQL database, but not saved yet

As you can see, I’ve added a new employee and comment. It’s Tim Tow and he apparently knows a thing or two about Excel. The row has not been sent to the database just yet. I will use the Save button on the toolbar (directly to the left of the 100% zoom indicator) to save this row.

Remember, behind the scenes, Dodeca knows that the value of FIRST_NAME is 'Tim', LAST_NAME is 'Tow', and COMMENTS is 'Excel ninja'. So it takes care of all of the ugly work of turning those raw inputs into a valid SQL query, using the statement we provided, talking to the relational database, getting the result back, and in this case, immediately executing the post-insert SQL statement. Immediately after pressing Save, our sheet looks like this:

New row saved and primary key value is automatically updated

It’s basically the same, save for one thing: the newly generated primary key for the row we entered has been pulled back (along with the rest of the new row) and dynamically updated into the view – with no refresh needed.

Conclusion

This was a very long article that covered aspects of architecture, Essbase anti-patterns, Cameron Lackpour’s love of load rules, and a real example of using Dodeca to write back to a relational table. There are many more nuances to SQL data editing/updating that I’ll explore in future posts, such as updating existing rows, deleting rows, data grouping, and more. But I wanted to give a practical crash course on the basics of this incredibly useful feature. Relational data input is an important capability for so many organizations, and yet when the need for it arises on the Hyperion side of things, all too often there isn’t a compelling, cohesive, and maintainable way to achieve it – but Dodeca does it.

Data Input with Dodeca, part 4 – Focused Calcs

Dodeca Spreadsheet Management System Logo

Today’s article continues my series on data input with Dodeca. This post will be an elaboration on the basic data input to Essbase shown off in part 1. As a quick refresher, part 1 just looked at setting up a view that allows a user to input data at a given Essbase intersection. We made it a little more interesting by allowing the user to choose their Market (from our favorite database in the whole wide world, Sample/Basic). Now we want to take it a step further and run a calc script after the user inputs their data. This is a pretty typical requirement, because data in the cube often needs to be aggregated after lower-level inputs.

Achieving this functionality is pretty straightforward. We also have some interesting possibilities because we aren’t limited to just running a static calc script on the server – we are afforded all of the normal Dodeca token replacement functionality so that we can focus the calc however we want. This can be incredibly advantageous for performance reasons. For example, rather than running a calc that refreshes all of the data across the cube, we can focus it on a particular cost center/region/functional unit based on the current POV. Why recalculate data that doesn’t need to be recalculated? Speed up the calc – speed up the user experience.

Cleaning up Anti-Patterns

This technique also lets us clean up an Essbase anti-pattern I have seen time and time again out in the real world. Imagine a company that has several managers who control different markets. For example, there are separate managers for New York, Washington, and California. Up until this point, the company has managed to get away with a process that involves doing a classic Essbase lock and send to the proper market, then choosing a calculation to run. The list of calculations might contain the following:

  • BdNewYrk
  • BdWash
  • BdCalifor

All of these calc scripts contain effectively the same script, differing only by what they FIX on. For example:

FIX ("New York", "Budget")
    CALC DIM ("Measures");
ENDFIX

The “run calc after data send” pattern in Dodeca lets us clean this up and consolidate down to a single calc that will simply plug in the POV from the user’s current Market selector. Let’s take a look at how to set this all up.

Introduction to Workbook Scripts

I’m going to leverage the exact same view as part 1 of the series, and simply add a Workbook Script to it. I’m going to get much, much deeper into workbook scripts in the future, but think of workbook scripts as the procedural side of Dodeca views. They are like a unique but approachable blend of Access macros and VBA functionality. Any view can have a workbook script attached to it. Inside of the workbook script, we can define sequences of procedures and attach them to particular events that can happen to our view.

In our case, what we want to have happen is that after the user submits data to Essbase (the AfterSheetSend event), we want to run a procedure that runs a calc script.

Tokenize the Calc Script

The very first thing we need to do is create the calc script that we want to run. This will be a normal server-side calc script, with a twist: replacing the market with a token. Here’s our script:

FIX ("[T.Market]", "Budget")
    CALC DIM ("Measures");
ENDFIX

Note that the market is replaced with a token, just like the tokens that are used on a normal Excel view. Also note that the token is enclosed in double quotes. Dodeca will perform a full and literal token replacement, so we want to make sure that if the market is New York, it ends up inside the double quotes and we don’t get a syntax error. I’ll save the calc as BdMarket.
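
For example, with New York selected in the Market selector, the token-replaced script that actually gets executed on the server would be:

FIX ("New York", "Budget")
    CALC DIM ("Measures");
ENDFIX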

Create the Workbook Script

Now we head back over to Dodeca and create the workbook script. We can create a workbook script as with any other major object in Dodeca by simply navigating to Admin → Workbook Scripts, then selecting New. Nicely enough, the Workbook Script editor provides a rich environment where we can define most options and items by simply selecting them from a dropdown menu. Consider the following screenshot, showing everything we need:

Dodeca Workbook Script Editor

In particular, see in the Event Links pane that there is a definition that associates the AfterSheetSend event with the CalcMarket procedure. Next, look at the Procedures pane containing all of the procedures in this workbook script. There is just one: the CalcMarket procedure. In the workbook scripting world, there are many, many functions available to choose from; in Dodeca parlance, these are known as methods. For many methods within Dodeca, there are multiple versions available; these are known as overloads. These terms are borrowed from the world of object-oriented programming. Think of overloads as slightly different versions of a method that share the same name.

In this current case, the method I’m using is the EssbaseRunCalc method. This particular method has several overloads available: General, TextBased, ServerBased, and DefaultCalc. Most use cases will probably be satisfied by TextBased or ServerBased. In the case of TextBased, we can define the entire calc script locally (inside of this Dodeca procedure) and run it on the server. With ServerBased, it’s a calc script that resides on the server, but we still get to perform token replacement on it.

I think what makes the most sense in this case is to use a ServerBased calc script and include token replacement within it. Don’t be overwhelmed by the numerous options available to us; we can live with the defaults for just about everything. The only important things we need to specify are the name of the calc script (the ScriptName value) and making sure that DoTokenReplacement is set to TRUE. These should hopefully be self-explanatory by now, but it’s worth pointing out that if we just wanted to run any given server calc script without worrying about tokens, we could just leave the token replacement value set to false.

With the workbook script created and saved, we now simply need to associate it with the view. This is set in the Workbook Script category:

Assigning a Workbook Script to a View in Dodeca

Lastly, after we change some data in the view and click on the Send button, we can go back out to our Essbase server and see what happened on the cube:

Viewing calc script execution results in EAS

You can see in the log that the current POV (New York) was substituted into the script text, and the resulting script was executed. We can replace any number of tokens if need be, focusing the calc even more. This can frequently be a win for organizations with a wide/deep outline and many forecasters that need to see aggregated data – but can’t wait for a more general calc to run. This technique can also significantly streamline the technical side of things (fewer calc scripts) and the user experience (as compared to manual input with Excel spreadsheets). It can also potentially help you clean up your filter/calc security situation, in that you can let the user piggyback off their existing read-level access without having to dole out access to a particular calc script.

A primer on relational data views in Dodeca

Dodeca Spreadsheet Management System Logo

I’m going to take a small detour from my series on data input in Dodeca so that I can lay the foundation for the next article. Lately I’ve talked about how we can get user input in Dodeca, how users can add comments to their input in Dodeca, and how we can audit the input data by tapping in to the Dodeca audit log tables. As a small preview of where the data input series is going, in the near future I’m going to look at how we can input data to a relational database from within Dodeca.

Prior to that, of course, I’m going to do a brief introduction to relational data in Dodeca. There are a handful of configuration items that need to be set up. There’s a little more to it than just dropping in a SQL SELECT statement, but as you’ll see, there is a lot of power and flexibility available to us with just a few clicks.

Define the SQL Connection

The first thing we need is to tell Dodeca about our SQL connection. This is about as standard as it sounds. It’s worth noting that Dodeca allows for an arbitrary number of SQL connections and supports a wide variety of databases, owing to the fact that the Dodeca middle-tier is written completely in Java. This means that, as with software such as Drillbridge, anything with a JDBC driver is fair game – including Oracle, Microsoft SQL Server, DB2, MySQL, and many others.

As before in the Dodeca data input series, I am using a MySQL schema, since I like running my development instances on a lean and mean Linux VM:

Viewing SQL connections in Dodeca client

Note that SQL connections only need to be set up once and can then be used over and over again. You don’t need to redefine them every time you have a new view. Most organizations will have anywhere from one to a dozen or so different connections, many times to quite a variety of data sources that they are pulling together.

Create the SQL Passthru DataSet

Given a SQL connection that we want to query, we need to create a SQL Passthru DataSet (SPTDS). Try to think of this as a collection of SQL queries defined along with several configuration options. In other words, we’re not just dumping a SELECT statement into our view or system somewhere and ending up with an unmaintainable mess. For this simple example, when I create the SQL Passthrough DataSet, I’m configuring which SQL Connection (defined earlier) to use, and defining one or more queries associated with the data set. Note that in this example I just have the one query I care about:

Dodeca SQL Passthrough DataSets editor

Add the Query to the DataSet

Now that I have my SQL Passthru DataSet created, I will add a query to it. The following editor is used to do this:

Query Editor window for a SQL Passthrough DataSet query

The main thing I am doing on this screen, clearly, is defining the query itself, which is accomplished by editing the definition for the SelectSQL property:

Editing actual query text in the query editor

This query is from my previous post on tapping in to the Dodeca audit log tables. Here’s the query for reference:

SELECT
    AUDITLOG.SERVER,
    AUDITLOG.APPLICATION,
    AUDITLOG.CUBE,
    AUDITLOG.USER_ID,
    AUDITLOG.CREATED_DATE,
    DP.MEMBER,
    DP.ALIAS,
    IFNULL(ITEMS.OLD_VALUE, '#Missing') AS OLD_VALUE,
    ITEMS.NEW_VALUE
FROM
    DATA_AUDIT_LOG_DATAPOINTS DP,
    DATA_AUDIT_LOG_ITEMS ITEMS,
    DATA_AUDIT_LOG AUDITLOG
WHERE
    DP.AUDIT_LOG_ITEM_NUMBER = ITEMS.AUDIT_LOG_ITEM_NUMBER AND
    ITEMS.AUDIT_LOG_RECORD_NUMBER = AUDITLOG.AUDIT_LOG_RECORD_NUMBER;

Also note that there are a handful of configuration options relating to the primary key and columns. For this simple example I’m going to stay away from defining those since I don’t need them. In a future post I will go into what those options are and how they can be useful. The important thing to consider for now is that for the most part, Dodeca chooses sensible defaults for me and I can grab the functionality I need without having to worry about setting a million options first.

Create the View

Now I have my SQL connection, a SQL Passthru DataSet, and a query defined. This effectively takes care of all of the non-view specific functionality that I need. Put another way, nothing I defined so far was specific to the view that I’ll be creating in a moment. The objects created so far are all things that can and likely will be reused on other views, saving myself development effort down the road.

Now I want to create my simple view to show the data that I’ve modeled. For my purposes here, I can create a very simple view. Note that I’ve created a SQLExcel view, as opposed to the views I’ve shown earlier in this series that focused on Essbase (don’t worry, it’s possible to put Essbase and relational data on the same view – stay tuned for a future post on that).

For my SQL Excel view, I’m just going to define labels on my top row, apply some very light formatting (bold text), and then freeze the panes so that when I scroll down, my headers will be retained. I have also defined a named range that is as wide as the number of columns I have and is two rows tall. This named range is important because in a moment I am going to configure the view so that it knows to put the SQL data it retrieves there.

Dodeca SQLView Excel template

With the view template saved, I can now go over to the view editor and configure a few things so I can “glue” this view (so to speak) to the SQL data I defined earlier. The main property to consider is this SQLPassthroughDataSet Ranges category, which contains one item, DataSetRanges:

Dodeca SQLView properties

Upon editing it, I am presented with the DataSet Range Editor. All I have to do here is define my SQLPassthroughDataSetID to point to the dataset I defined earlier (helpfully, they are presented in a dropdown box so I just select it from a list), and then define a DataTableRange.

Dodeca DataSet Range Editor from Edit View screen

A Quick Note on Solution Architectures

Before going further, I want to step back for a moment and try to alleviate any qualms you might have about the configuration we’ve done so far. If you’re feeling overwhelmed by all of these objects – SQL connections, SQL passthrough data sets, SQL queries, SQL data ranges – I can understand. You might be thinking, “Why can’t I just drop in a SQL query and be done with it?”

Well, for a simple SQL SELECT example, that might seem simpler. But our solution is going to grow, and before long we’re going to want multiple SQL connections, more queries, the ability to update rows, delete rows, sort data, group data, and more. And we’re going to have some absolutely incredible power and flexibility in our hands – and it’ll be maintainable. We don’t want impenetrable walls of SQL code that break all the time, and this way of modeling things with connections/data sets/data ranges has been crafted incredibly carefully to offer performance, maintainability, and flexibility (just trust me).

Create the DataTableRange

In a lot of ways, the DataTableRange is where the magic happens. This is the last item we need to define before we can build our view. I don’t actually have to define much here in order to get things to work. I have to tell it where the data from the SQL query should go (my DataSheetRangeName, which corresponds to the defined name on the spreadsheet template), and a couple of other options. By default, the headers from the SQL query would come back along with the data, but I don’t want or need those in this case, because I put in my own “nice” headers on the template, so I can turn those off. This is the SetDataFlags option of NoColumnHeaders. Easy enough. You know what else I want? How about Filtering options that I know and love from Excel? Let’s turn that on with the click of a button by simply setting AutoFilteringEnabled to True.

Didn’t I just tell you that we would have some absolutely incredible power available to us with just a few clicks? That’s a prime example. No funky SQL code to write, no magic in the spreadsheet – just turn on that option and now I’ve got all of Excel’s powerful filtering abilities on any data set that comes back.

That’s all I want to configure for this data range for now. In total my options look like this:

Dodeca DataTable Range Editor screen

Build the View

We made it – we have our SQL connection, data set, and data range definition. Future views that use this data will be able to take a shortcut and jump right into the view definition, since we’ll be able to reuse the objects we set up previously, saving development effort. Time to go build the view:

A Dodeca SQLExcel view built with data from the internal Dodeca audit log tables

The data in this view should look familiar from the previous post on playing around with the Dodeca audit log. And again, note the filtering boxes in each column header, where I can, say, filter on the Member column in order to see only the modified rows that involved Cola.