Evolution of Essbase: new URL-based drill-through showed up in 11.1.1.3

Continuing with the idea of getting insight into the Essbase feature set over time, as viewed through the lens of the Essbase Java API’s evolution, you can see quite clearly that open/URL-style drill-through (as opposed to classic LRO-based drill-through) showed up in version 11.1.1.3. In fact, that’s pretty much the only thing added to this particular release, Java API-wise, along with some ancillary drill-through methods and functionality in a few related classes.

Nearer to my heart: this is the exact functionality that paved the way for Drillbridge! Although it wasn’t available as a feature on day one, subsequent versions of Drillbridge gained the ability to automatically deploy drill-through definitions to a given cube, and it uses exactly these API methods to do so.
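
For the curious, here’s a bare-bones sketch of what this corner of the API looks like from Java. The sign-on and cube-access scaffolding below follows the standard JAPI sample pattern; the drill-through definition call itself is left as a commented placeholder rather than a real signature, since the exact method names are best confirmed against the javadoc for your release. Server names, credentials, and the provider URL are all just example values.

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.domain.IEssDomain;
import com.essbase.api.session.IEssbase;

public class DrillThroughDeploySketch {
    public static void main(String[] args) throws EssException {
        // Standard JAPI sign-on; all connection details here are examples.
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        try {
            IEssDomain dom = ess.signOn("admin", "password", false, null,
                    "http://epmserver:13080/aps/JAPI");
            IEssOlapServer olapSvr = (IEssOlapServer) dom.getConnection(
                    "essbaseserver", "admin", "password");
            IEssCube cube = olapSvr.getApplication("Sample").getCube("Basic");
            System.out.println("Connected to cube: " + cube.getName());

            // Placeholder for the URL drill-through definition call that showed
            // up around 11.1.1.3 (the kind of call Drillbridge uses to deploy
            // its definitions). The method name below is illustrative only;
            // check the IEssCube javadoc for the exact name and signature:
            //
            // cube.createDrillThroughURL("Drillbridge", urlTemplate, drillRegions);

            olapSvr.disconnect();
        } finally {
            ess.signOff();
        }
    }
}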

Drillbridge as drill-through solution with CSV data and replacing Access

An interesting use case has come up with Drillbridge recently where drill-through is currently being “handled” with an Access database. I put the quotes around handled because the current solution requires the user to look at the current POV and then go fetch the corresponding data from an Access database. You might be thinking that this setup is horribly sub-optimal, but I wouldn’t characterize it as such. In my career on all sides of Hyperion (as a developer, a consultant, and a software developer), I have seen this pattern, particularly with Access in the mix, pop up again and again.

Access is often (perhaps all too often) the glue that binds finance solutions together, particularly in cases like this involving drill-through. It’s cheap, you can share it on the network simply by dropping the file onto a share drive, it gives you a quick and dirty GUI, and more. Many EPM projects I have been on have had a long list of deliverables, often including drill-through, and all too often drill-through is the one that gets cut due to budget and time constraints. And if it gets cut, sure, finance might have to fall back on a “quick and dirty” option like this with Access.

Now, the request du jour: can Drillbridge be used to quickly implement true drill-through when the data currently resides in an Access database? A few options come to mind:

  • JDBC-ODBC bridge to access the current Access database
  • Export the Access data to a relational database
  • Export to CSV and access it via a JDBC CSV reader
  • Read the CSV dynamically using Drillbridge’s embedded database

I won’t bore you with an exhaustive discussion of the pros and cons of these options, but I will say that the JDBC-ODBC bridge was a non-starter from the get-go (for me), mostly because I looked into it for another project years ago and the general consensus from Sun/Oracle was a) don’t do that [anymore] and b) the performance isn’t great. As for exporting Access to a relational database, yes, that is closer to the ideal configuration, but if that were an easy/quick option in this case, we probably wouldn’t be on Access already (i.e., for whatever reason, finance didn’t have the time/patience to have the IT department stand up and manage a relational database, to say nothing of maintenance, ETL, and everything else).

Next, while there are a handful of JDBC CSV readers out there, they seem to have their quirks and various unsupported features. As it turns out, though, Drillbridge’s embedded database ships with a pretty capable CSV reading capability that lets us essentially treat CSV files as tables. That sounds perfect, and as a bonus there are no additional JDBC drivers to ship. So let’s focus on that option and how to set it up!
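
To make that concrete, here’s a minimal sketch of querying a CSV file as if it were a table. It assumes the embedded database in play is H2 and that the H2 driver is on the classpath; the file path, column names, and filter values are all made up for illustration. In Drillbridge itself, a query of this shape would presumably live in the report definition rather than in standalone Java code, but the idea is the same.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CsvAsTableSketch {
    public static void main(String[] args) throws Exception {
        // In-memory H2 connection, just for demonstration purposes.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:drill");
             Statement stmt = conn.createStatement()) {

            // H2's CSVREAD table function exposes the file as a table, so the
            // drill-through query can filter it like any relational source.
            // The path and the column names (taken from the CSV header row,
            // assumed upper-case here) are hypothetical.
            String sql = "SELECT ACCOUNT, ENTITY, PERIOD, AMOUNT "
                       + "FROM CSVREAD('C:/data/gl_detail.csv') "
                       + "WHERE ENTITY = '100' AND PERIOD = 'Jan'";

            try (ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString("ACCOUNT") + "\t"
                            + rs.getString("AMOUNT"));
                }
            }
        }
    }
}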

Speed up ASO SQL data loads by using multiple rules files

Just another quick post today about possibly speeding up data loads to an ASO database when loading from SQL. I got on a quick call with a former colleague who was looking to gain a little more performance on their load process for a massive ASO database, and the first thing that jumped out at me was that you can do parallel loads with some native MaxL syntax.

Here’s a quick example of the syntax:


import database $APPLICATION.$DATABASE data
connect as $SQL_USER identified by $SQL_PW
using multiple rules_file $RULE1, $RULE2, $RULE3, $RULE4, $RULE5
to load_buffer_block starting with buffer_id 100 on error write to "errors.txt";

Basically, you provide multiple rules files (configured for your SQL data source, of course). The rules files are likely to be largely the same as each other, but I suppose it’s possible you might want to partition the data in some logical way to try to speed things up even more.

For example, let’s say that in the code above, we are loading five years of data from a relational database. We might then set up each rules file for a particular year by doing the following:

  • Set the year in the data header
  • Remove that column from the list of SELECT columns
  • Put a filter/predicate in the WHERE clause on the query
  • Bonus points for using substitution variables in both the header definition and the WHERE clause

Performance in this particular use case went up substantially. It’s my understanding that data loads that were taking an hour are now cut down to 17 minutes. Your mileage may vary, of course.

Let’s Not Forget About Hybrid BSO

That said, I think this can be an effective strategy for squeezing performance out of an ASO cube that needs a smaller load window when you don’t want to go changing a lot of the internals in play. If you’re doing new development, though, I strongly, strongly recommend using hybrid BSO (or rather, BSO with the cube configured properly so that you actually get the hybrid performance benefits). I have been seeing hybrid BSO cubes absolutely killing it in performance, thanks to their ability to leverage ASO technology for aggregates and the massive calculation improvements that come from the smaller block sizes and indexes you get when so many members are dynamic calc. Plus, you get all of the classic/rich/awesome BSO functionality out of the box, like dynamic time series, expense tagging, time balance, and more. These were never very strong areas for ASO and often required a lot of non-optimal workarounds to make users happy.

New webpage for Essbase Java API evolution

A fair bit of my job is dealing with and building solutions around the Essbase Java API. For many years, the Java API has been the premier way to programmatically work with Essbase (compared to, say, the C and VB APIs, which have fallen out of favor). As part of this development work, it’s often important to see when (in terms of version) a certain class, method, interface, or other object was added, modified, removed, or deprecated.

As a bit of a side project, I have been working with a library for comparing Java JARs to each other (japicmp). By processing and interpreting the results of comparing just about every Essbase Java JAR, from 7.0.1 through the 9.x series, multiple 11.x releases, and finally version 12.2.x, I have come up with something of a master table that shows all of these changes. You can view the initial results of the Essbase JAPI JAR evolution analysis. I’ll probably refresh and enhance the output as new library versions become available or as additional insights prove useful.

Screenshot from the Essbase Java API evolution analyzer
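
For anyone who wants to attempt something similar, the heart of the comparison is pretty compact. The sketch below follows my reading of japicmp’s programmatic API (verify it against the version you actually pull in), with made-up JAR file names and version labels; it simply prints a change status per class, and building a master table is mostly a matter of also walking the per-method and per-field details that japicmp reports.

import japicmp.cmp.JApiCmpArchive;
import japicmp.cmp.JarArchiveComparator;
import japicmp.cmp.JarArchiveComparatorOptions;
import japicmp.model.JApiClass;

import java.io.File;
import java.util.List;

public class EssJapiDiff {
    public static void main(String[] args) {
        // Compare an older and a newer Essbase Java API JAR (example file names).
        JarArchiveComparatorOptions options = new JarArchiveComparatorOptions();
        JarArchiveComparator comparator = new JarArchiveComparator(options);
        List<JApiClass> classes = comparator.compare(
                new JApiCmpArchive(new File("ess_japi-11.1.2.4.jar"), "11.1.2.4"),
                new JApiCmpArchive(new File("ess_japi-12.2.1.jar"), "12.2.1"));

        // Each class carries a change status (NEW, REMOVED, MODIFIED, UNCHANGED)
        // that can be rolled up, release over release, into an evolution table.
        for (JApiClass apiClass : classes) {
            System.out.println(apiClass.getChangeStatus() + "\t"
                    + apiClass.getFullyQualifiedName());
        }
    }
}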


Configuring Drillbridge with Financial Reporting Web Studio

Drillbridge works perfectly with Financial Reporting Web Studio, the successor to the desktop-based version of Financial Reporting (also commonly called HFR or FR). FR was stuck with a very archaic client (let’s just say it’s from around the Clinton administration), but it has been revamped for the future with a completely web-based interface. Based on my interactions with that interface, I think this product is best thought of as gap coverage for existing FR users; it’s not necessarily the place you want to do new development, especially given some of the other shifts and developments in the reporting ecosystem lately. My colleague Opal Alapat has posted some really great thoughts on FR and its place in this ever-changing world, which I encourage you to read.

In the meantime, there are countless existing FR installations that organizations need to support and perhaps transition to this newer incarnation of FR. As before, Drillbridge works seamlessly to give you and your users advanced drill-through capabilities in Smart View, Hyperion Planning/PBCS, FR, and now FR Web Studio. I found that the UI has a few quirks, but I’ll walk through a simple example and try to point them out along the way.
