New webpage for Essbase Java API evolution

A fair bit of my job is dealing with and building solutions around the Essbase Java API. For many years, the Java API has been the premier way to programmatically work with Essbase (compared to, say, the C and VB APIs, which have fallen out of favor). As part of this development work, it’s often important to see when (in terms of version) a certain class, method, interface, or other object was added, modified, removed, or deprecated.

As a bit of a side project, I have been working with a library for comparing Java JARs to each other (japicmp). By processing and interpreting the results for just about every Essbase Java JAR from 7.0.1, through the 9.x series, multiple 11.x releases, and finally version 12.2.x, I have come up with something of a master table that shows all of these changes. You can view the initial results of the Essbase JAPI JAR evolution analysis. I’ll probably refresh this and enhance the output as new library versions become available or as additional insights prove useful.

Screenshot from the Essbase Java API evolution analyzer

Continue Reading…

Hacking the Essbase Java API to run Application Calcs

This post might alternately be titled, “So you’re really stubborn and wasted a couple of hours messing with the Essbase Java API”, or something. I was in a discussion the other day and was asked about the ability to run an application-level calc script.

Well, back up, actually. Did you know that calc scripts can exist at the application level in Essbase? For a very long time, Essbase has had this notion of applications and databases (with databases often just being called cubes), such that there is usually one database/cube inside an application, but there can technically be more (at least in the case of BSO). It’s almost always a best practice to have just one cube per application, largely for technical reasons.

Continue Reading…

Top Posts of the Year 2016

Well, 2016 is almost behind us. I haven’t done this before, but given that I’ve been doing a fair bit of blogging this year, I wanted to point out the “top posts of the year” on ye olde Jason’s Hyperion Blog. The subjects are diverse (as far as a Hyperion blog goes, I suppose) and I think are an interesting reflection of what people are interested in. Starting with the most popular:

Running MDX queries through a JDBC driver (for fun?): I got a lot of feedback on the MDX-over-JDBC franken-driver. In retrospect, I think this goes to show how rich, diverse, and challenging the world of data integration around Essbase can be. People – developers, consultants, users, whoever – are constantly spending time, energy, and money getting data in and out of their EPM systems. The Thriller MDX-over-JDBC driver struck a real chord with some people who see it as a way to bridge the gap between EPM and other systems.

Drillbridge acquired by Applied OLAP: Probably the biggest news for me this year. Applied OLAP acquired all of Drillbridge (as well as myself) and added it to their portfolio of products, including the Dodeca Spreadsheet Management System, Dodeca Excel Add-In for Essbase, and the Next Generation Outline Extractor. Recently I announced that the enterprise/supported version of Drillbridge was officially named Drillbridge Plus and offers many compelling features, such as upper-level drill support from PBCS.

Kscope16 sessions I’m looking forward to: Interestingly, people were very curious as to what sessions I planned on attending at Kscope16. I’ll be sure to post thoughts on Kscope17 sessions when the time is right. I’ll have a single presentation at Kscope17, which will focus on “demystifying the PBCS REST API”. I hope it’s a crowd-pleaser that people will find useful.

Dependent Selectors in Dodeca: I blogged extensively about Dodeca this year, and apparently this was one of the most popular articles. Dependent selectors are a great feature in Dodeca that allow for narrowing down or otherwise dynamically generating the selection values for a user. For example, choosing a state could cause another selector to narrow its list of cities to just those in the given state. I’m both surprised and not surprised that this is the most popular Dodeca article. I think it’s cool because this is the type of feature that really enhances the user experience by respecting their time and making a system easier to use.

Data Input with Dodeca, part 1: Dodeca is great for providing a structured way to input data into a cube that is far more robust than “we do lock and sends”. This was the first part in my data input series (six articles!) that covered inputting to Essbase, to relational datasources, to both at the same time, commentary, and more.

Camshaft MDX tool updated and available: Again with the MDX/data integration theme, people were very curious to find out more about a command-line tool that helps convert MDX queries to usable data files.

Essbasepy updated for Python 3: Surprisingly (to me), the article on the Essbasepy library caught a lot of people’s attention. A lot of people are using Python to do integration/automation, and Essbase is definitely a part of the picture.

TBC Files for Bankruptcy: My tongue-in-cheek look at the woeful situation at everyone’s favorite beverage company!

Drillable Columns in Drillbridge: Lastly (but not least), one of my favorite features in Drillbridge and I think one of the standout features that you get when it comes to drilling into a web browser instead of a tab in your workbook: the ability to drill from a drill. With drillable columns, you can specify a subsequent view to drill to and the POV of the row (the global POV plus the key/values from that row) will be used to execute it. Many organizations are using this to drill into further/related journal detail, PDF files of invoices, and more. It’s a great feature!

Well, those are the highlights from 2016. I’ll be looking forward to another productive blogging year with all sorts of exciting things regarding Dodeca, Drillbridge, the Next Generation Outline Extractor, Kscope17, and even a few secret projects I have been working on. Happy new year!


Vess + ODI to extract Essbase metadata

Well, apparently it’s Friday Fun Day over here in Seattle. I was going to head up to the mountains for the weekend but plans changed. So how about a little frankendriver Vess update with some ODI goodness thrown in?

Vess has some really interesting data integration possibilities that I’ve mentioned before, one of which is being able to drop it into Oracle Data Integrator and use it as you would any other JDBC driver. I hadn’t really tested this out though, until yesterday. It turns out that it works surprisingly well.

Continue Reading…

Playing with the Thriller MDX/JDBC Driver in Drillbridge

Last week I talked about a new side project, which is a JDBC driver called Thriller for executing MDX queries against Essbase and mapping the results back into a normal relational database. And at the time, I said that this driver had some really interesting use cases, such as in Dodeca, Drillbridge, ODI, and other tools that work with JDBC drivers.

Speaking of Drillbridge – in the very near future I will be sharing Drillbridge’s official future direction, which I think is really exciting, but more on that later. In the meantime, let’s drop this baby into Drillbridge and see what happens!

The following walkthrough of using Thriller with Drillbridge will show off some features that are only available in the licensed version of Drillbridge, although this should in theory work with Drillbridge Community Edition (the free edition of Drillbridge), assuming you have the Thriller driver JAR file.

Continue Reading…

Running MDX queries through a JDBC driver (for fun?)

So there I am, sitting in front of the Alaska Airlines gate at Boston Logan airport, waiting for my flight home to Seattle. It’s not a particularly glamorous terminal – the divorce from Delta hasn’t been too kind to Alaska at BOS; Delta seems to have kept the house and kids while Alaska microwaves Lean Cuisine on a futon in its bachelor pad…

As I’m pondering why there are white rocking chairs in the terminal, my phone rings with a familiar name: Mr. Brian Marshall. We catch up and exchange pleasantries before pivoting over to more important matters (all things EPM of course!).

Brian: “So… Vess.”

Jason: “Oh boy…”

So we get to talking about accessing Essbase data through a Java database driver, à la Vess. And we get to talking about running MDX queries and dumping the output – à la Camshaft.

And as the talk goes on I end up saying something stupid like this: “You know what might work? Just pass an MDX query through the driver over to Essbase and map the output to a fake table… It’d be like an unholy combination of Vess and Camshaft. You could probably knock it out in a day or two.”

And at that moment I knew I wouldn’t be able to resist opening my laptop for the five-plus-hour flight home. Continue Reading…

Oracle Open World 2016 Recap

As I mentioned a week or so ago, I made a last-minute appearance at Oracle Open World this year. It was my first time attending and presenting at OOW. I actually didn’t catch too much of the conference, as I only flew in on Wednesday and flew out on Thursday. Nevertheless, it was a bit of a whirlwind experience, and a very good one. While I hadn’t planned on it (I’m more of a Kscope guy), I am now looking forward to attending Open World next year.

As for the presentation I was part of, I think it went pretty well. Many thanks to Gabby Rubin of Oracle for coming up with the idea for the presentation and facilitating it. The presentation was on “Essbase Tools and Toys” and was meant to highlight, at a high level, some of the interesting things that folks such as myself are doing that involve the Essbase APIs or otherwise work with Essbase. The presentation discussed items created by me, Tim Tow, and Harry Gates. Additionally, Kumar Ramaiyer (also from Oracle) talked a bit about what’s coming with Essbase Cloud Service (EssCS).

Continue Reading…

TBC Files for Bankruptcy

After years of declining and inconsistent revenue, the multi-state beverage company TBC has filed for bankruptcy. TBC was known for carrying an eclectic, if antiquated, selection of products that didn’t seem to resonate with modern consumers.

A vice president at TBC for the last 15 years, Richard Doyle, found out that his last day would be at the end of September. “Our products just don’t seem to resonate with the young, hip crowd,” said the forlorn executive, with a wistful look on his face (they don’t seem to resonate with the older crowd either, but I digress). “Kids these days are drinking their fancy this and that, grande double pumpkin bullshit or whatever.”

Continue Reading…

Vess + Dodeca for Substitution Variable Management

I’m gonna go a little crazy today and combine two worlds, just for fun: the Vess “virtual” Essbase JDBC driver, and of course, Dodeca. I’ve written about Vess before, and even talked about it for a bit at Kscope16 earlier this year during a presentation with Tim Tow and Harry Gates on various interesting things we’re doing with Java and the Essbase Java API.

As a quick crash course on Vess, it’s a highly experimental Java JDBC driver that models an Essbase server’s applications/cubes/properties as virtual relational tables (I’ve written about Vess a few times before). At the moment this includes cube outline data, cube data, substitution variables, miscellaneous properties, and more. For example, when you connect Vess to, say, Sample/Basic, one of the tables you’ll get is SAMPLE.BASIC_VARS, and it’ll contain four columns: the application, cube, variable name, and variable value. You might think you wouldn’t need to know the application and cube for this table, but due to a nuance with Essbase variables (you can have the same variable name at the cube, application, and server levels), it’s actually needed.

In any case, not only can you read values from these columns using any SQL you want, but you can also perform operations on the table that in turn affect the Essbase server. So you can do an UPDATE or DELETE and it’ll change the variable’s value, or delete the variable.
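For the curious, that boils down to SQL along these lines. This is just a rough sketch using the table and column names described in this post; the specific variable name and values are made-up examples:

```sql
-- Read every variable applicable to Sample/Basic (server-wide ones show up here too)
SELECT APPLICATION, CUBE, NAME, VALUE
FROM SAMPLE.BASIC_VARS;

-- Change a variable's value on the Essbase server (variable name/value are illustrative)
UPDATE SAMPLE.BASIC_VARS
SET VALUE = 'Jan'
WHERE APPLICATION = 'Sample' AND CUBE = 'Basic' AND NAME = 'CurrMonth';

-- Remove that same variable entirely
DELETE FROM SAMPLE.BASIC_VARS
WHERE APPLICATION = 'Sample' AND CUBE = 'Basic' AND NAME = 'CurrMonth';
```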

With that in mind, I thought to myself, you know what might be interesting: what if we added the Vess driver to Dodeca (since Dodeca supports third-party database drivers) and wired up a simple view that can edit the variables? So that’s exactly what I did, and I thought it’d be fun to share.

Adding Vess to Dodeca

The first thing to do is add the Vess library, along with a couple of other Java libraries that it leans on, to the Dodeca servlet. Typically you’d want to add these to your Dodeca WAR file when you build it with the “Click Once Prep Utility”, but since this is just for testing purposes, I can just add the JAR files to the already deployed servlet. I wouldn’t want to do it this way in production because when I went to deploy a new WAR file, I’d lose my Vess drivers. Here are the drivers added to the /dodeca servlet:

Vess Java JAR files added to Dodeca (dodeca) servlet

For good measure I restarted the servlet container (in this case, restarting Tomcat 7 using sudo service tomcat7 restart on this little Ubuntu VM). Then we can log in to Dodeca and create a new SQL connection:

A Vess connection is created inside of Dodeca

There’s not a lot to see here other than to “show off” that Vess is indeed just a normal JDBC driver as far as other software is concerned – in this case, Dodeca. As you can see, Vess introduces its own JDBC URL format. Vess can connect in embedded mode (indicated, in this case, by the scheme of the URL). The rest is fairly standard: the address of the server (Vess assumes the default Essbase port of 1423 if none is specified) and, in this case, a particular app/cube to connect to. Other than the URL, the driver class is specified. As with Oracle/SQL Server/MySQL, the class is just the Java class implementing the JDBC Driver interface. These are typically things like com.mysql.jdbc.Driver or something similar, and Vess is no different in this regard. Lastly, for purposes of the Dodeca connection, a username and password are specified. These should be the credentials for an Essbase user, since internally Vess will use them to connect.

With the SQL connection mapped in, I can create the SQL Passthrough DataSet that will contain my SELECT queries, and optionally, parameterized INSERT/UPDATE/DELETE statements if I want to have support for those (which I will).

Configuring the SQL Passthrough DataSet for Vess variables

You can see that unlike some of the other SQL Passthrough DataSet examples I have shown lately, this one has two queries. It’s worth noting, briefly, that a SQLPTDS isn’t an object that just contains one query or otherwise concerns itself with a single dataset; it can contain an arbitrary number of [usually related] queries. In this case I have two: one for server-wide substitution variables, and one for variables applicable just to Sample/Basic (these actually overlap a bit, as I’ll show shortly).

The definition for the “server variables” query is very straightforward and only contains a SelectSQL configuration:

On the Dodeca query editor, looking at the first query for pulling out global variables from the Essbase server
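The SelectSQL here presumably needs nothing more than a simple query against the server-level variables table; a sketch along these lines, where the column names are my assumption (the table is only described as holding variable names and values):

```sql
-- Server-wide substitution variables (column names assumed)
SELECT NAME, VALUE
FROM VESS_SCHEMA.VARS;
```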

As noted earlier, Vess creates a table in the schema VESS_SCHEMA called VARS that contains the names and values of server-wide substitution variables. Over on the Sample/Basic variables configuration, there’s a little more to it:

The second query is modeled on a specific table for the Sample/Basic database

Here there are queries that model the DELETE, INSERT, UPDATE, and of course SELECT operations. Not pictured (it’s collapsed on the config screen) is that I defined the primary key for this table as the combination of APPLICATION, CUBE, and NAME columns (while the final column, VALUE, is not part of the primary key).

To get a flavor for what the various queries look like, here’s the UpdateSQL configuration:

Dodeca UpdateSQL query for updating a variable’s value

You can see that the particular variable is identified by three column values (the primary key values), and that the value gets updated for this operation. There are four tokens in play, which will come from the row being edited in the view. There’s no primary key value being generated on the server-side (some of my previous examples had an integer that was generated server-side), so there’s no need for a post-insert select statement.
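In other words, the UpdateSQL boils down to something like the sketch below. The :NAME-style placeholders simply stand in for wherever Dodeca’s four tokens go; they are not the literal token syntax from the screenshot:

```sql
-- Three key columns identify the variable; VALUE is the only column that changes
UPDATE SAMPLE.BASIC_VARS
SET VALUE = :VALUE
WHERE APPLICATION = :APPLICATION
  AND CUBE = :CUBE
  AND NAME = :NAME;
```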

With all of the SQL Passthrough DataSet configuration out of the way (but a little more to come on the view configuration), I can now build a simple view template for showing the data:

Creating a template to display the variables from the SQL Passthrough DataSet

If you’ve followed some of my other examples, this should seem pretty basic by now. There are two data ranges on this sheet but I’m just showing one in the preceding screenshot. The dataset has four columns, and so there are four columns on the range. That’s actually all there is to the view template itself. The rest of the configuration is on the view to set its data range and wire it up to the SQL Passthrough DataSet:

The configuration of the View that will display/edit the variables

Nothing too special here. You can see that I turned off RowAndColumnHeadersVisible to clean up the final appearance of the view a bit, and I have my one DataSet range defined. Over in the DataSet range definition:

The DataSet Range definition for the view

There are two DataTable ranges defined (again, one for server variables and one for the Sample/Basic variables). Now opening up the configuration for the SampleBasicVarsData range (I’ll skip showing the details on the server variables range since it’s pretty simple):

DataTable Range Editor for the Sample/Basic data set

I’ve turned on the ability to add, delete, and modify rows (INSERT, DELETE, UPDATE). This is a really nice bit of granularity to have in Dodeca, since there’s a very legitimate use case where I might want a user to only be able to change a variable’s value but not delete it or add a new variable. Other than that bit of configuration, I’ve specified the corresponding range name on the sheet/template and turned on InsertCells and NoColumnHeaders, which is fairly standard for me with data sets like this.

Okay, the SQL Passthrough DataSet is set up, the template is set up, and the view configuration is set up. Let’s build this and see what happens:

Built Substitution Variable view

It looks just like I thought it would! I can see my two server-wide substitution variables, and over on the table for Sample/Basic, I can see all of the variables that I have there. You’ll note that the server-wide variables seem to “repeat” in the Sample/Basic table. You simply have to think of the variables for a single database in terms of which variables are applicable to that database, and server-wide variables are applicable. Of course, if there’s a more specific (specific to the database) variable, it’ll trump the server-wide variable.

If you’re particularly astute with your screenshot-reading skills, you may notice that in the preceding shot the cursor is on a cell in the Sample/Basic variables table, and therefore the row-editing buttons on the toolbar are active (insert, delete, save). So I can change a value on a variable, hit the save data button, and Dodeca will perform the proper query from the SQLPTDS. Let’s do that and see what happens:

View after updating a variable value

Well, it’s certainly less dramatic with screenshots; you’ll have to take my word for it that the PrevYear variable did indeed update on the server from FY11 to FY10. Under the hood, Dodeca fired off the properly filled-in UpdateSQL statement, which was of course handed to the Vess driver, and in turn Vess translated the call and invoked the appropriate variable-updating logic in the Essbase Java API (magic!).
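For that particular edit, the filled-in statement handed to Vess would have been along these lines (reconstructed from the tokenized sketch above; the key values are my assumption of how the row is keyed):

```sql
-- PrevYear changed from FY11 to FY10 for Sample/Basic
UPDATE SAMPLE.BASIC_VARS
SET VALUE = 'FY10'
WHERE APPLICATION = 'Sample' AND CUBE = 'Basic' AND NAME = 'PrevYear';
```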

Summary / Vess Availability & Download

I hope you enjoyed this somewhat unique (or totally unique, I suppose) combination of a couple of different technologies. Vess is a bit of a unique take on things in the Essbase world, whereas Dodeca provides the peas to Essbase’s carrots. And yet, combining the two results in something wildly “interesting”.

I’m not saying that organizations should manage substitution variables this way (and again, the substitution variable aspect of Vess is just one of its facets, but it’s a nice simple one to play with), but this certainly makes it quite possible.

I know of many organizations that specifically, or rather begrudgingly, give EAS to a handful of finance power users that need to be able to tweak variables. Sometimes instead of EAS you’ll see one-off MaxL scripts, where the update procedure is to tweak the script or a text file and run it. All too often this also involves plain-text credentials, hassling with installing the MaxL runtime on a ‘regular’ desktop machine, and more. So in this particular case, while the Vess driver actually does a lot more than just substitution variables, it can be leveraged for an innovative solution that is “cleaner” than many alternatives.

As an alternative (and still using Dodeca), we could have shelled out to launch a MaxL script and passed along the variable value, achieving much the same effect. This could work, but it would obviously require much more configuration. And to the extent possible, I don’t like to create solutions that are ostensibly web-based but need to “drop down to the file system”, since in my experience that usually introduces a somewhat fragile or ‘sensitive’ element to the system that tends to act up or break relatively often.

Vess is still “highly experimental”, which I guess is a nice way of saying “a lot of things can go wrong, there’s no warranty, but it works… mostly. Asterisk.” Anyway, Vess isn’t available as a public download, but if you’d like to play with it, please feel free to contact me and I can provide the file and some basic instructions.


Friday fun day, fish names, and 100th post!

Happy Friday! Most of you Hyperion folks out there have probably called it a week already, so you can just catch the fun from your RSS readers or when you come up for air next week. It has been a busy week on my end, what with doing a fairly deep cubeSavvy review, building elegant/robust/awesome solutions for clients, polishing up open source Essbase power tools, and more (even a few things I can’t mention… yet).

A while back I mused on some Essbase or Hyperion related names for my new betta fish. I never circled back to this but I have to go with a late submission from “Keith” – so without further ado, Mr. Fish will be henceforth known as DBAG. *hehe*

Last, but not least, this is my one-hundredth blog post here. Wow! I’d probably write anyway even if no one read this blog because I find it oddly therapeutic (not to mention serving as a repository for the obscure bugs I track down and fix), but to those of you that regularly comment, email me, offer to help out on testing some tool, and say Hello at conferences, thank you very much for your time and thoughtfulness.

Have a great weekend.