Vess Updates Substitution Variables

A colleague of mine is running into an issue with substitution variables and was looking for a way to keep values in sync. He thought Vess might be a good fit. Vess, as I have blogged about before, is a “virtual” Essbase JDBC driver: it maps Essbase concepts into a typical relational database model. For example, Vess exposes tables that model substitution variables.

In the case of server-wide substitution variables, there is a “VARS” table that has two columns: NAME and VALUE. For each application, there is another table that contains four columns: NAME, VALUE, APPLICATION, DATABASE.

As a quick aside, it might seem a little odd to have separate tables. After all, this is notionally about the same information as the Variables screen in EAS. Well, you have to flip your thinking a little. Don’t think of variables as belonging only to either the server or a cube: think in terms of which variables are applicable to a cube. In other words, if you ask Essbase which variables are applicable to the whole server, you get the global variables only. If you ask which variables are applicable to a cube, you get the cube, app, AND server-wide variables. This is one of the reasons there are multiple tables to model the variables.

Getting back on track, given that we have these tables and we can treat them just like normal SQL tables, we can do some interesting things. Let’s say we want to create or update a variable specific to an app that exists in the global scope. We can do this in one line:


INSERT INTO SAMPLE.VARS
SELECT NAME, VALUE, 'Sample', 'Basic'
FROM VESS_SCHEMA.VARS WHERE NAME = 'Foo';

What’s going on here? In Vess, a schema named VESS_SCHEMA is presented for server-wide things (server wide variables are in the table VARS in this schema, as shown above). The server VARS table only has columns NAME and VALUE.

Each application on the Essbase server is modeled as its own schema. In this case, our favorite app – Sample – gets a schema named SAMPLE. This schema also contains a VARS table (containing columns NAME, VALUE, APPLICATION, and DATABASE).

Given these tables, it’s a simple matter of selecting the server variable with the name we want (in this case, a variable named ‘Foo’) and inserting it into the variables for the Sample app.

Of course, if we wanted to, we could alter the name using normal SQL (string truncation, substrings, and so on). We can also delete variables, like this:


DELETE FROM SAMPLE.VARS WHERE DATABASE = 'Basic';
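And since these are just SQL tables, updating a value in place is plausible too. This is a hypothetical sketch (the variable name is made up, and it assumes Vess supports UPDATE on the VARS tables the way it supports INSERT and DELETE):

```sql
-- Hypothetical: roll an app-level variable forward to a new value
UPDATE SAMPLE.VARS
SET VALUE = 'FY16'
WHERE NAME = 'CurrYear' AND DATABASE = 'Basic';
```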

As I’ve said before, Vess continues to be an “interesting” proof of concept. As time permits I am filling out more and more functionality. At present, Vess models things like substitution variables and metadata you might see in MaxL or EAS (cube statistics, user sessions, etc.), can load data to cubes, and can do certain outline-related operations.

Vess is not available as a public download at this time but I have handed a few copies out to get feedback. I think Vess is just about good enough to be used in automation and other things. If you’re interested in using this in a production situation (automation or otherwise), please contact me to discuss support options.

More fun with Vess, the Essbase JDBC driver

All I can say about Vess is that it’s still… interesting. This is definitely one of the more complex things I’ve ever worked on, as well as one of the more elegant. I reworked a lot of the driver’s internal plumbing, which is now enabling some interesting and rapid progress.

Here are some interesting things that you can do as of right now:

  1. Query list of applications and metadata
  2. Query list of cubes and metadata
  3. Query list of sessions (same as Sessions in EAS)
  4. Load data without a load rule by using a SQL INSERT statement
  5. List users
  6. List substitution variables

Some of the really fun things are actually the most subtle. Wherever possible, Vess tries to supply metadata for a modeled table that matches up with Essbase semantics and naming limits. For example, substitution variables live in a table with the variable name, value, application, and database. Each of these columns is a normal String column, but each has a character limit that matches the corresponding Essbase maximum (320, 256, 30, and 30, respectively, taking Unicode into account).

The same goes for member names on the data load table. Also, wherever it makes sense, a primary key or nullability constraint is set on a table. For example, substitution variable names are assumed to be non-null, as are member names in an Essbase fact table, cube names, and more.
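As a sketch of what those limits mean for client code, here’s a hypothetical validation helper using the limits quoted above. The class and method names are mine for illustration, not part of Vess:

```java
// Sketch: a client-side check mirroring the column limits Vess reports
// (limits per the text above: name 320, value 256, app 30, db 30)
public class VarLimits {
    static final int NAME_MAX = 320, VALUE_MAX = 256, APP_MAX = 30, DB_MAX = 30;

    // Returns true if a candidate row would satisfy the modeled constraints:
    // non-null values and lengths within the Essbase maximums
    static boolean fits(String name, String value, String app, String db) {
        return name != null && name.length() <= NAME_MAX
            && value != null && value.length() <= VALUE_MAX
            && app != null && app.length() <= APP_MAX
            && db != null && db.length() <= DB_MAX;
    }

    public static void main(String[] args) {
        System.out.println(fits("CurrMonth", "Jan", "Sample", "Basic")); // true
        System.out.println(fits(null, "Jan", "Sample", "Basic"));        // false: name is non-null
    }
}
```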

I have been using a free generic JDBC tool called Squirrel SQL to help facilitate testing. Squirrel supports any database for which you can supply a generic JDBC driver. Here are a few screenshots from testing out Vess when connected to Sample/Basic (list of cubes on server):

View list of cubes and metadata with Vess

List of active sessions on server:

Viewing list of active sessions on Essbase server using Vess

Some of the tables also support DELETE semantics. For example, deleting a row from the session table is equivalent to killing that session, and deleting a row from a variables table deletes the variable.

Lastly, all tables that are modeled in Vess are part of the SQL metadata store, so that tools that go in to ask for metadata (such as Squirrel) get it back. This is interesting/notable because theoretically you could already use Vess with ODI to perform data loads and some other things. I haven’t tried this yet, but theoretically I could add the drivers, reverse the metadata, and use one of the tables in an interface. So it’d be fairly trivial to update a substitution variable directly from SQL, perform a data load, and you could even kick off a calc script simply by invoking a procedure.

In short, things are exciting in the Vess world. While many of you want to load data to Essbase cubes without a load rule, just as many are interested in pulling database metadata (things you normally dump with MaxL and scrape out), and this tool might just fill that need in a clean way. More to come later, I have some presentations to get ready for Kscope15 (although sadly none on this little technological flight of fancy…)

Essbase Version Archaeology

I was digging through some of the Essbase Java API internals the other day, trying to work around an issue I was encountering with Vess, my Essbase JDBC driver. Along the way I came across a constants file with a fun history (of sorts) of Essbase versions.

Complex software packages will often track their version somehow. This is for many reasons, but a very common one is so the software has a way to gracefully handle version discrepancies. For example, some client software (perhaps the latest version of Smart View) may be working with an older version of the Essbase server. If there was some feature that came out that Smart View supports but isn’t available in the older version of Essbase that it’s connecting to, it might find this out by asking the server for its version. The client can then hide/disable the feature as needed.
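The gating pattern described above might be sketched like this. The constant value comes from the API constants file (the full list appears later in this post); the feature/version pairing is invented for illustration:

```java
// Sketch: gate a client feature on the server's reported API version.
// The constant is real (from the Essbase Java API constants file);
// the "hypothetical feature" and its minimum version are made up.
public class VersionGate {
    static final int ESS_API_VERSION_11_1_2_2 = 725536;

    // Only enable the feature if the server reports 11.1.2.2 or newer
    static boolean supportsHypotheticalFeature(int serverApiVersion) {
        return serverApiVersion >= ESS_API_VERSION_11_1_2_2;
    }

    public static void main(String[] args) {
        System.out.println(supportsHypotheticalFeature(725552)); // 11.1.2.3 -> true
        System.out.println(supportsHypotheticalFeature(725504)); // 11.1.2   -> false
    }
}
```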

That said, the Essbase Java API has a file that tracks various versions of Essbase. This shouldn’t necessarily be interpreted as a full version history, but when the Essbase developers need to make a change that could affect version compatibility, it will often result in a change to the internal version matrix.

In any case, the constants in the file tell kind of an interesting story. We can see a handful of code names in the variables, such as Gemini, Orion, Levi, Sunspot, Eclipse, Cromwell, and Beckett.

Other versions are just designated by a major/minor/patch/build combination, such as Essbase 6.2, 6.5, 6.5.1 (this is the version of Essbase I cut my teeth on!), 7.0.0, 7.1, 9.0.1, 9.2.0.2, 9.2.0.3, 9.2.1, 9.3.1 (one of my favorite versions), and 9.3.1.4.

Then the jump to version 11 happens, where we can see 11.1.1, 11.1.1.2.1, 11.1.1.3, 11.1.2, 11.1.2.1, 11.1.2.2, 11.1.2.2.1, 11.1.2.2.5, and 11.1.2.3.

Curiously in the 11.x series, at some point there was an introduction of a seemingly separate series of variables, for editions 11.1.2.2.1 PV0, 11.1.2.3 PV0, and 11.1.2.3 PV1. I’m not totally clear what the PV is meant to indicate (patch version?).

At some point, someone also introduced a set of three variables to track the release/revision/patch that would apparently always reflect the current version of the API, but it was only ever set to major version 6, minor version 5, and patch 0. I suspect someone decided that this variable structure, while seemingly clean and useful, was untenable in the long haul.

Lastly, there are a couple of “versions” that coincide with specific features that came out or were otherwise necessitated by testing: ECLIPSE_ALPHA, ECLIPSE_ALPHA_II, Hybrid Analysis (HYBRIDANALYSIS), non-unique names (NONUNIQUENAME), UNICODE, and some sort of “parity” (PARITY) marker that has the same value as 11.1.2.1. An unqualified version variable seems to match the version of the library itself (I took this from an 11.1.2.3 JAR file, hence it matches the entry for 11.1.2.3).

I remember an Oracle product manager at some point indicating that the code names for Essbase versions were authors – hence Beckett and Cromwell – but I also distinctly remember hearing that one of these versions was codenamed “Talleyrand”, which isn’t reflected in this list. I’d be curious what some of the other code names were along the way, so if anyone out there knows, please send me an email and I can post them here for posterity (if nothing else).

Here’s the full list from the file from 11.1.2.3 for those that are curious:

public static final int ESS_API_VERSION = 725552;
public static final int ESS_API_VERSION_11_1_2_3 = 725552;
public static final int ESS_API_VERSION_11_1_2_2_5 = 725541;
public static final int ESS_API_VERSION_11_1_2_2_1 = 725537;
public static final int ESS_API_VERSION_11_1_2_2 = 725536;
public static final int ESS_API_VERSION_PARITY = 725520;
public static final int ESS_API_VERSION_11_1_2_1 = 725520;
public static final int ESS_API_VERSION_11_1_2 = 725504;
public static final int ESS_API_VERSION_11_1_1_3 = 725296;
public static final int ESS_API_VERSION_11_1_1_2_1 = 725281;
public static final int ESS_API_VERSION_11_1_1 = 725248;
public static final int ESS_API_VERSION_9314 = 602432;
public static final int ESS_API_VERSION_931 = 602368;
public static final int ESS_API_VERSION_BECKETT = 602112;
public static final int ESS_API_VERSION_921 = 598272;
public static final int ESS_API_VERSION_9203 = 598064;
public static final int ESS_API_VERSION_9202 = 598048;
public static final int ESS_API_VERSION_901 = 590080;
public static final int ESS_API_VERSION_NONUNIQUENAME = 589824;
public static final int ESS_API_VERSION_CROMWELL = 463360;
public static final int ESS_API_VERSION_71 = 462848;
public static final int ESS_API_VERSION_UNICODE = 458752;
public static final int ESS_API_VERSION_700 = 458752;
public static final int ESS_API_VERSION_651 = 413952;
public static final int ESS_API_VERSION_65 = 413696;
public static final int ESS_API_VERSION_HYBRIDANALYSIS = 413696;
public static final int ESS_API_VERSION_62 = 401408;
public static final int ESS_API_VERSION_ECLIPSE = 329472;
public static final int ESS_API_VERSION_ECLIPSE_ALPHA_II = 329216;
public static final int ESS_API_VERSION_ECLIPSE_ALPHA = 328960;
public static final int ESS_API_VERSION_SUNSPOT = 328192;
public static final int ESS_API_VERSION_LEVI = 327936;
public static final int ESS_API_VERSION_ORION = 327680;
public static final int ESS_API_VERSION_GEMINI = 262144;
public static final short ESS_API_RELEASE = 6;
public static final short ESS_API_REVISION = 5;
public static final short ESS_API_PATCH = 0;
public static final long ESS_API_VERSION_11_1_2_3_PV1 = 3116222111612928L;
public static final long ESS_API_VERSION_11_1_2_3_PV0 = 3116222111547392L;
public static final long ESS_API_VERSION_11_1_2_2_1_PV0 = 3116157687037952L;

Vess: a Virtual Essbase JDBC driver

Normally I finish some programming and then throw it over the wall to you guinea pigs (er, enthusiastic blog readers) before I say anything about it (no vaporware here, nosiree!), but I thought I’d share something I’ve been playing with as a proof of concept. It’s sort of inspired by some of the ODI/Essbase integration I’ve been working with lately.

The Essbase/Hyperion knowledge modules in ODI are sort of different from typical knowledge modules because in a lot of ways they cram a square peg into a round hole. So you get this relational facade over an Essbase cube in terms of loading and integrating data/metadata. Instead of relying on a normal JDBC driver, the Hyperion KMs more or less delegate out to some custom Java code that wraps the Essbase Java API. This isn’t a bad thing, it’s just how it works.

Unrelated to ODI, there are many situations where we are jumping through hoops to get data of some kind – not just cube data and metadata – out of Essbase. For example, you run a MaxL script to dump stats from a cube to a text file. Then you run some regular expression voodoo on it to get to the data you actually want.

Side note: parsing MaxL text output surely amuses the boffins at Oracle who designed it. I don’t think many people know this, but MaxL communicates with the Essbase server using a hacked-up JDBC driver (it’s ingenious, really). So when we parse the pluses and minuses and other ASCII crap off of some MaxL output, effectively what happens is this: a real dataset comes back from the Essbase server, the MaxL interpreter does the extra work of prettying it up with some text art (the only thing missing is lime green letters from the ’80s), it gets dumped to a text file, and then, in so many places, the text is reparsed back into data.

To some extent, the MaxL Perl and Essbasepy modules can be used to get MaxL data in a cleaner way. The Java API can definitely be used. Of course, I have no problem jumping in and coding up an Essbase Java API routine to pull some data down. But for people who aren’t as comfortable, stripping some info out of MaxL output offers a path of less resistance, so I can’t say I blame them.

So we have all of these instances where we can get data out of Essbase (not just actual cube data, but metrics from the server) using EAS, using MaxL, dumping MaxL to a text file, the C API, Java API, and so on. But it seems like a lot of these approaches have undesirable combinations of effort/quality/reward.

Let’s talk about JDBC for a moment. JDBC is the Java Database Connectivity model. Java is one of the most popular programming languages on the planet (perhaps not the sexiest language, but it’s getting better), and JDBC is the tried and true model for connecting to databases. Every popular database in the world has a JDBC driver – you can grab a generic SQL tool (I’ve been using the simple RazorSQL as of late), pop in the appropriate JDBC driver (such as for MySQL, Microsoft SQL Server, Oracle, DB2, whatever), connect, and start issuing SELECT statements to your heart’s content.

So, this all brings me to something interesting I’ve been playing with for a while: Vess – a virtual Essbase JDBC driver. Vess is a JDBC-compliant driver that can be dropped into any generic SQL tool that supports JDBC drivers (such as RazorSQL), and it can be used out of the box with ODI. Vess offers a JDBC-compliant URL for connecting to Essbase, such as the following:

jdbc:essbase://epm11123.corp.saxifrages.com:9000/Sample.Basic;mode=embedded

Vess then presents several tables, views, and stored procedures. For example, consider substitution variables. These are modeled in a table where the column types are String, String, String, String (the VARCHAR database type). So you can do this:

SELECT * FROM SYS.SUBSTITUTION_VARIABLES

And you get back a dataset kind of like this:

Sample, Basic, CurrMonth, Jan

(It’s not actually text with commas – it’s a normal dataset.) There are similar tables with other system/cube information in them, such as database stats, connected users (the Sessions tab in EAS), and more. There are views for dimensions/members/aliases/UDAs. There are stored procedures for calling calc scripts (e.g. sp_exec_calc('CalcAll')). MDX queries can also be crammed into a normal JDBC ResultSet with a little bit of imagination.
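Pulling those pieces together, here is roughly what using Vess from plain JDBC could look like. This is an illustrative sketch, not working code: it assumes the Vess driver is registered on the classpath, and the credentials, table name, and procedure-call syntax are placeholders based on the examples in this post.

```java
// Illustrative sketch only: assumes the Vess driver is on the classpath,
// and that credentials/table/procedure names match the examples in this post
import java.sql.*;

public class VessSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:essbase://epm11123.corp.saxifrages.com:9000/Sample.Basic;mode=embedded";
        try (Connection conn = DriverManager.getConnection(url, "admin", "password")) {
            // Read substitution variables like any other table
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM SYS.SUBSTITUTION_VARIABLES")) {
                while (rs.next()) {
                    System.out.println(rs.getString("NAME") + " = " + rs.getString("VALUE"));
                }
            }
            // Kick off a calc script via the stored procedure mentioned above
            try (CallableStatement cs = conn.prepareCall("{call sp_exec_calc('CalcAll')}")) {
                cs.execute();
            }
        }
    }
}
```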

So, why bother?

Using a JDBC driver would let us easily (and cleanly) get to Essbase data and metadata from any generic database tool, as well as drop the driver right into ODI and use it as part of a normal interface. Whereas ODI uses Jython to call custom Java classes, this could potentially simplify the layers a bit and offer an interesting alternative. It could also replace places where MaxL output is parsed via black magic, obviate the need for writing custom Essbase JAPI programs, and cover some use cases of the outline extractor.

Status

As of right now, the framework of the driver is created already and you can drop it into a SQL tool such as the aforementioned RazorSQL and run a command like SELECT name FROM SYS.SUBSTITUTION_VARIABLES WHERE APP = 'Sample' and get the proper results back. Over time (as time permits) I will be adding a few more gizmos. If there are any intrepid Java developers out there that would like to play with this, let me know and I can make this an open source project on GitHub that could be refined over time.

Where is the Essbase Java API method to build dimensions from a SQL source?

I was writing a custom ODI Open Tool the other day so that I could update an Essbase dimension as a custom package step. Normally this would be better suited as a normal update metadata interface, but in this case it seems like a good solution to be able to call an existing load rule without having to worry about the models and topologies.

In the Essbase Java API, most data load and dimension build operations are initiated from the cube object (IEssCube). I’m already familiar with the various loadData() methods that have several variations (variants of providing a rules file name, local/remote file, SQL username/password, error file name, etc), but hadn’t tried to start a dimension build from a SQL datasource from the Java API before.

Interestingly, there is no method in the IEssCube interface for directly performing a dimension build from a SQL datasource. But obviously we know it’s possible – and generally speaking, there’s a pretty clear mapping from EAS functionality to the underlying Java API.

So I was curious, what the hell does ODI do when you’re performing a dimension build (metadata update) from a datasource? Well, after a bit of digging through the ODI internals, I found it: ODI pulls the data from SQL, dumps it to a temporary local text file (a temporary file starting with the name EssData and ending with the .txt extension), then calls one of the normal data load methods on what is just a normal text file.

So the solution in my case (to implement the custom package step) is just to dump a ResultSet to a text file (no big deal), then call a normal load method. It’s a small workaround, but it works just fine. I thought it was pretty interesting that there isn’t a built-in method to do this.
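A minimal sketch of that workaround might look like the following. The helper name, tab delimiter, and column layout are my own assumptions for illustration; a real implementation would iterate a JDBC ResultSet and match the column order to the load rule:

```java
// Rough sketch of the ODI-style workaround: materialize rows to a temp
// file (EssData*.txt) that a normal load-rule call can then read.
// Delimiter and column order are assumptions; match them to your rule.
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class DumpToTempFile {
    // Write each row as a tab-delimited line and return the temp file path
    static Path dumpRows(List<String[]> rows) throws IOException {
        Path tmp = Files.createTempFile("EssData", ".txt");
        try (BufferedWriter w = Files.newBufferedWriter(tmp)) {
            for (String[] row : rows) {
                w.write(String.join("\t", row));
                w.newLine();
            }
        }
        return tmp;
    }

    public static void main(String[] args) throws IOException {
        List<String[]> rows = new ArrayList<>();
        rows.add(new String[] {"Jan", "Sales", "100-10", "New York", "Actual", "678"});
        Path f = dumpRows(rows);
        System.out.println(Files.readAllLines(f).get(0));
        // then: call a normal loadData() variant pointing at f, as with any text file
    }
}
```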

Essbase Java API Load from FTP quick followup

I was accused of being a tease for not knowing when the apparent “load from FTP” functionality got slipped into the IEssCube loadData() method. I did a little digging around, and it looks like it shows up no later than 11.1.2.2 and is definitely not in 9.2.1 (based on the Javadoc). I’m not sure about 11.1.1.3, 11.1.1.4, and 11.1.2.1 – I couldn’t quickly find their Javadoc pages (11.1.2.1 seems to just redirect to 11.1.2.2 on Oracle’s site now, and others just tell you to check your local APS folder…). I hope you’re happy (happier) now, Tim German. :-)

Essbase Java API digging: data load from FTP?

I spend a lot of time working with the Essbase Java API, so I have become pretty familiar with it. I have used it to build a middle tier for a mobile reporting solution, a core part of the innovative Drillbridge drill-through solution for Essbase, Planning, and Financial Reporting, and countless other little apps. (In fact, my experience with Essbase, Java, ODI, and Jython all converges in one of my upcoming endeavors, but more on that later…)

In any case, much like a trip to Costco, going through the Essbase Java API can be a bit of a treasure hunt at times. One such thing I came across recently had to do with the loadData() method on a cube object (IEssCube).

There are actually a few loadData() methods – in programming parlance, the method is overloaded. That is, there are multiple methods with the same name, but they differ in their parameter types, so behind the scenes, Java can figure out which one to call. Method overloading is frequently done for programming convenience.

For example, an object might have a method getGreeting(String name) that takes a name and returns “Hello ” plus that name. Another method with the same name might be getGreeting(String name, Date time) and this returns a greeting that is customized by the time of day.
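Spelled out in code, that example might look like this (the time-of-day logic is my own embellishment):

```java
// Two methods share the name getGreeting; Java picks one by the argument types
import java.util.Calendar;
import java.util.Date;

public class Greeter {
    // Overload 1: just a name
    public static String getGreeting(String name) {
        return "Hello " + name;
    }

    // Overload 2: a name plus a time, so the greeting varies by time of day
    public static String getGreeting(String name, Date time) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(time);
        int hour = cal.get(Calendar.HOUR_OF_DAY);
        String part = hour < 12 ? "Good morning" : "Good evening";
        return part + ", " + name;
    }

    public static void main(String[] args) {
        System.out.println(getGreeting("Jason"));             // "Hello Jason"
        System.out.println(getGreeting("Jason", new Date())); // time-customized greeting
    }
}
```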

The Essbase cube object in the Java API contains three loadData() methods, and one of them caught my eye: it takes a username and password. According to the docs, you can actually load a file over FTP (in which case the file name would be an FTP path, presumably with a server and maybe even prefixed with ftp://), and the username/password are the FTP credentials.

I thought this was kind of cool because it’s not something that’s ostensibly visible in EAS. So it could be something that’s buried in the API that is used behind the scenes, or maybe it was some plumbing done for PBCS. Maybe it has even been there forever. Like I said, I thought this was interesting… there are also a few other fun tidbits I’ve seen over the years in the API so I’ll try and point those out in the future. If you know of some please send them my way!

Essbase Outline Export Parser released

I had a use case today where I needed to parse an XML file created by the relatively new MaxL command “export outline”. This command generates an XML file for a given cube, covering either all dimensions or just the dimensions you specify. I just needed to scrape the file for the hierarchy of a given dimension, and that’s exactly what this tool does: pass in an XML file generated by export outline, pass in the name of a dimension, and the output to the console will be a space-indented list of the members in that dimension. More information on usage is at the Essbase Outline Export Parser GitHub page, including sample input, sample output, and command-line usage.
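A toy version of the same idea is sketched below: walk a nested XML document and print a space-indented list of member names. Note that the element and attribute names here ("member", "name") are simplified stand-ins for illustration, not the exact schema that export outline emits:

```java
// Walk a nested XML hierarchy and emit space-indented member names.
// Element/attribute names are placeholders, not the real export-outline schema.
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class OutlinePrinter {
    // Append each <member> at an indent matching its depth in the hierarchy
    static void collect(Element e, int depth, StringBuilder out) {
        if (e.getTagName().equals("member")) {
            for (int d = 0; d < depth; d++) out.append("  ");
            out.append(e.getAttribute("name")).append('\n');
            depth++;
        }
        NodeList kids = e.getChildNodes();
        for (int i = 0; i < kids.getLength(); i++) {
            Node n = kids.item(i);
            if (n.getNodeType() == Node.ELEMENT_NODE) {
                collect((Element) n, depth, out);
            }
        }
    }

    static String render(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        StringBuilder out = new StringBuilder();
        collect(doc.getDocumentElement(), 0, out);
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<dimension name='Year'><member name='Year'>"
                + "<member name='Qtr1'><member name='Jan'/><member name='Feb'/></member>"
                + "</member></dimension>";
        System.out.print(render(xml));
    }
}
```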

Also note that the venerable Harry Gates has created something similar that includes a GUI in addition to working on the command line. While both are written in Java, we’re using different methods to parse the XML. Since I’m more familiar and comfortable with JAXB for reading XML, I went with that; in my experience it gives a nice, clean, extensible way to model the XML file and read it without too much trouble. The code for this project could easily be extended to provide other output formats.

Essbase Java API Consulting & Custom Development Available

I recently finished developing a solution for a client that involved writing a custom Java program using the Essbase API. The client was really amazed at how quickly I was able to develop the solution because their previous experience with using it (or hiring someone to develop with it for them) was not nearly as productive or smooth.

I graciously accepted their compliment and then told them that I’ve simply been working with the Essbase Java API for a long time – almost a decade now. Not only that, but I have several helper libraries that I use in most of my projects that prevent me from having to reinvent the wheel. By this time the libraries are quite battle-tested and robust and help speed up the development of many common operations such as pulling information out of the outline, running MDX queries, programmatically doing a data load, pulling statistical information, and more. Instead of spinning my wheels writing and rewriting the same boilerplate code, I accelerate development and focus on creating a good solution for the task at hand.

That all being said, for those of you finding this blog post now or in the future – whether you’re an administrator, consultant, manager, or otherwise – if you find yourself needing help with a solution that involves Java development and the Essbase Java API, don’t hesitate to contact me. I am available through my consulting firm to do custom work, or even to fix existing solutions that are now exhibiting some quirk or need an enhancement. My extensive experience with Java and this particular API means that I can get up and running fixing your problem, not learning how to do it while on the clock.

Possible idea for a tool: cube delta

I have a question for my audience about a tool idea. Would it be useful to be able to tell what the data differences are between two cubes with the same (or highly similar) dimensional structure? For example, let’s say you had Sample/Basic on one server and Sample/Basic on another server. Would it be useful to check for differences in the data loaded to them, if any?

I could see this being helpful in checking for differences between cubes in development/QA/production, between archive cubes and “real” cubes, and possibly during testing when you spin off a side cube to check some calcs.

Just a thought. Let me know! After HUMA is kicked over the wall I’ll be looking for my next side project (as time permits) and I am trying to focus on things that will increase the productivity of Hyperion developers.