Quadruple Backslashes

No, Quadruple Backslashes is not a college rock band (that I know of). But maybe it should be one. Anyway, ever see quadruple backslashes in a path in a batch file? Like this:

SET FILE=\\\\some-server\\path-to-app-folder\\app\\data.txt

What’s going on here? This is a UNC path, and the normal form is like this:

SET FILE=\\some-server\path-to-app-folder\app\data.txt

Let’s say we are doing a data import in a MaxL script:

import database Sample.Basic data
    from data_file "$FILE"
    using server rules_file "LdAll"
    on error write to "errors.txt";

In many scripting languages, backslashes are special. They are used to “escape” things inside of strings. What if you need a newline inside of a string? That’s \n. What if you want a tab? That’s \t. What if you want to print a backslash itself? You can’t just put a single backslash in, because then the interpreter thinks that the next character is something special. So to write out just a backslash, you escape the backslash itself (so it’s \\).

To further complicate things, many scripting languages have a notion of interpolated and non-interpolated strings. That means that strings contained inside of double quotes (“This is a double quoted string!”) have their contents parsed/scanned for special characters. Strings in single quotes don’t (‘This is a single quoted string!’). So in the single-quoted string, we can sometimes (not always) get away with doing whatever we want with backslashes. But if we use the single quoted string we can’t stick variables in it.

So now we need a double quoted string so we can put variables in it, but this makes putting in backslashes for our UNC path a little complicated. No worries – just escape the backslashes. Returning to our quadruple backslashed MaxL example that is surrounded with double quotes, what’s happening is that MaxL parses the string contents (the stuff between the double quotes) and the four backslashes become two backslashes in the final string. The double backslashes in the other parts of the UNC path get translated into a single backslash.

The final resulting string has the two backslashes to start the UNC path and any other path delimiters are just single backslashes.
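
The same rule shows up in most languages with C-style strings. Here’s a minimal sketch in Java, whose double-quoted literals unescape backslashes the same way the MaxL parser does:

public class UncPath {
    public static void main(String[] args) {
        // The four backslashes collapse to the two that start a UNC path;
        // each doubled backslash elsewhere collapses to one.
        String file = "\\\\some-server\\path-to-app-folder\\app\\data.txt";
        System.out.println(file); // \\some-server\path-to-app-folder\app\data.txt
    }
}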

Normal Paths

If you are just loading a file from a normal, non-networked path (or a mapped drive), and you’re on Windows, then I highly recommend just using forward slashes to delimit the path. For example, consider an input file at D:\Hyperion\data.txt. Your double quoted string would be D:\\Hyperion\\data.txt, which gets translated down to just single backslashes. For the last decade or more, Windows has had awesome support for the more traditional Unix forward slash path separator, meaning you can just use D:/Hyperion/data.txt. The forward slash isn’t special at all, so you don’t need to escape it. You can’t use double forward slashes to denote a UNC path, though, so //server-name/path/to/file.txt will not work, thus requiring some backslash kung fu to get working.
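
To see that the two forms are equivalent, here’s a small hedged example (again in Java; the path is made up, and the comparison assumes you’re on Windows):

import java.io.File;
import java.io.IOException;

public class ForwardSlash {
    public static void main(String[] args) throws IOException {
        File escaped = new File("D:\\Hyperion\\data.txt"); // escaped backslashes
        File forward = new File("D:/Hyperion/data.txt");   // forward slashes, no escaping
        // On Windows, both canonicalize to the same path:
        System.out.println(escaped.getCanonicalPath().equals(forward.getCanonicalPath()));
    }
}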

ODI Automation: Don’t Call Us, We’ll Call You

I have developed numerous jobs with Oracle Data Integrator. I have developed new jobs that connect systems together, and I have developed jobs in ODI that replace existing jobs. There are plenty of reasons for redeveloping a solution in ODI that I have talked about before. Generally you can significantly clean things up – sometimes a little, sometimes a lot.

Sometimes a very particular aspect of an existing batch job (such as a certain section of script code) will be targeted for replacement with something developed in ODI. Let’s say that there is an existing batch file that performs a series of six distinct steps (not counting boilerplate code like setting variables and echoing out logging statements). For example, the steps might be these:

  1. Copy a file from one place to another (such as from an “incoming” folder to a known working location)
  2. Run a custom program to process the text file in some way, generating a new file
  3. Use a command-line SQL program to run a particular line of SQL that truncates a particular table
  4. Run a custom program (or a BULK INSERT) to load the text file to a SQL table
  5. Use a command-line program to run a line of SQL, such as an UPDATE on the newly loaded data (UPDATE STAGING SET YEAR = '2015' WHERE YEAR IS NULL or something)
  6. Run a MaxL script that clears a cube, imports data via a load rule, and calculates the database

The Perfect Solution

Now, from my architectural ODI ivory tower, I would make the case that this should all be done in an ODI job with an interface. For example, loading a text file to Microsoft SQL Server, defaulting the year on rows that are missing it, and loading the data to a cube could all more or less be done in a single interface (BULK INSERT LKM, Hyperion Data Load IKM, an appropriate load rule, etc.).

The Intermediate Solution

But let’s say that just the data load and nothing else is to be performed by ODI, because we are revamping some of the job, but not the whole job, for some reason (risk, time, money, incrementalism). This means that we are going to call an ODI job from an existing batch file. We will use our trusty friend startscen.bat to accomplish this. startscen.bat is good for running scenarios: we pass in the name of the scenario, the version, and the context code so that we can identify the exact job for the ODI server to run.
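
As a hedged sketch of that handoff (the scenario name, version, and context code are all hypothetical, and in the real job this is simply a line in the batch file rather than Java code):

public class CallOdiScenario {
    public static void main(String[] args) throws Exception {
        // Equivalent of a batch line like: startscen.bat LOAD_AND_PROCESS 001 GLOBAL
        Process p = new ProcessBuilder("startscen.bat", "LOAD_AND_PROCESS", "001", "GLOBAL")
                .inheritIO()
                .start();
        System.out.println("startscen exited with " + p.waitFor());
    }
}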

So now we’ll keep our existing batch file and most of its contents, but in place of part of the previous code, we’ll have a call to startscen.bat and then resume any necessary processing after that. So technically we’ve improved the process a bit (hopefully) by using ODI for part of it, but we haven’t really done much to significantly improve things. This is where I get to the point of the article: I say go all in with your ODI automation.

Think of this intermediate solution like this, in terms of script code:

  1. Copy file from incoming folder
  2. Run custom program to transform file
  3. Call startscen.bat (invoke ODI scenario, load SQL data, load to Essbase)

The benefits of this incremental approach should be obvious: we’ve ostensibly cleaned up part of the job. Here’s the thing though: I think we’ve actually added some undesirable technical debt by doing it this way.

Yes, the job technically works, but think about how this batch file is positioned from an administration point of view now: we have a batch file, some configuration settings for that batch file (server names, encryption keys, folder paths, etc.), and a batch file that is likely outputting data or otherwise logging things to a text file; then we’re calling the ODI job itself, thereby creating an entry in the ODI Operator (which is a good thing, but it will only show part of the overall job).

More often than not, when I am redeveloping an automation process for someone, the batch files have no error handling and inconsistent amounts of logging – and the logging that is done is oftentimes never looked at. Having easy visibility into our automation processes, however, is a very good thing, and we can achieve this with ODI. While incrementalism has its merits, I say go all in with the ODI automation.

Go all in

What I mean by this is that instead of sticking with a batch file that calls the various automation steps plus a single ODI step, use a package in ODI to call the necessary steps, your interfaces, and any other processing you need.

You get a lot of benefits from this approach. The ODI Operator will log each step so that you know how long each one took. You can get consistent and easy error handling/branching in an ODI package. You can easily log all of the output. You can schedule the job from the ODI scheduler. ODI has package steps that can be used to replace kludgey code in a batch file. For example, there is an ODI “wait for file” step that waits around for a file to show up. You can copy files around, pull files from FTP, and more – if you need to just call an arbitrary program or script you can do that with an ODI OS Command. Everything you need is there, and more.

You don’t even have to use the ODI scheduler. If you are already invested in another scheduler (Windows Scheduler, cron jobs, etc.) and want to keep everything in one place, then just call the ODI scenario from your scheduler, but that scenario will have been generated from a package that has all of the steps in it (instead of your batch file calling those).

This new world of pure ODI jobs is worth the extra effort to get to: you have status and metrics for all of your jobs (without tons of boilerplate logging code), you can manage your entire job in one place, and it’s easier to follow best (better) practices during development. While batch files and various scripts have served us well over the years, more and more I find myself abstracting away from the file system when it comes to automation, given the capabilities of ODI. It’s worth it.

Kscope15 Presentation Preview: ODI Workhorses

The other day I mentioned my goals for attendees for my upcoming ODTUG presentation on Drillbridge. Today I’m going to talk about my goals for my presentation on Oracle Data Integrator (ODI).

Over the last few years I have presented on ODI a handful of times. My main presentation has been highlighting a success story, where ODI was used to clean up and facilitate a lot of automation and ETL jobs for a health services company that [as you can imagine] has tons of data flying around everywhere. That presentation was more of a high-level affair, where I talked very generically about the benefits of ODI over the previous solution. Wanting to add a little more technical meat to the presentation, I appended what started off as a small section at the end where I took a look at just how ODI works under the covers.

While the “business” or high-level part of the presentation was all well and good, I found myself getting really excited to explain just how awesome the inner workings of ODI ETL jobs are, and what started out as a 10-minute flight of fancy into the lower depths of something technical has now been promoted, as it were, to a full-on presentation.

In other words, I am going to spend an entire presentation tearing apart a single ODI interface, explaining why the steps are sequenced the way they are, how to build idiomatic ODI interfaces, the effect of various options (journalization, query hints, delete all vs. truncate, etc.), update strategies, and more. I’m also going to marry up what is ostensibly an IT tool with various concepts from computer science, including the notion of idempotence (and why it’s a good thing).

With any luck, attendees coming to this presentation will leave with a new or expanded understanding of how interfaces/mappings work, feel comfortable modifying knowledge modules, and learn a trick or two when it comes to debugging ODI jobs. This will be a nuts-and-bolts, technical deep dive. While it’s ostensibly an advanced content presentation, I believe that even people with only a cursory familiarity will benefit from it as well. If you haven’t worked with ODI at all but are curious (and intrepid!) about what it can do, I think you’ll also benefit. So to all of my pure-Hyperion colleagues who haven’t dipped their toes in the ODI pool just yet, this is a great chance to hop on a different track and expand your horizons – I hope to see you there!

Kscope15 Presentation Preview: Drillbridge

I am very pleased to be able to say that I have two presentations at Kscope15. One of my presentations is on Drillbridge; the other is a deep dive on how interfaces work in ODI. I’m going to really have a chance to geek out on both of these, especially the ODI presentation.

As for the Drillbridge presentation, I want to give a brief overview not just of what I am going to cover, but what I hope to accomplish for those in attendance.

This Drillbridge presentation will be an elaboration of my original webinar. I’m going to introduce attendees to what Drillbridge is and how it works (of course). Then, as with the webinar, I will download and deploy Drillbridge in real time, showing off just how easy it is to get drill-through up and running. The original webinar had a stated goal of deploying drill-through in 10 minutes or less. Not only was drill-through deployed to a cube in less time than that (five minutes!), I used the spare time to redeploy drill-through on the same cube but with drill-to-bottom enabled in the Time dimension (and still had some time left over). Yeah, it’s that good.

Since I’ll have more time than the webinar, I’ll also be able to show off the handful of awesome features that have been added in the last year: custom mappings, support for Linux, custom plugins, server-side result paging, smart formatting, drill-from-drill, custom stylesheets, and more.

After the presentation, attendees should have a firm grasp of how Drillbridge works, how it can be used, and how to implement drill-through in simple situations. More importantly, they should be heading back to their respective companies excited about how they can now deliver a solid win to their users without breaking the bank or having to re-architect their systems.

There are a lot of great presentations this year, but I hope if you’re interested in drill-through or Drillbridge you will be able to attend!

Essbase Version Archaeology

I was digging through some of the Essbase Java API internals the other day, trying to work around an issue I was encountering with Vess, my Essbase JDBC driver. Along the way I came across a constants file with a fun history (of sorts) of Essbase versions.

Complex software packages will often track their version somehow. This is done for many reasons, but a very common one is so the software has a way to gracefully handle version discrepancies. For example, some client software (perhaps the latest version of Smart View) may be working with an older version of the Essbase server. If there is some feature that Smart View supports but that isn’t available in the older version of Essbase it’s connecting to, the client can find this out by asking the server for its version, then hide/disable the feature as needed.
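
As a hedged illustration of the idea (the constant value is real and appears in the listing further down; the method and the feature check are hypothetical):

public class FeatureGate {
    // Taken from the Essbase Java API constants file discussed below:
    static final int ESS_API_VERSION_11_1_2_1 = 725520;

    // Hypothetical capability check: only offer a feature if the server is new enough.
    static boolean supportsNewFeature(int serverApiVersion) {
        return serverApiVersion >= ESS_API_VERSION_11_1_2_1;
    }
}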

That said, the Essbase Java API has a file that tracks various versions of Essbase. This shouldn’t necessarily be interpreted as a full version history, but when the Essbase developers need to make a change that could affect version compatibility, it will often result in a change to the internal version matrix.

In any case, the constants in the file tell kind of an interesting story. We can see a handful of code names in the variables, such as Gemini, Orion, Levi, Sunspot, Eclipse, Cromwell, and Beckett.

Other versions are just designated by a major/minor/patch/build combination, such as Essbase 6.2, 6.5, 6.5.1 (this is the version of Essbase I cut my teeth on!), 7.0.0, 7.1, 9.0.1, 9.2.0.2, 9.2.0.3, 9.2.1, 9.3.1 (one of my favorite versions), and 9.3.1.4.

Then the jump to version 11 happens, where we can see 11.1.1, 11.1.1.2.1, 11.1.1.3, 11.1.2, 11.1.2.1, 11.1.2.2, 11.1.2.2.1, 11.1.2.2.5, and 11.1.2.3.

Curiously, in the 11.x series there was at some point the introduction of a seemingly separate series of (long) variables for 11.1.2.2.1 PV0, 11.1.2.3 PV0, and 11.1.2.3 PV1. I’m not totally clear on what the PV is meant to indicate (patch version?).

There is also a set of three variables to track the release/revision/patch that would apparently always be the current version of the API, but they were only ever set to major version 6, minor version 5, and patch 0. I think someone decided that the variable structure, while seemingly clean/useful, was untenable in the long haul.

Lastly, there are a couple of “versions” that coincide with specific features that came out or were otherwise necessitated by testing: ECLIPSE_ALPHA, ECLIPSE_ALPHA_II, Hybrid Analysis (HYBRIDANALYSIS), non-unique names (NONUNIQUENAME), UNICODE, and some sort of “parity” (PARITY) marker that has the same value as 11.1.2.1. An unqualified version variable seems to match up with the version of the library itself (I took this from an 11.1.2.3-series JAR file, hence it matches the entry for 11.1.2.3).

I remember an Oracle product manager at some point indicating that the code names for Essbase versions were authors – hence Beckett and Cromwell – but I also distinctly remember hearing that one of these versions was codenamed “Tallyrand”, which isn’t reflected in this list. I’d be curious what some of the other code names were along the way, so if anyone out there knows, please send me an email and I can post them here for posterity (if nothing else).

Here’s the full list from the file, for those that are curious:

public static final int ESS_API_VERSION = 725552;
public static final int ESS_API_VERSION_11_1_2_3 = 725552;
public static final int ESS_API_VERSION_11_1_2_2_5 = 725541;
public static final int ESS_API_VERSION_11_1_2_2_1 = 725537;
public static final int ESS_API_VERSION_11_1_2_2 = 725536;
public static final int ESS_API_VERSION_PARITY = 725520;
public static final int ESS_API_VERSION_11_1_2_1 = 725520;
public static final int ESS_API_VERSION_11_1_2 = 725504;
public static final int ESS_API_VERSION_11_1_1_3 = 725296;
public static final int ESS_API_VERSION_11_1_1_2_1 = 725281;
public static final int ESS_API_VERSION_11_1_1 = 725248;
public static final int ESS_API_VERSION_9314 = 602432;
public static final int ESS_API_VERSION_931 = 602368;
public static final int ESS_API_VERSION_BECKETT = 602112;
public static final int ESS_API_VERSION_921 = 598272;
public static final int ESS_API_VERSION_9203 = 598064;
public static final int ESS_API_VERSION_9202 = 598048;
public static final int ESS_API_VERSION_901 = 590080;
public static final int ESS_API_VERSION_NONUNIQUENAME = 589824;
public static final int ESS_API_VERSION_CROMWELL = 463360;
public static final int ESS_API_VERSION_71 = 462848;
public static final int ESS_API_VERSION_UNICODE = 458752;
public static final int ESS_API_VERSION_700 = 458752;
public static final int ESS_API_VERSION_651 = 413952;
public static final int ESS_API_VERSION_65 = 413696;
public static final int ESS_API_VERSION_HYBRIDANALYSIS = 413696;
public static final int ESS_API_VERSION_62 = 401408;
public static final int ESS_API_VERSION_ECLIPSE = 329472;
public static final int ESS_API_VERSION_ECLIPSE_ALPHA_II = 329216;
public static final int ESS_API_VERSION_ECLIPSE_ALPHA = 328960;
public static final int ESS_API_VERSION_SUNSPOT = 328192;
public static final int ESS_API_VERSION_LEVI = 327936;
public static final int ESS_API_VERSION_ORION = 327680;
public static final int ESS_API_VERSION_GEMINI = 262144;
public static final short ESS_API_RELEASE = 6;
public static final short ESS_API_REVISION = 5;
public static final short ESS_API_PATCH = 0;
public static final long ESS_API_VERSION_11_1_2_3_PV1 = 3116222111612928L;
public static final long ESS_API_VERSION_11_1_2_3_PV0 = 3116222111547392L;
public static final long ESS_API_VERSION_11_1_2_2_1_PV0 = 3116157687037952L;
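
Incidentally, the integer values look like hex-encoded version digits: 725552 is 0xB1230, which reads off as 11.1.2.3(.0), and 602368 is 0x93100, which reads off as 9.3.1(.0.0). Here’s a small decoder sketch based on that observation – it’s an inference from the values above, not anything documented:

public class EssVersionDecode {
    // Treat each hex digit of the constant as one component of the version number.
    static String decode(int version) {
        String hex = Integer.toHexString(version); // e.g. 725552 -> "b1230"
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < hex.length(); i++) {
            if (i > 0) sb.append('.');
            sb.append(Character.digit(hex.charAt(i), 16));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode(725552)); // 11.1.2.3.0
        System.out.println(decode(602368)); // 9.3.1.0.0
    }
}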

Drillbridge 1.5.4 Available

Nothing fancy in this release – just a rollup with a collection of bug fixes, enhancements, and small improvements. There were some issues with null values in certain situations and with Excel files that are now resolved. Thanks to those that provided feedback and testing. Please email me for a Dropbox link for now; otherwise, this new version will be available from the normal downloads site later today.

Your daily Vess (Virtual Essbase JDBC driver) update

The other week I mentioned that I’d been playing around with the idea of a virtual JDBC driver for Essbase. Much to my amazement, quite a few people (relatively speaking…) expressed an interest in this project and even graciously offered to help test it.

To reiterate from last week, I still think that Vess is “interesting”. It has also been one of the more complicated things I have tried to create. Implementing such a thing is really kind of an exercise in pounding a round peg into a square hole (or pounding a cube into a cylinder…), requiring advanced knowledge of Java, the JDBC driver model, Essbase, the Essbase Java API, network programming, and more. I’m leveraging code from a half dozen projects I have written, some public and some not.

That said, if you ever wondered why Oracle wasn’t nice enough to make a true JDBC driver for Essbase, it’s because you have to be a little crazy to do it. And apparently I’m just the right kind of masochistic crazy.

Anyway, the driver works – amazingly. It’s not production ready and won’t be for some time, but you can view your substitution variables from a table and even load data to a cube by doing a normal SQL INSERT.

In fact, you can even read a CSV file on the fly and insert it to a cube with no load rule, like this:

INSERT INTO VESS_SCHEMA.SAMPLE_BASIC_DATA ("Year", "Measures", "Product", "Market", "Scenario", "DATA")
SELECT * FROM CSVREAD('/Users/jasonwjones/test/sample_basic_data.txt');

Crazy, eh? So all you load rule haters rejoice.

You can even use SQL transformations to adjust the data and load it into a cube from a remote JDBC table.
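
Here’s a hedged sketch of what that could look like with plain JDBC – the Vess URL, the staging table, and the credentials are all hypothetical, while the target table matches the INSERT above:

import java.sql.*;

public class SqlToCube {
    public static void main(String[] args) throws SQLException {
        try (Connection src = DriverManager.getConnection(
                     "jdbc:sqlserver://sql-host;databaseName=STAGE", "user", "pass");
             Connection vess = DriverManager.getConnection(
                     "jdbc:vess://essbase-server:1423?app=Sample&cube=Basic")) {
            // Transform on the way out of the relational source...
            String select = "SELECT YEAR, MEASURE, PRODUCT, MARKET, " +
                    "COALESCE(SCENARIO, 'Actual'), AMOUNT FROM STAGING";
            // ...and each inserted row becomes a cell load against the cube.
            String insert = "INSERT INTO VESS_SCHEMA.SAMPLE_BASIC_DATA " +
                    "(\"Year\", \"Measures\", \"Product\", \"Market\", \"Scenario\", \"DATA\") " +
                    "VALUES (?, ?, ?, ?, ?, ?)";
            try (Statement s = src.createStatement();
                 ResultSet rs = s.executeQuery(select);
                 PreparedStatement ps = vess.prepareStatement(insert)) {
                while (rs.next()) {
                    for (int i = 1; i <= 6; i++) {
                        ps.setObject(i, rs.getObject(i));
                    }
                    ps.addBatch();
                }
                ps.executeBatch();
            }
        }
    }
}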


What next?

A lot of the hard stuff is in place so the name of the game now would be to iterate and start to introduce a few more table representations of Essbase concepts, such as cube stats, server sessions, and some other interesting things. I’m still contemplating how you could layer in the outline editing model in a sensible way.

Getting back to this being “interesting” – I think it’s really interesting in a couple of ways. One is the idea of getting to use any off-the-shelf JDBC tool to do things with an Essbase server. The other is ODI (which I guess is interesting by way of the first reason, since databases/JDBC “fit” into ODI better than OLAP does). The way Essbase works in ODI right now is that some custom Java libraries are used and glued together with some Jython scripts and Knowledge Module steps. In theory you could instead drop in the Vess driver as a generic SQL data source and do all of this with vanilla RKMs, LKMs, and IKMs. How cool would that be? To the extent that supporting Hyperion inside of ODI has been “problematic” for Oracle, you could perhaps sidestep it this way.

Anyway, I have a few folks that have valiantly offered to help test this thing out. If you have a spare cube, Java 1.8 (or greater), can download the free Squirrel SQL tool, and want to play with this, please hit me up – I would love some feedback.


Vess: a Virtual Essbase JDBC driver

Normally I finish some programming and then throw it over the wall to you guinea pigs – er, enthusiastic blog readers – before I say anything about it (no vaporware here, nosiree!), but I thought I’d share something I’ve been playing with as a proof of concept. It’s sort of inspired by some of the ODI/Essbase integration I’ve been working with lately.

The Essbase/Hyperion knowledge modules in ODI are sort of different from typical knowledge modules because in a lot of ways they cram a round peg into a square hole. So you kind of get this relational facade over an Essbase cube in terms of loading and integrating data/metadata. Instead of relying on a normal JDBC driver, the Hyperion KMs more or less delegate out to some custom Java code that wraps the Essbase Java API. This isn’t a bad thing, it’s just how it works.

Unrelated to ODI, there are many situations where we are jumping through hoops to get data of some kind – not just cube data and metadata – out of Essbase. For example, you run a MaxL script to dump stats from a cube to a text file. Then you run some regular expression voodoo on it to get to the data you actually want.

Side note: parsing MaxL text output is surely amusing to the boffins at Oracle that designed it. I don’t think many people know this, but MaxL communicates with the Essbase server using a hacked-up JDBC driver (it’s ingenious, really). So when we parse the plusses and minuses and other ASCII crap off of some MaxL output, effectively what is happening is this: a real dataset comes back from the Essbase server, the MaxL interpreter applies the extra work of prettying it up with some text art (the only thing missing is lime green letters from the ’80s), it gets dumped to a text file, and then, in so many places, that text is reparsed back into data.

To some extent, the MaxL Perl and Essbasepy modules can be used to get MaxL data in a cleaner way, and the Java API can definitely be used. Of course, I have no problem myself jumping in and coding up an Essbase Java API routine to pull some data down. But for people who aren’t as comfortable with that, stripping some info out of MaxL output offers a clear path of less resistance, so I can’t say I blame them.

So we have all of these instances where we can get data out of Essbase (not just actual cube data, but metrics from the server) using EAS, using MaxL, dumping MaxL output to a text file, the C API, the Java API, and so on. But it seems like a lot of these approaches have undesirable combinations of effort/quality/reward.

Let’s talk about JDBC for a moment. JDBC is the Java Database Connectivity model. Java is one of the most popular programming languages on the planet (perhaps not the sexiest language, but it’s getting better), and JDBC is the tried and true model for connecting to databases. Every popular database in the world has a JDBC driver – you can grab a generic SQL tool (I’ve been using the simple RazorSQL as of late), pop in the appropriate JDBC driver (such as for MySQL, Microsoft SQL Server, Oracle, DB2, whatever), connect, and start issuing SELECT statements to your heart’s content.

So, this all brings me to something interesting I’ve been playing with for a while: Vess – a virtual Essbase JDBC driver. Vess is a JDBC-compliant driver that can be dropped into any generic SQL tool that supports JDBC drivers (such as RazorSQL), as well as used out of the box with ODI. Vess offers a JDBC-compliant URL for connecting to Essbase, such as the following:
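
jdbc:vess://essbase-server:1423?app=Sample&cube=Basic

(That exact URL is hypothetical – the scheme and parameters are whatever the driver ends up registering – but 1423 is the standard Essbase agent port, and the point stands: it’s a normal JDBC URL you can paste into any tool.)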


Vess then presents several tables, views, and stored procedures. For example, consider substitution variables. These are modeled in a table where the column types are String, String, String, String (the VARCHAR database type). So you can do this:
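
A hedged sketch of that with plain JDBC (connection URL as assumed above; the table name appears again later in this post):

import java.sql.*;

public class ShowSubVars {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:vess://essbase-server:1423?app=Sample&cube=Basic");
             Statement s = conn.createStatement();
             ResultSet rs = s.executeQuery(
                     "SELECT * FROM SYS.SUBSTITUTION_VARIABLES WHERE APP = 'Sample'")) {
            while (rs.next()) {
                // app, cube, variable name, variable value
                System.out.println(rs.getString(1) + ", " + rs.getString(2) + ", "
                        + rs.getString(3) + ", " + rs.getString(4));
            }
        }
    }
}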


And you get back a dataset kind of like this:

Sample, Basic, CurrMonth, Jan

(It’s not actually text with commas, it’s a normal dataset). There are similar tables with other system/cube information in them, such as database stats, connected users (the Sessions tab in EAS), and more. There are views for dimensions/members/aliases/UDAs. There are stored procedures for calling calc scripts (e.g. sp_exec_calc(‘CalcAll’)). MDX queries can also be crammed into a normal JDBC ResultSet with a little bit of imagination.
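
For the stored procedure case, that might look something like the following with standard JDBC (hedged again – the procedure name comes from above, everything else is illustrative):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class RunCalc {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:vess://essbase-server:1423?app=Sample&cube=Basic");
             CallableStatement cs = conn.prepareCall("{call sp_exec_calc(?)}")) {
            cs.setString(1, "CalcAll"); // run an existing calc script against the cube
            cs.execute();
        }
    }
}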

So, why bother?

Using a JDBC driver would let us easily (and cleanly) get to Essbase data and metadata using any generic database tool, as well as drop the driver right into ODI and use it as part of a normal interface. Whereas ODI uses Jython to call custom Java classes today, this could potentially simplify the layers a little bit and offer an interesting alternative. This could also potentially be used in places where MaxL output is being parsed via black magic, obviate the need for writing custom Essbase JAPI programs, and help with some use cases for the outline extractor.


As of right now, the framework of the driver is already created, and you can drop it into a SQL tool such as the aforementioned RazorSQL and run a command like SELECT name FROM SYS.SUBSTITUTION_VARIABLES WHERE APP = 'Sample' and get the proper results back. Over time (as time permits) I will be adding a few more gizmos. If there are any intrepid Java developers out there that would like to play with this, let me know and I can make this an open source project on GitHub that could be refined over time.

Where is the Essbase Java API method to build dimensions from a SQL source?

I was writing a custom ODI Open Tool the other day so that I could update an Essbase dimension as a custom package step. Normally this would be better suited to a normal metadata update interface, but in this case it seemed like a good solution to be able to call an existing load rule without having to worry about the models and topologies.

In the Essbase Java API, most data load and dimension build operations are initiated from the cube object (IEssCube). I’m already familiar with the various loadData() methods that have several variations (variants providing a rules file name, a local/remote file, a SQL username/password, an error file name, etc.), but I hadn’t tried to start a dimension build from a SQL datasource from the Java API before.

Interestingly, there is no method in the IEssCube interface for directly performing a dimension build from a SQL datasource. But obviously we know it’s possible – and generally speaking, there’s a pretty clear mapping from EAS functionality to the underlying Java API.

So I was curious: what the hell does ODI do when you’re performing a dimension build (metadata update) from a SQL datasource? Well, after a bit of digging through the ODI internals, I found it: ODI pulls the data from SQL, dumps it to a temporary local text file (a temporary file starting with the name EssData and ending with the .txt extension), then calls one of the normal load methods on what is now just a normal text file.

So, the solution in my case (to implement the custom package step) is just to dump a ResultSet to a text file (no big deal), then call a normal method. It’s a small workaround, but it works just fine. I thought it was pretty interesting that there wasn’t a built-in method to do this.
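
A minimal sketch of that workaround – the query is whatever your dimension build needs, and the final Essbase call is left as a comment because the exact IEssCube overload varies by API version:

import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpoolForDimBuild {
    static Path spool(Connection conn, String query) throws Exception {
        // Mimic ODI: dump the SQL result set to a temp file named EssData*.txt
        Path tmp = Files.createTempFile("EssData", ".txt");
        try (Statement s = conn.createStatement();
             ResultSet rs = s.executeQuery(query);
             PrintWriter out = new PrintWriter(Files.newBufferedWriter(tmp))) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                StringBuilder line = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) line.append('\t');
                    line.append(rs.getString(i));
                }
                out.println(line);
            }
        }
        // Then hand tmp off to a normal IEssCube dimension build call along with
        // the rules file, exactly as if the file had always lived on disk.
        return tmp;
    }
}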