Manually injecting rows to an ODI journal table

Earlier this year I did an absolute whirlwind crash course on ODI journalization. In that article I talked about the layout of the journal table that ODI uses to track changes in a particular table (for simple journalization with one table). You may recall that changed (inserted, updated, deleted) rows are tracked in this journal table by way of some metadata plus the primary key of the monitored table.

Typically, after your processing is done, the entries in the ODI journal table that were processed are deleted. Occasionally, however, you may need to manually fix up the journal table. For me this sometimes involves “manually injecting” rows into it so that we can force some rows to be processed that otherwise wouldn’t be, because ODI doesn’t consider them to be new or updated. You can achieve this in a couple of ways.

The first way is to “touch” the rows in the monitored table in some way so that the trigger installed on that table gets activated and inserts rows to the ODI journal table. You could perform some sort of dummy update on a row. This can work but isn’t always desirable.

Another technique, and the one I’ll talk about in this post, is to manually add rows to the journal table that contain the primary key of the row to be processed. The trick, then, is what to fill in for the other columns.

Recall that the general format of a journal table is the subscriber name, a consumed flag, a flag indicating whether the change was an insert/update or a delete, the date the entry went into the table, and the primary key of the row being journalized.

As an example, assuming the journal table is called J$TEST_CODE_RESULT (for a base table of TEST_CODE_RESULT), and also assuming a primary key of TEST_CODE_RESULT_ID (which in this case is an integer), we can add a row to the journal table like this:

INSERT INTO TEST_CODES.J$TEST_CODE_RESULT VALUES ('TEST_CODE_PROCESSING', 0, 'I', sysdate, 12345);

The first value is the JRN_SUBSCRIBER. This corresponds to the name of the subscriber set up in ODI journalizing, and it’s also the same value as the filter you use in the interface to get rows for just your particular subscriber. Next is the JRN_CONSUMED field, which we set to 0 (during processing, ODI will change this to 1 to indicate it’s working on that data); then the flag character ‘I’ (uppercase i) to indicate the row was an insert or update (a ‘D’ would indicate a delete); then the current time via sysdate (this assumes an Oracle database); and finally, the ID of the row we need to inject.
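
Before (or after) injecting rows, it can be handy to peek at what’s actually sitting in the journal table for your subscriber. Here’s a minimal sketch of that check, reusing the table and subscriber names from the example above and assuming the standard ODI journal column names (JRN_SUBSCRIBER, JRN_CONSUMED, JRN_FLAG, JRN_DATE):

-- Unconsumed journal entries waiting for the TEST_CODE_PROCESSING subscriber
SELECT JRN_SUBSCRIBER, JRN_CONSUMED, JRN_FLAG, JRN_DATE, TEST_CODE_RESULT_ID
FROM TEST_CODES.J$TEST_CODE_RESULT
WHERE JRN_SUBSCRIBER = 'TEST_CODE_PROCESSING'
AND JRN_CONSUMED = 0
ORDER BY JRN_DATE;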

You can of course insert as many rows as you want, such as by writing an INSERT that gets its values from a SELECT. Every now and then, if you have a lot of rows to reprocess, you might find yourself with a table that already has the IDs; you can just insert those into the journal table and they’ll get picked up on the next run.
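
For example, here’s a minimal sketch of that INSERT/SELECT approach, assuming a hypothetical helper table called IDS_TO_REPROCESS that holds the primary keys you want to re-run (and again assuming the standard journal column names):

-- Inject one journal row per ID in the (hypothetical) IDS_TO_REPROCESS table
INSERT INTO TEST_CODES.J$TEST_CODE_RESULT
    (JRN_SUBSCRIBER, JRN_CONSUMED, JRN_FLAG, JRN_DATE, TEST_CODE_RESULT_ID)
SELECT 'TEST_CODE_PROCESSING', 0, 'I', sysdate, TEST_CODE_RESULT_ID
FROM IDS_TO_REPROCESS;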

 

More fun with Vess, the Essbase JDBC driver

All I can say about Vess is that it’s still… interesting. This is definitely one of the more complex things I’ve ever worked on, as well as one of the more elegant. I did a lot of work on the plumbing of the driver, which is now enabling some interesting and rapid progress.

Here are some interesting things that you can do as of right now:

  1. Query list of applications and metadata
  2. Query list of cubes and metadata
  3. Query list of sessions (same as Sessions in EAS)
  4. Load data without a load rule by using a SQL INSERT statement
  5. List users
  6. List substitution variables

Some of the really fun things are actually the most subtle. Wherever possible, Vess tries to supply metadata for a modeled table that matches up with Essbase semantics or naming limits. For example, substitution variables live in a table with columns for the variable name, value, application, and database. Each of these is a normal string column, but with a character limit that matches the corresponding Essbase maximum (320, 256, 30, and 30, respectively, taking Unicode into account).

The same goes for member names on the data load table. Wherever it makes sense, a primary key or nullability constraint is also set on a table. For example, substitution variables are assumed to be non-null, as are member names for an Essbase fact table, cube names, and more.
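
To give a feel for what this looks like from the SQL side, here’s a rough sketch of querying the substitution variable table. The schema, table, and column names here are illustrative stand-ins, not necessarily what Vess actually exposes:

-- Hypothetical: list the substitution variables for a particular application/database
SELECT VARIABLE_NAME, VARIABLE_VALUE, APPLICATION, DATABASE_NAME
FROM VESS_SCHEMA.SUBSTITUTION_VARIABLES
WHERE APPLICATION = 'Sample' AND DATABASE_NAME = 'Basic';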

I have been using a free generic JDBC tool called Squirrel SQL to help facilitate testing. Squirrel supports any database for which you can supply a generic JDBC driver. Here are a few screenshots from testing out Vess when connected to Sample/Basic (list of cubes on server):

View list of cubes and metadata with Vess

List of active sessions on server:

Viewing list of active sessions on Essbase server using Vess

Some of the tables also support DELETE semantics. For example, deleting a row from the session table is equivalent to killing that session, and the same idea applies to substitution variables.
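
As a sketch of what that could look like (again, the table and column names are hypothetical stand-ins rather than Vess’s actual schema):

-- Hypothetical: kill an Essbase session by deleting its row from the session table
DELETE FROM VESS_SCHEMA.SESSIONS WHERE SESSION_ID = 123456789;

-- Hypothetical: remove a substitution variable the same way
DELETE FROM VESS_SCHEMA.SUBSTITUTION_VARIABLES
WHERE VARIABLE_NAME = 'CurrMonth' AND APPLICATION = 'Sample';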

Lastly, all tables that are modeled in Vess are part of the SQL metadata store, so tools that ask for metadata (such as Squirrel) get it back. This is notable because, in theory, you could already use Vess with ODI to perform data loads and some other things. I haven’t tried this yet, but I could presumably add the drivers, reverse the metadata, and use one of the tables in an interface. It’d then be fairly trivial to update a substitution variable directly from SQL or perform a data load, and you could even kick off a calc script simply by invoking a procedure.

In short, things are exciting in the Vess world. While many of you want to load data to Essbase cubes without a load rule, just as many are interested in pulling database metadata (things you normally dump with MaxL and scrape out), and this tool might just fill that need in a clean way. More to come later – I have some presentations to get ready for Kscope15 (although sadly none on this little technological flight of fancy…).

Essbase data loads with Windows integrated authentication

This is a followup to a post I made on the Network54 Essbase forum: Yes, you can use Windows integrated authentication (indirectly) to perform Essbase data loads. It’s not necessarily supported, but it works.

First you start off with the connection definition in the Windows ODBC connection manager. This is where you set up a normal ODBC connection to SQL Server:

ODBC connections in Windows to be used by Essbase data loads


One of the configuration screens is where you specify the credentials, like this:

Connecting via integrated authentication


The SQL Server connection configuration lets you specify how to connect to the database instance. I would guess that much of the time this is configured with a particular native SQL Server ID. Note that it’s also possible to connect using “Windows NT authentication”. You can select this option to have the local machine connect to the SQL Server instance using its own credentials – that is, the account running the Essbase service on the local machine needs to have access to the given SQL Server instance.

With the ODBC connection set up, you can reference it in a load rule as normal:

Essbase Data Load SQL Definition

Next when you go to test the connection/data load, just specify anything you want for the user name and password when it prompts you in this dialog box:

Essbase SQL Data Load Credential Dialog

I have gotten into the habit of typing in ‘dummy’. As Peter Nitschke unhelpfully points out, in this case I don’t need to document the automation author because it’s already documented (heyoooo). Pete, I know you read this, by the way – is that rash cleared up yet? You actually have to put in something, otherwise the load won’t work.

Similarly if you do a normal data load, you have to put something in:

Essbase Data Load from SQL Load Rule


Or else you’ll get this helpful error message:

The error when the username is not specified


Do note that the username and password are both discarded or otherwise unneeded, assuming the network credentials are sufficient to connect to the database. As a somewhat interesting aside, this seems to be a quirk in the Essbase code path that processes the data loads: it guards against NULL (empty) values, and if either the username or the password is missing, it errors out the load. If memory serves, you can get away with filling in only a username when testing in the data load rule SQL editor, but you need both when doing an actual data load – this is due to differences in how the EAS GUI validates the input. Also, if you write MaxL automation that relies on this Windows integrated authentication, you need to fill in some dummy or otherwise ignored values in the MaxL.
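
For example, a minimal sketch of what that MaxL might look like – the app/database, rules file, and error file are just the usual Sample/Basic-style placeholders, and the point is simply that the connect as/identified by values are required but effectively ignored:

import database Sample.Basic data
    connect as 'dummy' identified by 'dummy'
    using server rules_file 'LdAll'
    on error write to 'errors.txt';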

My guess is that Essbase actually passes the username/password along, but once a successful connection is made via integrated authentication, the SQL Server driver simply ignores them, so the “bad” username and password have no effect.

Apparently connecting this way is unsupported, but I have set this up and used it numerous times without any issue. If Oracle really wanted to polish this up and make integrated authentication a properly supported option, that’d be great – I’d start with some consistency on the EAS side of things, as well as an alternative MaxL syntax that lets you explicitly specify integrated authentication (or put another way, a MaxL syntax that doesn’t need a username/password in it).

Quadruple Backslashes

No, Quadruple Backslashes is not a college rock band (that I know of). But maybe it should be one. Anyway, ever see quadruple backslashes in a path in a batch file? Like this:

SET FILE=\\\\some-server\\path-to-app-folder\\app\\data.txt

What’s going on here? This is a UNC path, and the normal form is like this:

SET FILE=\\some-server\path-to-app-folder\app\data.txt

Let’s say we are doing a data import in a MaxL script:

import database Sample.Basic data
    from data_file "$FILE"
    using server rules_file "LdAll"
    on error write to "errors.txt";

In many scripting languages, backslashes are special. They are used to “escape” things inside of strings. What if you need a newline inside of a string? That’s \n. What if you want a tab? That’s \t. What if you want to print a backslash itself? You can’t just put a single backslash in, because then the interpreter thinks the next character is something special. So to write out just a backslash, you escape the backslash itself (so it’s \\).

To further complicate things, many scripting languages have a notion of interpolated and non-interpolated strings. That means that strings contained inside of double quotes (“This is a double quoted string!”) have their contents parsed/scanned for special characters. Strings in single quotes don’t (‘This is a single quoted string!’). So in the single-quoted string, we can sometimes (not always) get away with doing whatever we want with backslashes. But if we use the single quoted string we can’t stick variables in it.

So now we need a double-quoted string so we can put variables in it, but this makes putting in backslashes for our UNC path a little complicated. No worries – just escape the backslashes. Returning to our quadruple-backslashed MaxL example that is surrounded with double quotes, what’s happening is that MaxL parses the string contents (the stuff between the double quotes) and the four backslashes become two backslashes in the final string. The double backslashes in the other parts of the UNC path each get translated into a single backslash.

The final resulting string has the two backslashes to start the UNC path and any other path delimiters are just single backslashes.

Normal Paths

If you are just loading a file from a normal, non-networked path (or a mapped drive), and you’re on Windows, then I highly recommend just using forward slashes to delimit the path. For example, consider an input file at D:\Hyperion\data.txt. Your double-quoted string would be D:\\Hyperion\\data.txt, which gets translated down to single backslashes. For the last decade or more, Windows has had solid support for the more traditional Unix-style forward-slash path separator, meaning you can just use D:/Hyperion/data.txt. The forward slash isn’t special at all, so you don’t need to escape it. You can’t use double forward slashes to denote a UNC path, though, so //server-name/path/to/file.txt will not work, thus requiring some backslash kung fu to get things working.
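
To tie it together, here’s a sketch of the earlier import using a forward-slash path (the D:/Hyperion/data.txt location is just illustrative) – no escaping required:

import database Sample.Basic data
    from data_file "D:/Hyperion/data.txt"
    using server rules_file "LdAll"
    on error write to "errors.txt";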

ODI Automation: Don’t Call Us, We’ll Call You

I have developed numerous jobs with Oracle Data Integrator – some new jobs that connect systems together, and some that replace existing jobs. There are plenty of reasons for redeveloping a solution in ODI, which I have talked about before. Generally you can clean things up – sometimes a little, sometimes a lot.

Sometimes a very particular aspect of an existing batch job (such as a certain section of script code) will be targeted for replacement with something developed in ODI. Let’s say there is an existing batch file that performs a series of six distinct steps (not counting boilerplate code like setting some variables and echoing out logging statements). For example, the steps might look like this:

  1. Copy a file from one place to another (such as from an “incoming” folder to a known working folder)
  2. Run a custom program to process the text file in some way, generating a new file
  3. Use a command-line SQL program to run a particular line of SQL, to truncate a particular table
  4. Run a custom program (or a BULK INSERT) to load the text file to a SQL table
  5. Use a command-line program to run a line of SQL such as an UPDATE on the newly loaded data (UPDATE STAGING SET YEAR = '2015' WHERE YEAR IS NULL or something)
  6. Run a MaxL script that clears a cube, imports data via a load rule, and calculates the database

The Perfect Solution

Now, sitting in my architectural ODI ivory tower, I would make the case that this should all be done in an ODI job with an interface. For example, loading a text file to Microsoft SQL Server, defaulting the missing year data, and loading the data to a cube could all more or less be done in a single interface (BULK INSERT LKM, Hyperion Data Load IKM, the appropriate load rule, etc.).

The Intermediate Solution

But let’s say that just the data load and nothing else is to be performed by ODI, because we are revamping some of the job, but not the whole job, for some reason (risk, time, money, incrementalism). This means that we are going to call an ODI job from an existing batch file. We will use our trusty friend startscen.bat to accomplish this. startscen.bat is made for running scenarios: we pass in the name of the scenario, the scenario version, and the context so that we can very specifically identify the exact job for the ODI server to run.
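
As a rough sketch, that call from the batch file might look something like this – the agent path, scenario name, version number, and context code are all placeholders:

rem Invoke the ODI scenario (placeholder path/scenario/version/context)
call C:\oracle\odi\agent\bin\startscen.bat LOAD_SALES 001 GLOBAL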

So now we’ll keep our existing batch file and most of its contents, but in place of part of the previous code, we’ll have a call to startscen.bat and then resume any necessary processing after that. Technically we’ve improved the process a bit (hopefully) by using ODI for part of it, but we haven’t really done much to fundamentally improve things. This is where I get to the point of the article: I say go all in with your ODI automation.

Think of this intermediate solution like this, in terms of script code:

  1. Copy file from incoming folder
  2. Run custom program to transform file
  3. Call startscen.bat (invoke ODI scenario, load SQL data, load to Essbase)

The benefits of this incremental approach should be obvious: we’ve ostensibly cleaned up part of the job. Here’s the thing though: I think we’ve actually added some undesirable technical debt by doing it this way.

Yes, the job technically works, but think about how this batch file is positioned from an administration point of view now: we have a batch file, some configuration settings for that batch file (server names, encryption keys, folder paths, etc.), the batch file is likely outputting data or otherwise logging things to a text file, and then we’re calling the ODI job itself, thereby creating an entry in the ODI Operator (which is a good thing, but it will only show part of the overall job).

More often than not, when I am redeveloping an automation process for someone, the batch files have no error handling and inconsistent amounts of logging – and the logging that is done is oftentimes never looked at. Having easy visibility into our automation processes, however, is a very good thing, and we can achieve it with ODI. While incrementalism has its merits, I say go all in with the ODI automation.

Go all in

What I mean by this is that instead of sticking with a batch file that calls the various automation steps and then calls an ODI step, you should use a package in ODI to call the necessary steps, your interfaces, and any other processing you need.

You get a lot of benefits from this approach. The ODI Operator will log each step so that you know how long each one took. You can get consistent and easy error handling/branching in an ODI package. You can easily log all of the output. You can schedule the job from the ODI scheduler. ODI has package steps that can be used to replace kludgey code in a batch file. For example, there is an ODI “wait for file” step that waits around for a file to show up. You can copy files around, pull files from FTP, and more – if you need to just call an arbitrary program or script you can do that with an ODI OS Command. Everything you need is there, and more.

You don’t even have to use the ODI scheduler. If you are already invested in another scheduler (Windows Scheduler, cron jobs, etc.) and want to keep everything in one place, then just call the ODI scenario from your scheduler, but that scenario will have been generated from a package that has all of the steps in it (instead of your batch file calling those).

This new world of pure ODI jobs is worth the extra effort to get to: you have status and metrics for all of your jobs (without tons of boilerplate logging code), you can manage your entire job in one place, and it’s easier to follow best (better) practices during development. While batch files and various scripts have served us well over the years, more and more I find myself abstracting away from the file system when it comes to automation, given the capabilities of ODI. It’s worth it.

Kscope15 Presentation Preview: ODI Workhorses

The other day I mentioned my goals for attendees for my upcoming ODTUG presentation on Drillbridge. Today I’m going to talk about my goals for my presentation on Oracle Data Integrator (ODI).

Over the last few years I have presented on ODI a handful of times. My main presentation on it has been highlighting a success story with it, where ODI was used to clean up and facilitate a lot of automation and ETL jobs for a health services company that [as you can imagine] has tons of data flying around everywhere. This previous presentation was more of a high-level affair, where I talked very generically about what the benefits of ODI were over the previous solution. Wanting to add a little more technical meat to the presentation, I appended what started off as a small section at the end where I take a look at just how ODI works under the covers.

While the “business” or high-level part of the presentation was all well and good, I found myself getting really excited to explain just how awesome the inner workings of ODI ETL jobs are, and what started out as a 10-minute flight of fancy into the lower depths of something technical has now been promoted, as it were, to a full-on presentation.

In other words, I am going to spend an entire presentation literally tearing apart a single ODI interface, explaining why the steps are sequenced the way they are, how to build idiomatic ODI interfaces, the effect of various options (journalization, query hints, delete all vs. truncate, etc.), update strategies, and more. I’m also going to marry up what is ostensibly an IT tool with various concepts from computer science, including the notion of idempotence (and why it’s a good thing).

With any luck, attendees coming to this presentation will leave with a new or expanded understanding of how interfaces/mappings work, feel comfortable modifying knowledge modules, and learn a trick or two when it comes to debugging ODI jobs. This will be a nuts-and-bolts, technical deep dive. While it’s ostensibly an advanced-content presentation, I believe that people with even just a cursory familiarity should benefit from it as well. If you haven’t worked with ODI at all but are curious (and intrepid!) about what it can do, I think you’ll also benefit. So to all of my pure-Hyperion colleagues that haven’t dipped their toes in the ODI pool just yet, this is a great chance to hop on a different track and expand your horizons – I hope to see you there!

Kscope15 Presentation Preview: Drillbridge

I am very pleased to be able to say that I have two presentations at Kscope15. One of my presentations is on Drillbridge; the other is a deep dive on how interfaces work in ODI. I’m going to really have a chance to geek out on both of these, especially the ODI presentation.

As for the Drillbridge presentation, I want to give a brief overview not just of what I am going to cover, but what I hope to accomplish for those in attendance.

This Drillbridge presentation will be an elaboration of my original webinar. I’m going to introduce the attendee to what Drillbridge is and how it works (of course). Then, as with the webinar, I will download and deploy Drillbridge in real-time, showing off just how easy it is to get drill-through up and running. The original webinar had a stated goal of deploying drill-through in 10 minutes or less. Not only was drill-through deployed to a cube in less time (five minutes!), I used the spare time to redeploy drill-through on the same cube but with drill-to-bottom enabled in the Time dimension (and still had some time left over). Yeah, it’s that good.

Since I’ll have more time than the webinar, I’ll also be able to show off the handful of awesome features that have been added in the last year: custom mappings, support for Linux, custom plugins, server-side result paging, smart formatting, drill-from-drill, custom stylesheets, and more.

After the presentation, attendees should have a firm grasp of how Drillbridge works, how it can be used, and how to implement drill-through in simple situations. More importantly, they should be heading back to their respective companies excited about how they can now deliver a solid win to their users without breaking the bank or having to re-architect their systems.

There are a lot of great presentations this year, but I hope if you’re interested in drill-through or Drillbridge you will be able to attend!

Essbase Version Archaeology

I was digging through some of the Essbase Java API internals the other day, trying to work around an issue I was encountering with Vess, my Essbase JDBC driver. Along the way I came across a constants file with a fun history (of sorts) of Essbase versions.

Complex software packages will often track their version somehow. This is for many reasons, but a very common one is so the software has a way to gracefully handle version discrepancies. For example, some client software (perhaps the latest version of Smart View) may be working with an older version of the Essbase server. If there was some feature that came out that Smart View supports but isn’t available in the older version of Essbase that it’s connecting to, it might find this out by asking the server for its version. The client can then hide/disable the feature as needed.

With that in mind, the Essbase Java API has a file that tracks various versions of Essbase. This shouldn’t necessarily be interpreted as a full version history, but when the Essbase developers need to make a change that could affect version compatibility, it will often result in a change to this internal version matrix.

In any case, the constants in the file tell kind of an interesting story. We can see a handful of code names in the variables, such as Gemini, Orion, Levi, Sunspot, Eclipse, Cromwell, and Beckett.

Other versions are just designated by a major/minor/patch/build combination, such as Essbase 6.2, 6.5, 6.5.1 (this is the version of Essbase I cut my teeth on!), 7.0.0, 7.1, 9.0.1, 9.2.0.2, 9.2.0.3, 9.2.1, 9.3.1 (one of my favorite versions), and 9.3.1.4.

Then the jump to version 11 happens, where we can see 11.1.1, 11.1.1.2.1, 11.1.1.3, 11.1.2, 11.1.2.1, 11.1.2.2, 11.1.2.2.1, 11.1.2.2.5, and 11.1.2.3.

Curiously, in the 11.x series, at some point a seemingly separate series of variables was introduced, for versions 11.1.2.2.1 PV0, 11.1.2.3 PV0, and 11.1.2.3 PV1. I’m not totally clear on what the PV is meant to indicate (patch version?).

At some point, someone also introduced a set of three variables to track the release/revision/patch that would apparently always reflect the current version of the API, but it was only ever set to major version 6, minor version 5, and patch 0. I suspect someone decided that the variable structure, while seemingly clean/useful, was untenable in the long haul.

Lastly, there are a couple of “versions” that coincide with specific features that came out or were otherwise necessitated by testing: ECLIPSE_ALPHA, ECLIPSE_ALPHA_II, Hybrid Analysis (HYBRIDANALYSIS), non-unique names (NONUNIQUENAME), UNICODE, and some sort of “parity” (PARITY) marker that has the same version as 11.1.2.1. An unqualified version variable seems to match up with the version of the library itself (I took this from an 11.1.2.3 JAR file, hence it matches the entry for 11.1.2.3).

I remember an Oracle product manager at some time indicating that the code names for Essbase versions were authors – hence Beckett and Cromwell – but I also distinctly remember hearing that one of these versions was codenamed “Tallyrand”, which isn’t reflected in the code names for this list. I’d be curious what some of the other code names were along the way, so if anyone out there knows, please send me an email and I can post them here for posterity (if nothing else).

Here’s the full list from the file from 11.1.2.3 for those that are curious:

public static final int ESS_API_VERSION = 725552;
public static final int ESS_API_VERSION_11_1_2_3 = 725552;
public static final int ESS_API_VERSION_11_1_2_2_5 = 725541;
public static final int ESS_API_VERSION_11_1_2_2_1 = 725537;
public static final int ESS_API_VERSION_11_1_2_2 = 725536;
public static final int ESS_API_VERSION_PARITY = 725520;
public static final int ESS_API_VERSION_11_1_2_1 = 725520;
public static final int ESS_API_VERSION_11_1_2 = 725504;
public static final int ESS_API_VERSION_11_1_1_3 = 725296;
public static final int ESS_API_VERSION_11_1_1_2_1 = 725281;
public static final int ESS_API_VERSION_11_1_1 = 725248;
public static final int ESS_API_VERSION_9314 = 602432;
public static final int ESS_API_VERSION_931 = 602368;
public static final int ESS_API_VERSION_BECKETT = 602112;
public static final int ESS_API_VERSION_921 = 598272;
public static final int ESS_API_VERSION_9203 = 598064;
public static final int ESS_API_VERSION_9202 = 598048;
public static final int ESS_API_VERSION_901 = 590080;
public static final int ESS_API_VERSION_NONUNIQUENAME = 589824;
public static final int ESS_API_VERSION_CROMWELL = 463360;
public static final int ESS_API_VERSION_71 = 462848;
public static final int ESS_API_VERSION_UNICODE = 458752;
public static final int ESS_API_VERSION_700 = 458752;
public static final int ESS_API_VERSION_651 = 413952;
public static final int ESS_API_VERSION_65 = 413696;
public static final int ESS_API_VERSION_HYBRIDANALYSIS = 413696;
public static final int ESS_API_VERSION_62 = 401408;
public static final int ESS_API_VERSION_ECLIPSE = 329472;
public static final int ESS_API_VERSION_ECLIPSE_ALPHA_II = 329216;
public static final int ESS_API_VERSION_ECLIPSE_ALPHA = 328960;
public static final int ESS_API_VERSION_SUNSPOT = 328192;
public static final int ESS_API_VERSION_LEVI = 327936;
public static final int ESS_API_VERSION_ORION = 327680;
public static final int ESS_API_VERSION_GEMINI = 262144;
public static final short ESS_API_RELEASE = 6;
public static final short ESS_API_REVISION = 5;
public static final short ESS_API_PATCH = 0;
public static final long ESS_API_VERSION_11_1_2_3_PV1 = 3116222111612928L;
public static final long ESS_API_VERSION_11_1_2_3_PV0 = 3116222111547392L;
public static final long ESS_API_VERSION_11_1_2_2_1_PV0 = 3116157687037952L;

Drillbridge 1.5.4 Available

Nothing fancy in this release – just a rollup with a collection of bug fixes, enhancements, and small improvements. There were some issues with null values in certain situations and with Excel files that are now resolved. Thanks to those who provided feedback and testing. Please email me for a Dropbox link for now; otherwise, this new version will be available from the normal downloads site later today.

Your daily Vess (Virtual Essbase JDBC driver) update

The other week I mentioned that I’d been playing around with the idea of a virtual JDBC driver for Essbase. Much to my amazement, quite a few people (relatively speaking…) expressed an interest in this project and even graciously offered to help test it.

To reiterate from last week, I still think that Vess is “interesting”. It has also been one of the more complicated things I have tried to create. Implementing such a thing is really kind of an exercise in pounding a round peg into a square hole (or pounding a cube into a cylinder…), requiring advanced knowledge of Java, the JDBC driver model, Essbase, the Essbase Java API, network programming, and more. I’m leveraging code from a half dozen projects I have written, some public and some not.

That said, if you wondered why Oracle wasn’t nice enough to make a true JDBC driver for Essbase, it’s because you have to be a little crazy to do it. And apparently I’m just the right kind of masochistic crazy.

Anyway, the driver works – amazingly. It’s not production ready and won’t be for some time, but you can view your substitution variables from a table and even load data to a cube by doing a normal SQL INSERT.

In fact, you can even read a CSV file on the fly and insert it to a cube with no load rule, like this:

INSERT INTO VESS_SCHEMA.SAMPLE_BASIC_DATA ("Year", "Measures", "Product", "Market", "Scenario", "DATA")
SELECT * FROM CSVREAD('/Users/jasonwjones/test/sample_basic_data.txt');

Crazy, eh? So all you load rule haters rejoice.

You can even use SQL transformations to adjust the data and load that into a cube from a remote JDBC table:

INSERT INTO VESS_SCHEMA.DRILLING_DRILLING_DATA ("Years", "Time", "Location", "Account", "DATA")
SELECT 'FY' || RIGHT(FISCAL_YEAR, 2) AS Years, 'Jan' AS Time, DIVISION || '-' || LOCATION AS LOCATION, '0' || ACCOUNT, AMOUNT FROM TRANS_SMALL;

What next?

A lot of the hard stuff is in place so the name of the game now would be to iterate and start to introduce a few more table representations of Essbase concepts, such as cube stats, server sessions, and some other interesting things. I’m still contemplating how you could layer in the outline editing model in a sensible way.

Getting back to this being “interesting” – I think it’s interesting in a couple of ways. One is the idea of being able to use any off-the-shelf JDBC tool to do things with an Essbase server. The other is ODI (which I guess is interesting by way of the first reason, since databases/JDBC “fit” into ODI better than OLAP does). The way Essbase works in ODI right now is that some custom Java libraries are glued together with Jython scripts and Knowledge Module steps. In theory you could instead drop in the Vess driver as a generic SQL data source and do all of this with vanilla RKMs, LKMs, and IKMs. How cool would that be? To the extent that supporting Hyperion inside of ODI has been “problematic” for Oracle, you could perhaps sidestep that this way.

Anyway, I have a few folks that have valiantly offered to help test this thing out. If you have a spare cube, Java 1.8 (or greater), can download the free Squirrel SQL tool, and want to play with this, please hit me up – I would love some feedback.