I was writing a custom ODI Open Tool the other day so that I could update an Essbase dimension as a custom package step. Normally this would be better suited to a normal metadata update interface, but in this case it seemed like a good solution to be able to call an existing load rule without having to worry about the models and topologies.
In the Essbase Java API, most data load and dimension build operations are initiated from the cube object (IEssCube). I’m already familiar with the various loadData() methods, which come in several variants (rules file name, local/remote file, SQL username/password, error file name, and so on), but I hadn’t tried to start a dimension build from a SQL datasource from the Java API before.
Interestingly, there is no method in the IEssCube interface for directly performing a dimension build from a SQL datasource. But obviously we know it’s possible – and generally speaking, there’s a pretty clear mapping from EAS functionality to the underlying Java API.
So I was curious: what the hell does ODI do when you’re performing a dimension build (metadata update) from a datasource? Well, after a bit of digging through the ODI internals, I found it: ODI pulls the data from SQL, dumps it to a temporary local text file (a temporary file starting with the name EssData and ending with the .txt extension), then calls one of the normal data load methods on what is just a normal text file.
So, the solution in my case (to implement the custom package step) is just to dump a ResultSet to a text file (no big deal), then call a normal method. It’s a small workaround, but it works just fine. I thought it was pretty interesting that there wasn’t a built-in method to do this.
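A minimal sketch of that workaround, assuming a tab-delimited format that matches the load rule. The EssData/.txt temp-file naming mirrors what ODI does; the class and method names here are my own illustration, not ODI’s actual code:

```java
import java.io.BufferedWriter;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.List;

public class DimBuildDump {

    /**
     * Dump a JDBC ResultSet to a tab-delimited temp file (EssData*.txt),
     * mimicking what ODI does before handing the file to a normal
     * file-based load method.
     */
    public static File dumpResultSet(ResultSet rs) throws SQLException, IOException {
        File out = File.createTempFile("EssData", ".txt");
        ResultSetMetaData md = rs.getMetaData();
        int cols = md.getColumnCount();
        try (BufferedWriter w = Files.newBufferedWriter(out.toPath())) {
            while (rs.next()) {
                StringBuilder line = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) line.append('\t');
                    String v = rs.getString(i);          // null-safe: write empty field
                    line.append(v == null ? "" : v);
                }
                w.write(line.toString());
                w.newLine();
            }
        }
        return out;
    }

    /** Same formatting for in-memory rows; handy for testing without a database. */
    public static File writeRows(List<String[]> rows) throws IOException {
        File out = File.createTempFile("EssData", ".txt");
        try (BufferedWriter w = Files.newBufferedWriter(out.toPath())) {
            for (String[] row : rows) {
                w.write(String.join("\t", row));
                w.newLine();
            }
        }
        return out;
    }
}
```

Once the file exists, you’d pass its path to one of IEssCube’s ordinary file-based load/build methods, exactly as if the data had started life as a text file.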
“ODI pulls the data from SQL, dumps it to a temporary local text file (a temporary file starting with the name EssData and ending with the .txt extension), then calls one of the normal data load methods on what is just a normal text file.”
Wow. Doesn’t this seem surprising / shocking? Inefficient, at best.
It is quite a hack, isn’t it?
That inefficiency is why I just run the whole thing through a MaxL script launched from a Scenario.
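For context, MaxL can drive a SQL-sourced dimension build server-side with no intermediate text file, which is the efficiency point above. A typical statement looks something like this (the app/database, credentials, rules file, and error file names here are placeholders):

```
/* Build dimensions from the SQL source defined in the rules file */
import database Sample.Basic dimensions
    connect as 'sql_user' identified by 'sql_pass'
    using server rules_file 'DimBuild'
    on error write to 'dimbuild.err';
```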
It seems to be quite a bit faster as well, although this makes me an ODI apostate.
Yes, I think it’s interesting that there is no dimension build from a relational source. Agree with TimG/Cameron’s comments.
I’m a big fan of Cameron’s dynamic SQL load rule for metadata scripted through ODI (or anywhere else you can call SQL and script it) to automate the metadata loads.
I don’t use the APIs because, for the work I do, I don’t want my clients to have to know much about programming. At least not at that level. And the other reason (ahem, the real reason) is I’m not such a great Java programmer. :-)