Hacking the Essbase Java API to run Application Calcs

This post might alternatively be titled, “So you’re really stubborn and wasted a couple of hours messing with the Essbase Java API,” or something like that. I was in a discussion the other day where the question came up of whether you can run an application-level calc script.

Well, back up, actually. Did you know that calc scripts can exist at the application level in Essbase? For a very long time, Essbase has had this notion of applications and databases (with databases often just being called cubes), such that there is usually one database/cube inside an application, but there can technically be more (at least in the case of BSO). It’s almost always a best practice to have just one cube per application, largely for technical reasons.
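As a point of reference, the normal database-level version of running a calc is pretty straightforward with the Java API. Here’s a minimal sketch of what that looks like (the provider URL, credentials, server/app/cube names, and the exact calculate() overload are all assumptions on my part, so check the JAPI docs for your version); note how everything hangs off of a specific application/cube pair, which is exactly why an application-level calc script doesn’t have an obvious home here:

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.domain.IEssDomain;
import com.essbase.api.session.IEssbase;

public class RunDatabaseCalc {
    public static void main(String[] args) throws EssException {
        // Connection details below are placeholders -- substitute your own.
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        IEssDomain domain = ess.signOn("admin", "password", false, null,
                "http://epm-server:13080/aps/JAPI");
        IEssOlapServer olapServer = (IEssOlapServer) domain.getOlapServer("EssbaseCluster-1");
        olapServer.connect();
        try {
            // The API walks application -> cube, so a calc script that lives only
            // at the application level has no obvious target in this call chain.
            IEssCube cube = olapServer.getApplication("Sample").getCube("Basic");
            // Run a calc script that lives on the server against this cube
            // (the boolean/String overload and its exact semantics are assumed here).
            cube.calculate(false, "CalcAll");
        } finally {
            olapServer.disconnect();
            ess.signOff();
        }
    }
}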

Continue Reading…

Papercut: Calc script verification with FDM tokenization

Here’s a papercut I’d like to present in the context of my thoughts on papercuts in the Essbase ecosystem. I’ve recently been doing a bit more work with FDM. After an FDM data load, you need to calc the data related to the intersections you just loaded (although I suppose you could just calc the whole thing if you wanted to). In other words, if you are loading data for a certain time period or location or whatever, you’ll want to roll that data up. Nothing special there. So you have a normal calc script, except it has been parameterized for FDM: FDM searches the script for tokens, such as in FIX statements, and replaces each template variable with the real value. So something like [T.Location] gets replaced with the actual location. But guess what: when you go to validate the calc script now (and you do always validate your calc scripts, right?), it doesn’t validate.

Hmm.
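To make the mechanism concrete, here’s a rough sketch of the idea. This is not FDM’s actual code, and the tokens and member names are made up for illustration; the point is that the script stored on the server contains the bracketed tokens (which is what the validator sees), and only at load time do they get swapped for real member names.

import java.util.Map;

public class TokenizedCalcExample {
    public static void main(String[] args) {
        // A trimmed-down, FDM-style tokenized calc script. "[T.Location]" and
        // "[T.Period]" are placeholders, not real member names, which is why
        // the script won't validate against the outline in this form.
        String template =
            "FIX(\"[T.Location]\", \"[T.Period]\")\n" +
            "    CALC DIM(\"Accounts\");\n" +
            "ENDFIX";

        // At load time, the tokens are replaced with the intersection actually
        // being loaded (member names here are invented for the example).
        Map<String, String> tokens = Map.of(
            "[T.Location]", "Store_101",
            "[T.Period]", "Jan");

        String resolved = template;
        for (Map.Entry<String, String> token : tokens.entrySet()) {
            resolved = resolved.replace(token.getKey(), token.getValue());
        }

        // The resolved version would validate; the stored template won't.
        System.out.println(resolved);
    }
}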

So, I’m not an FDM expert. Maybe there’s an option to work around this that I don’t know about. Maybe you can stuff these tokenized names into a dummy alias table so that you can at least validate. But it seems like the “right” way to handle this would be a solution where you can still validate the calc. I guess one straightforward option might be for the validator to ignore values that are wrapped in brackets and enclosed in quotes. Still, it feels wrong that using FDM and tokenizing your calc script leads you down this path. If you’ve worked with this and have a solution that I don’t know about (because I’m not an FDM expert and you are), please let me know. But right now it’s just one of those quirky, less-than-ideal little issues that I consider an Essbase Papercut.

Fixing an esoteric Oracle Incremental Merge Update Knowledge module bug with journalizing and an aggregate function

Editing knowledge modules is not for the faint of heart. KMs translate your interfaces and other fun business things into executable code. In short, they convert ODI’s mostly declarative “what” definition of jobs into executable units by taking care of the “how.” There are numerous options available on interfaces, and these largely affect the code that is generated for your job.

I ran into a problem the other week where my integration into an Oracle schema, using the Oracle Incremental Update (MERGE) IKM with a journalized data source and an aggregate function, was not working.

Thankfully, someone else had run into this issue and Google turned up a quite helpful OTN post that solved 99% of the problem for me. But the post is from a little while ago and didn’t fully work for me.

Here’s the relevant fix code from the brave and helpful OTN poster:

<%=odiRef.getGrpBy()%>
<%if ((odiRef.getGrpBy().length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
<%}%>

This doesn’t quite work, owing to a relatively new ODI feature: multiple data sets in the source of an interface. Multiple data sets are a really, really cool feature that was added a little while ago. They basically let you take the UNION of different data sources and integrate them to the same target. This is useful in contrast to needing two interfaces for the same target. You can also do intersections and minus rows out, but I haven’t had occasion to use that just yet.

As it pertains to us, though, much of the ODI-generated code now takes a parameter that specifies which of the data sets to perform an action on. Here’s the updated update (yes, that’s two “update”s in a row):

<%=odiRef.getGrpBy(i)%>
<% if ((odiRef.getGrpBy(i).length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
<%}%>

Note the addition of the (i) in two places, which is how the data set is specified. Many thanks to the original poster for helping out in the first place and helping me fix my issue.

One extra thing to validate in that load rule, even if all looks well

This isn’t one you will run into too often, if ever, but it seems like a small issue worth mentioning. I was working on a load rule the other day (yay, load rules!) and everything validated just fine. There’s nothing better than the smell of an “everything validated correctly!” dialog in the morning.

I’m on a project refactoring a cube, and one of the small changes was that one of the dimension names changed from Accounts to Account. One of the core load rules for this cube uses the “sign flip on UDA” feature. Of course, for this feature to work (I’ve used it many times before and love it [but not as much as I love EIS…]), the load rule needs to specify the dimension and the UDA of the members in that dimension whose signs should be flipped. Well, sign flipping wasn’t working, and even though the load rule validated just fine, the problem was the nonexistent dimension in the sign-flip configuration. So, although there could be a reason not to validate this, it seems reasonable (if nothing else, for the sake of completeness) that load rule verification should include checking that the dimension named in the sign-flip configuration actually exists. It would have saved me a little bit of time.
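In the meantime, a few lines against the Java API can serve as a belt-and-suspenders check that a name you’re depending on (like the dimension in the sign-flip configuration) actually exists in the outline. This is just a rough sketch; the openOutline()/findMember() calls and their not-found behavior are from memory, so treat them as assumptions to verify against the JAPI docs for your version:

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.metadata.IEssCubeOutline;
import com.essbase.api.metadata.IEssMember;

public class OutlineNameCheck {
    // Returns true if the given name (a dimension name is itself an outline
    // member) can be found in the cube's outline.
    static boolean outlineContains(IEssCube cube, String name) {
        IEssCubeOutline outline = null;
        try {
            outline = cube.openOutline();
            IEssMember member = outline.findMember(name);
            return member != null;
        } catch (EssException e) {
            // Some versions throw rather than return null when the name isn't
            // found; either way, treat it as "not in the outline" here.
            return false;
        } finally {
            if (outline != null) {
                try { outline.close(); } catch (EssException ignored) { }
            }
        }
    }
}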