Fixing an esoteric Oracle Incremental Update (MERGE) knowledge module bug with journalizing and an aggregate function

Editing knowledge modules is not for the faint of heart. KMs translate your interfaces and other fun business things into executable code. In short, they convert ODI’s mostly declarative “what” definition of a job into executable units by taking care of the “how.” There are numerous options available on interfaces, and these largely affect the code that is generated for your job.

I ran into a problem the other week where my integration into an Oracle schema, by way of the Oracle Incremental Update (MERGE) IKM from a journalized data source and using an aggregate function, was not working.

Thankfully, someone else had run into this issue, and Google turned up a quite helpful OTN post that solved 99% of the problem. The post is a little dated, though, so the fix didn’t fully work for me.

Here’s the relevant fix code from the brave and helpful OTN poster:

<%=odiRef.getGrpBy()%>
<%if ((odiRef.getGrpBy().length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
<%}%>

This doesn’t quite work, owing to a relatively new ODI feature: multiple data sets in the source of an interface. Multiple data sets are a really, really cool feature that was added a little while ago. They basically let you take the UNION of different data sources and integrate them to the same target. This is useful because it saves you from needing multiple interfaces for the same target. You can also intersect and minus out rows, but I haven’t quite had occasion to use that just yet.
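To make that a little more concrete, here is a rough sketch of the kind of statement ODI ends up generating when an interface has two data sets feeding the same target flow. The table and column names here (ORDERS_WEB, ORDERS_RETAIL, and friends) are made up purely for illustration; the point is just that each data set contributes its own SELECT and the results are combined with the set operator chosen on the interface:

-- hypothetical sketch: one SELECT per data set, combined by the chosen set operator
select CUSTOMER_ID, sum(ORDER_AMOUNT) as TOTAL_AMOUNT
from ORDERS_WEB
group by CUSTOMER_ID
union
select CUSTOMER_ID, sum(ORDER_AMOUNT) as TOTAL_AMOUNT
from ORDERS_RETAIL
group by CUSTOMER_ID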

As it pertains to us, though, there’s a parameter on much of the ODI-generated code now that is used to specify which of the data sets to perform an action on. Here’s the updated updated code (yes, that’s two “updated”s in a row):

<%=odiRef.getGrpBy(i)%>
<% if ((odiRef.getGrpBy(i).length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
<%}%>

Note the addition of the (i) in two places, which is how the data set is specified. Many thanks to the original poster for tackling this in the first place and getting me most of the way to a fix.
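For a little more context on where that (i) comes from, here is a rough sketch of the surrounding pattern. This is not the verbatim text of the Oracle Incremental Update (MERGE) IKM, just an approximation of the standard shape: the per-data-set SQL is wrapped in a loop over the data sets, and the same index gets passed to the other odiRef calls:

<%for (int i = odiRef.getDataSetMin(); i <= odiRef.getDataSetMax(); i++){%>
<%=odiRef.getDataSet(i, "Operator")%>
select <%=odiRef.getColList(i, "", "[EXPRESSION]", ",\n\t", "", "")%>
from <%=odiRef.getFrom(i)%>
where (1=1)
<%=odiRef.getJoin(i)%>
<%=odiRef.getFilter(i)%>
<%=odiRef.getGrpBy(i)%>
<%if ((odiRef.getGrpBy(i).length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
<%}%>
<%}%>

If the loop variable in your KM isn’t literally named i, use whatever the enclosing loop declares; the important part is that every call inside the loop (getColList, getFrom, getGrpBy, and so on) takes the same data set index.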

One extra thing to validate in that load rule, even if all looks well

This isn’t one you will run into too often, if ever, but it seems like a small gap worth mentioning. I was working on a load rule the other day (yay load rules!) and everything validated just fine. There’s nothing better than the smell of an “everything validated correctly!” dialog in the morning. I am on a project refactoring a cube, and one of the small changes was that one of the dimension names changed from Accounts to Account. One of the core load rules for this cube uses the “sign flip on UDA” feature. For this feature to work (which I have used many times before and love [but not as much as I love EIS…]), the load rule needs to specify both the dimension and the UDA of the members in that dimension whose sign should be flipped.

Well, sign flipping wasn’t working, and even though the load rule validated just fine, the problem was the now non-existent dimension in the sign flip configuration. So, although there may be a reason not to validate this, it seems reasonable (if nothing else, for the sake of completeness) that load rule verification should also check that the dimension name configured for sign flipping actually exists. It would have saved me a little bit of time.

In case you missed it: Oracle Data Integrator success story webinar

A little while ago I presented a success story using Oracle Data Integrator. The webinar goes over ODI functionality at a high level, giving a quick tour of the benefits of the major functional areas and how they were used to greatly improve and build out the ETL and automation at a high-profile client. If you are interested in using ODI in your organization, or aren’t sure but want to know more, then I strongly encourage you to check it out. ODI is now one of my specialization areas and I have been doing a tremendous amount of work with it, so I look forward to becoming an expert in this technology and sharing that expertise with others.

Note that this webinar, while technical, is not a deep dive into implementation details. I am planning a follow-up webinar to look very comprehensively at using ODI as an ETL/automation solution and implementing it with best practices. That will be the webinar to look for if you are more technical and really want to see the ins and outs of how things work and how they can be architected for robustness, maintainability, and efficiency.

Using ODI error tables (and tables with dollar signs in their names) as models for transformations

I set up a solution a while ago where ODI facilitates data movement from an ODI error table into another data store. As you may know, ODI has a naming convention where its temporary and work tables are prefixed with some combination of a letter (or two), a dollar sign, and perhaps an underscore. So if my original table in a relational data store is CUSTOMERS, then the error table for that table would be E$_CUSTOMERS.

So of course I fire up my RKM to build out the model metadata for these new tables, just like I normally would. Everything goes fine but when I go to build the interface and run it, things don’t work so well. A dollar sign ($) is a very common character for scripts and programming languages to indicate that variable interpolation should occur (in other words, to fill in a placeholder). For example, if the variable $color = “orange” and you interpolate a string such as echo “My favorite color is $color!” then the result is “My favorite color is orange!” Nothing too fancy here.

That being said, sometimes a dollar sign can wreak havoc when used where some code doesn’t expect it. I’m not sure if it ever got fixed, but I once did development work at a place that doled out Active Directory IDs based on letters from the first and last name – three letters from each, to be precise. People with very short last names would have dollar signs appended to their IDs. There was a bug in Hyperion where you couldn’t log in with one of these user names, likely because it just never got tested (oops). I’ll assume that’s fixed now.

In any case, back to ODI. In a given ODI datastore, you have the name, the resource name, and the alias. The name is typically how you see the datastore in Designer. The resource name is the actual resource in the schema/database. The alias is used by ODI in interfaces as, well, an alias. I didn’t record the exact error message, but given that my interface was failing and the output in Operator was a bit odd, I immediately suspected the dollar sign was throwing things off, either in ODI’s generated code, on the target system (an Oracle database server), or both.

My quick solution? Just replace the dollar sign with an underscore in the alias in the datastore. You can delete the dollar sign entirely but I decided to just go with an underscore. By making the change in the datastore, subsequent interfaces leveraging this datastore will also benefit from the fix, as opposed to trying to adjust just the one interface. Of course, leave the resource name alone (and the name) since they don’t need to be changed (and in the case of the resource name, shouldn’t be changed).
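To illustrate what this buys you, here is roughly the shape of the generated SQL before and after the change; the schema, table, and column names (MY_SCHEMA, CUST_ID, ERR_MESS) are made up for the example. Notice that the resource name E$_CUSTOMERS stays the same in both cases and only the alias changes:

-- alias taken as-is from the datastore; the bare dollar sign is what caused me grief
select E$_CUSTOMERS.CUST_ID, E$_CUSTOMERS.ERR_MESS
from MY_SCHEMA.E$_CUSTOMERS E$_CUSTOMERS

-- alias with the dollar sign swapped for an underscore; the resource name is untouched
select E__CUSTOMERS.CUST_ID, E__CUSTOMERS.ERR_MESS
from MY_SCHEMA.E$_CUSTOMERS E__CUSTOMERS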

Another approach to all of this would be to change the physical topology’s naming conventions so that error tables (and the other objects that ODI names) don’t get a dollar sign, but I quite like ODI’s preferred naming strategy (for the most part).

Linux troubleshooting guide for system admins!

Most, but not all, of my Essbase administration experience is on Windows servers. Linux support appeared years ago and has gotten much better – and more common – as the years have progressed. I ran Linux as my desktop for many years (Slackware, Fedora, Gentoo [shudder], Ubuntu, and more) before falling in love with OS X, so I’m pretty comfortable on a Linux command line (and an OS X command line, for that matter). But I came across this server troubleshooting article a while back that has some really useful material in it, much of it new to me. If you need to get into a Linux system and start digging around to see what’s going on, this is an absolutely awesome guide.

My ODTUG Kscope13 presentation: Practical Essbase Web Services

It has been a few years since I last presented at Kscope, but I am back this year! I will be presenting on “Practical Essbase Web Services” – this will be my take on the new web services features from recent Essbase versions, as well as drawing on my experience developing mobile solutions, developing Essbase middle tiers with the Java API, and other approaches to extracting data from Essbase. For those of you in C# shops or wanting to get at Essbase data from your other favorite languages (I’m looking at you, PHP, Python, and Clojure), this should be a fun overview of your options. I’ll look forward to seeing you there – and if you are interested in the presentation but aren’t going to ODTUG’s Kscope, let me know!

Portable Firefox to the rescue!

Just a quick tip that I’ve been meaning to mention as my schedule allows for a few more posts these days… Have you ever heard of PortableApps.com? I used to use them quite a bit, although not too much lately. These are popular open source software applications that have been packaged with the specific intent of running them from a USB stick. They have web browsers, email clients, virus scanners, image editors, and more. The nice thing is that you don’t have to install them – they are ready to run as-is. This also comes in particularly handy if you have an existing installation of some software (again, using Firefox as an example) and don’t want to mess with it.

I ran into an issue with a client a while back where Internet Explorer was having some issues logging in to Workspace/Planning, but this particular version (something around 11.1.1.3) didn’t work with the current version of Firefox. It was, however, certified to work with a much older version of Firefox – version 3.5. What to do? Go to PortableApps.com, download the portable package of Firefox 3.5, run it from the local hard drive without having to install anything, and presto, I was good to go. Next time you have a browser acting up or need to switch things up, this might be a decent way to go.