Essbase Renegade Members Revisited

For some reason the other day I was thinking “Whatever happened to that renegade members feature?” So I did some digging.

Renegade members, by the way, refers to this concept where instead of a data record being rejected, you can map it to some other member. Other names for this feature might have been “shovel members”, but renegade members sounds cooler. That said, it’s a feature with a cool name but an apparently terrible publicist.

Renegade members were blogged about as early as a few years ago, such as on Cameron’s blog (during the 2013 OpenWorld), in Russian (apparently), and even over at Rittman Mead’s blog (before Mark spent his days trying to get tea kettles to work with the internet, but I digress).

But there’s a curious lack of information on renegade members since then. There is, however, just enough information on the internet to piece this together. There’s a little bit about renegade members in the official documentation. Just as important (for my purposes), there are two methods relating to renegade members in the Essbase JAPI Javadoc.


Database stats and outline viewing with Vess

I had a little bit of time this weekend and dusted off Vess for some updates. I was able to greatly enhance the functionality and, perhaps more importantly, broaden the range of use cases it can address.

I already wrote about a use case previously: using SQL to create a substitution variable. Vess supports fully symmetric substitution variable updates. That is to say, you can create, read, update, and delete substitution variables on your Essbase server just by changing the data in the proper table. So just to elaborate on my thoughts on the previous post, I think there are some nice use cases here. Of course, there’s nothing wrong with MaxL, but if you have an instance where you are dumping substitution variables via MaxL and scraping the ASCII art off of them in order to get to the real variable values, Vess offers a cleaner approach.
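
Just to sketch what that can look like from plain old JDBC code, here’s a minimal example. Note that the connection URL, table name, and column names below are purely illustrative placeholders rather than the actual Vess schema – the point is simply that a substitution variable update is nothing more exotic than a standard UPDATE statement:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpdateSubVar {
    public static void main(String[] args) throws Exception {
        // Hypothetical Vess JDBC URL -- adjust to however the driver is actually registered
        String url = "jdbc:vess://essbase-host:1423";

        try (Connection conn = DriverManager.getConnection(url, "admin", "password")) {
            // Hypothetical table/column names for the substitution variable table
            String sql = "UPDATE VARIABLES SET VALUE = ? WHERE APP = ? AND DB = ? AND NAME = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "FY2015");      // the new variable value
                ps.setString(2, "Sample");
                ps.setString(3, "Basic");
                ps.setString(4, "CurrentYear");
                ps.executeUpdate();             // Vess pushes the change to the Essbase server
            }
        }
    }
}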

As of now, Vess also provides a window into a whole slew of properties on various Essbase objects: server, application, and database. Here’s a view of some database properties (as shown in a generic JDBC GUI with Vess configured as a driver):

Using Vess to view Essbase database properties for Sample.Basic

There’s a use case here too. I know of more than a few people that are dumping database stats using MaxL (such as to monitor database/server stats over time). So instead of running a script, dumping to a file, and parsing that out (or whatever), you could slam the data directly into SQL (using ODI or just some tool that can execute SQL statements).
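
As a rough sketch of what that could look like (again, the Vess URL and every table/column name here is an illustrative placeholder, not a documented schema), a small JDBC routine could copy a handful of stats out of Vess and into a history table in a regular relational database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class SnapshotDatabaseStats {
    public static void main(String[] args) throws Exception {
        // Both connection URLs and all table/column names are illustrative placeholders
        try (Connection vess = DriverManager.getConnection("jdbc:vess://essbase-host:1423", "admin", "password");
             Connection stats = DriverManager.getConnection("jdbc:sqlserver://dbhost;databaseName=MONITORING", "user", "password");
             Statement query = vess.createStatement();
             ResultSet rs = query.executeQuery("SELECT NAME, VALUE FROM SAMPLE.BASIC_PROPERTIES");
             PreparedStatement insert = stats.prepareStatement(
                     "INSERT INTO DB_STATS_HISTORY (SNAPSHOT_TIME, PROPERTY, VALUE) VALUES (?, ?, ?)")) {

            // Timestamp every property value so stats can be trended over time
            Timestamp now = new Timestamp(System.currentTimeMillis());
            while (rs.next()) {
                insert.setTimestamp(1, now);
                insert.setString(2, rs.getString("NAME"));
                insert.setString(3, rs.getString("VALUE"));
                insert.executeUpdate();
            }
        }
    }
}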

Next up, Vess models tables for the various dimensions in a cube. This is not unlike how ODI would create a model of a cube: one table per dimension. Here’s a view of some information from Sample/Basic:

Using Vess to view outline information

Outline functionality in Vess is not symmetric: you can read, but updates are disabled for now. The use case here is that you could pull outline information without needing ODI (or the Essbase Outline Extractor, if that’s what you use in your automation). Since Vess is already a fully functional SQL database, you could pretty easily do some interesting things like grabbing all of the level-0 members in an outline (“SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0”).

Vess also supports data loads to a cube (look mom, no load rules!). I wrote about this before too. Data reads are turned off for now (as in, you can INSERT but SELECT always returns nothing). Mapping cube data into a relational model is a bit of an interesting problem in its own right.
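
To give a flavor of the load side, a single-cell insert might look something like the sketch below. The fact table layout (one column per dimension plus a data column) and all of the names are illustrative only:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class LoadOneCell {
    public static void main(String[] args) throws Exception {
        // Hypothetical Vess URL and fact table: one column per dimension plus a data column
        String url = "jdbc:vess://essbase-host:1423";
        String sql = "INSERT INTO SAMPLE.BASIC_DATA "
                   + "(YEAR, MEASURES, PRODUCT, MARKET, SCENARIO, DATA) VALUES (?, ?, ?, ?, ?, ?)";

        try (Connection conn = DriverManager.getConnection(url, "admin", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "Jan");
            ps.setString(2, "Sales");
            ps.setString(3, "Cola");
            ps.setString(4, "New York");
            ps.setString(5, "Actual");
            ps.setDouble(6, 678.0);
            ps.executeUpdate();  // no load rule anywhere in sight
        }
    }
}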

Lastly, I started writing up the ODI technology for Vess. I think this is an absolute win for Vess because it would mean that you could build a nice clean technology/Knowledge Module that doesn’t have to use a bunch of Jython to glue things together: you can treat everything as a vanilla JDBC database model. No idea when this could be ready but you can rest assured that I’ll blog about it.

Essbase data loads with Windows integrated authentication

This is a followup to a post I made on the Network54 Essbase forum: Yes, you can use Windows integrated authentication (indirectly) to perform Essbase data loads. It’s not necessarily supported, but it works.

First you start off with the connection definition in the Windows ODBC connection manager. This is where you set up a normal ODBC connection to SQL Server:

ODBC connections in Windows to be used by Essbase data loads

One of the configuration screens is where you specify the credentials, like this:

Connecting via integrated authentication

The SQL Server connection configuration lets you specify how to connect to the database instance. I would guess that much of the time this is configured with a particular native SQL Server ID. Note that it’s also possible to connect using “Windows NT authentication”. Selecting this option makes the local machine connect to the SQL Server instance using its own credentials. That is, the user running the Essbase service on the local machine would need to have access to the given SQL Server instance.

With the ODBC connection set up, you can reference it in a load rule as normal:

Essbase Data Load SQL Definition

Next, when you go to test the connection/data load, just specify anything you want for the username and password when you’re prompted in this dialog box:

Essbase SQL Data Load Credential Dialog

I have gotten into the habit of typing in ‘dummy’. As Peter Nitschke unhelpfully points out, in this case I don’t need to document the automation author because it’s already documented (heyoooo). Pete, I know you read this, by the way – is that rash cleared up yet? You actually have to put in something, otherwise the load won’t work.

Similarly if you do a normal data load, you have to put something in:

Essbase Data Load from SQL Load Rule

Or else you’ll get this helpful error message:

The error when the username is not specified

Do note that the username and password are both discarded or otherwise unneeded, assuming the network credentials are sufficient to connect to the database. As a somewhat interesting aside, this seems to be a quirk in the Essbase code path that processes the data loads. It guards against NULL (empty) values, and if either the username or password is missing, it errors out the load. If memory serves, you can get away with only filling in a username when testing in the data load rule SQL editor, but you need both when doing a data load. This is due to differences in how the EAS GUI verifies the input. Also, if you write MaxL automation that relies on this Windows integrated authentication, you need to fill in some dummy or otherwise ignored values in the MaxL.

My guess is that Essbase actually tries using the username/password but when it is able to make a successful connection via the integrated authentication, the SQL Server driver just ignores any other connection efforts, so the “bad” username and password have no effect.

Apparently connecting this way is unsupported but I have set this up and used it numerous times without any issue. I think if Oracle really wanted to polish this up and make integrated authentication a more supported option, that’d be great, and I might start with some consistency on the EAS side of things as well as a modified MaxL syntax alternative that lets you explicitly specify integrated authentication (or put another way, a MaxL syntax that doesn’t need username/password in it).

Glitch with a non-printing Unicode character in member name

This is an Essbase bug, kind of. I’ve been working on a project lately that uses the relatively new MaxL Essbase outline export command (yes, Pete, I called it relatively new again, even though according to you it’s not… well, it’s relatively NEW TO ME SO THAT’S WHAT MATTERS… :-). Anyway, I wrote a quick XML parser for the output, using Java.

The nice thing about the parser is that it uses a Java technology called JAXB. It’s a really nice way of modeling the contents of an XML file using Java classes, so that you don’t have to write your own parsing code, which is tedious and error prone. There are reasons you might use either approach, but overall I have been very happy over the last few years with the ability to write XML readers in Java with JAXB.
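
To give a sense of the mechanics, here’s a stripped-down sketch of the JAXB approach. The element and attribute names below are just illustrative of the technique – they are not the actual outline export schema, which has more structure than this:

import java.io.File;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Illustrative model only: a member with a name attribute and nested child members
@XmlRootElement(name = "Member")
class Member {
    @XmlAttribute(name = "name")
    public String name;

    @XmlElement(name = "Member")
    public List<Member> children;
}

public class OutlineReader {
    public static void main(String[] args) throws Exception {
        // JAXB builds the object graph for us -- no hand-written parsing code
        JAXBContext context = JAXBContext.newInstance(Member.class);
        Member root = (Member) context.createUnmarshaller()
                .unmarshal(new File("outline_export.xml"));
        System.out.println("Top member: " + root.name);
    }
}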

Curiously, I came across an outline export that would cause the parser to throw an exception. The Java stack trace indicated that an illegal character (0x1f – that’s not the character itself, but rather its Unicode code point) was at fault. Specifically, character 0x1f is the “unit separator” character. In a nutshell, while most of us are used to writing things with letters, numbers, and the likes of tabs, spaces, and newlines, there are all of these other weird characters out there with arcane uses or historical reasons for existing. It’s such a prevalent issue (or at least, it can be) that many advanced text editors have commands to “show invisibles”, that is, non-printing characters. One such tool that many of us Essbase boffins are adept with is Notepad++ – a veritable Swiss Army knife of a text editor.

Nicely enough, the Java stack trace indicated that the problem in the XML was with parsing a “name” attribute on a <Member> tag – in other words, an Essbase member name in the source outline contained an invisible character. As it turns out, in XML 1.0 it is illegal to have this particular character. So while Essbase happily generates invalid XML during the export, when I try to import it with Java, I get the exception. But how to find the offending member? I mean, how do you do a text search for an invisible character (seriously, this is like some “what is the sound of one hand clapping” kind of stuff).

In Notepad++ you can search using a regular expression. So I turned on Show Invisibles, pulled up the Find dialog, checked the “Use Regular Expressions” option, then typed in [\x1f], which is the regex code that tells Notepad++ to search for this little bastard of a character. Sure enough, there was exactly one in the output file, which presumably snuck in from an otherwise innocuous copy and paste into EAS some time ago. I fixed the member name in EAS, reran the export, reprocessed with the parser, and all was well again in the universe.
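
If you’d rather deal with this programmatically (say, as a preprocessing step before the parser ever sees the file), a few lines of Java will find and/or scrub the character. The file names here are just placeholders:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ScrubUnitSeparator {
    public static void main(String[] args) throws Exception {
        String content = new String(
                Files.readAllBytes(Paths.get("outline_export.xml")), StandardCharsets.UTF_8);

        // Report where the offending character lives, if anywhere
        int index = content.indexOf('\u001F');
        if (index >= 0) {
            System.out.println("Found 0x1f at character offset " + index);
        }

        // Strip it out and write a clean copy for the XML parser
        Files.write(Paths.get("outline_export_clean.xml"),
                content.replace("\u001F", "").getBytes(StandardCharsets.UTF_8));
    }
}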

Thoughts on deprecated Essbase 11.1.2.4 features and the future of EAS

The Hyperion blogging-verse has been quite aflutter with the release of 11.1.2.4. So I won’t bore you with a recap on new features, since that has been done quite effectively by my esteemed colleagues. I will say, however, that by all accounts it seems to be a great release.

Oracle is really starting to hit their stride with EPM releases.

As a brief aside: I seem to be in the relative minority of EPM developers in that I come from the computer science side of things (as opposed to Finance), so believe me when I say there is a tremendous amount of energy and time spent to develop great software: writing code, testing, documenting, and more. Software is kind of this odd beast that gets more and more complex the more you add on to it.

Sometimes the best thing a developer can do is delete code and remove features, rather than add things on. This is a very natural thing to happen in software development. Removing features can result in cleaner sections of code that are faster and easier to reason about. It typically sets the stage for developing something better down the road.

In the software world there is this notion of “deprecating” features. This generally means the developer is saying “Hey, we’re leaving this feature in – for now – but we discourage you from building anything that relies on it, we don’t really support it, and there’s a good chance that we are going to remove it completely in a future release.”

With that in mind, it was with a curious eye that I read the Essbase 11.1.2.4 release notes the other day – not so much with regard to what was added, but to what was taken away (deprecated). EIS is still dead (no surprise), the Visual Basic API is a dead end (again, not a secret), some essbase.cfg settings are dead, direct I/O is on the list (I have yet to hear a success story with direct I/O…), zlib block compression is gone (I’m oddly sad about this), but most interesting for now is this little tidbit: the Essbase Administration Services Java API is deprecated.

For those of you who aren’t aware, there is a normal Java API for Essbase that you may have heard of, but lurking off in the corner has been a Java API for EAS. This was a smallish API that one could use to create custom modules for EAS. It let you hook into the EAS GUI and add your own menu items and things. I played with it a little bit years ago and wrote a couple of small things for it, but nothing too fancy. As far as I know, the EAS Java API never really got used for anything major that wasn’t written by Oracle.

So, why deprecate this now? Like I said, it’s kind of Oracle’s way of saying to everyone, “Hey, don’t put resources into this, in fact, it’s going away and if you do put resources into it, and then you realize you wasted your time and money, we’re going to point to these release notes and say, hey, we told you so.”

Why is this interesting? A couple of things. One, I’m sad that I have to cross off a cool idea for a side project I had (because I’d rather not develop for something that’s being killed).

Two (and perhaps actually interesting), to me it signals that Oracle is reducing the “surface area” of EAS, as it were, so that they can more easily pivot to an EAS replacement. I’m not privy to any information from Oracle, but I see two possible roads to go down, both of which involve killing EAS:

Option 1: EAS gets reimplemented into a unified tool alongside Essbase Studio’s development environment.

Option 2: EAS functionality gets moved to the web with an ADF based front-end similar in nature to Planning’s web-based front-end.

I believe Option 2 is the more likely play.

I always got the impression from the Essbase Studio development environment that it was meant to more or less absorb EAS functionality (at least, more than it ever actually did). I say this based on early screenshots I saw and my interpretation of its current functionality. Also, Essbase Studio is built on the same framework as Eclipse (one of the most popular Java development environments), which is to say an incredibly rich, modular, flexible platform that looks good across operating systems and is easy to update.

In terms of choosing a client-side/native framework to build tools on, this would be the obvious choice for Oracle to make (and again, it seems like they did make this choice some time ago, then pulled back from it).

The alternative to a rich “fat client” is to go to the web. The web is a much more capable place than it was back in the Application Manager and original EAS days. My goodness, look at the Hyperion Planning and FDMEE interfaces and all the other magic that gets written with ADF. Clearly, it’s absolutely technically possible to implement the functionality that EAS provides in a web-based paradigm. Not only is it possible, but it also fits in great with the cloud.

In other words, if you’re paying whatever per month for your PBCS subscription, and you get a web-based interface to manage everything, how much of a jump is it for you to put Essbase itself in the cloud, and also have a web interface for managing that? Not much of a leap at all.

Do this, not that: Current vs. Prior Year dynamic calc in Scenario

Here’s just a quickie I saw the other day. Imagine a normal cube with a Years dimension, a Scenario dimension, and any other normal dimensions. Years contains FY2012, FY2013, FY2014, and so on. Scenario contains Actual, Budget, and all the other normal stuff you’d expect to see.

Naturally, the Scenario dimension will contain all sorts of handy dynamic calcs, starting with our trusty Actual to Budget variance:

Actual vs. Budget: @VAR("Actual", "Budget");

So far so good.

How about a scenario that gives us the current year versus the prior year? Don’t do this:

@VAR("FY2014", "FY2013");

Or this (which is I guess slightly better but still not quite great):

@VAR(&CurrentYear, &PriorYear);

Why shouldn’t you do this? One, it requires maintenance – the kind of maintenance that is easily forgotten about until a user calls up and says that something doesn’t look quite right.

Second and more importantly, it’s semantically wrong. Hard-coding the year effectively breaks the inter-dimensional promise that our cube is ostensibly making – which is that the Scenario value we’re looking at should be based on the current Year member – not some arbitrary member irrespective of the POV.

(This all being said, yes, there could be a legitimate design reason to code a dynamic calc in Scenario that is always the current year irrespective of the POV, but I digress).

A simple formula can get us the prior value:

@PRIOR("Actual", 1, @CHILDREN("Years"))

As well as the actual versus prior:

@VAR("Actual", @PRIOR("Actual", 1, @CHILDREN("Years")));

Note that this assumes there is nothing else in the Years dimension and that it’s got a typical “ascending” sort (2010, 2011, 2012, in that order). If you have a years dimension going in descending order you could put -1 in for the @PRIOR command or just switch to @NEXT.

There you have it – a simple cleanup that saves maintenance, doesn’t rely on substitution variables being updated, is intuitive, and, more importantly, doesn’t break the semantics of the cube.

Inputting to level 1 and a crazy app idea

Happy Friday! These weeks are flying by like a blur, it seems.

As you may know from my previous posts, I’m constantly thinking up little Hyperion-related app ideas. And since it’s Friday, I’m feeling a little whimsical and have YET ANOTHER app idea. How about an app that detects when you are designing a cube that takes input at level 1 (or level 2 for good measure), then you run the app and it automatically emails you to tell you that you’re an idiot. BONUS POINTS for turning off Aggregate Missing Values!

Genius. I like this.

Possible idea for a tool: cube delta

I have a question for my audience about a tool idea. Would it be useful to be able to tell what the data differences are between two cubes with the same (or highly similar) dimensional structure? For example, let’s say you had Sample/Basic on one server, and Sample/Basic on another server. Would it be useful to check for differences in the data loaded to them, if any?

I could see this possibly being helpful in checking for differences between cubes in development/QA/production, between archive cubes and ‘real’ cubes, and possibly during testing when you spin off a side cube to check some calcs.

Just a thought. Let me know! After HUMA is kicked over the wall I’ll be looking for my next side project (as time permits) and I am trying to focus on things that will increase the productivity of Hyperion developers.

Thank you to Hyperion Unused Member Analyzer testers, and thoughts on future tools

Thank you all so much for helping out. I am absolutely blown away at the response that this utility has generated from all of you. Please let me know if you run into any issues.

Changing subjects (and zooming out) a bit, back to my efforts to understand what you (as consultants and Hyperion professionals) check during your “health check hit list”, it’s my goal over the next year to put together a suite of power tools that enable all of us to create, analyze, and maintain more robust solutions. HUMA is one such tool in the toolbox.

I have a few other ideas up my sleeve, but if you ever find yourself saying, “Self, I wish I had a tool for [fill in the blank]” or “I wish there were an easy way to…” then I would love to know about it. Even if it’s something you do already that’s perhaps manual and laborious, perhaps it can be automated, sped up, improved, and made useable by the community at large.

Beta testers wanted for Hyperion Unused Member Analyzer tool

I have been working on a tool called HUMA – Hyperion Unused Member Analyzer. The idea for it came out of some side discussions at Kscope a couple of months ago. The idea is simple: Wouldn’t it be nice if there was an easy way to determine if any members are unused in a given cube?

Given a server, database, and credentials, HUMA will connect to a cube, analyze its stored members, generate a list of all possible member combinations, then iterate over it, analyzing the resulting data grids for the presence of data. If there are members with no data in them, they are shown to the user running the program. To increase performance, HUMA orders the grids, and the sequences of members within the sub-grids, so that they align with the dense/sparse structure of the cube, letting it pound on the same hot blocks before moving on to grids with different sparse permutations.
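
Here’s a conceptual sketch of that ordering idea. To be clear, this is not the actual HUMA code – the member lists are hard-coded toys, whereas the real tool reads the outline and builds its grids via the Essbase Java API – but it shows why the sparse/dense ordering keeps the same blocks hot:

import java.util.Arrays;
import java.util.List;

public class GridOrderSketch {
    public static void main(String[] args) {
        // Toy stand-ins for outline members; the real tool pulls these from the cube
        List<String> markets  = Arrays.asList("New York", "California");   // sparse
        List<String> products = Arrays.asList("Cola", "Root Beer");        // sparse
        List<String> measures = Arrays.asList("Sales", "COGS", "Margin");  // dense

        // The outer loops walk sparse combinations (one block at a time); the inner
        // loop walks dense members, so each hot block is fully checked for data
        // before moving on to the next sparse permutation.
        for (String market : markets) {
            for (String product : products) {
                for (String measure : measures) {
                    System.out.printf("check %s -> %s -> %s%n", market, product, measure);
                }
            }
        }
    }
}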

On a pretty gutless VM of mine with Essbase running in 1GB of RAM, a standard Sample/Basic cube can be ransacked for data in about three seconds. Also, given the way the tool works, it’s not necessary to do a full export of a cube or anything since the analysis is based on the data that is queried and immediately discarded. So far it seems to work pretty well.

The goal is for HUMA to be one tool in the toolbox for Hyperion/Essbase admins who want to analyze their environment and act on possible improvements. This goes hand in hand with my research and efforts to find out what we all do when we dive into a new system as part of a health check hit list. Cleaning up unused members in a BSO database can yield improvements (particularly in the dense dimensions).

In any case, version 1.0 of the tool is basically ready to go and I’d love to have a few people test it out and let me know of any issues!