Inputting to level 1 and a crazy app idea

Happy Friday! These weeks are flying by like a blur, it seems.

As you may know from my previous posts, I’m constantly thinking up little Hyperion-related app ideas. And since it’s Friday, I’m feeling a little whimsical and have YET ANOTHER app idea. How about an app that detects when you are designing a cube that takes input at level 1 (or level 2, for good measure), then you run the app and it automatically emails you to tell you that you’re an idiot. BONUS POINTS for turning off Aggregate Missing Values!

Genius. I like this.

Possible idea for a tool: cube delta

I have a question for my audience about a tool idea. Would it be useful to be able to tell what the data differences are between two cubes with the same (or highly similar) dimensional structure? For example, let’s say you had Sample/Basic on one server, and Sample/Basic on another server. Would it be useful to check for differences in the data loaded to them, if any?

I could see this possibly being helpful in checking for differences between cubes in development/QA/production, between archive cubes and ‘real’ cubes, and possibly during testing when you spin off a side cube to check some calcs.
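
To make the idea a little more concrete, here’s a minimal sketch of just the comparison step, assuming each cube’s data has already been exported to a text file with the member combination first and the value last on each line (the file layout and the CubeDelta name are placeholders of mine, not any particular Essbase export format):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class CubeDelta {

  // Load an export file into a map of member-combination -> value
  static Map<String, String> load(String filename) throws IOException {
    Map<String, String> data = new HashMap<>();
    try (BufferedReader reader = new BufferedReader(new FileReader(filename))) {
      for (String line = reader.readLine(); line != null; line = reader.readLine()) {
        int split = line.lastIndexOf(' ');
        data.put(line.substring(0, split), line.substring(split + 1));
      }
    }
    return data;
  }

  public static void main(String[] args) throws IOException {
    Map<String, String> left = load(args[0]);
    Map<String, String> right = load(args[1]);
    // Report combinations whose values differ (or are missing on the other side)
    for (Map.Entry<String, String> entry : left.entrySet()) {
      String other = right.get(entry.getKey());
      if (!entry.getValue().equals(other)) {
        System.out.println(entry.getKey() + ": " + entry.getValue() + " vs " + other);
      }
    }
  }
}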

Just a thought. Let me know! After HUMA is kicked over the wall I’ll be looking for my next side project (as time permits) and I am trying to focus on things that will increase the productivity of Hyperion developers.

Thank you to Hyperion Unused Member Analyzer testers, and thoughts on future tools

Thank you all so much for helping out. I am absolutely blown away at the response that this utility has generated from all of you. Please let me know if you run into any issues.

Changing subjects (and zooming out) a bit, back to my efforts to understand what you (as consultants and Hyperion professionals) check during your “health check hit list”: it’s my goal over the next year to put together a suite of power tools that enable all of us to create, analyze, and maintain more robust solutions. HUMA is one such tool in the toolbox.

I have a few other ideas up my sleeve, but if you ever find yourself saying, “Self, I wish I had a tool for [fill in the blank]” or “I wish there were an easy way to…” then I would love to know about it. Even if it’s something you already do that’s manual and laborious, perhaps it can be automated, sped up, improved, and made usable by the community at large.

Beta testers wanted for Hyperion Unused Member Analyzer tool

I have been working on a tool called HUMA – Hyperion Unused Member Analyzer. The idea for it came out of some side discussions at Kscope a couple of months ago. The idea is simple: Wouldn’t it be nice if there was an easy way to determine if any members are unused in a given cube?

Given a server, database, and credentials, HUMA will connect to a cube, analyze its stored members, generate a list of all possible member combinations, then iterate over it, analyzing the resulting data grids for the presence of data. If there are members with no data in them, they are shown to the user running the program. To increase performance, HUMA orders the grids, and the sequences of members within the sub-grids, to align with the dense/sparse structure of the cube, so that it can pound on the same hot blocks before moving on to grids with different sparse permutations.
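
Stripped of the actual Essbase API calls, the core approach looks roughly like the sketch below; the GridSource interface is a hypothetical stand-in of mine for the grid retrieval, not part of HUMA or the Essbase Java API:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UnusedMemberSketch {

  // Hypothetical stand-in for a grid retrieval against the cube
  interface GridSource {
    // One sparse member combination crossed with all dense members; null means #Missing
    Double[] retrieve(List<String> sparseCombination, List<String> denseMembers);
  }

  static Set<String> findUnusedMembers(GridSource cube,
                                       List<List<String>> sparseCombinations,
                                       List<String> denseMembers) {
    // Start by assuming every stored member is unused, then cross members off as data turns up
    Set<String> unused = new HashSet<>(denseMembers);
    for (List<String> combo : sparseCombinations) {
      unused.addAll(combo);
    }

    // Walk the sparse combinations in dense/sparse-friendly order so the same
    // hot blocks are hit before moving on to a different sparse permutation
    for (List<String> combo : sparseCombinations) {
      Double[] cells = cube.retrieve(combo, denseMembers);
      for (int i = 0; i < cells.length; i++) {
        if (cells[i] != null) {
          unused.remove(denseMembers.get(i)); // this dense member has data
          unused.removeAll(combo);            // and so do the sparse members in this combination
        }
      }
    }
    return unused;
  }
}

The real tool obviously builds those member lists from the outline and issues actual grid retrieves, but the “cross members off as data turns up” idea is the gist of it.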

On a pretty gutless VM of mine with Essbase running in 1GB of RAM, a standard Sample/Basic cube can be ransacked for data in about three seconds. Also, given the way the tool works, it’s not necessary to do a full export of a cube or anything since the analysis is based on the data that is queried and immediately discarded. So far it seems to work pretty well.

The goal is for HUMA to be one more tool in the toolbox for Hyperion/Essbase admins who want to analyze their environment and act on possible improvements. This goes hand in hand with my research and efforts to find out what we all do when we dive into a new system as part of a health check hit list. Removing unused members from a BSO database can yield improvements (particularly on dense members).

In any case, version 1.0 of the tool is basically ready to go and I’d love to have a few people test it out and let me know of any issues!

Stupid Essbase names for a fish

Happy Monday.

Do you love random, not-quite-Essbase-related posts? Well, I’m going to find a way to relate this to Essbase, believe it or not. I am the proud owner of an AquaFarm. Actually, I’m not sure if proud is quite the right word. I am very enthusiastic about aquaponics, but my current living arrangements don’t quite accommodate my interest. I received an AquaFarm as a gift a while back, which is a very miniature aquaponics setup. The kit has everything you need: the tank, pump, shale rocks, seeds, growing baskets… but no fish. Nicely enough they give you a coupon for a betta fish from Petco. Right now the fish is named Mr. Fish.

Just for fun and to cement my Essbase geek cred, I propose giving the fish an Essbase-inspired name. Here are my thoughts so far:

  • Low Block Density
  • Inter-dimensional Irrelevance
  • Unable to Save Custom Views
  • Multiple Retrievals On A Single Sheet

Have your own stupid name for my new fish? I’d love to hear it. :D

Eclipse tips for you Java folks out there

I know there are a lot of you Java and Essbase folks out there in addition to myself. I’m always interested in ways to use an IDE more effectively. I came across this list of Eclipse annoyances and easy solutions for them earlier today. I’m already using some of these, but some of them were new to me. So if you want to get your environment dialed in a little bit, check these out. And for you IntelliJ folks… just keep being your normal smug selves. ;-)

Oh, and I’m thinking about doing a small series on “Essential Java for Essbase Developers” or something along those lines. It’d be something like a tutorial series on how to navigate around all of the Java terminology, features, and ecosystem in order to get straight to developing the solutions you want to develop, just a little bit quicker. Let me know if you have any thoughts or suggestions!

Using the Unix paste command to join files together by column

I can’t believe I didn’t know about this command-line utility until very recently. I was doing a little research on some text processing utilities and came across the “paste” command. As a Mac user I have it installed already, and it appears to be a fairly common Linux/Unix tool as well. It’s part of a suite of text processing utilities that are fairly standard. Oddly, I am very familiar with the likes of sed, grep, awk, and so on, and yet had not stumbled upon this one.

Anyway, imagine the following files, starting with names.txt:

Jason
Cameron
Tim

And numbers.txt:

555-1234
555-9876
555-2468

Then we just run paste:

paste names.txt numbers.txt

And we get this:

Jason   555-1234
Cameron 555-9876
Tim     555-2468

Paste just marries the files up by column, reading a line from each file and joining them with a tab (by default). You can supply more than two files.

I don’t have an immediate need for this utility for processing Essbase data, but it just might come in handy someday, so I’m going to keep it in my back pocket. And for you Windows users out there, well, you know the deal: get Cygwin or whatever the latest and greatest Unix-on-Windows environment is.

Processing a file line by line in Java

I seem to use this pattern a lot, particularly in projects without dependencies (i.e., no Guava). What’s the cleanest way to iterate through a file line by line in Java? There are a lot of examples out there that are pretty messy. Here’s one I keep coming back to that is nice because it uses a traditional for loop to manage things. In this case it’s not just a counter variable; instead, it’s the more general form of the for loop, with an initializer, a loop condition, and an iteration statement.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

public void process(Reader reader) throws IOException {
  BufferedReader bufferedReader = new BufferedReader(reader);
  // readLine() returns null at the end of the stream, which ends the loop
  for (String line = bufferedReader.readLine(); line != null; line = bufferedReader.readLine()) {
    System.out.println("Line: " + line);
  }
  bufferedReader.close();
}

That’s it. That’s about as clean as I think I can get it. Note that the argument to the method isn’t a String filename but rather a Reader object, which means this method doesn’t have to deal with FileNotFoundException itself; the caller would presumably supply a FileReader or some other type of Reader.
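
As a quick usage example (the file name here is just a placeholder of mine), the caller wires up the Reader and handles the file-not-found case itself:

try (FileReader reader = new FileReader("data.txt")) { // hypothetical file name
  process(reader);
} catch (IOException e) {
  e.printStackTrace();
}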

Pre-seeding Hyperion Planning User Preferences with values for a smoother user experience

Wow, I think I am actually writing an article on Hyperion Planning. I think pigs are flying right now. I have been helping out on a system upgrade for the last few months where we are in many ways “refactoring” a Planning deployment. I’m borrowing that term from the software world. In other words, we are changing how things work under the hood without the explicit intention of changing how things look to users. One of the changes we are making, however, is to introduce some variables for users to be able to change their Version and Scenario.

Just to be clear, the variables are the ones that are set in the preferences menu, and we’d like to provide some defaults so that the users have the most likely choices pre-selected. We can export User Preferences from LCM, and the corresponding XML file has a section for each user’s variables. It’s like this:

<UserPreferences>
  <UserPreference UserName="jason">
    <!-- some stuff here -->
    <UserVariables>
      <Variable Name="Scen_UserVar" Value="Forecast"/>
      <Variable Name="Ver_UserVar" Value="Working"/>
    </UserVariables>
  </UserPreference>
</UserPreferences>

There’s, of course, a UserPreference section for each user. We can edit the variables here in this config file and then import it into the target system (or back into the current one) to fill in the values. A couple of notes to consider:

  • If you try to strip out the other stuff in the UserPreference section so that it doesn’t get touched, it’ll just load defaults for that user – so you might not want to blow out the user’s settings that way.
  • The reason for trying to do the above would be if you’re just copying and pasting the same block of XML for each user.
  • A user in the target system might not be in the User Preferences export – you can create that user’s entry manually by copying and pasting another user’s block.

It’s incredibly likely that there’s a better way to do this, or some magical option I don’t know about that’ll take care of it, but I wasn’t aware of one and decided to brute-force it. The copying/pasting and editing was the “hardest” part, as I couldn’t see how this procedure could be reasonably automated in UltraEdit or Notepad++ or something, so I just did it by hand.
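
For what it’s worth, something like the DOM-based sketch below could probably automate the editing step; the file names are made up, and the element and attribute names are taken from the LCM export snippet above:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SeedUserVariables {
  public static void main(String[] args) throws Exception {
    // Parse the LCM User Preferences export (file names here are hypothetical)
    Document doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder().parse(new File("UserPreferences.xml"));

    // Set the same default Scenario and Version for every user's variables
    NodeList variables = doc.getElementsByTagName("Variable");
    for (int i = 0; i < variables.getLength(); i++) {
      Element variable = (Element) variables.item(i);
      if ("Scen_UserVar".equals(variable.getAttribute("Name"))) {
        variable.setAttribute("Value", "Forecast");
      } else if ("Ver_UserVar".equals(variable.getAttribute("Name"))) {
        variable.setAttribute("Value", "Working");
      }
    }

    // Write the modified document back out for import into the target system
    TransformerFactory.newInstance().newTransformer()
        .transform(new DOMSource(doc), new StreamResult(new File("UserPreferences-seeded.xml")));
  }
}

This only touches the Variable values, so the rest of each UserPreference section stays as-is.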

Hope this helps someone!

Flipping an ODI model to a different technology and kicking the interfaces

One of my recent ODI projects is a relatively complex transformation job. I am effectively building up a master/detail set of records from a single table. The single table isn’t really a single table in the source; it’s multiple tables. Within ODI I make several passes on it, dialing in the fields with interfaces and procedures. I opted to use the in-memory engine (MEMORY_ENGINE) because I thought the architecture would be a little cleaner, and the amount of data being pushed through is not huge.

Everything was fine until I hit a legitimate ODI bug. I actually found a relevant case in Oracle support for it: ODI-1228 “statement is not in batch mode”. There was even a patch! Unfortunately, the patch required a version of ODI higher than what I had available. So on a tight deadline, my choices were to push through an ODI upgrade or to find some workaround.

I decided to see if I had a low-cost option of switching from the memory engine to just using an Oracle schema as a staging area (note that the package and interfaces themselves are all just moving data between various Oracle servers – nothing Hyperion-related, even). So I went into my model for the staging table I was using and just switched its technology from the In-memory Engine to Oracle (using the drop-down). No complaints from ODI there.

Next I went into one of the interfaces that was previously set up with the full source/staging/target flow. I went straight to the Flow tab but ran into some issues, including fun little NullPointerException errors, which is always a good time. The thing is, I changed the technology on a model being used in various interfaces, but it’s not like any part of ODI went into those interfaces to say “Hey, this changed…” – in fact, when you change the technology of a model, ODI helpfully and plainly just says “Hey, this is likely to break stuff. Proceed at your own risk… THAR BE DRAGONS.” Or something like that.

Anyway, I found that I could sort of ‘kick’ the interface when I opened it by checking the “Staging Area Other Than Target” option, then turning it off again (it was off in most of my interfaces). This forced the interface to recalculate and reset its flow, taking into account the updated technology of the model. There might be a better way to do this than this “fuzzy” method, but it worked, and I didn’t have to redo the plethora of interfaces in this package.

Hopefully this helps someone else out someday!