Essbase Java API digging: data load from FTP?

I spend a lot of time working with the Essbase Java API, so I have become pretty familiar with it. I have used it to build a middle tier for a mobile reporting solution, a core part of the innovative Drillbridge drill-through solution for Essbase, Planning, and Financial Reporting, and countless other little apps. (In fact, my experience with Essbase, Java, ODI, and Jython is all at the confluence of one of my upcoming endeavors, but more on that later…)

In any case, much like a trip to Costco, going through the Essbase Java API can be a bit of a treasure hunt at times. One such thing I came across recently had to do with the loadData() method on a cube object (IEssCube).

There are actually a few loadData() methods – in programming parlance, the method is overloaded. That is, there are multiple methods with the same name, but they differ in their parameter types, so Java can figure out which one to call based on the arguments you pass. Method overloading is frequently done for programming convenience.

For example, an object might have a method getGreeting(String name) that takes a name and returns “Hello ” plus that name. Another method with the same name might be getGreeting(String name, Date time) and this returns a greeting that is customized by the time of day.
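To make that concrete, here is a quick made-up sketch of those two overloads (the class name and the time-of-day logic are just for illustration):

import java.util.Calendar;
import java.util.Date;

public class GreetingExample {

  // Overload 1: takes just a name
  public String getGreeting(String name) {
    return "Hello " + name;
  }

  // Overload 2: same method name, different parameter list; the compiler picks
  // this version when a Date is passed as the second argument
  public String getGreeting(String name, Date time) {
    Calendar calendar = Calendar.getInstance();
    calendar.setTime(time);
    String prefix = calendar.get(Calendar.HOUR_OF_DAY) < 12 ? "Good morning, " : "Good afternoon, ";
    return prefix + name;
  }

  public static void main(String[] args) {
    GreetingExample greeter = new GreetingExample();
    System.out.println(greeter.getGreeting("Jason"));             // first overload
    System.out.println(greeter.getGreeting("Jason", new Date())); // second overload
  }
}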

The Essbase cube object in the Java API contains three loadData methods, and one of them caught my eye: there’s a loadData() method that takes a username and password. According to the docs, you can actually load a file over FTP (in which case the file name would be an FTP path, presumably with a server and maybe even prefixed with ftp://), and the username/password are the FTP username/password.
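For context, here is roughly the boilerplate you’d go through just to get your hands on the IEssCube object whose loadData() overloads I’m talking about. This is only a sketch: the credentials, provider, and server names are placeholders, and I’m deliberately not reproducing the exact loadData() signature here, so check the IEssCube Javadoc for your API version before relying on it.

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.session.IEssbase;

public class LoadDataSketch {
  public static void main(String[] args) throws EssException {
    // Placeholder connection details; substitute your own provider, server, and credentials
    IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
    IEssOlapServer olapSvr = (IEssOlapServer) ess.signOn(
        "admin", "password", false, null, "embedded", "essbase-server");
    IEssCube cube = olapSvr.getApplication("Sample").getCube("Basic");

    // From here you would call one of the cube.loadData() overloads; the one described
    // above takes the data file name (an FTP path in this scenario) along with the
    // FTP username and password. Signature omitted on purpose; see the Javadoc.

    olapSvr.disconnect();
    ess.signOff();
  }
}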

I thought this was kind of cool because it’s not functionality that’s visible anywhere in EAS. So it could be something buried in the API that is used behind the scenes, or maybe it was some plumbing done for PBCS. Maybe it has even been there forever. Like I said, I thought this was interesting. There are a few other fun tidbits I’ve seen in the API over the years, so I’ll try to point those out in the future. If you know of any, please send them my way!

Essbase Outline Export Parser released

I had a use-case today where I needed to parse an XML file created by the relatively new MaxL command “export outline”. This command generates an XML file for a given cube, covering either all dimensions or just the dimensions you specify. I just needed to scrape the file for the hierarchy of a given dimension, and that’s exactly what this tool does: pass in an XML file that was generated by export outline, then pass in the name of a dimension, and the output to the console will be a space-indented list of members in the dimension. More information on usage is available at the Essbase Outline Export Parser GitHub page, including sample input, sample output, and command-line usage.

Also note that the venerable Harry Gates has created something similar that includes a GUI in addition to working on the command line. While both are written in Java, we’re using different methods to parse the XML. Since I’m more familiar and comfortable with JAXB for reading XML, I went with that; in my experience it gives a nice, clean, and extensible way to model the XML file and read it without too much trouble. The code for this project could easily be extended to provide other output formats.
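To give a flavor of the JAXB approach, here is a stripped-down sketch. To be clear, the element and attribute names below (outline, dimension, member, name) are assumptions for illustration rather than necessarily what the export outline XML actually uses, so you’d adjust the annotations to match the real file.

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class OutlineSketch {

  // Assumed root element name; adjust to the actual export outline XML
  @XmlRootElement(name = "outline")
  @XmlAccessorType(XmlAccessType.FIELD)
  public static class Outline {
    @XmlElement(name = "dimension")
    List<Member> dimensions = new ArrayList<Member>();
  }

  @XmlAccessorType(XmlAccessType.FIELD)
  public static class Member {
    @XmlAttribute(name = "name")
    String name;

    @XmlElement(name = "member")
    List<Member> children = new ArrayList<Member>();
  }

  // Recursively print a member and its children as a space-indented list
  static void print(Member member, int depth) {
    for (int i = 0; i < depth; i++) {
      System.out.print("  ");
    }
    System.out.println(member.name);
    for (Member child : member.children) {
      print(child, depth + 1);
    }
  }

  public static void main(String[] args) throws Exception {
    // Usage: java OutlineSketch outline.xml
    JAXBContext context = JAXBContext.newInstance(Outline.class);
    Outline outline = (Outline) context.createUnmarshaller().unmarshal(new File(args[0]));
    for (Member dimension : outline.dimensions) {
      print(dimension, 0);
    }
  }
}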

Essbase Java API Consulting & Custom Development Available

I recently finished developing a solution for a client that involved writing a custom Java program using the Essbase API. The client was really amazed at how quickly I was able to develop the solution, because their previous experience using the API (or hiring someone to develop with it for them) was not nearly as productive or smooth.

I graciously accepted their compliment and then told them that I’ve simply been working with the Essbase Java API for a long time – almost a decade now. Not only that, but I have several helper libraries that I use in most of my projects that prevent me from having to reinvent the wheel. By this time the libraries are quite battle-tested and robust and help speed up the development of many common operations such as pulling information out of the outline, running MDX queries, programmatically doing a data load, pulling statistical information, and more. Instead of spinning my wheels writing and rewriting the same boilerplate code, I accelerate development and focus on creating a good solution for the task at hand.

That all being said, for those of you finding this blog post now or in the future, whether you’re an administrator, consultant, manager, or otherwise: if you need some help with a solution that involves Java development and the Essbase Java API, don’t hesitate to contact me. I am available through my consulting firm to do custom work or even fix existing solutions you already have that are exhibiting some quirk or need an enhancement. My extensive experience with Java and this particular API means that I can get up and running fixing your problem, not learning how to do it while on the clock.

Possible idea for a tool: cube delta

I have a question for my audience about a tool idea. Would it be useful to be able to tell what the data differences are between two cubes with the same (or highly similar) dimensional structure? For example, let’s say you had Sample/Basic on one server, and Sample/Basic on another server. Would it be useful to check for differences in the data loaded to them, if any?

I could see this possibly being helpful in checking for differences between cubes in development/QA/production, between archive cubes and ‘real’ cubes, and during testing when you spin off a side cube to check some calcs.

Just a thought. Let me know! After HUMA is kicked over the wall I’ll be looking for my next side project (as time permits) and I am trying to focus on things that will increase the productivity of Hyperion developers.

Thank you to Hyperion Unused Member Analyzer testers, and thoughts on future tools

Thank you all so much for helping out. I am absolutely blown away at the response that this utility has generated from all of you. Please let me know if you run into any issues.

Changing subjects (and zooming out) a bit, back to my efforts to understand what you (as consultants and Hyperion professionals) check during your “health check hit list”, it’s my goal over the next year to put together a suite of power tools that enable all of us to create, analyze, and maintain more robust solutions. HUMA is one such tool in the toolbox.

I have a few other ideas up my sleeve, but if you ever find yourself saying, “Self, I wish I had a tool for [fill in the blank]” or “I wish there were an easy way to…” then I would love to know about it. Even if it’s something you already do manually and laboriously, perhaps it can be automated, sped up, improved, and made usable by the community at large.

Beta testers wanted for Hyperion Unused Member Analyzer tool

I have been working on a tool called HUMA – Hyperion Unused Member Analyzer. The idea for it came out of some side discussions at Kscope a couple of months ago, and the premise is simple: Wouldn’t it be nice if there were an easy way to determine whether any members in a given cube are unused?

Given a server, database, and credentials, HUMA will connect to a cube, analyze its stored members, generate a list of all possible values, then iterate over it, analyzing the resulting data grids for the presence of data. If there are members with no data in them, they are shown to the user running the program. To increase performance, HUMA orders the grids and the sequences of members within the sub-grids so that they are aligned to the dense/sparse structure of the cube, allowing it to pound on the same hot blocks before moving on to grids with different sparse permutations.

On a pretty gutless VM of mine with Essbase running in 1GB of RAM, a standard Sample/Basic cube can be ransacked for data in about three seconds. Also, given the way the tool works, it’s not necessary to do a full export of a cube or anything since the analysis is based on the data that is queried and immediately discarded. So far it seems to work pretty well.

The goal is for HUMA to be one more tool in the toolbox for Hyperion/Essbase admins who want to analyze their environment and act on possible improvements. This goes hand in hand with my research and efforts to find out what we all do when we dive into a new system as part of a health check hit list. Trimming unused members from a BSO database can yield improvements (particularly for dense members).

In any case, version 1.0 of the tool is basically ready to go and I’d love to have a few people test it out and let me know of any issues!

Eclipse tips for you Java folks out there

I know there are a lot of you Java and Essbase folks out there besides me. I’m always interested in ways to use an IDE more effectively. I came across this list of Eclipse annoyances and easy solutions for them earlier today. I was already using some of these, but others were new to me. So if you want to get your environment dialed in a little bit, check these out. And for you IntelliJ folks… just keep being your normal smug selves. ;-)

Oh, and I’m thinking about doing a small series on “Essential Java for Essbase Developers” or something along those lines. It’d be something like a tutorial series on how to navigate around all of the Java terminology, features, and ecosystem in order to get straight to developing the solutions you want to develop, just a little bit quicker. Let me know if you have any thoughts or suggestions!


Using the Unix paste command to join files together by column

I can’t believe I didn’t know about this command-line utility until very recently. I was doing a little research on some text processing utilities and came across the “paste” command. As a Mac user I have this installed already, and it appears to be a fairly common Linux/Unix tool as well. It’s part of a suite of text processing utilities that are fairly standard. Oddly, I am very familiar with the likes of sed, grep, awk, and so on, and yet had not stumbled upon this one.

Anyway, imagine the following files, starting with names.txt:

Jason
Cameron
Tim

And numbers.txt:

555-1234
555-9876
555-2468

Then we just run paste:

paste names.txt numbers.txt

And we get this:

Jason   555-1234
Cameron 555-9876
Tim     555-2468

Paste just marries the files up by column, reading from each file. You can supply more than two files.

I don’t have an immediate need for this utility for processing Essbase data, but it just might come in handy someday, so I’m going to keep it in my back pocket. And for you Windows users out there, well, you know the deal: get Cygwin or whatever the latest and greatest Unix-on-Windows environment is.

Processing a file line by line in Java

I seem to use this pattern a lot, particularly in projects without dependencies (i.e., no Guava). What’s the cleanest way to iterate through a file line by line in Java? There are a lot of examples out there that are pretty messy. Here’s one I keep coming back to that is nice because it uses a traditional for loop to manage things. In this case the loop variable isn’t just a counter; it’s the more general form of the for loop, with an initializer, a loop condition, and an iteration statement.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

public void process(Reader reader) throws IOException {
  BufferedReader bufferedReader = new BufferedReader(reader);
  // Initialize, test, and advance the current line all within the for statement itself
  for (String line = bufferedReader.readLine(); line != null; line = bufferedReader.readLine()) {
    System.out.println("Line: " + line);
  }
  bufferedReader.close();
}

That’s it. That’s about as clean as I think I can get it. Note that the argument to the method isn’t a String filename but rather a Reader object, which alleviates the need for this method to catch FileNotFoundException, since the caller would presumably supply a FileReader or some other type of Reader.
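For example, a caller might wire it up like this (LineProcessor is just a hypothetical name for whatever class holds the method above, and data.txt is a placeholder path):

import java.io.FileReader;
import java.io.IOException;

public class LineProcessorExample {
  public static void main(String[] args) throws IOException {
    // Hand the method a FileReader; any other Reader (StringReader, InputStreamReader, etc.) works too
    new LineProcessor().process(new FileReader("data.txt"));
  }
}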