cubeSavvy Review

One of my personal blogging goals this year is to take a tour of apps, code, libraries, and other third-party tools in the Hyperion ecosystem. I have some cool stuff on deck to review, starting today.

Today I’d like to take a look at Harry Gates’ cubeSavvy. cubeSavvy bills itself as “Planning without Planning”. Or, put another way, it’s a web-based interface for Essbase cubes, without all of the additional infrastructure and setup that Planning entails. This is an interesting approach. Let’s think about it for a moment.

As many of you know, by design, Hyperion Planning sits on top of Essbase, with its artifacts synchronized down to the underlying cubes. This design has some drawbacks and some advantages that are possibly worth musing on in a future post. Planning also brings a lot of extra functionality to the table that manifests itself in the user interface and/or is pushed down in some way to the underlying cube. cubeSavvy comes to the table and more or less says, “Hey, let’s do away with all of that and get a little more purist about this: let’s define grids (similar in concept to forms in Planning) that work with vanilla Essbase functionality – and let’s just manage the cube directly instead of pushing and synchronizing things down to it.”

So in theory, if you have an Essbase server up and running and then stick a cubeSavvy server in front of it, define some grids and provision some users, you’ve got a web-based budgeting and planning system on top of your cubes. Interesting.

In a first for me and this blog, this article will be split into several pages, covering Installation & Setup, Configuring Grids, User Experience, and Closing Thoughts. Please enjoy this whirlwind tour of cubeSavvy!

Pre-seeding Hyperion Planning User Preferences with values for a smoother user experience

Wow, I think I am actually writing an article on Hyperion Planning. I think pigs are flying right now. I have been helping out on a system upgrade for the last few months where we are in many ways “refactoring” a Planning deployment. I’m borrowing that term from the software world: we are changing how things work under the hood without the explicit intention of changing how things look to users. One of the changes we are making, however, is to introduce some user variables so that users can change their Version and Scenario.

Just to be clear, the variables are the ones that are set in the preferences menu. And we’d like to provide some defaults so that users have the most likely choices pre-selected. We can export User Preferences from LCM; the corresponding XML file has a section for each user’s variables. It looks like this:

<UserPreferences>
  <UserPreference UserName="jason">
    <!-- some stuff here -->
    <UserVariables>
      <Variable Name="Scen_UserVar" Value="Forecast"/>
      <Variable Name="Ver_UserVar" Value="Working"/>
    </UserVariables>
  </UserPreference>
</UserPreferences>

There’s, of course, a UserPreference section for each user. We can edit the variables here in this config file and then import it into the target system (or back into the current one) to fill in the values. A couple of notes to consider:

  • If you try to strip out the other stuff in a UserPreference section so that it doesn’t get touched, the import will just load defaults for that user. You probably don’t want to blow away the user’s settings that way.
  • The reason you’d try the above would be if you wanted to copy and paste the same block of XML for each user.
  • A user in the target system might not be in the User Preferences export – you can create an entry for them manually by copying and pasting a different user’s block.

It’s incredibly likely that there’s a better way to do this, or some magical option I don’t know of that’ll take care of it, but I wasn’t aware of one and decided to brute-force it. The copying/pasting and editing was the “hardest” part; I couldn’t figure out whether this procedure could be reasonably automated in UltraEdit or Notepad++ or something, so I just did it by hand.
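
In hindsight, the LCM export is plain XML, so even a short script could handle the editing. Here’s a minimal sketch (untested against a real export) that assumes the structure shown above, an export file named UserPreferences.xml, and the two variable names from this example:

import xml.etree.ElementTree as ET

# Defaults to pre-seed; the variable names come from the snippet above
DEFAULTS = {"Scen_UserVar": "Forecast", "Ver_UserVar": "Working"}

tree = ET.parse("UserPreferences.xml")  # file name is an assumption
for pref in tree.getroot().iter("UserPreference"):
    variables = pref.find("UserVariables")
    if variables is None:
        # This user had no variables section yet, so create one
        variables = ET.SubElement(pref, "UserVariables")
    seen = set()
    for var in variables.iter("Variable"):
        if var.get("Name") in DEFAULTS:
            var.set("Value", DEFAULTS[var.get("Name")])  # overwrite with default
            seen.add(var.get("Name"))
    for name, value in DEFAULTS.items():
        if name not in seen:
            ET.SubElement(variables, "Variable", Name=name, Value=value)

tree.write("UserPreferences_seeded.xml", encoding="UTF-8", xml_declaration=True)

Run it against a copy of the export, then import the seeded file as usual.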

Hope this helps someone!

Hyperion Health Check Hit List

I am asking for your Hyperion wisdom again, oh beloved readers! In particular, I am soliciting information from you consultanty types and those of you who otherwise hop into a lot of different Hyperion systems.

Oftentimes a client needs help speeding up an Essbase/Hyperion process/server/cube that has become unwieldy and slow. So you take a look at things. When you hop into an environment and assess its health, what do you look for, from a Hyperion point of view? For example, on BSO cubes I go right for the stats and check out the block density and average cluster ratio. From there I can go in any number of directions, looking at the overall outline, automation, cache settings, and so forth. So I have this already:

  1. Check block density and other cube stats (see the sketch after this list)
  2. Review outline for any red flags
  3. Check size of index cache with respect to the size of the index itself
  4. Take a look at the outline for members that can be removed, deleted, or made dynamic calc, etc.
  5. Ensure logs are not huge
  6. Look for XCP files, if any
  7. And a few others
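
On the first item: you can pull those stats without clicking through EAS at all, since the MaxL statement query database APP.DB get dbstats data_block reports block density, average cluster ratio, and friends. Here’s a minimal sketch that wraps it in a script – the server, credentials, and cube names are placeholders:

import os
import subprocess
import tempfile

# Placeholder server, credentials, and cube – point these at your own environment
MAXL = """login admin password on localhost;
query database Sample.Basic get dbstats data_block;
logout;
"""

# Write the statements to a temp script and hand it to the MaxL shell
with tempfile.NamedTemporaryFile("w", suffix=".msh", delete=False) as f:
    f.write(MAXL)
    script = f.name

try:
    result = subprocess.run(["essmsh", script], capture_output=True, text=True)
    print(result.stdout)  # block density and average cluster ratio show up here
finally:
    os.remove(script)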

I am really, really curious whether you have something you look for, particularly if it’s something you might dive into with EAS. I’m working on something interesting, and your feedback is very much appreciated! It can be anything at all: checking the server, the app or the cube, the file system, calc scripts, business rules, automation, and so on. Thanks!

Hyperion Essbase wish list: Import a compressed file

I thought this up while attending Dan Pressman’s Kscope presentation, How ASO Works and How to Design for Performance, which definitely appealed to my inner Hyperion geek. Dan did a crazy deep dive on performance tuning, with particular respect to loading ASO. He had some pretty bangin’ hardware to play with, too.

Long story short (and many of us have known this for a while): there are ways to format your Essbase load files so that they load faster. Basically what you are trying to do is make things easier on Essbase: stream in less data, don’t repeat things you don’t need to repeat, don’t thrash blocks in and out of memory, and so on. That’s all well and good.
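
To make the “don’t thrash blocks” point concrete: for a BSO load, one classic formatting trick is to sort the file so that all rows for a given sparse-member combination arrive together, letting Essbase bring each block into memory once. A minimal sketch, assuming a comma-delimited file whose first two columns are the sparse dimensions:

import csv

# Assumption: columns 0 and 1 of the load file are the sparse dimensions
with open("load_file.txt", newline="") as f:
    rows = list(csv.reader(f))

# Group rows by sparse combination so each block is touched only once
rows.sort(key=lambda row: (row[0], row[1]))

with open("load_file_sorted.txt", "w", newline="") as f:
    csv.writer(f).writerows(rows)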

The advent and proliferation of SSDs in the enterprise have done wonderful things for Hyperion performance by eliminating a lot of the performance quirks of rotational media and the penalties from fragmentation. But at the end of the day we are still looking for ways to pump ever-increasing amounts of information into our cubes even faster than we were the day before.

For instances where we are loading a file that resides on the same machine as the Hyperion apps/cubes, or even across the network, I wonder what performance benefits, if any, are to be had if we had the ability to import a zip file.

Zip files can get awesome compression on text files. They can also have their uncompressed contents streamed. In other words, it’s not necessary to extract the contents of a zip file before you can read the contents (starting at the beginning). In theory, if one achieved moderate to decent compression on their zip file and handed that to Essbase (say with a specialized import data MaxL command), it would be saving time on the disk-read aspect of the data load, at the expense of some additional CPU usage. Many Essbase load operations are disk I/O bound anyway so this seems like a reasonable tradeoff to make.
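
To illustrate the streaming claim, here’s a minimal Python sketch; most zip libraries offer something similar. The archive and member names are made up, and the loop just counts lines as a stand-in for feeding records to a loader:

import io
import zipfile

count = 0
with zipfile.ZipFile("export.zip") as archive:  # archive name is made up
    with archive.open("data.txt") as member:    # member name is made up
        # Lines are decompressed on the fly; nothing is extracted to disk
        for line in io.TextIOWrapper(member, encoding="utf-8"):
            count += 1
print("streamed", count, "records without extracting the archive")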

As an additional benefit or elaboration on the concept, perhaps multiple text files could be placed into the same zip file, perhaps with a “load manifest” or options on the load command, and Essbase would attempt to parallelize the data load to the extent it can. This would likely be an add-on feature once the basic support is in place. In all you would need to augment the data load process with a zip file reader routine (this would be an off-the-shelf library that is quite common), a couple new MaxL import data variants, and an augmentation to the Java API. I suppose you could leave the MaxL command alone and just program the interpreter to look for a .zip extension and treat it accordingly, but it seems like it’d be the better choice to specifically indicate the data load is from a compressed file.

Of course, if you’re loading straight from SQL this whole thing wouldn’t apply to you. Loading data files may seem low-tech, but it’s incredibly common, and oftentimes I prefer it since I have an exact text file to tie back to if need be, versus a possibly changing SQL data store (but that’s a conversation for a different blog post). This feature would cater to the performance nuts out there – and if Kscope is any indication, there are plenty. I’d be curious to hear anyone’s thoughts on this.

Contribute to Open Source Hyperion Utilities and Ideas

Would you like to contribute to open source Hyperion utilities, or maybe just provide ideas for some? There is a small but growing number of third-party tools and utilities available in the Hyperion ecosystem. The most well known is probably the Outline Extractor.

But there are several other Hyperion tools, many written by yours truly; you can find most of them under the Projects section of this website. These include such fun items as a way to generate test data for a cube, a hack for loading Essbase data without a load rule (even from any JDBC source!), a method for generating substitution variables based on time and date, a tool for summarizing rejected records from a reject file, and more.

At the moment, all of these tools are written in Java. It’s the language I am strongest with, and it fits very well within typical enterprise architectures. I am even working on a few more goodies that will be released in the upcoming weeks and months. Generally speaking, most of the utilities I have written are cleaned-up versions of tools that I created during the course of my work and thought someone else might benefit from.

Once again, I have returned from another awesome Kscope armed with dozens of other ideas for utilities that the greater Hyperion community can benefit from. Many of these ideas are driven by other people expressing a pain point they have or starting off a sentence with something like, “Wouldn’t it be nice if…?”

That all being said, I just wanted to throw out there to the Hyperion technical community and world at large that if you are interested in helping create these kinds of things to benefit the community, please let me know! If you have your own ideas, I’d love to hear about them. You don’t even have to be a programmer! In fact, if you are a business person (who happens to read this geeky blog), but have an idea for some utility that would benefit Hyperion users and administrators, get that idea out there. There are many ways to help open source projects – testing, documentation, support, marketing, and so on.

Work comes first, of course. Generally speaking, these tools and utilities get my attention on an “as possible” basis, so projects, such as they are, get released when they can be. I just want to get a feel for who is out there in the community and interested in hacking on a few things.

Thanks,

Jason

Kscope 13 Day 1 – Deep Thoughts, part 2 of 2

I thought I’d have a little more time to write during the conference, and yet here I am sitting at the airport after a long and eventful week. Well, I had good intentions, at least. For those interested, here are a few other thoughts to go along with part 1 of my recap of the first day.

Cross-pollination of ODTUG sessions as an indicator of broader convergence in the Oracle space

Although I didn’t have a chance to attend them, ODTUG featured some cross-pollination sessions this year, where an EPM guy could see what it’s like on the other side and an Oracle guy could see what it’s like on the EPM side. I thought this was a really cool idea but also sort of interpreted it in another way. More so than at any other ODTUG conference I’ve been to, there were sessions available that were not ostensibly in the EPM track but still appealed to me. And this isn’t necessarily because the scope of my interest has miraculously increased, either: it’s simply because Essbase is being leveraged as the heart of other tools, and the ways that our tools work and we provide solutions to customers are converging. My prediction (one that is hardly insightful) is that the convergence continues to the point where the line between the different camps is almost non-existent.

Vanilla Essbase Shops Seem on the Decline

I was in a session where the presenter asked for a show of hands regarding who had Essbase, Planning, and other tools. One of the questions was “Who has just Essbase and nothing else?” and given the sizable crowd, just a few hands went up. I can’t say I was surprised, but I can say that I was… disappointed. I’ve been pretty vocal (though not on this blog, I suppose) about my qualms with the way Oracle bundles and sells Essbase and other products. To be succinct, I find it regrettable that we are operating in a context where Essbase is arbitrarily bundled with other products in such a way as to benefit Oracle’s bottom line first and its customers’ needs second. This is not to say that Essbase exists in a vacuum and that there are no other tools to go with it, just that I would challenge Oracle to show that the current way of doing things is the best, or even a good, way.

WaMu is a Hyperion Customer

At some point during one of the presentations, I saw a slide with a list of dozens of company logos, including one for the now-defunct Washington Mutual. I had a brief conversation in my head with a fictitious Oracle marketing person about whether it makes sense to leave that logo on a customer slide. In any case, it’s mildly amusing to think about.

“Finance is Still Stuck in Spreadsheets”

I heard this at some point. It’s true, but I don’t believe the negative connotation is necessary. My real takeaway is this: spreadsheets are ubiquitous and useful. Many of them evolve into complex tools with mazes of VLOOKUPs and byzantine logic. One wonders how much better these organizations might fare if they recognized their homegrown spreadsheet mazes evolving into something complex and unwieldy, and then had a tool with a lower barrier to entry than, say, Planning: less onerous administrative requirements and an economic model that makes sense for fewer than 25 or so users.

Closing Thoughts

These are just some of the high-level thoughts from my notes to go along with part 1, and they round out my summary of the first day. As time permits I’ll post some thoughts on specific sessions, and even my own session!

A lot of you – an incredibly and surprisingly high number of you – came up to me and said that you read my blog. I really appreciate the kind comments. As I’ve mentioned before I just find this to be an increasingly quasi-therapeutic place to post my inane thoughts on whatever, which is reason enough to do it. The fact that some of you out there enjoy this is just icing on the cake. Please don’t be a stranger in the comments section.

Papercut: Calc script verification with FDM tokenization

Here’s a papercut I’d like to present in the context of my thoughts on papercuts in the Essbase ecosystem. I’ve recently been doing a bit more work with FDM. After an FDM data load you need to calc the data related to the intersections you just loaded (although I suppose you could just calc the whole thing if you wanted to). In other words, if you are loading data for a certain time period or location or whatever, you’ll want to roll that data up. Nothing special there. So you have a normal calc script, except it has been parameterized for FDM: FDM searches for tokens in the script, such as in FIX statements, and replaces each template variable with the real value. So something like [T.Location] gets replaced with the actual location. But guess what: when you go to validate the calc script now (and you do always validate your calc scripts, right?), it doesn’t validate.

Hmm.

So, I’m not an FDM expert. Maybe there’s an option to work around this that I don’t know about. Maybe you can stuff these tokenized names into a dummy alias table so that you can at least validate. But it seems like the “right” way to handle this would be a solution where you can still validate the calc. I guess one straightforward way might be an option to ignore quoted values that are wrapped in brackets. But it feels wrong that using FDM and tokenizing your calc script leads you down this path. If you have worked with this and have a solution that I don’t know about because I’m not an FDM expert and you are, please let me know. But right now it’s just one of those quirky, less-than-ideal issues that I consider an Essbase Papercut.
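
For what it’s worth, the workaround I’d probably reach for in the meantime is decidedly low-tech: keep a throwaway copy of the script with the tokens swapped for real, representative members, validate that copy, and hand the tokenized original to FDM. A minimal sketch (token names, members, and file names are all made up for illustration):

# Map each FDM-style token to a real, representative member
SAMPLE_MEMBERS = {
    "[T.Location]": '"Texas"',
    "[T.Period]": '"Jan"',
}

with open("agg_fdm.csc") as f:  # the tokenized calc script
    script = f.read()

# Swap tokens for real members so the copy will validate
for token, member in SAMPLE_MEMBERS.items():
    script = script.replace(token, member)

with open("agg_validate.csc", "w") as f:  # validate this copy, then toss it
    f.write(script)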

Do you have an Essbase or Hyperion blog? Let me know!

I follow a plethora (you like that vocab word for the day?) of Essbase and Hyperion blogs, in addition to other technical blogs I love to follow such as on ODI, cloud computing, big data, and iOS development. I think I have a pretty solid list of blogs but there are always new ones popping up that I want to know about! Do you have an Essbase, Hyperion, EPM or other related topic blog? Please email or tweet it to me!

Also, some of my favorite Essbase-related blogs are linked in the footer of this website, so check them out. And similarly, if you like this blog, then please consider adding it to your blogroll or list of links so we can all share the Essbase blog lovin’.

Thanks!

My ODTUG Kscope13 presentation: Practical Essbase Web Services

It has been a few years since I last presented at Kscope, but I am back this year! I will be presenting on “Practical Essbase Web Services” – this will be my take on the new web services features from recent Essbase versions, as well as drawing on my experience developing mobile solutions, developing Essbase middle tiers with the Java API, and other approaches to extracting data from Essbase. For those of you in C# shops or wanting to get at Essbase data from your other favorite languages (I’m looking at you, PHP, Python, and Clojure), this should be a fun overview of your options. I’ll look forward to seeing you there – and if you are interested in the presentation but aren’t going to ODTUG’s Kscope, let me know!

Book review: Oracle Essbase 11 Development Cookbook

I was recently given the opportunity to review another Essbase book from Packt: Oracle Essbase 11 Development Cookbook by Jose Ruiz. Overall I would say I am pleased with the book. It covers a lot of ground and a lot of disparate tools, many of which are scantily documented elsewhere.

Before I really get into the review, I must say that I have never been a big fan of the approach that technology cookbooks take. I’m also not a huge fan of having a book for a specific version of software. Of course, in order for the cookbook approach to work, you don’t have a choice but to tie the book to a version of the software. This is because the recipes are sequential and very explicit (as with cooking a recipe in real life) and rely on the exact version of the software in order for the detailed steps to work. I’ve grown up with software, and am a cross between a visual and a kinesthetic learner, so my preference is to have concepts and goals explained to me and then to go exploring on my own. To this end, I find technology recipe books tedious as they laboriously lay out the steps: click this, then click that, enter this text, and 15 steps later you have a result.

So, my personal preference for book styles aside, this book largely succeeds for what it is: specific, methodical ways to perform a certain task. You won’t get a lot of explanation on why you might do something a certain way. In this regard, the book is useful as a complement to your Essbase literature rather than the place you would go to understand why you might want to accomplish some task.

Okay, I’ve beaten up on that horse enough.

As I said, I enjoyed the breadth of content in the book. There are detailed recipes for setting up your relational data store to load a cube with EIS and Essbase Studio, building load rules and loading data to BSO/ASO cubes, writing calc scripts, working with Star Analytics, using EAS and HFR, writing MaxL scripts, and provisioning security. It even covers working with the revered Outline Extractor tool.

All of this content was really nice to see in book form. One of the upsides to the recipe-format book is that it won’t spend a lot of time laboring over what a cube is or your first steps retrieving data with Excel. In fact, the book even says it’s not for beginners; it just jumps right in. I think this book can be a very handy reference for someone who needs something a little more guided than the technical reference (and less heavy).

On my arbitrary rating system, I would give this book four out of five stars. And again, that’s me trying to be fair to the book even though I’m not in love with the format: it largely accomplishes what it sets out to do. I’d say it’s a great addition to the pragmatic Essbase developer’s library, but certainly not the only book in it.