Did you know that ODTUG has a presence in Australia? Neither did I! The venerable Cameron Lackpour will be presenting on two of my favorite things: ODI and Dodeca. If you happen to read this blog down under (and I know some of you are based on the traffic logs!) I would seriously check this out. Now if only Cameron would spill the beans on some of the uber-secret Exalytics stuff he has been playing with…
In my ongoing effort to clean up some of my past creations and make them available to anyone that wants to use them, I am releasing Hyperion Rejected Record Summary. RRS is a small Java library/command-line program that analyzes one or multiple rejected record files (from a Hyperion data load) and provides stats on them.
I have used something similar in the past as part of an automation process that summarized anything that didn’t load to a cube and emailed it to me for further analysis. That’s exactly what this does.
Further, while this can be used as-is on the command-line, it is also fashioned into a tight Java library with a clean API and no dependencies that can be embedded into your own programs or a servlet. You could even call this from ODI to summarize a reject data file as part of an automation process.
For example, let’s say that you have the following data that doesn’t load:
\\ Member Ac.0170001 Not Found In Database 09 0170001 900 11 .00
\\ Member Ac.0170001 Not Found In Database 09 0170001 904 11 .00
\\ Member Ac.0170010 Not Found In Database 09 0170001 905 11 .00
\\ Member Ac.0170012 Not Found In Database 09 0170001 906 11 .00
Then in total you have 2 records rejected because of Ac.0170001 and 1 record rejected for each of Ac.0170010 and Ac.0170012. The rejected record summary/class will tell you that there were 4 total rejected records, 3 members causing this, that 50% of your records were rejected because of the top 1 most rejected record, and a host of other stats.
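As a rough sketch of the kind of summarization being described (this is illustrative code only, not the actual RRS API; the RejectSummary class and its methods are invented names for the example), the counting works like this:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only -- not the actual RRS API. Summarizes reject
// lines of the form: \\ Member <name> Not Found In Database <data fields>
public class RejectSummary {

    // Counts rejected records per offending member name
    public static Map<String, Integer> summarize(List<String> lines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            String[] tokens = line.trim().split("\\s+");
            // The member name is the token right after the literal "Member"
            for (int i = 0; i < tokens.length - 1; i++) {
                if (tokens[i].equals("Member")) {
                    counts.merge(tokens[i + 1], 1, Integer::sum);
                    break;
                }
            }
        }
        return counts;
    }

    // Fraction of all rejections caused by the single most-rejected member
    public static double topShare(Map<String, Integer> counts) {
        int total = 0, top = 0;
        for (int n : counts.values()) {
            total += n;
            top = Math.max(top, n);
        }
        return total == 0 ? 0.0 : (double) top / total;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
            "\\\\ Member Ac.0170001 Not Found In Database 09 0170001 900 11 .00",
            "\\\\ Member Ac.0170001 Not Found In Database 09 0170001 904 11 .00",
            "\\\\ Member Ac.0170010 Not Found In Database 09 0170001 905 11 .00",
            "\\\\ Member Ac.0170012 Not Found In Database 09 0170001 906 11 .00");
        Map<String, Integer> counts = summarize(lines);
        System.out.println(counts.size() + " members caused rejections; top share = "
            + topShare(counts));
    }
}
```

Running this against the four sample lines above reports 3 offending members with the top one accounting for 50% of the rejections, matching the stats described.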
The summary class acts as a DSL (domain specific language) custom to Essbase that can analyze these stats and report them in a meaningful fashion. Additionally, not just files can be analyzed — anything you can get into a Reader or InputStream is fair game, in case you happen to store your data in something besides a file or pull it down via other means.
As always, this software is free and open source (Apache 2) and available on Github. See the RRS page for more info and a link.
Due to all of my testing in H2 without a password I forgot to re-include the password parameter during execution — this should now be fixed. You could have still included it directly in a JDBC URL but it’s a little more convenient to just use a parameter. GitHub repository is here and has a link to the latest download jar.
In other news, my good friend/colleague Cameron Lackpour (link in sidebar) forwarded on a message about Hyperpipe to the fine folks at Oracle for their consideration, at least in terms of being a proof of concept. So who knows, maybe good things will happen. :D
I’m more of an RSS guy myself when it comes to blogs, but I’ve been wanting to fill out some of the content here for awhile and the old theme was getting a little dated. I am very proud to roll out a shiny new theme to freshen things up! Over time I’ll be adding more pages for my custom projects and whatnot.
Kind of a wordy blog title. There is plenty more information on the Github project page.
Based on a conversation with Cameron Lackpour, I wrote a small utility that can move data from any JDBC data source to an Essbase cube. You don’t need MaxL, a load rule, ODBC, ODI, or any of that stuff. I mean, you might want to use those things instead of this odd little one-off utility, but if you like living on the edge you can give this a try.
Hyperpipe works by piggybacking off some functionality that is already in the Essbase Java API. Craft a SQL query in a particular format and you can load up an Essbase cube without having to make a load rule or jump through too many other hoops. This could be useful in some situations. Hyperpipe is believed to work with all Essbase versions 9.3.1 or higher but has not been extensively tested. Hyperpipe is an open-source project released under the liberal Apache Software License — a business-friendly license that lets you do pretty much anything you want.
Please try it out if you’re interested and let me know if you have any questions, comments, suggestions, or issues.
Maven is a comprehensive build system for Java projects. A lot of people, including myself, have a love/hate relationship with Maven. The reasons for this relationship can be discussed at another time. In any case, used judiciously, it can make managing dependencies in Java projects much easier than handling them by hand.
Eclipse has pretty good Maven integration. It’s possible to setup a new project and browse for dependencies and add them automatically to your project. Everything just works. I develop quite a few Java applications that rely on the Essbase Java API, so I have imported the Essbase jar files to my local repository (since they are not available from a central public repository) to make development a breeze.
Here’s how you can do the same. First, you need to go get your Essbase jar files. These are installed on the Hyperion server. You might have to search around a little bit since the directories seem to change from release to release, but in the case of this stock Hyperion 9.3.1 server (with Hyperion installed in C:\Hyperion) they can be found at C:\Hyperion\AnalyticProviderServices\lib.
Here’s what the directory looks like on one of my machines:
Right now we’re just interested in the ess_japi.jar file. We’re going to import this in to our local machine’s Maven repository. This assumes you have Maven installed locally, of course. If not it’s pretty straightforward. Just Google around and all will be clear.
Maven is very particular about the versions of everything. It allows you to store multiple versions of files. This means that our single repository can store the files for Essbase 9.3.1, 11.1.1.3, 11.1.2.2, and so on, all next to each other. Since we’re importing this resource manually we are going to tell it the version. First though, let’s rename this local file to something more consistent with Maven naming conventions. Let’s rename it from ess_japi.jar to essbase-japi-9.3.1.jar (since this is a file from a 9.3.1 server). Change it accordingly for other versions. If this were 11.1.2.2 then we would make it essbase-japi-11.1.2.2.jar. Note that Maven “prefers” a versioning scheme of major.minor.revision but not all software (particularly Essbase) adheres to this, so we’ll do our best.
So now we have essbase-japi-9.3.1.jar. A simple command line will import this. From a command prompt in the same folder as the jar file, execute this command:
mvn install:install-file -Dfile=essbase-japi-9.3.1.jar -DgroupId=com.essbase -DartifactId=essbase-japi -Dversion=9.3.1 -Dpackaging=jar
Each -D indicates a parameter we are filling out: the name of the file, a Maven group ID (which we’ll decide to make com.essbase), what the name of the artifact itself should be (essbase-japi), the version, and lastly that it is a jar file. You’d think Maven could infer some of this for us but we only have to do this once in a blue moon so it’s not so bad. Maven will copy the file to the local repository. To make it visible from Eclipse you will likely have to rebuild your Maven repository index which is no big deal.
Now when we are specifying the dependencies for our projects from Eclipse, we can easily browse it by name and add it in to our Maven POM file:
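For reference, the resulting dependency entry in the POM uses the same coordinates we supplied to the install command:

```xml
<!-- Matches the groupId/artifactId/version from the install-file command -->
<dependency>
    <groupId>com.essbase</groupId>
    <artifactId>essbase-japi</artifactId>
    <version>9.3.1</version>
</dependency>
```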
Now we’re good to go. We can easily include this artifact in future projects quickly and easily. This is particularly useful if you happen to download the source code for some of my Essbase-related open source projects, which as of late rely on Maven for dependency management.
I was feeling a little bit whimsical last week and wanted to get a little use out of my SurveyMonkey account, so I decided to do a quick poll: what is the proper file extension for MaxL scripts?
This issue initially arose for me when I was heckling Cameron Lackpour at one of his presentations a few years ago. My memory must be a little faulty because at the time I could have sworn that he liked .mxl, whereas I am more of an .msh guy. So I wanted to settle this once and for all.
Oracle, for its part, doesn’t provide a ton of consistency on this issue, as scripts created from EAS seem to suggest a .mxl extension, whereas the script interpreter and commands seem to suggest that .msh is a little more on the recommended side. I have seen both in environments. Literally both, as in, some scripts are .mxl and some are .msh, and sometimes this naming inconsistency even exists in the same set of automation. Shudder.
Without further ado, here are the results.
- Total responses: 21
- .msh: 9 (42.9%)
- .mxl: 10 (47.6%)
- .maxl: 2 (9.5%)
- Other: 1 (this wound up being entered as .mxls)
So, there you have it. I would like to note, by the way, that if you chose other then I implied with your answer that you were a ‘monster’. I’m only half-joking. Way to think outside the box. Anyway, I haven’t personally seen .maxl scripts in production but someone on the Network54 forum commented that, hey, down with 8.3 file naming and in with the whole name as extension. I have to admit, I never really thought about this in the context of MaxL scripts, but oddly I do find it a little disgusting when HTML files have a .htm extension rather than a full .html extension.
Suffice it to say, I am more than a little disappointed with these results and that the .msh file extension lost in a neck-and-neck battle. I’m going to pretend that this survey never happened and that .msh is the one true script extension to rule them all.
Thank you all for submitting answers to this somewhat lighthearted survey. If you have ideas for further issues to explore and survey the community about, please send them to me and I’ll get another survey going!
I received my copy of Developing Essbase Applications – Advanced Techniques for Finance and IT Professionals some time ago and spent every spare minute I could find devouring it. I had been eagerly anticipating the arrival of the book on my doorstep, hoping and wanting it to be the Essbase book to end all Essbase books. While it has its flaws, it is a must-have for the dedicated Essbase practitioner’s bookshelf. As I always say in these reviews, there are precious few Essbase books out there, so anything that helps the cause is welcome as far as I am concerned.
Developing Essbase Applications sports an impressive and diverse array of contributors. Most of the names are easily recognizable to anyone that has taken a break from writing a calc script to seek out an Essbase blog article or Google for some help on the Essbase forums. Seriously, check out this who’s who list of Essbase folks that helped create this thing: Cameron Lackpour, Dave Anderson, Joe Aultman, John Booth, Gary Crisci, Natalie Delemar, Dave Farnsworth, Michael Nader, Dan Pressman, Robb Salzmann, Tim Tow, Jake Turrell, and Angela Wilcox.
While this is probably the greatest strength of the book, it also inevitably contributes to my main complaint: it is not so much a book as a collection of books. This divide-and-conquer approach is probably the only realistic way to get so many people together to create such a thing, and the necessity of doing so is perhaps a reflection of the increasing breadth and scope of the Essbase ecosystem itself.
So for those of you looking to start on page 1 and work your way front to back (as I am wont to do with a good programming book), you can do that, but it’s not necessary since the chapters don’t really build on each other. They really do feel like a dozen books bound together, each with its own table of contents and style. The voice of each chapter author really shines through. In this regard, the book can be thought of as more of a reference, and indeed, some chapters are so packed full of information that there’s no reasonable way to absorb it all in one reading.
That all being said, what about the actual content of the book? It’s impressive, if slightly disjointed. In order, the chapters cover the following: Essbase infrastructure, tackling bad data, Essbase Studio, BSO, converting BSO to ASO, MDX, ASO and performance, Essbase Java API, system automation with Groovy, Advanced Smart View, and how to successfully manage an Essbase system.
Some chapters are more useful than others, and some chapters are definitely “stronger” than others. I can easily imagine that any two readers asked to rank the chapters by preference or usefulness would come up with completely different orderings.
The way in which chapters convey their information to the reader also varies significantly, as does the degree of difficulty involved in extracting the useful information from each one. This thing is filled with gold and diamonds, but it’s up to the reader, in no small part, to extract it, as it is rarely handed over on a silver platter.
For example, starting things off is John Booth’s chapter about Essbase infrastructure. Clearly, John is a very smart guy with a lot of experience, a recent Oracle ACE who graciously donates his time in the form of numerous forum posts and even creates Amazon EC2 images of Essbase servers. However, I found that his chapter reads more like an animated brain dump of Essbase infrastructure. I imagine having a beer with John and saying “Tell me everything you know about infrastructure”, and Chapter 1 is the result of that conversation. At the end of the day, I learned a lot, but as far as book form is concerned, I would have appreciated more structure and progression in the manner in which the information was doled out.
Cameron picks things up in Chapter 2, which is ostensibly about slaying bad data in Essbase, a ubiquitous problem for any database administrator. After a few pages, though, Cameron takes an immediate left turn into all things ODI and covers that instead. While also useful and insightful, as pedantic as it sounds, I found myself wishing the chapter had just been framed as being about ODI itself.
The “voice” in each chapter also varies, and in many instances a chapter that ostensibly wants to explain a concept to you talks as if you already know it. Going along again with my “you have to mine the information” analogy, I would say it’s also the responsibility of the reader to have been a somewhat astute observer of the various acronyms and terms in the Essbase ecosystem, even if they aren’t familiar with them beyond being able to simply unwind the acronym.
The book covers the Essbase Java API and brings in Mr. Essbase Java API himself, Tim Tow, to help out. As an experienced Essbase Java API guy myself, I certainly read this chapter very enthusiastically, and Tim does not disappoint. That being said, in my experience I haven’t found a lot of overlap between Essbase administrators and Java programmers (much to my chagrin). So while I welcome the content, in some ways I question its usefulness to the average Essbase administrator. Of course, I could be mistaken, and at the very least, exposing more people to the Java API in the hopes that they will pick it up and do something cool with it (as I have tried to do myself with various Essbase-related open source projects) is extremely welcome.
The rest of the chapters round out topics from MDX to Groovy. Gary Crisci is the quintessential MDX practitioner, having presented on it and posted many an Essbase forum comment about it, and he does a nice job, as always, of conveying information on a topic that can be challenging. I have looked at Groovy in the past but haven’t made the jump just yet (too busy with Clojure and Go, I guess), but I found the chapter to be an enjoyable read for what surely must be an uncommon, if not esoteric, way to automate Essbase.
Lastly, on a purely aesthetic and non-functional note, I would have enjoyed more and better visuals in the book; in fact, the formatting of the book leaves something to be desired. For a tech book I would have loved to see something more modern than Times New Roman (for example, the formatting in O’Reilly and Apress books is second to none). Many of the tables and figures are laid out so as to be unhelpful and awkwardly cram information into a weird presentation, and a perennial pet peeve of mine is inconsistent framing, cropping, and styling of screenshots. Of course, this is all pretty pedantic stuff and has more to do with my almost closet obsession with fonts and data visualization.
All that being said, all of the little things that I didn’t like about the book or thought could be better are easily and handily overshadowed by all of the things that I did like about it. This thing is jam-packed full of content and, perhaps more importantly, content that is based on the vast experience of some of the top Essbase people in the industry. These aren’t people pretending to know their stuff or trying to fool you; these are people at the top of their game in their field sharing some top-shelf knowledge with the rest of us. You just have to dig a little to get at it. At $69.99 list price on Amazon, this book is a little bit pricey for the average tech book. But you know what? Even if you only got one or two useful things out of it, it would absolutely be worth the price. This is an absolutely essential book for anyone that is serious about using Essbase and being a successful administrator or consultant. On a classic scale from one to five stars, I easily give this four stars.
Continuing in the same spirit as the release of Jessub, I am happy to announce the release of another open source tool meant to benefit Hyperion Essbase administrators: cubedata. cubedata is a simple tool that makes it easy to generate a text file that can be loaded to a cube. Well, of course, there’s nothing too special about this. The real purpose of the tool is to be able to generate huge text files based on the permutations of data that you specify. For example, let’s look at a simple data definition:
dimensions=Time,Scenario,Location,Departments
members.Time=P01,P02
members.Scenario=Actual,Budget
members.Location=Lo.806,Lo.808,Lo.822
members.Departments=Dt.01,Dt.02,Dt.03,Dt.04
So we just have a really simple definition in a configuration file. We run cubedata and tell it to use this file to generate some data for us. Out comes 48 rows of data: 2 time periods x 2 scenarios x 3 locations x 4 departments = 48 combinations. The generated data file looks like this:
P01,Actual,Lo.806,Dt.01,911.85
P01,Actual,Lo.806,Dt.02,887.100
P01,Actual,Lo.806,Dt.03,251.49
P01,Actual,Lo.806,Dt.04,115.64
P01,Actual,Lo.808,Dt.01,197.60
P01,Actual,Lo.808,Dt.02,704.71
P01,Actual,Lo.808,Dt.03,512.76
.. more rows ..
The configuration file lets you specify a few other options such as the column delimiter (default is comma), the numerical range of fact values to generate, and a few other things such as the “load factor” (what percentage of data combinations will have data).
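The core cross-product idea can be sketched like this (illustrative only, not cubedata’s actual code; CrossJoin and generate are invented names, and the real tool streams rows out rather than collecting them in memory):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.Random;

// Illustrative sketch only -- CrossJoin/generate are invented names, not
// cubedata's actual code. Emits one row per combination of the configured
// dimension members, with a random fact value appended.
public class CrossJoin {

    // Walks the cross product with an "odometer" index vector; the real tool
    // streams rows to a file, but this sketch collects them for brevity.
    public static List<String> generate(List<List<String>> dims, String delim, Random rng) {
        List<String> rows = new ArrayList<>();
        int[] idx = new int[dims.size()];
        while (true) {
            StringBuilder row = new StringBuilder();
            for (int d = 0; d < dims.size(); d++) {
                row.append(dims.get(d).get(idx[d])).append(delim);
            }
            // Assumed fact-value range of 0-1000 with two decimal places
            row.append(String.format(Locale.US, "%.2f", rng.nextDouble() * 1000));
            rows.add(row.toString());
            // Increment the rightmost index, carrying leftward on overflow
            int d = dims.size() - 1;
            while (d >= 0 && ++idx[d] == dims.get(d).size()) {
                idx[d--] = 0;
            }
            if (d < 0) {
                break;
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<List<String>> dims = Arrays.asList(
            Arrays.asList("P01", "P02"),
            Arrays.asList("Actual", "Budget"),
            Arrays.asList("Lo.806", "Lo.808", "Lo.822"),
            Arrays.asList("Dt.01", "Dt.02", "Dt.03", "Dt.04"));
        List<String> rows = generate(dims, ",", new Random());
        System.out.println(rows.size() + " rows; first: " + rows.get(0));
    }
}
```

With the sample definition above (2 x 2 x 3 x 4 members), this produces the expected 48 rows.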
cubedata, like Jessub, is licensed under the Apache Software License 2.0, a very permissive license that basically says you can do whatever you want to the code. The project is shared at GitHub in one of my public repositories.
I haven’t done extensive testing on the program but it does do a reasonable job of telling you if the configuration is incomplete or otherwise incorrect. I have tested it with quite a few dimensions and members and was able to generate a file with many millions of records quite easily. I don’t see any reason why it wouldn’t support generating absolutely massive amounts of data. It’s programmed in such a way as to iterate over the dataset, rather than try to keep it all in memory at once, meaning that there shouldn’t be any memory issues with regard to generating massive data sets.
So, there you have it. Another simple tool that might make developing and testing a little easier for you, particularly if you hate generating dummy data by hand and/or you don’t have a system to source data from that is ready or convenient.
As always, please feel free to let me know any suggestions or comments you may have and I will be happy to look in to improving the program. If you end up downloading the code and making tweaks please share them back if they would be useful to more people.
I was recently given the opportunity to review another Essbase book from Packt: Oracle Essbase 11 Development Cookbook by Jose Ruiz. Overall I would say I am pleased with the book. It covers a lot of ground and a lot of disparate tools, many of which are scantily documented elsewhere.
Before I really get into the review, I must say that I have never been a big fan of the approach that technology cookbooks take. I’m also not a huge fan of having a book for a specific version of software. Of course, in order for the cookbook approach to work you don’t have a choice but to tie to a version of software. This is because the recipes are sequential and very explicit — as with cooking a recipe in real life — and rely on the exact version of the software in order for the detailed steps of the recipe to work. I’ve grown up with software, and am a cross between a visual and a kinesthetic learner, so my preference is to have concepts and goals explained to me, then to go exploring on my own. To this end, I find technology/recipe books to be tedious as they laboriously lay out the steps: click this, then click that, enter this text in, and 15 steps later you have a result.
So, my personal preference for book styles aside, this book largely succeeds for what it is: specific, methodical ways to perform a certain task. You won’t get a lot of explanation on why you might do something a certain way. In this regard, the book is useful as a complement to your Essbase literature rather than the place you would go to understand why you might want to accomplish some task.
Okay, now that I have beaten up on that horse enough.
As I said, I enjoyed the breadth of content in the book. There are detailed recipes for setting up your relational data store to load a cube with EIS and Essbase Studio, building load rules and loading data to BSO/ASO cubes, writing calc scripts, working with Star Analytics, using EAS, HFR, writing MaxL scripts, and provisioning security. It even covers working with the revered Outline Extractor tool.
All of this content was really nice to see in book form. One of the upsides to the recipe format book is that it won’t spend a lot of time laboring over what a cube is and your first steps retrieving data with Excel. In fact, the book even says it’s not for beginners. It just jumps right in. I think this book can be a very handy reference for someone that needs something a little more guided than the technical reference (and less heavy).
On my arbitrary rating system, I would give this book a four out of five star rating. And again, that’s me trying to be fair to the book even though I’m not in love with this format, but it largely accomplishes what it sets out to do. I’d say it’s a great addition to the pragmatic Essbase developer’s library, but certainly not the only book in it.