The %COMPUTERNAME%/MaxL trick for syncing test to production

There’s an automation trick I’ve been using for a while that I like.  Having talked about automation and such with other people at ODTUG this year, it seems that several people are using this technique or a variant of it on their own systems.

Basically, the idea is that you want to be able to sync your production automation scripts from your test server as easily as possible.  The number one cause of scripts not working after being copied from test to production is that they have some sort of hard-coded path, filename, server name, user ID, password, or other value that simply doesn’t work on the server you are dropping your scripts onto.

Therefore, you want to write your automation scripts as generically as possible, and use variables to handle anything that differs between test and production.  As an added bonus for making the sync from test to prod just a little bit easier, why not dynamically choose the proper configuration file?

Assuming you are running on Windows (the same concept will work on other platforms with some tweaks for your local scripting environment), one way to handle it is like this: your main script is main.bat (or whatever).  One of the environment variables on a Windows server is COMPUTERNAME.  Let’s say that your test server is essbase30 and your production server is essbase10.  On the production server, COMPUTERNAME is essbase10.

Knowing that we can set environment variables in a batch file that will be available in MaxL, we could set up a file called essbase30.bat that has all of our settings for when the script runs on that server.  For example, the contents of essbase30.bat might be this:

SET ESSUSR=admin
SET ESSPW=password
SET ESSSERVER=essbase30

From main.bat, we could then do this:

cd /d %~dp0
call %COMPUTERNAME%.bat
essmsh cleardb.msh

Assuming that the two batch files and the cleardb.msh are in the same folder, cleardb.msh could contain the following MaxL:

login $ESSUSR identified by $ESSPW on $ESSSERVER;
alter database Sample.Basic reset data;
logout;
exit;

Now for a little explanation.  Note that in essbase30.bat I am explicitly setting the name of the server.  We could assume that this is localhost or the COMPUTERNAME value, but setting it here means that if we want to run the script against a remote server, we can do that as well (though if we did run it remotely, we’d have to name our batch file after the server running the script).  In general, more flexibility is a good thing (don’t go overboard, though).  The first line of main.bat (the cd command) simply changes the current directory to the directory containing the script.  This is handy if the script is launched from some other location, and it means we don’t have to hard-code a particular path.

Then we use a call command in the batch file to run the batch file named %COMPUTERNAME%.bat, where %COMPUTERNAME% will be replaced with the name of the computer running the automation, which in this case is essbase30.  The batch file runs, all of the SET commands inside it assign values to those environment variables, and control returns to the calling batch file, which then calls essmsh to run cleardb.msh (note that essmsh needs to be in your PATH for this to work).  The file cleardb.msh then runs and can “see” the environment variables that we set in essbase30.bat.

If we want to, we can set variables for folder names, login names, SQL connection names/passwords, application names, and database names.  Using this technique can make your MaxL scripts fairly portable and more easily reusable.
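For example, here’s a sketch of what a more generalized cleardb.msh could look like.  The ESSAPP and ESSDB variables are hypothetical additions; they just follow the same SET pattern in the config batch file (SET ESSAPP=Sample and SET ESSDB=Basic, say), and MaxL expands them the same way it expands the login variables:

/* cleardb.msh, generalized so nothing database-specific is hard-coded */
login $ESSUSR identified by $ESSPW on $ESSSERVER;
alter database $ESSAPP.$ESSDB reset data;
logout;
exit;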

To get this to work on the production server, we just create another batch file called essbase10.bat that has the same structure as essbase30.bat but with the user names, passwords, and other values that are appropriate for that server.
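For example, essbase10.bat might look like this (the password is obviously made up, and the last two variables are the hypothetical ones from the sketch above):

SET ESSUSR=admin
SET ESSPW=prodpassword
SET ESSSERVER=essbase10
SET ESSAPP=Sample
SET ESSDB=Basic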

For all you advanced batch file scripters out there, it may be necessary to use a setlocal command so the variables set in the batch file don’t stomp on an existing environment variable that you need (the sketch after the list below shows where it would go).  As you can see, I’m a big fan of the %COMPUTERNAME% technique; however, there are a few things to watch out for:

  • You might be putting passwords in clear text in a batch file.  You can alleviate some of this by using MaxL encryption, although I haven’t actually done this myself.  The folder holding these files on my servers already has filesystem-level security that prevents access, and for the time being, this has been deemed good enough.
  • It’s difficult with this technique to run automation for test and prod from the same server (say, some central scheduling server).  It’s not too difficult to address this, though; the sketch below shows one way.
  • If you run the automation from a remote server instead of the Essbase server itself, you may end up with lots of different config files — in which case, you might want to adjust this technique a bit.
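Here is a minimal sketch of a main.bat that folds in both of those ideas: setlocal keeps the SET commands from leaking into the parent command shell, and an optional first argument (a hypothetical config name, defaulting to %COMPUTERNAME%) lets a central scheduling server or a remote box pick which config file to use:

@echo off
setlocal
cd /d %~dp0

REM Use the config name passed as the first argument if there is one,
REM otherwise fall back to this machine's name.
set CONFIG=%~1
if "%CONFIG%"=="" set CONFIG=%COMPUTERNAME%

call %CONFIG%.bat
essmsh cleardb.msh

endlocal

Called with no arguments it behaves exactly like the original main.bat; called as main.bat essbase10 from a scheduling server, it picks up the production settings instead.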

As for deploying your test files to your production server, if you’ve set everything up in a generic way, then simply changing the variables in the configuration file should allow you to run the exact same script on each server.  Therefore, your deploy method could literally be as simple as copying the folder from the test server to the production server.  I’m actually using Subversion to version control the automation systems now, but a simple file copy works just as well.
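For what it’s worth, that copy can be a one-liner run from the test box, assuming the production server exposes an admin share (the paths here are made up for illustration):

xcopy D:\automation \\essbase10\D$\automation /E /I /Y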

ODTUG 2009 Recap

I started writing this recap at the airport in San Jose, waiting for my flight back to Seattle, but the article got a bit longer than I thought, so I’m just now finishing it up here on Monday.  As expected, the conference itself was great, the food was even better, and it was fun meeting all the people.  One of the things that the ODTUG organizers pointed out was that this is one of the very few conferences that has grown in attendance this year.  It’s very easy to believe that due to economic conditions, companies are cutting back on travel dollars, and I think it’s a testament to the quality of content [and food] of ODTUG that even with these conditions, the attendance has grown.

I flew in Saturday evening.  Due to scheduling issues, I wasn’t able to get in earlier and help pull “evil ice plant” (an invasive plant species) from one of the local nature areas, but it’s my understanding that the other ODTUG attendees pulled an absolutely ginormous amount of the stuff, so kudos to my colleagues for that.

The conference proper started on Sunday morning, and as expected there were lots of important people talking about roadmaps and future plans and synergy and all of that fun stuff.  That’s all well and good, but it can be a little difficult for me to get jazzed up about such things because they are still months away from being released; on top of that, it would be even longer until I could get a hold of them, understand them, and put them to use, not to mention the fact that new releases of software (particularly in the Hyperion world) tend to be a little rough around the edges (to be charitable).  The general consensus, though, seems to be that Oracle is putting a lot more money into R&D, development, and polishing things up, so one is hopeful that new releases earn a better reputation than their predecessors.

One of the more entertaining aspects of Sunday was Toufic Wakim giving a presentation on the state of Smart View, which was not only funny because of Toufic’s energetic and humorous delivery style, but also because during the presentation, the audience was quite vocal about the shortcomings of the product and why they can’t roll it out to their users.  As has happened before, much love was expressed for the power and simplicity of the classic Excel add-in.  Toufic seemed very interested in collecting and recording people’s feedback so that future releases of Smart View will be better.

It was hard to go more than a few minutes during the conference without hearing about Twitter and tweets and everything else along those lines.  Twitter was extensively embraced.  I’m not really much of a Twitter guy so I didn’t really follow the conference in that way.  It did, however,  seem like the usage and mentioning of Twitter died down significantly as the conference went on, which I can’t really say was a bad thing.  If nothing else, it allowed for more bandwidth to the conference and hotel rooms, which was grossly inadequate, to say the least (with thousands of geeks pounding the network, transfer speeds were dismal).  Apparently the checklist for next year’s ODTUG involves verifying that the hotel can accommodate all of the net traffic that is sure to occur at such a conference.

Among the sessions I attended on Monday were Ed Roske doing one of his calc script presentations, Matt Millella’s Top 5 Essbase CDFs, and “Building Sustainability Dashboards with Oracle EPM.” All were interesting, and the CDF presentation really hit home since I’ve been using CDFs lately and have a toolchain set up to roll new functionality into JAR files and deploy it.

Monday was also the day that I gave my presentation on Dodeca during the vendor slot.  Tim Tow and I had a good-sized audience while I showed how Dodeca has been a good fit for my company.  It was the first presentation I’ve ever given, so although I’d say it generally went okay, now that I have a couple of presentations under my belt I am feeling more comfortable and thinking of ways to improve them.  I suspect that, as with most things, the more you do it the better you get.  So for anyone out there who hasn’t given a presentation and has been thinking about it but couldn’t quite pull the trigger, I wholeheartedly encourage you to throw your name in the hat and give it a shot.

On Tuesday I checked out Ruddy Zucca and Steve Liebermensch talking about ASO.  I had seen Steve’s presentation on ASO last year as well.  I had gone home from ODTUG New Orleans with a list of things to try out to make some of my ASO cubes a little snappier, and I am happy to say that I am going home from this ODTUG as well with a few more ideas on how to squeeze a little more performance out of my ASO cubes — and improve the Essbase reporting experience for my users.

As an aside, the week of the conference was “closing week” at work, meaning that the fiscal period is over and mountains of data need to be loaded up to various databases.  Normally taking time off during closing week is almost impossible, but the automation systems are pretty dialed in so we thought it’d be okay.  Although things largely went off without a hitch, there were a couple of small issues so I had to skip a few sessions to VPN to work and check in on things.  In most cases, re-running a job seemed to fix things up.  So I’m very happy to say that I have just about automated myself out of a job.  Oh wait…

On Wednesday I checked out Glenn Schwartzberg talking about “Little Used Features in Essbase,” which was pretty interesting.  He talked about Triggers and all sorts of miscellaneous things that don’t quite warrant their own session but combined made for an interesting presentation.  He didn’t quite get to Data Mining, which could warrant a session all on its own.  Oddly enough, it’s now my understanding that the data mining features of Essbase were supposed to be removed from the latest release (presumably because this functionality will be provided elsewhere in the Oracle stack).  It came as a surprise to at least one Oracle employee that the functionality in question was still around. (Oops!)

Later in the day I presented on Optimizing Loads and Builds with Kevin Corlett.  Kevin was an awesome co-presenter and had a ton of  great content to add to the presentation.  As with my Dodeca presentation, I felt pretty good about the content and I’m confident that if I presented again I could make it even better.  The original abstract I wanted to present on was about not just loads and builds but about taking an entire automation process and squeezing every last drop of performance out of it.  I currently have a job that takes about an hour and a half to run and one of my projects this summer is to rebuild it to get the processing time down a bit.  I am fairly confident that I can get this below 5 minutes.  So with any luck, next year I’ll be able to present on this.

After my presentation I checked out Rohit Amaranth’s presentation about Essbase in the Clouds.  This was a very interesting topic.  The idea is that instead of buying or leasing your own hardware, you put Essbase in the cloud.  “In the cloud” is a bit of a buzzword right now, so Rohit took a moment to explain what it is and what it isn’t.  My big takeaway is that the cloud computing model treats computational resources as elastic and as a platform to build on, and this can significantly alter the economics of utilizing computing capacity.  Instead of having to go through the process of acquiring or leasing hardware that you are “stuck” with, why not rent the service and pay for what you use?  Although there are still some licensing issues to sort out with Oracle, I am extremely interested in seeing how this model could be used to set up test servers.

The next thing I did on Wednesday was sit on the Optimization Roundtable with Ed Roske, Steve Liebermensch, and Cameron Lackpour.  Glenn Schwartzberg was the moderator and went around the room letting people ask their optimization questions.  The content of the questions tended to focus largely on hardware issues.  Although I find hardware interesting and necessary to provide a service to my users, I don’t spend a lot of time dwelling on it since for the most part ‘It Just Works’ and performance issues [for me] tend to be dealt with on the software side.  So, sadly, I didn’t have much of a chance to show off much Essbase knowledge, although Steve wound up fielding a bunch of ASO questions and providing some insight on things as a current Oracle employee.

Finally, Thursday was a light day filled with “Best of” sessions from earlier in the week.  There was a bit of a mixup for the first session, so a bunch of us who had gathered for it wound up having an ad hoc session discussing various Essbase topics.  In a lot of ways this was actually one of the more fun sessions.  After that I went to Cameron’s session on mastering Essbase with MaxL automation.  Although he didn’t think I’d pick much up from it, I actually did.  For example, I now know what MaxL stands for (probably) and saw a couple of neat things on encrypting MaxL files.  Plus, as an added bonus, I got to heckle him during his presentation, and that alone made it worth it.

Last, but certainly not least, I went to Angela Wilcox’s session on automation with Perl scripting.  This topic is interesting to me for a couple of reasons: one, I used to be quite the Perl programmer back in the day (although sadly I can barely write a Hello World program anymore), and two, getting up and running with Perl for Essbase can be a formidable challenge, especially given the scant documentation provided by Hyperion/Oracle.  The presentation was quite interesting, and it looks like her group has done some nice things to simplify their automation.  I am definitely going to take a look at where I could simplify some of my systems with some Perl scripts.

So, that was the conference in a nutshell for me.  For all of you that couldn’t attend, it’s my understanding that the ODTUG website will have the slides up within 90 days so you’ll be able to check out the content then.  If you’d like my slides from the Data Load Optimization presentation, just shoot me an email and I’d be happy to send them your way.

An Essbase Optimization Story

I got a call the other day from one of my fellow Essbase developers. One of the cubes in their budgeting system was taking a long time to finish calculations, whereas just days before the calculation time was just fine.  I have an almost unhealthy obsession with cube performance and optimization, so I jumped in to take a look.

The first thing I looked at (although not generally the first thing I look at) was the average cluster ratio.  The average cluster ratio is roughly analogous to the fragmentation of your hard drive: data gets added and removed, and access to that data becomes increasingly suboptimal.  Although the configuration of the sparse and dense dimensions in a BSO cube in theory dictates where data gets placed in the corresponding data files, quite often, for performance reasons, the data files will just grow to accommodate the new data instead of putting it where it “should” go.  A cluster ratio of 1.00 is optimal.  In this case, the database with performance issues was reporting a significantly lower cluster ratio.  So the first order of business was to export all of the data in the cube to a raw export file, clear the cube, then reimport.
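In MaxL terms, that defragmentation pass looks roughly like the sketch below.  Sample.Basic and the file names are stand-ins for the real database, the login uses the same environment-variable style as the config-file trick described earlier, and keep in mind that an export this large actually gets split across multiple files, each of which would need to be imported:

login $ESSUSR identified by $ESSPW on $ESSSERVER;

/* dump everything to a raw export file */
export database Sample.Basic all data to data_file 'fullexport.txt';

/* clear the database, then bring the exported data back in */
alter database Sample.Basic reset data;
import database Sample.Basic data from data_file 'fullexport.txt'
    on error write to 'fullexport.err';

logout;
exit;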

This helped performance a little bit, but not much.  Something else needed to be done.  I checked out the index and page files in the cube directory and discovered that the index files were almost four gigabytes, with about 20 gigabytes of page files.  Data of this size isn’t inherently difficult to work with, but it does warrant a look at the dense and sparse dimension settings, the calc script that is having issues, and the way in which the database’s data is being used.

The database is a six-dimension cube with the following dimensions: Scenario, Years, Time, Location, Customer, Commodities.  Time and Scenario are marked as dense.  I can live with Time being dense, especially since it’s just periods rolling to quarters rolling to a fiscal year (with non-level-0 members tagged as Dynamic Calc), but the dense Scenario dimension didn’t quite jibe with me.  In this particular database, it turns out that the reason for the block explosion and subsequent poor calc performance is that users are loading tons of data to the Forecast member in Scenario.  Scenario also has four other members.  On top of the cube being set to create new blocks on equations, the size of the page files was blowing out with all of the data being sent in.

The first thing that jumps out at me is that even though all of this data is being loaded to the Forecast member, we’re taking a hit on storage because dense data blocks are being allocated for the other four members in Scenario, which are very sparingly utilized.  Surely there is a better candidate for a dense dimension?  You betcha.  Given the sparse nature of the database, most of the other dimensions aren’t really good candidates for dense because of all the interdimensional irrelevance that would incur.  However, the Years dimension is a good candidate, because the Forecast data is loaded for a particular year and all periods.  Perfect.

Given that there are not a significant number of calc, report, and load scripts in this database, it is relatively safe to change the dimensional configuration here.  This isn’t always the case: changing dense/sparse settings could very easily deoptimize the performance of calc, load, and report scripts, not to mention adversely affect numerous other aspects of the database.

And of course, this was all tested on the test server before loading to production.  The process was fairly straightforward.  I cleared the database again, and loaded the export file.  Since export files are sensitive to the dense/sparse settings in the database, I did this so I could restructure the database with the new settings and have Essbase do the work for me instead of having to load the data from somewhere else.  I also made sure to do this before doing a calculation on the database so there would be less data to restructure.  After the data loaded in, I changed the dense and sparse settings to set Years to dense and Scenario to sparse.  After the restructure, database statistics were looking fairly good, but I didn’t want to get my hopes up.

Clearly one of the issues with this database was that the calculation script that was taking too long had to churn through 20 gigabytes of page files in order to do what it needed to do.  Generally my databases are set to bitmap encoding as the compression scheme (because in most cases it is “good enough”), but I decided to go with zlib compression instead.  Although zlib is technically more computationally intensive to compress and decompress, our servers’ CPUs aren’t generally getting hammered, so I decided I would rather hit the CPUs a little harder if it meant I could read the data off the disk a little faster (even though it will still uncompress in memory to the same amount as bitmap encoding or any other compression scheme).  The stronger compression also means less data gets written to the disk in the first place.
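If memory serves, the compression change itself is a one-statement affair in MaxL, something along the lines of the sketch below.  I haven’t double-checked the exact keywords, so verify the grammar against the Technical Reference for your release (and remember the new compression only applies to blocks as they get rewritten):

login $ESSUSR identified by $ESSPW on $ESSSERVER;

/* switch the database from bitmap to zlib compression */
alter database Sample.Basic set compression zlib;

logout;
exit;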

I then had to do some tweaking to the calc script to align it with the new dense and sparse settings.  This was fairly easy.  As with most calc script optimization, this involved changing the FIX statements and CALC DIM statements so that I was fixing on sparse dimensions and doing as much calculating within the dense data block as possible.  Remember, you are always trying to help Essbase do its job and give it hints as to how it can do things more optimally.  In this case, we are trying to minimize the amount of data that must be scanned through.  By calculating within the dense blocks, we do all of the work we can in one place, then move on to the next area.  For good measure, I also added a SET command to utilize the server’s higher calc lock block setting (see the Technical Reference for more details).
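I won’t reproduce the actual script, but the shape of it was roughly the sketch below.  The FIX is on the sparse Forecast member, the dense work happens inside each block, and the remaining sparse dimensions get aggregated afterward (Time needs no CALC DIM because its upper members are Dynamic Calc; the SET UPDATECALC line is an assumption about how intelligent calc should be handled, not something from the original script):

SET LOCKBLOCK HIGH;   /* take advantage of the server's higher calc lock block setting */
SET UPDATECALC OFF;   /* recalculate the whole Forecast slice each run */

FIX ("Forecast")
    /* do the dense work inside each block first (assuming Years has upper-level members to consolidate)... */
    CALC DIM ("Years");

    /* ...then roll up the sparse dimensions */
    AGG ("Location", "Customer", "Commodities");
ENDFIX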

I fired off the calculation and checked out the results.  The calculation’s first pass took about 20 minutes (a lot of blocks were being created the first time around).  After this pass was done, I checked out the index and page files.  The index file seemed to be holding steady at about 300 megabytes, quite an improvement from four gigs!  The page file was now about 700 megabytes.  Given the new index file size, I decided to set the index cache to 512 megabytes.  Although this will eat 512 megabytes directly out of memory on the server, we have decided to allocate more memory for this app until forecasts are done, and then we can pull back on the amount of memory it gets.  But for now, with enough memory to hold the entire index at once, calculations and other operations will be much faster.
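Setting the cache is another quick bit of MaxL (again, Sample.Basic standing in for the real database); as far as I know, the new size takes effect the next time the database is started:

login $ESSUSR identified by $ESSPW on $ESSSERVER;
alter database Sample.Basic set index_cache_size 512mb;
logout;
exit;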

At the time of this writing, the database calculation that was the original problem is still holding steady at about 10 minutes to calculate.  The page and index files have grown a bit but still appear to be holding steady, and the database statistics indicate a pretty good block density as well.  Not a bad bit of optimization — a twentyfold decrease in size on disk, and a calculation that comes back in minutes rather than never.

Like I said, I love performance optimization, and as I’ve said before, Essbase is a bit of an art and a science.  The art is in comprehensively understanding how things work and how they are meant to work, using that information to make the right technical tradeoffs, and then putting the changes into the system, keeping in mind that you are simply trying to give “hints” to the system to help it do its job a little better or faster.