An Essbase Optimization Story

I got a call the other day from one of my fellow Essbase developers. One of the cubes in their budgeting system was taking a long time to finish calculations, whereas just days before the calculation time was just fine.  I have an almost unhealthy obsession with cube performance and optimization, so I jumped in to take a look.

The first thing I looked at (although not generally the first thing I look at) was the average cluster ratio.  The average cluster ratio is roughly analogous to the fragmentation of your hard drive: as data gets added and removed, access to that data becomes increasingly suboptimal.  Although the configuration of the sparse and dense dimensions in a BSO cube in theory dictates where data gets placed in the corresponding data files, quite often, for performance reasons, the data files will simply grow to accommodate the new data instead of putting it where it “should” go.  A cluster ratio of 1.00 is optimal.  In this case, the database with performance issues was reporting a significantly lower cluster ratio.  So the first order of business was to export all of the data in the cube to a raw export file, clear the cube, then reimport.
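
In MaxL, that export/clear/reload cycle looks roughly like the following — a minimal sketch, assuming a made-up Budget.Plan application/database and admin credentials, and the exact export grammar varies a little between versions, so adjust names and paths to your environment:

login admin identified by password on essbaseserver;

/* dump all of the data to a raw (native-format) export file */
export database Budget.Plan all data to data_file 'plan_export.txt';

/* clear the database, then load the export file back in; because the
   file is in native export format, no load rule is needed */
alter database Budget.Plan reset data;
import database Budget.Plan data from data_file 'plan_export.txt'
    on error write to 'plan_errors.txt';

logout;
exit;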

This helped performance a little bit, but not much.  Something else needed to be done.  I checked out the index and page files in the cube directory to discover that the index files were almost four gigabytes, with about 20 gigabytes of page files.  This amount of data isn’t inherently difficult to work with, but it does warrant a look at the dense and sparse dimension settings, the calc script that is having issues, and the way in which the database data is being used.

The database is a six-dimension cube with the following dimensions: Scenario, Years, Time, Location, Customer, Commodities.  Time and Scenario are marked as dense.  I can live with Time being dense, especially since it’s just periods rolling to quarters rolling to a fiscal year (with non-Level 0 members tagged as Dynamic Calc), but the dense Scenario dimension didn’t quite jibe with me.  In this particular database, it turns out that the reason for the block explosion and subsequent poor calc performance is that users are loading in tons of data to the Forecast member in Scenario.  Scenario also has four other members.  In addition to the cube being set to create new blocks on equations, the size of the page files was blowing out with all of the data being sent in.

The first thing that jumps out at me is that even though all of this data is being loaded to the Forecast member, we’re taking a hit on storage because dense data blocks are being allocated for the other four members in Scenario, which are very sparingly utilized.  Surely there is a better candidate for a dense dimension?  You betcha.  Given the sparse nature of the database, most of the other dimensions aren’t really good candidates for dense because of all the interdimensional irrelevance that would incur.  However, the Years dimension is a good candidate, because the Forecast data being loaded in is for a particular year and all periods.  Perfect.

Given that there are not a significant number of calc, report, and load scripts in this database, it is relatively safe to change the dimensional configuration here.  This isn’t always the case: changing dense/sparse settings could very easily deoptimize the performance of calc, load, and report scripts, not to mention adversely affect numerous other aspects of the database.

And of course, this was all tested on the test server before loading to production.  The process was fairly straightforward.  I cleared the database again, and loaded the export file.  Since export files are sensitive to the dense/sparse settings in the database, I did this so I could restructure the database with the new settings and have Essbase do the work for me instead of having to load the data from somewhere else.  I also made sure to do this before doing a calculation on the database so there would be less data to restructure.  After the data loaded in, I changed the dense and sparse settings to set Years to dense and Scenario to sparse.  After the restructure, database statistics were looking fairly good, but I didn’t want to get my hopes up.

Clearly one of the issues with this database was that the calculation script that was taking too long had to span 20 gigabytes of page files in order to do what it needed to do.  Generally my databases are set to bitmap encoding as the compression scheme (because in most cases it is “good enough”), but I decided to go with zlib compression instead.  Although zlib is technically more computationally intensive to compress and uncompress, our servers’ CPUs aren’t generally getting hammered, so I decided I would rather hit the CPUs a little harder if it meant I could read the data off the disk a little faster (even though it will still uncompress in memory to the same amount as bitmap encoding or any other compression scheme).  As a bonus, the stronger compression also means less data is written to the disk.
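
Changing the compression scheme is a one-line change in MaxL — a hedged sketch using the same made-up database name as above; new and rewritten blocks pick up the new scheme, and a restructure or export/reload will convert everything else:

/* use zlib instead of the default bitmap compression for data blocks */
alter database Budget.Plan set compression zlib;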

I then had to do some tweaking to the calc script to align it with the new dense and sparse settings.  This was fairly easy.  As with most calc script optimization, this involved changing the FIX statements and CALC DIM statements so that I was fixing on sparse dimensions and doing as much calculating within the dense data block as possible.  Remember, you are always trying to help Essbase do its job and give it hints as to how it can do things more optimally.  In this case, we are trying to minimize the amount of data that must be scanned through.  By calculating within the dense blocks, we are doing all of the work in one place that we can, then moving on to the next area.  For good measure, I also added a SET command to utilize the server’s higher calc lock block setting (see your technical reference for more details).
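
The reworked script looked something along these lines — just a sketch, not the actual script, with the member names simplified to the dimensions described above:

SET LOCKBLOCK HIGH;

/* fix on the sparse Scenario member the users actually load to, so
   only those blocks get touched */
FIX ("Forecast")
    /* aggregate the remaining dimensions; work on the dense Years and
       Time dimensions happens within each block as it is read */
    CALC DIM ("Years", "Time", "Location", "Customer", "Commodities");
ENDFIX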

I fired off the calculation and checked out the results.  The calculation’s first pass took about 20 minutes (a lot of blocks were being created the first time around).  After this pass was done, I checked out the index and page files.  The index file seemed to be holding steady at about 300 megabytes — quite an improvement from four gigs!  The page file was now about 700 megabytes.  Given the new index file size, I decided to set the index cache to 512 megabytes.  Although this will eat 512 megabytes directly out of memory on the server, we have decided to allocate more memory to this app until forecasts are done, then we can pull back on the amount of memory it gets.  But for now, with enough memory to hold the entire index at one time, calculations and other operations will be much faster.
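
The cache change itself is quick in MaxL — again a hedged sketch with the made-up database name; the same setting is available in EAS, and it takes effect when the database restarts:

/* make the index cache large enough to hold the entire ~300 MB index */
alter database Budget.Plan set index_cache_size 512mb;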

At the time of this writing, the database calculation that was the original problem is still holding steady at about 10 minutes to calculate.  The page and index files have grown a bit but still appear to be holding steady, and the database statistics indicate a pretty good block density as well.  Not a bad bit of optimization — a twentyfold decrease in size on disk, and a calculation that comes back in minutes rather than never.

Like I said, I love performance optimization, and as I’ve said before, Essbase is a bit of an art and a science.  The art part is comprehensively understanding how things work and how they are meant to work, then using that information to make the right technical tradeoffs, then putting the changes into the system, keeping in mind that you are simply trying to give “hints” to the system to help it do its job a little better or faster.

Essbase and SQL Server Express: just don’t

I’m working on a writeup on how to get a fully functional Essbase server up and running in a virtual machine (using Sun VirtualBox).  I’m currently working on the steps — setting up the Essbase functionality is relatively straightforward, although you need a relational database to store various information.

“No problem,” I thought to myself, “I’ll just pop on a copy of SQL Server Express.” Oh how naive I was.  In theory, I’m sure it’s possible to use SQL Server Express as the backend for a Hyperion installation, but after fiddling around with various options, enabling TCP/IP, and doing everything else I could think of, I just couldn’t get things to work.  So I yanked it off the virtual machine and put the real deal on — a full copy of SQL Server 2005 — and presto!  I was up and running in no time.  So if you happen to have reached this blog article because you googled “SQL Server Express Essbase oh for the love of god why do you mock me” or something similar, I hope this popped up first to let you know you may be in for a bumpy ride.

Update: Several of my readers have commented on how they’ve been successfully using SQL Server Express for their purposes.  I didn’t spend an incredible amount of time trying to configure Express to work, and given that the full version of SQL Server is available to me, I went ahead and installed that without looking back.  As I suspected, you need to play with the connection settings a bit in order to get things to work, and if someone cares to do a writeup of what that procedure looks like, I’d be more than happy to post it here.  Thanks for the suggestions, all.

Essbase Performance Optimization: it’s not just the calc script

Here’s a quick post that is a bit of a precursor to some of my more in-depth performance analysis articles that will be coming out in the future.  One of my automation systems takes a bit over an hour to run.  There are a lot of people I know that need to squeeze performance out of their systems and immediately look to their calc scripts.  Yes, calc time can be a large part of your downtime, as can data loads, reports, and other activities.  But I always stress that it is useful and important to understand your systems in their entirety.

As part of looking at the bigger picture, I put together the following graph showing each step in this roughly hour-long system and how long it takes.  It’s not hard to tell that the majority of the time it takes to run this job is in one task (the brownish bar that takes about an hour)!  And what is that task?  It’s a bunch of report scripts running on a staging database.  This is an obvious place for me to look at ways of saving time.

Duration of Steps for an Essbase Automation Process

The staging database is a rather clever cube that is essentially used to scrub, aggregate, and associate raw account-level data to some more meaningful dimensional combinations for all of the other databases.  Data comes in, it’s calculated, and a bunch of report scripts are run against it to get the data back out.  Fundamentally, the reason that this approach takes so much time is that there are two highly sparse dimension combinations with tens of thousands of members each, and the report script writer has to go through a ton of on-disk data in order to figure out what to write.  I could spend some time trying to optimize this process; in fact, I could probably play with some settings and get at least a 20% improvement right now.

But, this is one of those times where it pays to stand back and look at what we’re trying to accomplish.  As it turns out, I actually have all of the infrastructure I need to accomplish this task, but it’s in a SQL database.  And, the task that is being performed is actually much more conducive to the way that a relational database works.  I’m still putting the finishing touches on this process, but it’s mostly complete as of right now, and the performance is amazing.  I can pump through the same amount of data in mere minutes now, with no loss of functionality.

My specific goal is to get this process, which takes an hour or longer, to run in less than five minutes.  I chose this instead of “as fast as possible” because I wanted something concrete and attainable.  (My secret goal, just for kicks, is to get this all to run in under a minute.)  Once the automation for the SQL staging is all in place, I will be going through all of the individual databases and tweaking any and all settings in order to shave their downtime as well.

Historically, not a lot of effort has gone into extensive profiling on these cubes, so as nerdy as it sounds, I’m actually very interested to see where else I can shave a few seconds off.  At first this will undoubtedly involve:

  • using more write threads in the data load;
  • rewriting the calc scripts to tighten them up from their current CALC ALL;
  • aligning the order of the data fields and rows with the dense/sparse-ness of the outlines and the outline order;
  • choosing better cache settings that are customized for the size of the index and page files;
  • and perhaps looking at the benefits of zlib compression (theoretically more CPU time to compress/decompress; however, the CPUs on these servers are generally not slammed very hard, so if I can get the size of the physical page files down, I may be able to read them into memory faster).

So remember — you spend a lot of time doing calculations, but that might not always be where the low-hanging fruit is.  I cannot stress enough the importance of understanding where you spend your time, and using that as a basis for helping Essbase do its job faster.

Some performance observations changing an ASO hierarchy from Dynamic to Stored

There are numerous ASO cubes among my flock.  Usually the choice to use ASO was not arrived at lightly — it was/is for very specific technical reasons.  Typically, the main reason I have for using ASO is to get the fast data loads and the ability to load oodles of noodles… I mean data.  Yes, oodles of data.  The downsides (with version 7.x) are that I’m giving up calc scripts, incremental loading (although this has been somewhat addressed in later versions), native Dynamic Time Series, and some flexibility with my hierarchies, and I have to have just one database per application (you… uhh… were doing that already, right?).  Also, due to the sparsity of much of the data, trying to use BSO would result in a very unwieldy cube in this particular instance.

I have a set of four cubes that are all very similar, except for different Measures dimensions.  They range from 10,000 to 40,000 members.  This isn’t huge, but in conjunction with the sizes of the other dimensions, there is an incredible “maximum possible blocks” potential (sidenote for EAS: one of the most worthless pieces of information to know about your cube.  Really, why?).  The performance of these cubes is generally pretty acceptable (considering the amount of data), but occasionally user Excel queries (especially with attribute dimensions) really pound the server and take a while to come back.  So I started looking into ways to squeeze out a little more performance.

Due to the nature of the aggregations in the cubes, they all have Dynamic hierarchies in the Accounts/Measures dimension.  This is due to using the minus (-) operator, some label only stuff, and shared members, all of which ASO is very particular with, especially in this version.  All of the other dimensions have Stored hierarchies or are set to Multiple Hierarchies (such as the MDX calcs in the Time dimension to get me some Year-To-Date members).

Actually, it turns out that all of these cubes have Measures dimensions that make it prohibitively difficult to set Measures to Stored instead of Dynamic, except for one.  So, even though I would need to spin off a separate EIS metaoutline in order to build the dimension differently (these cubes are all generated from the same metaoutline but with different filters), it might be worth it if I can get some better performance on retrieves to this cube — particularly when the queries start to put some of the attribute dimensions or other MDX calcs into play.

What I need is some sort of method to test the performance of some typical retrieves against the two variants of the cube.  I set up one cube as normal, loaded it up with data, and materialized a gig’s worth of aggregations.  Prior to this I had also copied the cube within EAS, made the tweak to Measures to change it from Dynamic to Stored, loaded the data, and did a gig of aggregations.  At this point I had two cubes with identical data, but one with a Dynamic Measures hierarchy (10,000 or so members) and one with a Stored hierarchy.  Time to compare.

I cooked up some report scripts, MaxL scripts, and some batch files.  The batch file loads a configuration file which specifies which database to hit and which report to run.  It then runs the report against the database, sets a timestamp before and after it runs, and dumps it all to a text file.  It’s not an exact science, but in theory it’ll give me somewhat of an idea as to whether making the hierarchy Stored is going to help my users’ retrieval operations.  And without further ado, here are the results:

Starting new process at Tue 01/20/2009 10:11:08.35  (columns below: Time, Duration, Winner)
start-report_01-DB (Dynamic) 11:10.0
finish-report_01-DB (Dynamic) 11:13.4 00:03.4
start-report_01-DB (Stored) 11:14.6
finish-report_01-DB (Stored) 11:21.4 00:06.8 Dynamic
start-report_02-DB (Dynamic) 11:22.6
finish-report_02-DB (Dynamic) 11:51.9 00:29.3
start-report_02-DB (Stored) 11:53.0
finish-report_02-DB (Stored) 12:00.0 00:07.0 Stored
start-report_03-DB (Dynamic) 12:01.3
finish-report_03-DB (Dynamic) 12:02.2 00:00.9
start-report_03-DB (Stored) 12:03.9
finish-report_03-DB (Stored) 12:42.1 00:38.2 Dynamic
start-report_04-DB (Dynamic) 12:43.6
finish-report_04-DB (Dynamic) 12:50.2 00:06.6
start-report_04-DB (Stored) 12:51.3
finish-report_04-DB (Stored) 14:26.4 01:35.1 Dynamic
start-report_05-DB (Dynamic) 14:36.3
finish-report_05-DB (Dynamic) 15:18.3 00:42.0
start-report_05-DB (Stored) 15:19.6
finish-report_05-DB (Stored) 17:32.0 02:12.4 Dynamic
Starting new process at Tue 01/20/2009 10:30:55.65
start-report_01-DB (Dynamic) 30:57.5
finish-report_01-DB (Dynamic) 30:59.9 00:02.4
start-report_01-DB (Stored) 31:01.0
finish-report_01-DB (Stored) 31:05.8 00:04.7 Dynamic
start-report_02-DB (Dynamic) 31:07.7
finish-report_02-DB (Dynamic) 31:40.8 00:33.1
start-report_02-DB (Stored) 31:42.5
finish-report_02-DB (Stored) 31:46.1 00:03.5 Stored
start-report_03-DB (Dynamic) 31:50.4
finish-report_03-DB (Dynamic) 31:51.0 00:00.6
start-report_03-DB (Stored) 31:52.4
finish-report_03-DB (Stored) 31:52.8 00:00.3 Tie
start-report_04-DB (Dynamic) 31:54.0
finish-report_04-DB (Dynamic) 32:06.3 00:12.3
start-report_04-DB (Stored) 32:12.1
finish-report_04-DB (Stored) 32:51.4 00:39.3 Dynamic
start-report_05-DB (Dynamic) 32:55.5
finish-report_05-DB (Dynamic) 33:38.1 00:42.6
start-report_05-DB (Stored) 33:39.7
finish-report_05-DB (Stored) 36:42.5 03:02.8 Dynamic

So, interestingly enough, the Dynamic dimension comes out on top, at least for most of the tests I wrote. There is one of the tests, though (report_02), that seems to completely smoke the Dynamic hierarchy.  I wrote these report scripts kind of randomly, so I definitely need to do some more testing, but in the meantime I think I feel better about using a Dynamic hierarchy.  Since the ASO aggregation method for these cubes is simply to process aggregations until the database size is a certain multiple of its original size, one of the next steps I could look at for query optimization would be to enable query tracking, stuff the query statistics by running some reports, and then use those stats to design the aggregations.  In any case, I’m glad I am looking at some actual data rather than just blindly implementing a change and hoping for the best.
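
For reference, that query-tracking approach can be scripted in MaxL along these lines — a hedged sketch, since the exact statements vary by Essbase version (check the tech ref for your release) and the MyApp.MyDb name is made up:

/* start collecting query statistics */
alter database MyApp.MyDb enable query_tracking;

/* ...have users (or a representative set of reports) hit the cube... */

/* then build aggregations driven by the collected query data, capped
   at roughly 1.5x the input-level size */
execute aggregate process on database MyApp.MyDb
    stopping when total_size exceeds 1.5
    based on query_data;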

This isn’t to say that Dynamic is necessarily better than Stored or vice versa; however, I ran this very limited number of tests numerous times and got essentially the same results.  At the very least, this goes to show that there isn’t really a silver bullet for optimization and that experimentation is always a good way to go (except on your production servers, of course).  I am curious, however, to go back and look at report_02 and see what it is about that particular report that is apparently so conducive to Stored hierarchies.

Delete that stubborn Essbase application in EAS!

Sometimes you need to delete an application but you can’t.  You still see it in Essbase Administration Services or even Application Manager (hey, there’s nothing wrong with still being on 6.5.x, if it ain’t broke…), but the app is broken or doesn’t exist.  The simplest cause of this is that someone [probably you] deleted the folder that contained the app, but didn’t delete it through EAS.  And now, paradoxically enough, when you go into EAS to delete the app, you can’t, because it can’t be started, because it doesn’t exist.  So essentially, the Essbase server tracks databases based not just on the existence of their folders, but also via some other means.

The easiest and most consistent method that I’ve come up with to kill the unruly app is this (assuming you have an extra Essbase server laying around):

  1. On a separate Essbase server (your test server, for example), create an app of the same name, then create a database of the same name (you just have one database to an app, right?   No?  Well, go ahead and create the same-named databases — and shame on you, for cramming more than one database in an application).  If you already had a copy of the app/database on your test server, you can skip this part and just use the existing files.
  2. Unload/stop the application you just created.
  3. Stop the Essbase service on the server giving you troubles.
  4. Copy the entire folder containing the new app to the proper location on the server that is messed up.  For example, if using Windows, if the name of your bad application is BadApp, right click on this folder from Windows Explorer and select Copy, then paste it into the app/folder on the server with the messed up app.  Use the appropriate cp -R command for Unix variants.
  5. Restart the Essbase service, if necessary.
  6. Start the App that was giving you problems.
  7. Delete it (and make sure when you right click on Delete, you press it with authority.  You show that Essbase server who the boss is.)
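
If you prefer the command line, once the resurrected app will start again you should also be able to drop it from MaxL instead of right-clicking in EAS — a hedged sketch, using the BadApp example name from above:

/* removes the application along with any databases it contains */
drop application BadApp cascade;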

This approach has always worked for me.  If you don’t have access to another Essbase server, you may be able to get away with following these steps, to an extent, using the same server —  I think I got that approach to work once but I really had to play with it and create the app with a different name, then go in and edit some of the files.

In any case, I hope this helps someone out there who is sick of looking at non-existent apps in their EAS view or just needs to fix something that went corrupt on them — so good luck, y’all.

How to copy an Essbase application from one server to another

I got a question from a reader about how to do this.  Specifically, they were copying an application from one server to another and everything seemed to be going fine, except that there was no data in the resultant database on the target server.  The reason for this is that when you copy Essbase apps between servers, the data does not get copied.  If you copy the app within the same server, it will copy the data.  So, how do we accomplish this?

For BSO cubes, the easiest option for a cross-server data copy is to copy the application by right-clicking on it, selecting Copy, then choosing the target server.  Then right-click on the database and select Export…. The export file will show up in your App folder where all of the Essbase applications are.  On your new (but empty) database that you just created from a copy, you can load this data.  If you have access to the file system locations, you can load the file across the servers; otherwise, you may have to copy/move the newly created export text file to a location that you can get to through the EAS load file dialog box.  You don’t need any load rules since the data is already formatted in a way that is native to the application (just don’t make any changes to the outline before you import the data).
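
If you would rather script it than click through EAS, the same copy can be done with a couple of MaxL sessions — a rough sketch under made-up server names and a hypothetical Sales.Main database, so adjust names and paths to your environment:

/* on the source server: dump everything to a native-format export file */
login admin identified by password on sourceserver;
export database Sales.Main all data to data_file 'sales_main.txt';
logout;

/* make the file visible to the target server (copy it over, or use a
   shared path), then load it into the freshly copied, empty database --
   no load rule needed since the file is in native export format */
login admin identified by password on targetserver;
import database Sales.Main data from data_file 'sales_main.txt'
    on error write to 'sales_errors.txt';
execute calculation default on Sales.Main;
logout;
exit;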

As I mentioned, when you copy an application to a new name on the same server, it will take the data with it — and anything else in the same folder as the app, for that matter.  So if you’re in the habit of storing gigs and gigs of text files in your database folder, get ready for a long wait as everything copies.  At least in version 7 of Essbase, copying huge applications is not a very graceful operation — it can stall the server while files are copying.  Even the best RAID setup can really take a pounding from all the reading and writing necessary to duplicate an application.

For ASO databases, your options are a bit more limited since you can’t just do a database export.  You can still copy the application (and all its rule files and report scripts and such) across servers, though.  As I’m sure you’re aware by now, ASO databases can be quite a bit more fickle than BSO — and you’re quite used to ASO dumping all of your data when you even so much as look at the outline in the wrong way.  But part of the reason you are using ASO in the first place is for the fast loading times, even with massive datasets.  You can follow the same steps and load back data to ASO through EAS, or if you have set up your automation correctly, you can run your scripts and populate your new copy of the ASO application/database.

MaxL Essbase automation patterns: moving data from one cube to another

A very common task for Essbase automation is to move data from one cube to another.  There are a number of reasons you may want or need to do this.  One, you may have a cube that has detailed data and another cube with higher level data, and you want to move the sums or other calculations from one to the other.  You may accept budget inputs in one cube but need to push them over to another cube.  You may need to move data from a “current year” cube to a “prior year” cube (a data export or cube copy may be more appropriate, but that’s another topic).  In any case, there are many reasons.

For the purposes of our discussion, the Source cube is the cube with the data already in it, and the Target cube is the cube that is to be loaded with data from the source cube.  There is a simple automation strategy at the heart of all these tasks:

  1. Calculate the source cube (if needed)
  2. Run a Report script on the source cube, outputting to a file
  3. Load the output from the report script to the target cube with a load rule
  4. Calculate the target cube

This can be done by hand, of course (through EAS), or you can do what the rest of us lazy cube monkeys do, and automate it.  First of all, let’s take a look at a hypothetical setup:

We will have an application/database called Source.Foo which represents our source cube.  It will have dimensions and members as follows:

  • Location: North, East, South, West
  • Time: January, February, …, November, December
  • Measures: Sales, LaborHours, LaborWages

As you can see, this is a very simple outline.  For the sake of simplicity I have not included any rollups, like having “Q1/1st Quarter” for January, February, and March.  For our purposes, the target cube, Target.Bar, has an outline as follows:

  • Scenario: Actual, Budget, Forecast
  • Time: January, February, …, November, December
  • Measures: Sales, LaborHours, LaborWages

These outlines are similar but different.  This cube has a Scenario dimension with Actual, Budget, and Forecast (whereas in the source cube, since it is for budgeting only, everything is assumed to be Budget).  Also note that Target.Bar does not have a Location dimension; instead, this cube only concerns itself with totals for all regions.  Looking back at our original thoughts on automation, in order for us to move the data from Source.Foo to Target.Bar, we need to calculate it (to roll up all of the data for the Locations), run a report script that will output the data how we need it for Target.Bar, use a load rule on Target.Bar to load the data, and then calculate Target.Bar.  Of course, business needs will affect the exact implementation of this operation, such as the timing, the calculation to use, and other complexities that may arise.  You may actually have two cubes that don’t have a lot in common (dimensionally speaking), in which case your load rule might need to really jump through some hoops.

We’ll keep this example really simple though.  We’ll also assume that the automation is being run from a Windows server, so we have a batch file to kick things off:

cd /d %~dp0
essmsh ExportAndLoadBudgetData.msh

I use the cd /d %~dp0 on some of my systems as a shortcut to switch to the current directory, since the particular automation tool installed does not set the home directory of the file to the current working directory.  Then we invoke the MaxL shell (essmsh, which is in the PATH) and run ExportAndLoadBudgetData.msh.  I enjoy giving my automation files unnecessarily long filenames.  It makes me feel smarter.

As you may have seen from an earlier post, I like to modularize my MaxL scripts to hide/centralize configuration settings, but again, for the sake of simplicity, this example will forgo that.  Here is what ExportAndLoadBudgetData.msh could look like:

/* Copies data from the Budget cube (Source.Foo) to the Budget Scenario
   of Target.Bar */
/* your very standard login sequence here */
login AdminUser identified by AdminPw on EssbaseServer;
/* at this point you may want to turn spooling on (omitted here) */

/* disable connections to the application -- this is optional */
alter application Source disable connects;

/* PrepExp is a Calc script that lives in Source.Foo and for the purposes
   of this example, all it does is makes sure that the aggregations that are
   to be exported in the following report script are ready. This may not be
   necessary and it may be as simple as a CALC ALL; */

execute calculation Source.Foo.PrepExp;

/* Budget is the name of the report script that runs on Source.Foo and outputs a
   text file that is to be read by Target.Bar's LoadBud rules file */

export database Source.Foo
    using report_file 'Budget'
    to data_file 'foo.txt';

/* enable connections, if they were disabled above */
alter application Source enable connects;
/* again, technically this is optional but you'll probably want it */
alter application Target disable connects;

/* this may not be necessary but the purpose of the script is to clear out
   the budget data, under the assumption that we are completely reloading the
   data that is contained in the report script output */

execute calculation Target.Bar.ClearBud;

/* now we import the data from the foo.txt file created earlier. Errors
   (rejected records) will be sent to errors.txt */

import database Target.Bar data
    from data_file 'foo.txt'
    using rules_file 'LoadBud'
    on error write to 'errors.txt';

/* calculate the new data (this may not be necessary depending on what the
   input format is, but in this example it is) */

execute calculation Target.Bar.CalcAll;

/* enable connections if disabled earlier */
alter application Target enable connects;
/* boilerplate cleanup. Turn off spooling if turned on earlier */

logout;
exit;

At this point, if we don’t have them already, we would need to go design the aggregation calc script for Source.Foo (PrepExp.csc), the report script for Source.Foo (Budget.rep), the clearing calc script on Target.Bar (ClearBud.csc), the load rule on Target.Bar (LoadBud.rul), and the final rollup calc script (CalcAll.csc).  Some of these may be omitted if they are not necessary for the particular process (you may opt to use the default calc script, may not need some of the aggregations, etc.).

For our purposes we will just say that the PrepExp and CalcAll calc scripts are just a CALC ALL or the default calc.  You may want a “tighter” calc script, that is, you may want to design the calc script to run faster by way of helping Essbase understand what you need to calculate and in what order.
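
For example, a tighter version of PrepExp might look something like this — purely a sketch, under the assumption that only the Location rollup is needed before the export (the Time and Measures dimensions in this example are flat):

/* skip Intelligent Calc bookkeeping for this pass */
SET UPDATECALC OFF;

/* only the Location rollup is needed for the report script output,
   so aggregate just that dimension instead of running a full CALC ALL */
CALC DIM ("Location");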

What does the report script look like?  We just need something to take the data in the cube and dump it to a raw text file.

<ROW ("Time", "Measures")

{ROWREPEAT}
{SUPHEADING}
{SUPMISSINGROWS}
{SUPZEROROWS}
{SUPCOMMAS}
{NOINDENTGEN}
{SUPFEED}
{DECIMAL 2}

<DIMBOTTOM "Time"
<DIMBOTTOM "Measures"
"Location"
!

Most of the commands here should be pretty self explanatory.  If the syntax looks a little different than what you’re used to, it’s probably because you can also jam all of the tokens on one line if you want, like {ROWREPEAT SUPHEADING}, but historically I’ve had them one to a line.  If there were more dimensions that we needed to represent, we’d put them on the <ROW line.  As per the DBAG, we know that the various tokens in between {}’s format the data somehow — we don’t need headings, missing rows, or rows that are zero (although there are certainly cases where you might want to carry zeros over), we don’t want indentation, and numbers will have two decimal places (instead of some long scientific notation). Also, I have opted to repeat row headings (just like you can repeat row headings in Excel) for the sake of simplicity; however, as another optimization tip, this isn’t necessary either — it just makes our lives easier in terms of viewing the text file and loading it to a SQL database or such.

As I mentioned earlier, we didn’t have rollups such as different quarters in our Time dimension.  That’s why we’re able to get away with using <DIMBOTTOM; if we wanted just the Level 0 members (the months, in this case), we could use the appropriate report script command instead.  Lastly, from the Location dimension we are using just the Location member, the parent of the different regions (whereas <DIMBOTTOM “Time” tells Essbase to give us all the members to the bottom of the Time dimension, simply specifying a member or members from the dimension will give us just those members).  “Location” will not actually be written in the output of the report script because we don’t need it — the outline of Target.Bar does not have a Location dimension, since it’s implied that it represents all locations.

The output of the report script will look similar to the following:

January Sales 234.53
January LaborHours 35.23
February Sales 532.35

From here it is a simple matter of designing the load rule to parse the text file.  In this case, the rule file is part of Target.Bar and is called LoadBud.  If we’ve designed the report script ahead of time and run it to get some output, we can then go design the load rule.  When the load rule is done, we should be able to run the script (and schedule it in our job scheduling software) to carry out the task in a consistent and automated manner.

As an advanced topic, there are several performance considerations that can come into play here.  I already alluded to the fact that we may want to tighten up the calc scripts in order to make things faster.  In small cubes this may not be worth the effort (and often isn’t), but as we have more and more data, designing the calc properly (and basing it off of good dense/sparse choices) is critical.  Similarly, the performance of the report script is also subject to the dense/sparse settings, the order of the output, and other configuration settings in the app and database.  In general, what you are always trying to do (performance-wise) is help the Essbase engine do its job better — you do this by making the tasks you want to perform more conducive to the way that Essbase processes data.  In other words, the more closely you can align your data processing to the under-the-hood mechanisms of how Essbase stores and manipulates your data, the better off you’ll be.  Lastly, the load rule on the Target database, and the dense/sparse configuration of the Target database, will impact the data load performance.  You probably will not be able to optimize everything all at once — it’s a balancing act — since a good setting for a report script may result in a suboptimal calculation process.  But don’t let this scare you — try to just get it to work first, and then go in and understand where the bottlenecks may be.

As always, check the DBAG for more information, it has lots of good stuff in it.  And of course, try experimenting on your own, it’s fun, and the harder you have to work for knowledge, the more likely you are to retain it.  Good luck out there!

A quick and dirty substitution variable updater

There are a lot of different ways to update your substitution variables.  You can tweak them by hand with EAS, or use one of several different methods to automate it.  Here is one method that I have been using that seems to hit a relative sweet spot in terms of flexibility, reusability, and effectiveness.

First of all, why substitution variables?  They come in handy because you can leave your Calc and Report scripts alone, and just change the substitution variable to the current day/week/month/year and fire off the job.  You can also use them in load rules.  You would do this if you only want to load in data for a particular year or period, or records that are newer than a certain date, or something similar.

The majority of my substitution variables seem to revolve around different time periods.  Sometimes the level of granularity is just one period or quarter (and the year of the current period, if in a separate Years dimension), and sometimes it’s deeper (daily, hourly, and so on).

Sure, we could change the variables ourselves, manually, but where’s the fun in that?  People that know me know that I have a tendency to automate anything I can, although I still try to have respect for what we have come to know as “keeping an appropriate level of human intervention” in the system.  That being said, I find that automating updates to timing variables is almost always a win.

Many organizations have a fiscal calendar that is quite different than a typical (“Gregorian”) calendar with the months January through December.  Not only can the fiscal calendar be quite different, it can have some weird quirks too.  For example, periods may have only four weeks one year but have five weeks in other years, and on top of that, there is some arcane logic used to calculate which is which (well, it’s not really arcane, it just seems that way).  The point is, though, that we don’t necessarily have the functionality on-hand that converts a calendar date into a fiscal calendar date.

One approach to this problem would be to simply create a data file (or table in a relational database, or even an Excel sheet) that maps a specific calendar date to its equivalent fiscal date counterparts.  This is kind of the “brute-force” approach, but it works, and it’s simple.  You just have to make sure that someone remembers to update the file from year to year.

For example, for the purposes of the date “December 22, 2008” in a cube with separate years, time, and weekday dimensions, I need to know three things: the fiscal year (probably 2008), the fiscal period (we’ll say Period 12 for the sake of simplicity), and the day of the week (day “2”).  Of course, this can be very different across different companies and organizations.  Monday might be the first day of the week or something.  If days are included in the Time dimension, we don’t really need a separate variable here.  So, the concepts are the same but the implementation will look different (as with everything in Essbase, right?).

I want something a bit “cleaner,” though.  And by cleaner, I mean that I want something algorithmic to convert one date to another, not just a look-up table.  Check with the Java folks in your company; if you’re lucky, they may already have a fiscal calendar class that does this for you.  Or it might be Visual Basic, or C++, or something else.  But if someone else did the hard work already, then by all means, don’t reinvent the wheel.

Here is where the approaches to updating variables start to differ.  You could do the whole thing in Java, updating variables with the Java API.  You could have a fancy XML configuration file that is interpreted and tells the system what variables to create, where to put them, and so on.  In keeping with the KISS philosophy, though, I’m going to leave the business logic separate from the variable update mechanism.  Meaning this: in this case I will use just enough program code to generate the variables, then output them to a space-delimited file.  I will then have a separate process that reads the file and updates the Essbase server.  One of the other common approaches here would be to simply output the MaxL or ESSCMD script itself, then run that file.  This works great too; however, I like having “vanilla” files that I can load into other programs if needed (or, say, use in a SQL Server DTS/SSIS job).

At the end of the day, I’ve generated a text file with contents like this:

App1 Db1 CurrentYear 2008
App1 Db1 CurrentPeriod P10
App1 Db1 CurrentWeek Week4
App2 Db1 CurrentFoo Q1

Pretty simple, right?  Note that this simplified approach is only good for setting variables on a specific app/database.  It needs to be modified a little to set global substitution variables (but I’m sure you are enterprising enough to figure this out — check the tech ref for the appropriate MaxL command).
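
For reference, the underlying MaxL statements look roughly like this — a sketch only; the database-level form matches the file format above, and the system-wide form is the one to double-check in the tech ref for your version:

/* set a variable scoped to a single database */
alter database App1.Db1 set variable CurrentYear 2008;

/* set a server-wide (global) variable instead */
alter system set variable CurrentYear 2008;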

At this point we could set up a MaxL script that takes variables on the command line and uses them in its commands to update the corresponding substitution variable, but there is also another way to do this: we can stuff the MaxL statement into our invocation of the MaxL shell itself.  In a Windows batch file, this whole process looks like this:

SET SERVER=essbaseserver
SET USER=essbaseuser
SET PW=essbasepw

REM generates subvar.conf file
REM this is your call to the Java/VB/C/whatever program that
REM updates the variable file
subvarprogram.exe

REM this isn't strictly needed but it makes me feel better
sleep 2

REM This is batch code to read subvar.conf's 4 fields and pipe
REM them into a MaxL session
REM NOTE: this is ONE line of code but may show as multiple in
REM your browser!

FOR /f "eol=; tokens=1,2,3,4 delims=, " %%i in (subvar.conf) do echo alter database %%j.%%k set variable %%i %%l; | essmsh -s %SERVER% -l %USER% %PW% -i

REM You would use the below statement for the first time you need
REM to initialize the variables, but you will use the above statement
REM for updates to them (you can also just create the variables in
REM EAS)

REM FOR /f "eol=; tokens=1,2,3,4 delims=, " %%i in (subvar.conf) do echo alter database %%j.%%k add variable %%i; | essmsh -s %SERVER% -l %USER% %PW% -i

Always remember — there’s more than one way to do it.  And always be mindful of keeping things simple — but not too simple.  Happy holidays, y’all.

Getting everything done with just Essbase and Dodeca

Many people I talk to are amazed that a company can get by with just Essbase and Dodeca.  No Planning deployment, no HFM, none of that other stuff.  The core Essbase analytic engine brings a lot to the table: multi-dimensional analysis and all of the associated functionality for building, automating, and managing cubes (EAS, EDS, EIS, and such, in Essbase 7.x terminology).

So, out of the box with a typical Essbase setup, you can unlock the benefits of multi-dimensional analysis just by virtue of using the Excel add-in.  We all know the Excel add-in has some issues (it’s not exactly aging gracefully), but it’s simple, fast, and many users absolutely love it.  The financial types tend to “get it” right away and go on their merry way.  They also seem pretty sad when they are “forced” to go to Business Objects for some data that isn’t in Essbase.

Providing access to cubes via the add-in gives your users a lot already.  Historically, it seems that many companies have grown organically around the Excel add-in.  They evolve to using VBA automation for retrieval and reporting jobs, creating formal Excel solutions, and of course facilitating various processes with a good old “Lock and Send,” with some calcs thrown in for good measure.  This is all well and good, but for larger endeavors this approach can suffer from maintenance, security, and scalability issues.

Without clearly defined standards, the VB implementations seem to go south and turn into a mess of spaghetti code that breaks at the most inopportune times.  There are different versions of everything floating around.  The deployment method ranges from emailing updated files, to copying them off the network drive, to even throwing them on a USB stick and moving them around when all else fails.  It’s not always bad, but surely there is a way to improve upon this paradigm?

So, let’s step back for a moment.  Our big win in the first place was these useful cubes that slice and dice data in a way that other systems in the enterprise simply can’t compare to.  If we want to expand upon this existing success, what do we put on the wish list?  What are my goals, and what do I want to provide for my users?

I want to give my users something that’s easy to use, has a short learning curve, leverages existing Excel/ad-hoc knowledge, and is cohesive. As for myself, I’m a bit more concerned about the back-end.  I want something that is easy to deploy, easy to update, adapts to changing business needs, and is scalable.  Since I know and love cubes so much, I also want to leverage my existing knowledge of Essbase.

Here’s where Dodeca comes in.  Dodeca complements Essbase functionality (particularly my existing Essbase functionality) very well, by providing everything on top of the cubes that I want to give to my users.  My deployment and update issues are essentially solved since Dodeca uses Microsoft ClickOnce technology — each time the user runs Dodeca on their workstation, a process checks against the current version that has been published to a central server, and updates files if it needs to.  This functionality is dependent on the client workstation having the .NET Framework installed — this is not some obscure third-party library, so if your client workstations don’t already have it, it is not incredibly difficult to get set up.  Essentially in this scenario, we get the benefits of a client-side application (responsive interface, complex widgets), coupled with the distribution benefits that the net provides.  Also, since all content is stored and retrieved from a central server, we just have to update content in one place and our clients will pull it down the next time they use it.

Once a user launches Dodeca, they are presented with a list of Views that they have access to.  Views can be different reports, web pages, and other things.  Dodeca is highly configurable, so my following comments reflect a typical usage scenario, but not necessarily the absolute way that things have to be set up.  I also find it useful to set a user’s default view to a web page that provides status updates and other informational messages.

A common Essbase activity with Excel is to refresh a particular report (or numerous reports, or mountains of reports) at the end of a period.  This means updating the time period in some fashion, and refreshing everything.  This is either done by hand or with a little home brew VBA action.  This functionality lends itself quite well to being implemented in Dodeca.  We can provide the same report in Dodeca by modifying the current Excel sheet a little bit, importing it, and setting a few other things (who has access to it, how the report gets updated, what database it comes out of, etc).  By doing this, we can now provide the same report, with user-selectable dimensional members (such as Time period, but just as easily Location, Department, Scenario, and so on) to anybody in the organization.  If we need to modify the report, we can do it in one place (the server) and seconds later, if a user pulls up the report, they now see the latest version.

So far we’ve just scratched the surface, but as you can see, we’ve addressed our needs and wants fairly well.  Deployment and upgrades are simple (and I am largely abstracted from a sometimes lethargic IT department), and content updates are ridiculously simple.  My users have a cohesive (one window with multiple tabs) environment to do their work in, with all of the functionality of Excel.  As an administrator I have gotten to keep, leverage, and extend all of my existing Essbase infrastructure and functionality.  And best of all, these tools are not mutually exclusive in the least bit: my users can keep using the Excel add-in they love so much.  In fact, while in Dodeca, any sheet they pull up can be instantly sent to Excel just by clicking a button.

It works for me, it gets things done, and it’s a winning combination.  In the coming weeks and months I’ll be expanding more on this.  Happy Holidays!

MaxL tricks and strategies on upgrading a legacy automation system from ESSCMD

The Old

In many companies, there is a lot of code laying around that is, for lack of a better word, “old.” In the case of Essbase-related functionality, this may mean that there are some automation systems with several ESSCMD scripts laying around.  You could rewrite them in MaxL, but where’s the payoff?  There is nothing inherently bad with old code; in fact, you can often argue a strong case to keep it: it tends to reflect many years of tweaks and refinements, is well understood, and generally “just works” — and even when it doesn’t, you have a pretty good idea where it tends to break.

Rewrite it?

That being said, there are some compelling reasons to do an upgrade.  The MaxL interpreter brings a lot to the table that I find incredibly useful.  The existing ESSCMD automation system in question (essentially a collection of batch files, all the ESSCMD scripts with the .aut extension, and some text files) is all hard-coded to particular paths.  Due to using absolute paths with UNC names, and for some other historical reasons, there only exists a production copy of the code (there was perhaps a test version at some point, but due to all of the hard-coded things, the deployment method consisted of doing a massive search and replace operation in a text editor).  Because the system is very mature, stable, and well-understood, it has essentially been “grandfathered” in as a production system (it’s kind of like a “black box” that just works).

The Existing System

The current system performs several different functions across its discrete job files.  There are jobs to update outlines, process new period data, perform a historical rebuild of all cubes (this is currently a six-hour job, and in the future I will show you how to get it down to a small fraction of its original time), and some glue jobs that scurry data between some different cubes and systems.  The databases in this system are set up such that there are about a dozen very similar cubes.  They are modeled on a series of financial pages, but due to differences in the way some of the pages work, it was decided years ago that the best way to model cubes on the pages was to split them up into different sets of cubes, rather than one giant cube.  This design decision has paid off in many ways.  One, it keeps the cubes cleaner and more intuitive; interdimensional irrelevance is also kept to a minimum.  Strategic dense/sparse settings and other outline tricks like dynamic calcs in the Time dimension rollups also keep things pretty tight.

Additionally, since the databases are used during the closing period, not just after (for reporting purposes), new processes can go through pretty quickly and update the cubes to essentially keep them real-time with how the accounting allocations are being worked out.  Keeping the cubes small allows for a lot less down-time (although realistically speaking, even in the middle of a calc, read-access is still pretty reliable).

So, first things first.  Since there currently are no test copies of these “legacy” cubes, we need to get them set up on the test server.  This presents a somewhat ironic development step: using EAS to copy the apps from the production server to the development server.  These cubes are not spun up from EIS metaoutlines, and there is very little compelling business reason to convert them to EIS just for the sake of converting them, so this seems to be the most sensible approach.

Although the outlines are in sync right now between production and development because I just copied them, the purpose of one of the main ESSCMD jobs is to update the outlines on a periodic basis, so this seems like a good place to start.  The purpose of the outline update process is basically to sync the Measures dimension to the latest version of the company’s internal cross-reference.  The other dimensions are essentially static and only need to be updated rarely (e.g., to add a new year member).  The cross-reference is like a master list of which accounts are on which pages and how they aggregate.

On a side note, the cross-reference is part of a larger internal accounting system.  What it lacks in flexibility, it probably more than makes up for with reliability and a solid ROI.  One of the most recognized benefits that Essbase brings to the table in this context is a completely new and useful way of analyzing existing data (not to mention write-back functionality for budgeting and forecasting) that didn’t exist.  Although Business Objects exists within the company too, it is not recognized as being nearly as useful to the internal financial customers as Essbase is.  I think part of this stems from the fact that BO seems to be pitched more to the IT crowd within the organization, and as such, serves mostly as a tool to let them distribute data in some fashion, and call it a day.  Essbase really shines, particularly because it is aligned with the Finance team, and it is customized (by finance team members) to function as a finance tool, versus just shuttling gobs of data from the mainframe to the user.

The cross-reference is parsed out in an Access database in order to massage the data into various text files that will serve as the basis of dimension build load rules for all the cubes.  I know, I know, I’m not a huge Access fan either, but again, the system has been around forever, It Just Works, and I see no compelling reason to upgrade this process to, say, SQL Server.  Because of how many cubes there are, different aliases, different rollups, and all sorts of fun stuff, there are dozens of text files that are used to sync up the outlines.  This has resulted in some pretty gnarly looking ESSCMD scripts.  They also use the BEGININCBUILD and ENDINCBUILD ESSCMD statements, which basically means that the cmd2mxl.exe converter is useless to us.  But no worries — we want to make some more improvements besides just doing a straight code conversion.

In a nutshell, the current automation script logs in (with a nice hard-coded server path, user name, and password), outputs to a fixed location, logs in to each database in sequence, and has a bunch of INCBUILDDIM statements.  ESSCMD, she’s an old girl — faithful, useful, but just not elegant.  You need a cheatsheet to figure out what the invocation parameters all mean.  I’ll spare you the agony of seeing what the old code looks like.

Goals

Here are my goals for the conversion:

  • Convert to MaxL. As I mentioned, MaxL brings a lot of nice things to the table that ESSCMD doesn’t provide, which will enable some of the other goals here.
  • Get databases up and running completely in test — remember: the code isn’t bad because it’s old or “legacy code,” it’s just “bad” because we can’t test it.
  • Be able to use the same scripts in test as in production.  The ability to update the code in one place, test it, then reliably deploy it via a file-copy operation (as opposed to hand-editing the “production” version) is very useful (also made easier because of MaxL).
  • Strategically use variables to simplify the code and make it directory-agnostic.  This will allow us to easily adapt the code to new systems in the future (for example, if we want to consolidate to a different server, even one on a different operating system).
  • And as a tertiary goal: start using a version control system to manage the automation system.  This topic warrants an article all to itself, which I fully intend to write in the future.  In the meantime, if you don’t currently use some type of VCS, all you need to know about the implications of this are that we will have a central repository of the automation code, which can be checked in and checked out.  In the future we’ll be able to look at the revision history of the code.  We can also use the repository to deploy code to the production server.  This means that I will be “checking out” the code to my workstation to do development, and I’m also going to be running the code from my workstation with a local copy of the MaxL interpreter.  This development methodology is made possible in part because in this case, my workstation is Windows, and so are the Essbase servers.

For mostly historical reasons, the old code has been operated and developed on the analytic server itself, and various aspects of the way it was written mean you can't run it from a remote machine.  As such, there are various client/server inconsistencies in the current code (e.g., in some cases we refer to a load rule by its App/DB context, and in some cases we use an absolute file path).  Ensuring that the automation works from a remote workstation will mean that these inconsistencies get cleaned up, and if we choose to move the automation to a separate server in the future, it will be much easier.
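
To make that inconsistency concrete, here is a hypothetical pair of statements showing the two styles side by side.  The database, rules file, and paths below are made up, not code from the actual system; only the server-object versus absolute-path distinction matters.

/* server-object style: the data file and rules file are named as objects within the
   app/database, so this resolves the same whether it runs on the server or a workstation */
import database Foo.Bar data
    from server text data_file 'forecast.txt'
    using server rules_file 'FcstLoad'
    on error write to 'errors.FcstLoad.txt';

/* absolute-path style: this only resolves when the script runs on the Essbase server
   itself, because the paths only exist there */
import database Foo.Bar data
    from local text data_file 'D:\Hyperion\transfer\forecast.txt'
    using local rules_file 'D:\Hyperion\essbase\app\Foo\Bar\FcstLoad.rul'
    on error write to 'D:\Hyperion\automation\errors.FcstLoad.txt';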

First Steps

So, with all that out of the way, let's dig into this conversion!  For the time being we'll just assume that the versioning system back-end is taken care of, and we'll be putting all of our automation files in one folder.  The top of our new MaxL file (RefreshOutlines.msh) looks like this:

msh "conf.msh";
msh "$SERVERSETTINGS";

What is going on here?  We're using some of MaxL's features right away.  Since there will be overlap between many of these automation jobs, we're going to put a bunch of common variables in one file.  These can be things like folder paths, app/database names, and other shared settings.  One of those variables is $SERVERSETTINGS, which lets us configure, within conf.msh, where the server-specific MaxL configuration file lives.  This is one method that allows us to centralize certain passwords and folder paths (where to put spool files, where to put dimension build and data load error files, and so on).  Configuring things this way gives us a lot of flexibility; we only really need to change conf.msh in order to move things around, and everything else builds on top of the core settings.
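
As a rough idea of what I mean, conf.msh might look something like this.  Aside from $SERVERSETTINGS, the variable names and values here are placeholders for illustration, not the real file.

/* conf.msh: common variables shared by all of the automation jobs */

/* points at the server-specific settings file (different on test vs. production) */
SET SERVERSETTINGS = '../config/server.msh';

/* app/database names and shared folder paths used by the individual jobs */
SET TRANSFERAPP = 'Transfer';
SET TRANSFERDB  = 'Data';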

Next we’ll set a process-specific configuration variable which is a folder path.  This allows us to define the source folder for all of the input files for the dimension build datafiles.

SET SRCPATH = "../../Transfer/Data";

Next, we’ll log in:

login $ESSUSER identified by $ESSPW on $ESSSERVER;

These variables are found in the $SERVERSETTINGS file.  Again, this file has the admin user and password in it.  If we needed more granularity (e.g., a dedicated ID with rights to just the databases in question, rather than running all of the automation as the god-user), we could put that in our conf.msh file.  As it is, there aren't any issues on this server with using a master ID for the automation.
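
For illustration, the server-specific file might look something like the following; the values are obviously placeholders, and the file lives wherever conf.msh says it does.

/* server.msh: settings that differ between the test and production servers */
SET ESSSERVER = 'essprod01';
SET ESSUSER   = 'admin';
SET ESSPW     = 'secret';
SET LOGPATH   = 'D:/Hyperion/automation/logs';
SET ERRORPATH = 'D:/Hyperion/automation/errors';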

spool stdout on to "$LOGPATH/spool.stdout.RefreshOutlines.txt";
spool stderr on to "$LOGPATH/spool.stderr.RefreshOutlines.txt";

Now we use the spooling feature of MaxL to divert standard output and error output to two different places.  This is useful to split out because if the error output file has a size greater than zero, it’s a good indicator that we should take a look and see if something isn’t going as we intended.  Notice how we are using a configurable LOGPATH directory.  This is the “global” logpath, but if we wanted it somewhere else we could have just configured it that way in the “local” configuration file.

Now we are ready for the actual “work” in this file.  With dimension builds, this is one of the areas where ESSCMD and MaxL do things a bit differently.  Rather than block everything out with begin/end build sections, we can jam all the dimension builds into one statement.  This particular example has been modified from the original in order to hide the real names and to simplify it a little, but the concept is the same.  The nice thing about just converting the automation system (and not trying to fix other things that aren’t broken — like moving to an RDBMS and EIS) is that we get to keep all the same source files and the same build rules.

import database Foo.Bar dimensions

    from server text data_file "$SRCPATH/tblAcctDeptsNon00.txt"
    using server rules_file 'DeptBld' suppress verification,

    from server text data_file "$SRCPATH/tblDept00Accts.txt"
    using server rules_file 'DeptBld'

    preserve all data
    on error write to "$ERRORPATH/JWJ.dim.Foo.Bar.txt";

In the actual implementation, the import database blocks go on for about another dozen databases.  Finally, we finish up the MaxL file with some pretty boilerplate stuff:

spool off;
logout;
exit;

Note that we are referring to the source text data file in the server context.  Although you are supposed to be able to use App/database naming for this, it seems that on 7.1.x, even if you start the filename with a file separator, it still just looks in the folder of the current database.  I have all of the data files in one place, so I was able to work around this by setting the SRCPATH variable to go up two folders from the current database, then back down into the Transfer\Data folder, which lives under the Essbase app\ folder.  It's sort of a nexus folder: external processes can dump files there because they have read/write access to it, but it's also the name of a dummy Essbase application (Transfer) and database (Data), so from an Essbase-naming perspective we can refer to it and load data from it.  It's a pretty common Essbase trick.  We are also referring to the rules files in the server context, while the output (spool and error) files are written to a local location.  This all means that we can run the automation from some remote computer (for testing/development purposes), and we can also run it on the server itself.  It's nice to “program ahead” for options we may want to explore in the future.
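
To spell out how that relative path resolves, using a made-up install path (the real folder layout will differ, but the up-two-then-back-down shape is the same):

D:\Hyperion\essbase\app\Foo\Bar            <- current database folder, where the lookup starts
D:\Hyperion\essbase\app                    <- "../.." climbs two levels, up to the app\ folder
D:\Hyperion\essbase\app\Transfer\Data      <- ...then back down into the Transfer\Data dump folder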

For the sake of completeness, when we go to turn this into a job on the server, we’ll just use a simple batch file that will look like this:

cd /d %~dp0
essmsh RefreshOutlines.msh

The particular job scheduling software on this server does not run the job from the job's own folder, so we use cd /d %~dp0 as a Windows batch trick to change folders (and drives, if necessary) to the folder of the current file (that's what %~dp0 expands to).  Then we run the job (the folder containing essmsh is in our PATH, so we can run it like any other command).
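
And since the stderr spool file should normally be empty (as mentioned earlier), one easy extension would be to have the batch file flag a non-empty error spool.  This is just a sketch, not part of the actual job; the "logs" folder below is a placeholder for wherever $LOGPATH points, and it assumes the spool file exists because the MaxL session always creates it.

cd /d %~dp0
essmsh RefreshOutlines.msh

rem if the stderr spool is non-empty, say so loudly and return a failing exit code
for %%F in ("logs\spool.stderr.RefreshOutlines.txt") do (
    if %%~zF GTR 0 echo WARNING: RefreshOutlines spooled errors -- check %%F & exit /b 1
)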

All Done

This was one of the trickier files to convert (although I have just shown a small section of the overall script).  Converting the other jobs is a little more straightforward (since this is the only one with dimension build stuff in it), but we’ll employ many of the same concepts with regard to the batch file and the general setup of the MaxL file.

How did we do with our goals?  Well, we converted the file to MaxL, so that was a good start.  We copied the databases over to the test server, which was pretty simple in this case.  Can we use the same scripts in test/dev and production?  Yes.  The server-specific configuration files handle the folder/username/password differences between the servers, and the automation doesn't care (it just loads the settings from whatever file we point it at), so I'd say we addressed this one just fine.  We used MaxL variables to clean things up and simplify, which was a nice improvement over the original ESSCMD scripts.  And lastly, although I didn't really delve into it here, this was all developed on a workstation (my laptop) and checked in to a Subversion repository; further, the automation all runs just fine from a remote client.  If we ever need to move some folders around, change servers, or make some other sort of change, we can probably adapt and test pretty quickly.

All in all, I'd say it was a pretty good effort today.  Happy holidays, y'all.