Cascading Report Summary Sheets in Dodeca

Earlier this week I looked at setting up report cascading with Dodeca. With that foundation in mind, today I’d like to elaborate on it a bit by adding a summary sheet with Dodeca’s built-in summary sheet functionality.

What is a Summary Sheet?

A summary sheet is a sheet in our workbook that is derived from the other sheets generated by a report cascade. For example, let’s say that in my earlier cascade example, I chose two products and two markets: Colas and Diet Colas as my products, and Washington and Oregon as my markets. This would result in four sheets/tabs being generated. What Dodeca allows me to do pretty easily is turn on a summary sheet, which will automatically generate an additional tab (a fifth tab, in this example) containing formulas I designate that sum up the sheets generated by the cascade.

This is really useful for a couple of reasons. Now, in a perfect world, you’d be able to get summary data from the outline itself, for example, a parent rollup in the Product, Market, and Time dimensions. That’s the whole point of Essbase, right? But sometimes we need arbitrary groupings of things. Frequently this is accomplished with alternate hierarchies. But what if we want the user to be able to create a report with a summary value that is neither in the primary hierarchy nor any alternate hierarchies?

One way we can provide this functionality to our users is to turn on a summary sheet. Here’s a look at the summary sheet options, which are located under the Cascade option group on our Essbase Excel view:

dodeca-summary-sheet-01-options

In particular, note that SummarySheetAdd is set to True. Additionally, I have specified a name for the sheet (SummarySheetName = Summary), a SummarySheetPosition (I can have my sheet be the first or the last sheet), and a range of cells to summarize (SummarySheetSummaryRangeAddress). Per best practices, I have created a defined name in my workbook so that I can refer to the range easily. Additionally, if I adjust the rows and columns in my template, I don’t need to come back to these options and worry about updating a hard-coded cell range.

Let’s go take a look at the simple source template:

dodeca-summary-sheet-02-template

As you can see in the template, I have defined a fairly bread-and-butter Dodeca Excel template, where I have tokenized the Year, Product, and Market, meaning that users will be able to make choices from the selectors and build the report with whatever data they want.

Remember the summary range I mentioned earlier? Here it is, highlighted, showing the cell ranges that will be added up on the summary tab:

dodeca-summary-sheet-03-defined-name

Just for completeness, check out the retrieval range that will be refreshed when the view is built:

dodeca-summary-sheet-04-retrieve-range

Remember, Dodeca can do multiple retrievals in a single sheet and from multiple, disparate data sources. In this case we just have a simple retrieve from the classic Sample/Basic app, but this could just as easily be relational data from SQL Server, Oracle, or all of the above.

Lastly, having made selections from multiple dimensions, let’s build the view and check out our new summary sheet. Note the Summary tab that appears first, just as we wanted:

dodeca-summary-sheet-05-built-summary

There we have it: with just a few additional options to configure, I have a nice, dynamic summary sheet in this book that users can easily use. What’s even more interesting is that I gave the user the ability to dynamically sum up data that doesn’t have a member or alternate hierarchy to do the math for me. In this case, it’s the arbitrary sum of Colas and Root Beer for January and February.

Saving Everyone Time

Not to belabor the point, but I think this example shows off some really interesting ways that we provided flexibility to the user and saved the user and administrator time. Without such dynamic summing capabilities, our user might have resorted to pulling data down to Excel and summing things up by hand or with formulas, or otherwise going out of their way to get to data that wasn’t directly in the cube. As a one-off it’s not so bad, but if it’s a common way of seeing the data, then you are potentially saving yourself a minute here and a minute there on a repeated basis.

Our administrator saved time because this might represent an alternate member/group that would be nice to have in the outline but not necessary to have. And in today’s complex metadata management world, that might just represent time not spent marshaling a member request through a byzantine approval process.

In Summary (Pun Intended)

Summary sheets are a very easy-to-use feature in the Dodeca Spreadsheet Management System and can offer your users a convenient way of looking at data that might otherwise be inconvenient (or worse, error prone!) to look at. As with so many other Dodeca features, they can quickly add value to an existing solution and provide your users with that little bit of extra utility that takes your overall solution from good to great.

Cascading Reports with Dodeca

One of Dodeca’s banner features is its incredibly advanced support for cascading reports. In case you’re not familiar, cascading reports are where multiple views or reports are built for all different permutations of given parameters. How this generally relates to the world of reporting is that we may want to choose multiple items across several different dimensions, then generate a report or view for each different combination.

For example, consider an Income Summary report where a user would want to choose a particular location, scenario, and product combination. In Dodeca, these would each be a selector associated with a report. As an administrator, when I am designing my report, I associate these selectors with it. When an end-user goes to build the report, they will simply be presented with choices from each dimension.

Of particular note is that the end-user doesn’t really need to know what a selector is, how it’s configured, where its data comes from, or anything. All they need to know is that they can click on whatever they want and nicely formatted data will be presented to them in a familiar, intuitive format. Incidentally, quite often the format of the report that comes back is their own design – some gnarly spreadsheet that used to take minutes or hours to painstakingly refresh, and now comes back reliably and quickly, every time, with the push of a button.

Let’s take a look at how this plays out for a user, using a simple Income Summary report. First, let’s load the view by clicking on it:

dodeca-cascades-01-unconfigured-view

The report has not run yet; we must choose selections for each selector and then run it.

dodeca-cascades-02-selectors-configured

Now we have selections for each selector: notice that we want Colas, Cola, and Diet Cola from the product dimension, New York, Oregon, and Washington for the Location, and Actual for the scenario. The number of report permutations is 9: 3 products times 3 locations times 1 scenario. Now that we have selections made, we can run the report.

dodeca-cascades-03-built-report

Notice the different tabs at the bottom of the sheet. Let’s collapse the selectors to get a little more real estate for the view and the tabs:

dodeca-cascades-04-tab-view

As you can see, a custom tab name was generated for each sheet based on the selector values for that tab. The first tab is Colas – New York – Actual. This is one of those times when something simple goes a long way for the user: instead of some terse or generic name, the exact description is used. The text of the tab is even configurable (of course), if we need something a little more custom than just the names of the items.

Before I delve into the administrator configuration of cascading, I want to take a brief moment to wax technical on some of the power and flexibility we’re seeing.

Dodeca reports aren’t limited to a single data source, a single type of data source, or even a single version of data sources. Dodeca’s flexible and robust spreadsheet paradigm allows it to arbitrarily combine, use, and build data from any number of data sources, even on a single sheet. For example, we could build a view that pulls some data from a relational database, and then uses it to build several retrievals from multiple Essbase cubes, all on the same sheet. So consider this: we can give our users the ability to build such a report, with disparate and heterogeneous data sources, and have the report be built dynamically for every given permutation of selections – selections which, by the way, may themselves come from multiple data sources. The selections in the above screenshots all happened to come from the dimensionality of an Essbase cube, but they could have just as easily come from a dynamic SQL query.

Under the Hood with Cascade Configuration

Now that I’ve shown an example of a cascading report, let’s talk a little about how it’s set up under the hood by the Dodeca administrator or report builder. Of the numerous properties we can use to configure and fine-tune the behavior of a view, there is a dedicated section for the cascading options:

dodeca-cascades-05-cascade-configuration

Generally we are okay with the defaults for the cascade options so we can leave them alone. The main thing we need to configure is to tell Dodeca which selectors will participate in the cascade. We can further configure options for Cascade Sources:

dodeca-cascades-06-cascade-sources-configuration

That’s it – that’s all there is to it. Keep in mind that in Dodeca, selectors are highly reusable components that are used across multiple views/reports. So we didn’t really have to do any special one-off setup for these selectors, just grab and go.

Stay Tuned

Stay tuned for more this week as I do a daily blog post about a particular feature of the Applied OLAP software suite. More to come!

Kscope16 is almost here!

Here we are again, just less than a week before Kscope16. I say this every year, but this year’s Kscope promises to be the best yet, and I’m starting to get really pumped. There are a lot of exciting things going on in the greater Essbase world right now, and even my own little corner of this world is quite exciting (to me at least), including such fun things as Dodeca, Drillbridge, the Outline Extractor, and the PBJ library.

All of these items will be present at the conference, in one form or another. Dodeca will be featured in several presentations this year as customers talk about their implementation successes. Drillbridge has its own presentation again this year, given by yours truly, the Outline Extractor has a session put on by Tim, and the PBJ library will make up about half of a presentation with Cameron talking about on-prem versus PBCS.

That all said, this week I’m going to blog daily about some of the cool things in the world of Dodeca, Drillbridge, the Outline Extractor, and more. Stay tuned for a busy week!

Towards Spreadsheet Management

I follow quite a few Hyperion-related blogs. Years ago there weren’t too many of them, but now I’m quite pleased to be able to follow dozens of them with my RSS reader of choice. The other day I read an article posted by Edward Roske titled “7 Signs Your EPM Is Lagging Behind Your Competition”. Edward has been working with Hyperion for a long time and runs one of the more well-known consulting firms, so he has seen quite a bit, and his thoughts are insightful. While I was reading through the article I couldn’t help but read it with a “Dodeca-colored lens”, if you will.

For example, one of the signs is that strategy is planned verbally or in spreadsheets. The interesting thing here is that the spreadsheet modeling paradigm itself is robust and essential. Where things go sideways is with how the files themselves are managed (and mismanaged).

For example, consider a typical analyst or power user that creates a spreadsheet: some pulls from Essbase, perhaps some relational data pasted in, some formulas, multiple tabs, formatting, and all that fun stuff. So far so good (well, not really, but let’s say it is). Now they email it out. Some feedback comes in from the CFO. Now the sheet Profitability Q3 2016 becomes something like Profitability Q3 2016 – Revised. More feedback comes in. More meetings. Now it’s Profitability Q3 2016 (2). Teresa down the hall needs a copy, so the analyst copies it to the LAN. Teresa makes some changes, but the analyst has a lock on the file, so she saves Profitability Q3 2016 (2) – Teresa. I’ve seen some pretty heinous file management in my time.

This is an all too common scenario in the world of spreadsheets. Things get ugly quickly. And this is to say nothing of links across tabs and sheets and a host of other bad practices that make the situation even more error prone and hard to manage.

Another sign from Edward’s blog is that Excel is the key enabling technology in your FP&A department. I agree completely with the sentiment here. Note that Excel is specifically mentioned – not the spreadsheet paradigm itself. Excel as the enabler of FP&A is yesterday; the flexibility of spreadsheets combined with the power of Essbase is tomorrow.

Some of Edward’s other points, such as there is only one version of the budget and budgets favor precision over timeliness, are also spot on.

Enter Dodeca

To reiterate from the beginning of this post, I couldn’t help but read this blog article while thinking about Dodeca. It’s because Dodeca takes all of the best things about spreadsheets (their power, their expressiveness, their familiarity to so many finance users), leverages the power of Essbase and data from relational databases, and marries it all up in one cohesive interface. It’s saving numerous people and companies a lot of time: time that is not spent laboriously refreshing report decks, time that is not spent copying files around, time that is not spent emailing files, time that is not spent posting things to SharePoint or the LAN, and even a fair bit of time on end-user training.

I’m really looking forward to this year’s Kscope in Chicago. There are multiple presentations by Applied OLAP customers on how they built solutions with Dodeca while increasing productivity and reducing risk. And of course, we at Applied OLAP will have a booth. Please swing by and say hello!

Dependent Selectors in Dodeca

The dependent selectors feature in Dodeca is one of my favorite “simple” features. I say “simple” because the implementation is very straightforward. I like it so much because the crux of this feature is about getting users to their data and reports faster, with the most streamlined UI possible. Not to get on my architectural high horse too much, but I believe that solutions – whether it’s a cube, a view, software, an ODI solution, or whatnot – should be elegant, maintainable, and intuitive. Perhaps in another life I was a UI or UX designer. Advanced software such as Dodeca and its more niche cousin Drillbridge strives to make users and administrators alike as productive as possible.

To that end, dependent selectors are a way to provide an enhanced report/view build experience. In a typical Dodeca report, you select values from one or more selectors: choose a time period, choose a year, choose a location, build the report. You might make multiple selections, such as multiple time periods – it all just depends on how the selector list has been configured. A dependent selector is one whose available values change depending on one or more of the other selectors.

As a straightforward example, consider a view where the user must select a state and a city to build a report. The available states to select are, say, the 50 states in the US, and the available cities may span all of these states. But let’s say in this example that when we choose a state, we want to choose from a list of cities in just that state using another selector. By doing this we get people to their data faster and improve the user experience.

This can be accomplished easily with a dependent selector. Let me show you how, starting with some simple tables modeling the states and the cities.

Here we have the table STATES for states and STATE_CITIES for cities. Also note that even though this is a bit of a “quick and dirty” example, I have nevertheless given the STATES table a primary key of STATE_ID (the two-letter postal state code), and a STATE_NAME column with the “nice name” of the state. This will come into play a little bit later when I talk about selectors and the difference between an ID and a value. Next, over in the STATE_CITIES table, we have a compound primary key made up of the STATE_ID (a foreign key into our STATES table) and the city name. These data constraints ensure that every state/city combination is unique, and that every city is associated with a state that exists in our STATES table.

states-and-cities-sql-server-tables
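
As a rough sketch, the DDL for these two tables might look something like the following (the exact data types and the CITY_NAME column name are my assumptions; only STATE_ID and STATE_NAME come from the description above):

CREATE TABLE STATES (
    STATE_ID   CHAR(2)     NOT NULL PRIMARY KEY, -- two-letter postal code
    STATE_NAME VARCHAR(50) NOT NULL              -- the "nice name" of the state
);

CREATE TABLE STATE_CITIES (
    STATE_ID  CHAR(2)      NOT NULL REFERENCES STATES (STATE_ID), -- FK into STATES
    CITY_NAME VARCHAR(100) NOT NULL,
    PRIMARY KEY (STATE_ID, CITY_NAME) -- every state/city combination is unique
);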

The tables we have designed so far will be used as the basis of a Dodeca selector. Dodeca is very flexible with respect to how selectors are populated; two of the primary methods are populating them from Essbase dimensions or generating them with SQL. In this case we’ll use the SQL Passthrough DataSet editor to create a query that pulls back the list of states from this table:

state-sql-selector
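
The query itself is simple; something along the lines of this sketch, using the column names from above:

SELECT STATE_ID, STATE_NAME
FROM STATES
ORDER BY STATE_NAME;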

We can quickly and easily test the selector by using the Test Data Set button in the interface, showing that the query does indeed work (note that I only populated the table with a few states):

state-test-data-in-dodeca

Now things get a little more interesting when we go to define the query for our city selector. We write a normal query, but in place of a value for the WHERE STATE_ID clause, we put in our states token (written as [T.States] in this case). Come runtime, the value of [T.States] will be replaced with the value from our state selector, based on what the user has highlighted.

state-city-selector
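
As a sketch, the query might look like this, with the token embedded where a literal value would normally go (CITY_NAME is my assumed column name):

SELECT CITY_NAME
FROM STATE_CITIES
WHERE STATE_ID = '[T.States]'
ORDER BY CITY_NAME;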

One of the newer features in Dodeca is the ability to save and edit test tokens so we can test the query out immediately instead of having to build a view first. Pressing the Edit TestTokens button brings up the following editor:

state-city-selector-test-token

Let’s plug in a value of WA for [T.States] so we can test out which cities come back when the state is Washington. Note that my token’s value corresponds to the ID of the state in this case. I’ve configured the selector to use an ID value here, but I could have used the state name if I wanted to. Wherever possible I prefer to use the “core” or identifying value of a piece of data. Let’s test the query to verify that it works and the correct data comes back:

state-city-test-results
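
For clarity, here’s what the query looks like after token substitution – assuming simple string replacement of [T.States] with the test value:

SELECT CITY_NAME
FROM STATE_CITIES
WHERE STATE_ID = 'WA'
ORDER BY CITY_NAME;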

For good measure let’s swing by the Selectors editor and ensure that we have created selectors for [T.States] and [T.State_Cities]:

selector-list-with-states

Now let’s look at the Selector List that corresponds to our State from the Selectors configuration. In particular note that the type of the selector list is a SQLPassthroughDataSet where we have chosen the associated data set ID:

state-selector-list-configuration

Now let’s go over to the city selector list, where things get a little more interesting. Here we can choose the DependentOnSelectorIDs value and tell this selector which other selectors will influence it. We simply choose States:

state-city-selector-list-configuration

With the selectors all configured, we can now add them to our view as we normally would. For those of you not familiar with Dodeca, a view is like a report template, and when we administrators design the view for users, we can control an incredibly large amount of its behavior very easily. Each view can have its selectors configured, which will influence the options that a user gets to select from when configuring and building the view. So by going to the Selectors configuration for a particular view/report, we get to choose which selectors should be associated with the view. In this case, it’s the State and State_Cities selectors. We could easily have more (and usually would, such as time periods, years, and other things).

view-selector-configuration

With that all set, let’s commit our changes and go take a look at the view. Note how I have selected Washington from the State selector in the top right of the window. In real time, when I select Washington, the City selector refreshes instantly, showing just those cities from Washington:

view-selector-state-and-city-lists

Since I have Auto Build turned on for this report, once I have selected a city, the report refreshes using my selection:

view-built-with-dep-city

The report shown is a minimal one for testing purposes that simply uses the city as part of a simple retrieve, but an arbitrarily complex report could also have been designed and used.

Selector ID vs. Display Value

I mentioned earlier that while we were showing the full name of the states, we were actually using the state code under the hood, so to speak. This is somewhat analogous to member names and aliases in Essbase, where we have a core name (one that is often a little less “friendly” to the user) and a display name that is friendlier. As a configurable option, Dodeca lets you use one value for display and an internal value for the ID – if you want. In this case it came in handy for the SQL because the query I wanted to write to fetch the cities was simplest if it got to use the state ID rather than the name. Without this feature I might have had to show terse IDs to the user or rewrite the SQL query to join against another table so I could dereference the full state name. I think the design win here is that we got to show the “nice” name to the user without making any sacrifices in the design (in terms of enduring additional complexity).
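
For illustration, had the selector passed the display name instead of the ID, the city query would have needed a join along these lines (a sketch using the assumed column names from earlier):

SELECT c.CITY_NAME
FROM STATE_CITIES c
JOIN STATES s ON s.STATE_ID = c.STATE_ID
WHERE s.STATE_NAME = '[T.States]' -- display name instead of the two-letter ID
ORDER BY c.CITY_NAME;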

Drillbridge acquired by Applied OLAP

The last few months have been rather hectic (as evidenced by the lack of blog posts, unfortunately), but I am happy to report some exciting news! There are two main things of note:

First, I am now a full-time employee of Applied OLAP. Applied OLAP is one of the biggest and best names in the EPM space, having worked with Essbase for many, many years. Applied OLAP develops and supports the Dodeca Spreadsheet Management System (much more to come on this!). Back in my Kroger days I helped roll out one of the earliest Dodeca implementations to what became a very enthusiastic user base and incredibly happy executives. Many of you are just as, if not more, familiar with the Outline Extractor – software that is also maintained by Applied OLAP as a free tool for the benefit of the community.

Second, Applied OLAP has acquired Drillbridge in its entirety. The free edition of Drillbridge will now be available alongside the Essbase Outline Extractor and other tools that many people are so familiar with. Enterprise licensees will now have their software supported by Applied OLAP. Drillbridge Enterprise has an exciting roadmap and will continue to exist as a standalone product.

That’s the short version for now, many more exciting things to come in the days ahead!

Launch ClickOnce apps [such as Dodeca] through Firefox

I’m a Firefox man.  I’ve been a fan since version 1.5, liked version 2 (even though it was a memory pig), and I am quite happy with 3.0.  I am eagerly looking forward to some of the memory optimization and performance improvements that are coming down the pipe with 3.1.  I even look to Firefox as an example of subtly evolving a user interface and polishing it as time goes on — I try to implement some of the same refinements in my own projects.

I also use Dodeca extensively as a front-end to much of my Essbase functionality.  Since Dodeca is a ClickOnce app (.NET technology), it has been necessary to launch it with Internet Explorer.  Of course, I can invoke it directly with a shortcut on my desktop, but frequently I find myself using a link to launch it since it’s just easier.  Sadly, this does not work out of the box with Firefox, because Firefox just sees the .application file and doesn’t know what to do with it.  Some of my users have Firefox as their default web browser and have run into some slight issues as well.

Well, unbeknownst to me, there has been a Firefox ClickOnce add-in for some time.  One of the things I love and use in Firefox is its extension capabilities — I typically have the Foxmarks, Delicious, Greasemonkey, Web Developer, and Flashblock extensions installed as a minimum (I used to use Sage as well, but I find myself in Google Reader now).  So I bounced over to the Mozilla add-ons page, clicked the button to install FFClickOnce, restarted my web browser, punched in my Dodeca URL, and without a hitch, I was prompted to run Dodeca.  Not that I have anything against Internet Explorer, but now I can do just about everything in Firefox and have less reason to fire up IE (I’m looking at you, Windows Update…).  Sometimes it’s the little things in life!

MaxL Essbase automation patterns: moving data from one cube to another

A very common task for Essbase automation is to move data from one cube to another.  There are a number of reasons you may want or need to do this.  One, you may have a cube that has detailed data and another cube with higher level data, and you want to move the sums or other calculations from one to the other.  You may accept budget inputs in one cube but need to push them over to another cube.  You may need to move data from a “current year” cube to a “prior year” cube (a data export or cube copy may be more appropriate, but that’s another topic).  In any case, there are many reasons.

For the purposes of our discussion, the Source cube is the cube with the data already in it, and the Target cube is the cube that is to be loaded with data from the source cube.  There is a simple automation strategy at the heart of all these tasks:

  1. Calculate the source cube (if needed)
  2. Run a Report script on the source cube, outputting to a file
  3. Load the output from the report script to the target cube with a load rule
  4. Calculate the target cube

This can be done by hand, of course (through EAS), or you can do what the rest of us lazy cube monkeys do, and automate it.  First of all, let’s take a look at a hypothetical setup:

We will have an application/database called Source.Foo which represents our source cube.  It will have dimensions and members as follows:

  • Location: North, East, South, West
  • Time: January, February, …, November, December
  • Measures: Sales, LaborHours, LaborWages

As you can see, this is a very simple outline.  For the sake of simplicity I have not included any rollups, like having “Q1/1st Quarter” for January, February, and March.  For our purposes, the target cube, Target.Bar, has an outline as follows:

  • Scenario: Actual, Budget, Forecast
  • Time: January, February, …, November, December
  • Measures: Sales, LaborHours, LaborWages

These outlines are similar but different.  This cube has a Scenario dimension with Actual, Budget, and Forecast (whereas in the source cube, since it is for budgeting only, everything is assumed to be Budget).  Also note that Target.Bar does not have a Location dimension; instead, this cube only concerns itself with totals for all regions.  Looking back at our original thoughts on automation, in order for us to move the data from Source.Foo to Target.Bar, we need to calculate it (to roll up all of the data for the Locations), run a report script that will output the data how we need it for Target.Bar, use a load rule on Target.Bar to load the data, and then calculate Target.Bar.  Of course, business needs will affect the exact implementation of this operation, such as the timing, the calculation to use, and other complexities that may arise.  You may actually have two cubes that don’t have a lot in common (dimensionally speaking), in which case your load rule might need to really jump through some hoops.

We’ll keep this example really simple though.  We’ll also assume that the automation is being run from a Windows server, so we have a batch file to kick things off:

cd /d %~dp0
essmsh ExportAndLoadBudgetData.msh

I use the cd /d %~dp0 on some of my systems as a shortcut to switch to the directory containing the script, since the particular automation tool installed does not set the working directory to the location of the file.  Then we invoke the MaxL shell (essmsh, which is in the PATH) and run ExportAndLoadBudgetData.msh.  I enjoy giving my automation files unnecessarily long filenames.  It makes me feel smarter.

As you may have seen from an earlier post, I like to modularize my MaxL scripts to hide/centralize configuration settings, but again, for the sake of simplicity, this example will forgo that.  Here is what ExportAndLoadBudgetData.msh could look like:

/* Copies data from the Budget cube (Source.Foo) to the Budget Scenario
   of Target.Bar */
/* your very standard login sequence here */
login AdminUser identified by AdminPw on EssbaseServer;
/* at this point you may want to turn spooling on (omitted here) */

/* disable connections to the application -- this is optional */
alter application Source disable connects;

/* PrepExp is a Calc script that lives in Source.Foo and for the purposes
   of this example, all it does is makes sure that the aggregations that are
   to be exported in the following report script are ready. This may not be
   necessary and it may be as simple as a CALC ALL; */

execute calculation Source.Foo.PrepExp;

/* Budget is the name of the report script that runs on Source.Foo and outputs a
   text file that is to be read by Target.Bar's LoadBud rules file */

export database Source.Foo
    using report_file 'Budget'
    to data_file 'foo.txt';

/* enable connections, if they were disabled above */
alter application Source enable connects;
/* again, technically this is optional but you'll probably want it */
alter application Target disable connects;

/* this may not be necessary but the purpose of the script is to clear out
   the budget data, under the assumption that we are completely reloading the
   data that is contained in the report script output */

execute calculation Target.Bar.ClearBud;

/* now we import the data from the foo.txt file created earlier. Errors
   (rejected records) will be sent to errors.txt */

import database Target.Bar data
    from data_file 'foo.txt'
    using rules_file 'LoadBud'
    on error write to 'errors.txt';

/* calculate the new data (this may not be necessary depending on what the
   input format is, but in this example it's necessary) */

execute calculation Target.Bar.CalcAll;

/* enable connections if disabled earlier */
alter application Target enable connects;
/* boilerplate cleanup. Turn off spooling if turned on earlier */

logoff;
exit;

At this point, if we don’t have them already, we would need to go design the aggregation calc script for Source.Foo (PrepExp.csc), the report script for Source.Foo (Budget.rep), the clearing calc script on Target.Bar (ClearBud.csc), the load rule on Target.Bar (LoadBud.rul), and the final rollup calc script (CalcAll.csc).  Some of these may be omitted if they are not necessary for the particular process (you may opt to use the default calc script, may not need some of the aggregations, etc.).

For our purposes we will just say that the PrepExp and CalcAll calc scripts are just a CALC ALL or the default calc.  You may want a “tighter” calc script, that is, you may want to design the calc script to run faster by way of helping Essbase understand what you need to calculate and in what order.

What does the report script look like?  We just need something to take the data in the cube and dump it to a raw text file.

<ROW ("Time", "Measures")

{ROWREPEAT}
{SUPHEADING}
{SUPMISSINGROWS}
{SUPZEROROWS}
{SUPCOMMAS}
{NOINDENTGEN}
{SUPFEED}
{DECIMAL 2}

<DIMBOTTOM "Time"
<DIMBOTTOM "Measures"
"Location"
!

Most of the commands here should be pretty self-explanatory.  If the syntax looks a little different than you’re used to, it’s probably because you can also jam all of the tokens into one line if you want, like {ROWREPEAT SUPHEADING}, but historically I’ve had them one to a line.  If there were more dimensions that we needed to represent, we’d put them on the <ROW line.  As per the DBAG, we know that the various tokens in between {}’s format the data somehow — we don’t need headings, missing rows, rows that are zero (although there are certainly cases where you might want to carry zeros over), or indentation, and numbers will have two decimal places (instead of some long scientific notation).  Also, I have opted to repeat row headings (just like you can repeat row headings in Excel) for the sake of simplicity; however, as another optimization tip, this isn’t necessary either — it just makes our lives easier in terms of viewing the text file and loading it to a SQL database or such.

As I mentioned earlier, we didn’t have rollups such as different quarters in our Time dimension.  That’s why we’re able to get away with using <DIMBOTTOM, but if we wanted just the Level 0 members (the months, in this case), we could use the appropriate report script command instead.  Lastly, from the Location dimension we are using just the Location member (whereas <DIMBOTTOM “Time” tells Essbase to give us all the members down to the bottom of the Time dimension, simply specifying a member or members from the dimension will give us just those members), the parent to the different regions.  “Location” will not actually be written in the output of the report script because we don’t need it — the outline of Target.Bar does not have a Location dimension, since it’s implied that it represents all locations.

The output of the report script will look similar to the following:

January Sales 234.53
January LaborHours 35.23
February Sales 532.35

From here it is a simple matter of designing the load rule to parse the text file.  In this case, the rule file is part of Target.Bar and is called LoadBud.  If we’ve designed the report script ahead of time and run it to get some output, we can then go design the load rule.  When the load rule is done, we should be able to run the script (and schedule it in our job scheduling software) to carry out the task in a consistent and automated manner.

As an advanced topic, there are several performance considerations that can come into play here.  I already alluded to the fact that we may want to tighten up the calc scripts in order to make things faster.  In small cubes this may not be worth the effort (and often isn’t), but as we have more and more data, designing the calc properly (and basing it off of good dense/sparse choices) is critical.  Similarly, the performance of the report script is also subject to the dense/sparse settings, the order of the output, and other configuration settings in the app and database.  In general, what you are always trying to do (performance-wise) is to help the Essbase engine do its job better — you do this by making the tasks you want to perform more conducive to the way that Essbase processes data.  In other words, the more closely you can align your data processing to the under-the-hood mechanisms of how Essbase stores and manipulates your data, the better off you’ll be.  Lastly, the load rule on the Target database, and the dense/sparse configurations of the Target database, will impact the data load performance.  You may not, and probably will not, be able to optimize everything all at once — it’s a balancing act — since a good setting for a report script may result in a suboptimal calculation process.  But don’t let this scare you — try to just get it to work first, and then go in and understand where the bottlenecks may be.

As always, check the DBAG for more information, it has lots of good stuff in it.  And of course, try experimenting on your own, it’s fun, and the harder you have to work for knowledge, the more likely you are to retain it.  Good luck out there!

Getting everything done with just Essbase and Dodeca

Many people I talk to are amazed that a company can get by with just Essbase and Dodeca.  No Planning deployment, no HFM, none of that other stuff.  The core Essbase analytic engine brings a lot to the table: multi-dimensional analysis and all of the associated functionality for building, automating, and managing cubes (EAS, EDS, EIS, and such, in Essbase 7.x terminology).

So, out of the box with a typical Essbase setup, you can unlock the benefits of multi-dimensional analysis just by virtue of using the Excel add-in.  We all know the Excel add-in has some issues (it’s not exactly aging gracefully), but it’s simple, fast, and many users absolutely love it.  The financial types tend to “get it” right away and go on their merry way.  They also seem pretty sad when they are “forced” to go to Business Objects for some data that isn’t in Essbase.

Providing access to cubes via the add-in gives your users a lot already.  Historically, it seems that many companies have grown organically around the Excel add-in.  They evolve to using VBA automation for retrieval and reporting jobs, creating formal Excel solutions, and of course facilitating various processes with a good old “Lock and Send,” with some calcs thrown in for good measure.  This is all well and good, but for larger endeavors this approach can suffer from maintenance, security, and scalability issues.

Without clearly defined standards, the VB implementations seem to go south and turn into a mess of spaghetti code that breaks at the most inopportune times.  There are different versions of everything floating around.  The deployment method ranges from emailing updated files, to copying them off the network drive, to even throwing them on a USB stick and moving them around when all else fails.  It’s not always bad, but surely there is a way to improve upon this paradigm?

So, let’s step back for a moment.  Our big win in the first place was these useful cubes that slice and dice data in a way that other systems in the enterprise simply can’t compare to.  If we want to expand upon this existing success, what do we put on the wish list?  What are my goals, and what do I want to provide for my users?

I want to give my users something that’s easy to use, has a short learning curve, leverages existing Excel/ad-hoc knowledge, and is cohesive.  As for myself, I’m a bit more concerned about the back-end.  I want something that is easy to deploy, easy to update, adapts to changing business needs, and is scalable.  Since I know and love cubes so much, I also want to leverage my existing knowledge of Essbase.

Here’s where Dodeca comes in.  Dodeca complements Essbase functionality (particularly my existing Essbase functionality) very well, by providing everything on top of the cubes that I want to give to my users.  My deployment and update issues are essentially solved, since Dodeca uses Microsoft ClickOnce technology — each time the user runs Dodeca on their workstation, a process checks against the current version that has been published to a central server and updates files if it needs to.  This functionality is dependent on the client workstation having the .NET Framework installed — this is not some obscure third-party library, so if your client workstations don’t already have it, it is not incredibly difficult to get set up.  Essentially, in this scenario we get the benefits of a client-side application (responsive interface, complex widgets) coupled with the distribution benefits that the net provides.  Also, since all content is stored and retrieved from a central server, we just have to update content in one place and our clients will pull it down the next time they use it.

Once a user launches Dodeca, they are presented with a list of Views that they have access to.  Views can be different reports, web pages, and other things.  Dodeca is highly configurable, so my following comments will reflect a typical usage scenario, but not necessarily the absolute way that things have to be setup.  I also find it useful to set a user’s default view to a web page that provides status updates and other informational messages.

A common Essbase activity with Excel is to refresh a particular report (or numerous reports, or mountains of reports) at the end of a period.  This means updating the time period in some fashion and refreshing everything.  This is either done by hand or with a little home-brew VBA action.  This functionality lends itself quite well to being implemented in Dodeca.  We can provide the same report in Dodeca by modifying the current Excel sheet a little bit, importing it, and setting a few other things (who has access to it, how the report gets updated, what database it comes out of, etc.).  By doing this, we can now provide the same report, with user-selectable dimensional members (such as Time period, but just as easily Location, Department, Scenario, and so on) to anybody in the organization.  If we need to modify the report, we can do it in one place (the server) and seconds later, if a user pulls up the report, they now see the latest version.

So far we’ve just scratched the surface, but as you can see, we’ve addressed our needs and wants fairly well.  Deployment and upgrades are simple (and I am largely abstracted from a sometimes lethargic IT department), and content updates are ridiculously simple.  My users have a cohesive (one window with multiple tabs) environment to do their work in, with all of the functionality of Excel.  As an administrator I have gotten to keep, leverage, and extend all of my existing Essbase infrastructure and functionality.  And best of all, these tools are not mutually exclusive in the least bit: my users can keep using the Excel add-in they love so much.  In fact, while in Dodeca, any sheet they pull up can be instantly sent to Excel just by clicking a button.

It works for me, it gets things done, and it’s a winning combination.  In the coming weeks and months I’ll be expanding more on this.  Happy Holidays!