Drillable Columns in Drillbridge

One of the more frequent requests from Drillbridge users is drillable columns. That is, they’d like to drill from a column in a Drillbridge report by clicking on the column value and pulling up another report, using the point-of-view (POV) of the original cell. This is most often used in a few different situations:

  • Drill into yet more detail for a given transaction
  • Drill to a PDF or image related to the current item
  • Drill to a custom URL, such as a SharePoint URL

Drillbridge has supported this feature for quite some time, and it’s pretty easy to set up. Today I’m going to walk through the steps for setting up drillable columns in the latest version of Drillbridge Enterprise.

First of all, let’s take a look at a generic Drillbridge report whose query converts the member from the Years dimension from something like “FY12” into the value “2012”:

The source report query definition
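In text form, the gist of that query template is something like the following sketch. To be clear, the table and column names are invented for illustration and your expression will vary – but the idea is that the expression strips the “FY” prefix and prepends “20”:

SELECT * FROM TRANSACTIONS
WHERE YEAR = {{
    "name" : "Years",
    "expression" : "'20' + #removeStarting('FY', #Years)",
    "sampleValue" : "FY12"
}}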

Note that for readability I have broken the token up across several lines. This works just fine in the latest versions too and is a nice way to increase readability. Let’s see what this query generates by going over to the Test screen:

Testing the source query

And building the report:

The source report results

Everything looks good so far – note that the drillable column hasn’t been set up yet, so we shouldn’t expect any special links to appear.

Now let’s create a new report definition. This one will be a special report type known as a “Forwarding Link Report”. If you think about a normal Drillbridge report, it builds a SQL query by taking the POV and plugging it into a template. A Forwarding Link Report works in much the same way, except instead of building a SQL query, it builds a URL and forwards the browser to it. In this case I’m just going to build a simple link over to Wikipedia since they have a nice, simple URL structure that is good for illustrative purposes.

Here’s our link definition:

The link template definition
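In text form, a forwarding link template is just a URL with a token embedded where the POV value should land, along these lines (the dimension name here is purely illustrative):

https://en.wikipedia.org/wiki/{{
    "name" : "Market",
    "expression" : "#Market"
}}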

Note the label above the text box for editing the link definition. Instead of saying it’s a SQL query, it refers to it specifically as the Link Template. This is because the report type itself has been changed (which is needed to make this type of report work). To change the report type we go to the Advanced menu and then click on Class, bringing up this editor:

The report class editor

Note that in this case, the types available are JDBC drill-through report (the typical report type), Forwarding Link Report, or MDX report (yes, Drillbridge can execute MDX queries too!). The Drillbridge report system is completely modular and developers can add their own report types using a simple Java plugin.

Custom report plugins can also easily “advertise” their available options to Drillbridge so that Drillbridge can provide a nice UI for configuring them. For example, in the case of the forwarding link report, there is just one option available (as opposed to the 20 or so options on a typical report), which lets us choose the HTTP “verb” to use:

The forwarding link report options

Let’s head over to the Test tab for this new link report and see what happens:

Testing the link report

Let’s build it and see what happens:

The browser redirected to Wikipedia

As expected, the given input was used to build a link and our browser was redirected there immediately. This is one of the nice things about doing drill-through in a web browser: you can do pretty much anything you want.

Now, what we’d like to do is make one of the columns in our first report drill over to the results of this new link report we just set up. To do that, let’s go back to the original report, then go to the Advanced menu, then Drillable Columns:

The drillable columns list

Then click on New Drillable Column, which brings up the editor:

Editing a drillable column

Here we have a chance to give this drillable column a name, tell the report which column should be drillable, and then choose the target report. We can put in anything we want for the name. What’s important is the column index and the target. The column indices start at 0 for the first column and count up from there. In this case we want the third column to be a link, so this is column index 2 (0 = first column, 1 = second column, 2 = third column, etc.). Then we select our new report as the target report, which I named “Wikipedia”.

Let’s go and test this source report again and see that the third column has links now:

Viewing the drillable column links

And sure enough, clicking on one of the links runs the Wikipedia report – which doesn’t bring up a report at all, per se; it just executes, using the POV to build a URL and automatically redirect the web browser. That’s it!

Report POV vs. row POV

It’s important to consider what the POV handed to our target report will be. The POV is actually the combination of two things, out of necessity: the POV of the source report merged with the POV of the row itself. Whereas the normal report POV might be something like Scenario = Actual, Year = FY12, Time = June, the POV handed to the target report will be all of those things, plus keys and values from the current row. For example, if the original report only considered a given account but showed details for transactions, and there was a column called TRANSACTION_NUM, then the POV would also include a key named TRANSACTION_NUM whose value is whatever that row contains. This is critical for users that want to link to something or execute another query using data from the row.
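To make that concrete (the transaction number here is made up), the merged POV handed to the target report might look like this:

Scenario = Actual
Year = FY12
Time = June
TRANSACTION_NUM = 10482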

Drillable Columns is one of my favorite features of Drillbridge because it’s simple yet powerful, but more importantly, it saves people time, and that’s what Drillbridge is all about.

Dependent Selectors in Dodeca

The dependent selectors feature in Dodeca is one of my favorite “simple” features. I say “simple” because the implementation is very straightforward. I like it so much because the crux of this feature is about getting users to their data and reports faster, with the most streamlined UI possible. Not to get on my architectural high horse too much, but I believe that solutions – whether it’s a cube, a view, software, an ODI solution, or whatnot – should be elegant, maintainable, and intuitive. Perhaps in another life I was a UI or UX designer. Advanced software such as Dodeca and its more niche cousin Drillbridge strive to make users and administrators alike as productive as possible.

To that end, dependent selectors are a way to provide an enhanced report/view build experience. In a typical Dodeca report, you select values from one or more selectors: choose a time period, choose a year, choose a location, build the report. You might make multiple selections, such as multiple time periods – it all just depends on how the selector list has been configured. A dependent selector is one whose available values change depending on one or more of the other selectors.

As a straightforward example, consider a view where the user must select a state and a city to build a report. The available states to select are, say, the 50 states in the US. And the available cities may span all of these states. But let’s say in this example that when we choose a state, we want to choose from a list of cities in that state using another selector. By doing this we get people to their data faster and improve the user experience.

This can be accomplished easily with a dependent selector. Let me show you how, starting with some simple tables modeling the states and the cities.

Here we have the table STATES for states and STATE_CITIES for cities. Also note that even though this is a bit of a “quick and dirty” example, I have nevertheless given the STATES table a primary key of STATE_ID (the two-letter postal state code), and a STATE_NAME column with the “nice name” of the state. This will come into play a little bit later when I talk about selectors and the difference between an ID and a value. Next, over in the STATE_CITIES table, we have a compound primary key: the combination of the STATE_ID (a foreign key into our STATES table) and the city name. These data constraints ensure that every state/city combination is unique, and that every city is associated with a state that exists in our states table.

The STATES and STATE_CITIES tables in SQL Server
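If you want to follow along, a minimal version of these tables might look like the following (STATE_ID and STATE_NAME come from the description above; CITY_NAME and the column types are my own stand-ins):

CREATE TABLE STATES (
    STATE_ID    CHAR(2)      NOT NULL PRIMARY KEY,  -- two-letter postal code
    STATE_NAME  VARCHAR(50)  NOT NULL               -- the "nice name"
);

CREATE TABLE STATE_CITIES (
    STATE_ID    CHAR(2)      NOT NULL REFERENCES STATES (STATE_ID),
    CITY_NAME   VARCHAR(50)  NOT NULL,
    PRIMARY KEY (STATE_ID, CITY_NAME)               -- every state/city combination unique
);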

The tables we have designed so far will be used as the basis of a Dodeca selector. Dodeca is very flexible with respect to how selectors are populated, two of the primary methods being Essbase dimensions and SQL queries. In this case we’ll use the SQL Passthrough DataSet editor to create a query that pulls back the list of states from this table:

The state selector query
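The query itself is about as simple as it gets – something along these lines, given the tables above:

SELECT STATE_ID, STATE_NAME
FROM STATES
ORDER BY STATE_NAME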

We can quickly and easily test the selector by using the Test Data Set button in the interface, showing that the query does indeed work (note that I only populated the table with a few states):

State test data in Dodeca

Now things get a little more interesting when we go to define the query for our city selector. We write a normal query, but in place of a value for the WHERE STATE_ID clause, we put in our states token (written as [T.States] in this case). Come runtime, the value of [T.States] will be replaced with the value from our state selector, based on what the user has highlighted:

The city selector query
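Spelled out, the query is something like the following sketch (again, CITY_NAME is my stand-in for the actual column name, and whether the token needs quoting depends on the selector configuration):

SELECT CITY_NAME
FROM STATE_CITIES
WHERE STATE_ID = '[T.States]'
ORDER BY CITY_NAME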

One of the newer features in Dodeca is the ability to save and edit test tokens so we can test the query out immediately instead of having to build a view first. Pressing the Edit TestTokens button brings up the following editor:

The test token editor

Let’s plug in a value of WA for [T.States] so we can test out which cities come back when the state is Washington. Note that my token value corresponds to the ID of the state; I’ve configured the selector to use an ID value here, but I could have used the state name if I wanted to. Wherever possible I prefer to use the “core” or identifying value of a piece of data. Let’s run the query to verify that it works and the correct data comes back:

The city test results

For good measure let’s swing by the Selectors editor and ensure that we have created selectors for [T.States] and [T.State_Cities]:

The selector list with states

Now let’s look at the Selector List that corresponds to our State selector. In particular, note that the type of the selector list is a SQLPassthroughDataSet, where we have chosen the associated data set ID:

The state selector list configuration

Now let’s go over to the city selector list, where things get a little more interesting. Here we can choose the DependentOnSelectorIDs value and tell this selector which other selectors will influence it. We simply choose States:

The city selector list configuration

With the selectors all configured, we can now add them to our view as we normally would. For those of you not familiar with Dodeca, a view is like a report template, and when we administrators design a view for users, we can control an incredibly large amount of its behavior very easily. Each view can have its selectors configured, which will influence the options that a user gets to select from when configuring and building the view. So by going to the Selectors configuration for a particular view/report, we get to choose which selectors should be associated with the view. In this case, it’s the State and State_Cities selectors. We could easily have more (and usually would, such as time periods, years, and other things).
The view selector configuration

With that all set, let’s commit our changes and go take a look at the view. Note how I have selected Washington from the State selector in the top right of the window. When I select Washington, the City selector refreshes instantly, showing just those cities from Washington:

The state and city selector lists in the view

Since I have Auto Build turned on for this report, once I have selected a city the report refreshes using my selection:
The view built with the dependent city selector

The report shown is a minimal one for testing purposes that just uses the city as part of a simple retrieve, but an arbitrarily complex report could have been designed and used instead.

Selector ID vs. Display Value

I mentioned earlier that while we were showing the full name of the states, we were actually using the state code under the hood, so to speak. This is somewhat analogous to member names and aliases in Essbase, where we have a core name (one that is often a little less “friendly” to the user) and a display name that is more friendly. As a configurable option, Dodeca lets you use one value for display and an internal value for the ID – if you want. In this case it came in handy for the SQL, because the query I wanted to write to fetch the cities was simplest if it got to use the state ID rather than the name. Without this feature I might have had to show terse IDs to the user or rewrite the SQL query to join against another table so I could dereference the full state name. The design win here is that we got to show the “nice” name to the user without making any sacrifices in the design (in terms of enduring additional complexity).

Kscope16 sessions I’m looking forward to

I was looking through the list of sessions coming up for Kscope16 to get an initial idea of what I want to attend. And, wow. There are so many incredible-sounding presentations this year. I seem to say this every year, but the content seems especially strong this time around.

That said, a few sessions jumped right out at me as things I absolutely want to attend:

How ADT Gained User Acceptance in Its Delivery of Essbase and Oracle General Ledger Data to Business Users
Should be a very solid real-world example of using the Dodeca Spreadsheet Management System (disclosure: I am an Applied OLAP employee… but I’d be attending this anyway!).

Torn Between Two Worlds: Is Essbase a Business or an IT Tool
Joe always does a nice job on presentations, and I’m looking forward to one that might be more philosophical than technical, especially as the nuance of where Essbase resides (and is therefore managed, used, and more) is a very important issue that a lot of companies struggle with. As a developer, the most successful Essbase environments I have participated in were ones where Essbase was ‘owned’ by Finance or a line of business. On the other hand, I have done work for organizations where Essbase was managed (and sometimes mismanaged) by IT. But the reverse has also been true…

Essbase Does It, but Dodeca Makes It Easy
Dodeca’s success in the enterprise has resulted in an unprecedented number of Kscope sessions this year as more customers adopt this powerful technology. This presentation is by a customer – St. Jude Medical – and I am very curious to hear about their successes, particularly with respect to being long-time Essbase users.

A Stomp Through the Tulips of Essbase Cloud Service
Steve Liebermensch of Oracle always puts on good, information-packed presentations and I am confident this will be another one that I won’t want to miss.

Case Study: Reduce the Kroger Essbase Footprint without Sacrificing ANYTHING! And Have Room for Growth…
Kroger is near and dear to my heart. It’s where I started (literally) – in a small division, bagging groceries – and eventually worked up to finance, where I helped deploy Essbase across the entire enterprise. This presentation immediately caught my eye, owing to its co-presenter. The main presenter is Kadee Rodriguez, whom I never had the pleasure of working with directly. The co-presenter is Christine Blea. During my time at Kroger, Christine administered Essbase for another division, and it wasn’t uncommon for us to call or email and share some knowledge, calc scripts, formulas, or whatnot. If I’m not mistaken, this is Christine’s first time presenting (or at least co-presenting!) at Kscope. In any case, I just gotta go see what my favorite grocer is up to and cheer on the team.

And I’ll definitely be attending my own sessions:

Drillbridge: The Easy Way to Implement Hyperion Drill-through
Drillbridge is back for its second Kscope presentation. Last year I did a “soup to nuts” live demo of Drillbridge – starting with literally downloading the ZIP file from the web, unzipping it, installing Drillbridge, making a custom report, deploying it to Essbase, and using drill-through from Smart View – in exactly 14 minutes. Let’s just say that Drillbridge has learned a few new tricks since then.

Hey Mom! Look What I Built with the Essbase Java API!
This session is ostensibly presented by Tim Tow, with Harry Gates as co-presenter. Which I guess makes me the co-co-presenter. In any case, this is a bit of a hybrid-format presentation where we can talk about the Essbase Java API and some of the interesting things we’re doing with it.

On-Premises Planning vs. PBCS: Common Administrative Tasks Compared, Contrasted, and Recommended
Cameron and I will take a look at on-prem versus PBCS. My angle on this will be more on the REST API for PBCS and how its usage compares to traditional ways of implementing things.

Essbasepy updated for Python 3

A huge number of updates and cleanups were put into the Essbase Python wrapper (essbasepy) recently. These were all made by GitHub user Kevin (‘nurse’), who updated many sections of code, modernized a number of things, cleaned up comments, and more. The updates look substantial and, if you are using Python 3 with this module, come absolutely recommended. Check out the GitHub essbasepy page for more.

Drillbridge Community Edition 1.5.5 available

Hot on the heels of the announcement that Applied OLAP has acquired Drillbridge, there is a small update to the latest free edition of Drillbridge. Drillbridge 1.5.4 has been bumped up to version 1.5.5 and includes the following three very specific updates:

  1. Applied OLAP software license
  2. Fix for sporadic report deletion issue
  3. Change to default temp/working directory

To go into further detail: the license shipping with the previous version of Drillbridge has been changed to a standard Applied OLAP license that is more or less the same as the one used for the free Outline Extractor. The switch serves mostly to give uniformity and consistency to Applied OLAP software offerings rather than to materially affect the usage of the software.

Second, a sporadic issue with deleting reports has been fixed. The Delete button would sometimes simply not show up for a report, preventing the user from deleting it. The workaround was/is to go into the Drillbridge repository and manually delete the report entry.

Lastly, the default working directory has been changed. Some combinations of Java 1.7/1.8, Windows Server 2008/2012, and various group/security policies would have problems launching Drillbridge. This was generally due to the temporary file directory used by Java not being writable by the service running Drillbridge. By default, Drillbridge will now try to locate the temp folder inside the Drillbridge installation folder itself, which should prevent most issues of this type. This location can be changed via a config file as always, if needed.

As a bit of an aside, Drillbridge Enterprise 3.0 is still under development, with a release planned for later this year. This Community Edition release is currently slated to be the last release in the 1.5.x series (unless further issues are discovered). Future versions will likely be based off of the 3.0 codebase.

Please let me know if you have any questions! As usual, Drillbridge can be downloaded from the Saxifrage Systems LLC downloads section. Note that in the hopefully not too distant future, this will be migrated over to the Applied OLAP downloads section but will remain where it is for now.

Drillbridge acquired by Applied OLAP

The last few months have been rather hectic (as evidenced by the lack of blog posts, unfortunately), but I am happy to report some exciting news! There are two main things of note:

First, I am now a full-time employee of Applied OLAP. Applied OLAP is one of the biggest and best names in the EPM space, having worked with Essbase for many, many years. Applied OLAP develops and supports the Dodeca Spreadsheet Management System (much more to come on this!). Back in my Kroger days I helped roll out one of the earliest Dodeca implementations to what became a very enthusiastic user base and incredibly happy executives. Many of you are just as familiar, if not more so, with the Outline Extractor – software that is also maintained by Applied OLAP as a free tool for the benefit of the community.

Second, Applied OLAP has acquired Drillbridge in its entirety. The free edition of Drillbridge will now be available alongside the Essbase Outline Extractor and other tools that many people are so familiar with. Enterprise licensees will now have their software supported by Applied OLAP. Drillbridge Enterprise has an exciting roadmap and will continue to exist as a standalone product.

That’s the short version for now, many more exciting things to come in the days ahead!

Introducing PBJ: The PBCS Java Client Project

I’d like to introduce the forthcoming availability of a new open source project called PBJ (or PB&J if you want): a Java client for PBCS (Planning and Budgeting Cloud Service) that serves as a clean, robust, and rapid way to work with the PBCS REST API in Java.

The PBCS REST API provides similar functionality to the EPM Automate tool and can perform such actions as exporting data, adding members to an outline, running business rules, and more. The REST API is an exciting development as Planning continues its march to the cloud, because it will allow integration from many languages and tools.

Consuming a REST API isn’t necessarily hard, but there are a lot of moving parts – connecting, managing credentials, invoking the right method, parsing the output, and then deciding what to do with it. Most of this is pretty boilerplate code that can get verbose pretty quickly.
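To give a flavor of that boilerplate, here is roughly what a single “raw” call looks like in plain Java. Note that the endpoint path, authorization format, and response handling here are simplified assumptions for illustration, not a recipe:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RawPbcsRestSketch {
    public static void main(String[] args) throws Exception {
        String server = "https://planning-example.pbcs.us2.oraclecloud.com"; // hypothetical server
        // PBCS-style basic auth: identitydomain.username:password (format assumed)
        String credentials = "mydomain.jsmith:password";
        String auth = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));

        // Endpoint path is illustrative
        URL url = new URL(server + "/HyperionPlanning/rest/v3/applications");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // Read the raw JSON response... and now go parse it by hand
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

Multiply that by every job launch, status check, and file download – plus real error handling – and the case for a reusable library makes itself.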

Therefore, I thought it would be nice if there were a high-quality library that could easily be used and reused. This is where the PBCS Java Client – PBJ – comes in: it is a freely available library that is released under the Apache Software License. The ASL is a very generous license that essentially means you can do anything you want with the code. My hope is that this library becomes the de facto client for using Java to work with the PBCS API.

As an example of how PBJ makes things simple, consider the following example of connecting to a server and running a business rule:

PbcsClient client = new PbcsClientImpl(server, identityDomain, username, password);
PbcsApplication app = client.getApplication("Vision");
app.launchBusinessRule("AggAll");

Here’s a small example of checking the status of a job:

PbcsClient client = new PbcsClientImpl(server, identityDomain, username, password);
PbcsApplication app = client.getApplication(appName);
PbcsJobStatus jobStatus = app.getJobStatus(558);
System.out.println("Job status: " + jobStatus);

In the first example, three lines of code connect to our PBCS server, grab a particular application, and invoke a business rule on it. At the moment the API supports running business rules, exporting data, refreshing cubes, downloading files, and more.

PBJ is under active development and will be released in the near future. Contributions from others are entirely welcome. Not every PBCS feature will be supported on day one. Please let me know if you’d like to contribute to or test out the API.

Book Review: Developing Essbase Applications – Hybrid Techniques and Practices

I am very pleased to be able to review Developing Essbase Applications: Hybrid Techniques and Practices for you today. I can’t believe it has already been over three years since my review of the first Developing Essbase Applications book. As before, this book is a collection of in-depth chapters on various subjects, written by some of the best and brightest out there in the Essbase community. The author list is pared down slightly from before and contains some new but familiar names.

Again leading the cat-herding editing efforts is the venerable Cameron Lackpour, along with John Booth, Tim German, William Hodges, Mike Nader, Martin Neuliep, and Glenn Schwartzberg. Those of you that frequent OTN, the Network54 Essbase Forum, or conferences such as Kscope will recognize these names quite easily. I am quite honored to consider these authors my friends and colleagues.

The DEA reader will be treated to several compelling chapters on incredibly contemporary Essbase topics, including the following:

  1. Exalytics
  2. Hybrid Essbase
  3. Young Person’s Guide to Essbase Cube Design
  4. Essbase Performance and Load Testing
  5. Utilizing SQL to Enhance Essbase
  6. Integrating OBIEE and Essbase
  7. Managing Spreadsheets with Dodeca
  8. Smart View

Essentially, the book takes a divide-and-conquer approach where each author tackles one or more of the chapters. Now, when it comes to programming books, of which I am a fairly avid reader, I am not a fan of the “cookbook” approach – wherein the book is a collection of disparate topics. In the programming book realm these books tend to read like a bunch of polished-up blog posts. For this Essbase book, I think the approach works wonderfully. I have to say that none of the chapters feels like a glorified blog post. They are densely packed with useful information and, perhaps more importantly, with insight and pointers derived from countless hours of effort on the part of the respective authors.

I remember talking to Cameron when the first book came out and I told him that it was a good book, but I wasn’t sure that it was a great book (in retrospect I think I was being overly critical). And I remember that he told me that if you can learn just one or two things from the book, it’d be worth it. And that notion really resonated with me.

Along those lines, the chapters on Hybrid Essbase, Performance and Load Testing, and Smart View are worth the price of admission on their own. The other chapters are also quite compelling, but your particular skill set will affect how much you get out of a given chapter. For instance, I have done my fair share of SQL work, so I found Glenn’s chapter on Utilizing SQL to be an interesting treatise on Essbase and SQL, but I didn’t personally get a lot out of it. That said, while I take familiarity with SQL for granted, I now know that SQL isn’t in every Essbase developer’s wheelhouse (but it should be!).

So, does this book pass the “did you learn one or two things from it?” test?

Absolutely.

If you are at all serious about being an Essbase developer, I can easily and whole-heartedly recommend that you add this book to your collection. This is a no-brainer if you are a developer or a consultant. This is content from people at the top of their games who have put in countless hours to compile and write about incredibly useful topics. If you gain one insight on Exalytics, Hybrid Essbase, Smart View, OBIEE, SQL, cube design, or performance testing, or learn something about the awesome Dodeca software, it will have been easily worth it.

If it were me, I might have spiced this book up by adding a Jason Jones chapter on ODI and Hyperion or some other interesting things, but I can’t fault them for that (next time, Lackpour, next time).

Nice job, guys. Now go buy this book.

Drillbridge vs. Essbase Studio

I get asked a lot about implementing drill-through with Drillbridge versus implementing with Essbase Studio. First of all, this question itself is kind of interesting to me – because many people look at Studio as a means to an end simply for achieving drill-through. Drill-through is one of the aspects of Essbase Studio, but certainly not the whole story.

Essbase Studio is a good tool. It’s the successor to Essbase Integration Services, which I was a huge fan of, and my blog articles from yesteryear continue to get a lot of web traffic, owing at least partly to there being very few EIS articles on the web.

In terms of implementing solutions with Essbase Studio, the most common scenario I see at organizations is that they want drill-through, and this alone drives the decision to use Essbase Studio. So the developer starts going through development: creating fact tables, dimension tables, foundational Essbase Studio elements, cube models, and deploying a cube.

Let’s take things a step further and think about the pieces involved with automating and managing the cube: some MaxL to deploy it via automation, ETL processes to load and update the dimension/fact tables, creating the drill-through reports with custom SQL, writing calc scripts, and more.

The point of all this isn’t to make Studio seem like a lot of work. It’s a reasonable amount of work if Essbase Studio solutions fit into your organization (more on that in a moment). I think it’s an unreasonable amount of work if you only want drill-through.

Implementing a solution with Studio just to get drill-through is what I sometimes call “backdoor drill-through” – basically overlaying an existing solution with Studio drill-through just so you can get drill-through, but not otherwise leveraging much of anything that Essbase Studio brings to the table.

That said, if your organization has a “strong” relational database model (e.g., a well-designed database for some system, with proper primary/foreign keys, constraints, and so on), plus some other unit of the business that takes care of updating it on a regular basis, I think that’s a really good use case for building out a cube with Essbase Studio. But why?

Again, it seems like most solutions built with Essbase Studio are built from tables and data specifically created for Studio – in other words, someone has to design ETL processes that load tables (from wherever) specifically formatted for Studio and the cube to be built with it. One of the problems with this approach, however, is that many of us are cube experts and not necessarily relational database experts. You don’t have to be a relational database expert to make use of Studio, but it won’t hurt – in fact, it can help quite a bit.

So again, I think Essbase Studio can be a great choice to spin up multiple cubes when you are tapping in to an existing relational model. Not as much if you have to create that model yourself. Essbase Studio solutions should feel “natural” – you should be able to get away with the default data load code and stock drill-through code if you have designed everything correctly. All too often there is a custom data load and completely custom drill-through (which is often a sign that the data model is incorrectly built).

All this is why Drillbridge really shines in terms of drill-through ROI: You can keep your existing cube, automation, administration, and everything. Your relational data can come from anywhere. You can build completely custom web reports, drill from drill reports to other reports, and more. Drill-through on those dynamic calc YTD members in your alternate Time hierarchy is super easy to support, and there is no 1,000-item limit on Oracle drill-to-bottom queries.

But best of all: fewer moving parts, rapid deployment, and your processes stay exactly the same. I think this really resonates with organizations. The free edition of Drillbridge is now in production at over 30 companies around the world (probably more), and there is a growing list of happy Drillbridge Enterprise customers. Even better: I get emails every single day from people that love Drillbridge and are enhancing the value of their Essbase solutions for their users.

Database stats and outline viewing with Vess

I had a little bit of time this weekend and dusted off Vess for some updates. I was able to greatly enhance the functionality and, perhaps more importantly, widen the range of use cases it can serve.

I already wrote about a use case previously: using SQL to create a substitution variable. Vess supports fully symmetric substitution variable updates. That is to say, you can create, read, update, and delete substitution variables on your Essbase server just by changing the data in the proper table. So just to elaborate on my thoughts on the previous post, I think there are some nice use cases here. Of course, there’s nothing wrong with MaxL, but if you have an instance where you are dumping substitution variables via MaxL and scraping the ASCII art off of them in order to get to the real variable values, Vess offers a cleaner approach.
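In other words, a variable update is just an UPDATE statement. The table and column names below are hypothetical – the point is the shape of the interaction, not the exact schema:

UPDATE SUBSTITUTION_VARIABLES
SET VALUE = 'FY16'
WHERE APPLICATION = 'Sample'
  AND DB = 'Basic'
  AND NAME = 'CurrentYear'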

As of now, Vess also provides a window into a whole slew of properties on various Essbase objects: server, application, and database. Here’s a view of some database properties (as shown in a generic JDBC GUI with Vess configured as a driver):

Using Vess to view Essbase database properties for Sample.Basic

There’s a use case here too. I know of more than a few people that are dumping database stats using MaxL (such as to monitor database/server stats over time). So instead of running a script, dumping to a file, and parsing that out (or whatever), you could slam the data directly into SQL (using ODI or just some tool that can execute SQL statements).

Next up, Vess models tables for the various dimensions in a cube. This is not unlike how ODI would create a model of a cube: one table per dimension. Here’s a view of some information from Sample/Basic:

Using Vess to view outline information

Outline functionality in Vess is not symmetric: you can read, but updates are disabled for now. The use case here is that you could pull outline information without needing ODI (or something else), or possibly replace the Essbase Outline Extractor in your automation. Since Vess is already a fully functional SQL database, you could pretty easily do some interesting things, like grabbing all of the level-0 members in an outline (“SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0”).
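And because Vess is plain JDBC, any Java code or tool can run that query. Here’s a minimal sketch – the driver class, connection URL, and column access are made up for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VessLevelZeroSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("com.example.vess.VessDriver"); // hypothetical driver class
        try (Connection conn = DriverManager.getConnection(
                "jdbc:vess://essbase-server:1423", "admin", "password"); // hypothetical URL
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                "SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // first column: member name (assumed)
            }
        }
    }
}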

Vess also supports data loads to a cube (look mom, no load rules!). I wrote about this before too. Data reads are turned off for now (as in, you can INSERT but SELECT always returns nothing); mapping cube reads into a relational model is a bit of an interesting problem.

Lastly, I started writing up the ODI technology for Vess. I think this is an absolute win for Vess because it would mean that you could build a nice, clean technology/Knowledge Module that doesn’t have to use a bunch of Jython to glue things together: you can treat everything as a vanilla JDBC database model. No idea when this could be ready, but you can rest assured that I’ll blog about it.