Drillbridge vs. Essbase Studio

I get asked a lot about implementing drill-through with Drillbridge versus implementing it with Essbase Studio. This question itself is kind of interesting to me – because many people look at Studio simply as a means to an end for achieving drill-through. Drill-through is one aspect of Essbase Studio, but certainly not the whole story.

Essbase Studio is a good tool. It’s the successor to Essbase Integration Services, which I was a huge fan of, and my blog articles from yesteryear continue to get a lot of web traffic, owing at least partly to there being very few EIS articles on the web.

In terms of implementing solutions with Essbase Studio, the most common scenario I see at organizations is that they want drill-through, and this alone drives the decision to use Essbase Studio. So the developer starts going through development: creating fact tables, dimension tables, foundational Essbase Studio elements, cube models, and deploying a cube.

Let’s take things a step further and think about the pieces involved with automating and managing the cube: some MaxL to deploy it via automation, ETL processes to load and update the dimension/fact tables, creating the drill-through reports with custom SQL, writing calc scripts, and more.

The point of all this isn’t to make Studio seem like a lot of work. It’s a reasonable amount of work if Essbase Studio solutions fit into your organization (more on that in a moment). I think it’s an unreasonable amount of work if you only want drill-through.

Implementing a solution with Studio just to get drill-through is what I sometimes call “backdoor drill-through” – basically overlaying an existing solution with Studio drill-through just so you can get drill-through, but not otherwise leveraging much of anything that Essbase Studio brings to the table.

That said, if your organization has a “strong” relational database model (e.g., a well-designed database built for some system, with proper primary/foreign keys, constraints, and so on), and some other unit of the business takes care of updating it on a regular basis, I think that’s a really good use case for building out a cube with Essbase Studio. But why?

Again, it seems like most solutions built with Essbase Studio are built from tables and data specifically created for Studio – in other words, someone has to design ETL processes to load tables, from wherever, that are formatted specifically for Studio and the cube to be built from them. One of the problems with this approach, however, is that many of us are cube experts and not necessarily relational database experts. You don’t have to be a relational database expert to make use of Studio, but it won’t hurt – in fact, it can help quite a bit.

So again, I think Essbase Studio can be a great choice for spinning up multiple cubes when you are tapping into an existing relational model – not as much if you have to create that model yourself. Essbase Studio solutions should feel “natural”: you should be able to get away with the default data load code and stock drill-through code if you have designed everything correctly. All too often there is a custom data load and completely custom drill-through (which is often a sign that the data model is incorrectly built).

All this is why Drillbridge really shines in terms of drill-through ROI: You can keep your existing cube, automation, administration, and everything else. Your relational data can come from anywhere. You can build completely custom web reports, drill from drill reports to other reports, and more. Drill-through on those dynamic calc YTD members in your alternate Time hierarchy is super easy to support, and there is no 1,000-item limit on Oracle drill-to-bottom queries.

But best of all: fewer moving parts, rapid deployment, and your processes stay exactly the same. I think this really resonates with organizations. The free edition of Drillbridge is now in production at over 30 companies around the world (probably more), and there is a growing list of happy Drillbridge Enterprise customers. Even better: I get emails every single day from people who love Drillbridge and are enhancing the value of their Essbase solutions for their users.

Database stats and outline viewing with Vess

I had a little bit of time this weekend and dusted off Vess for some updates. I was able to greatly enhance the functionality and, perhaps more importantly, widen the range of use cases it can address.

I already wrote about a use case previously: using SQL to create a substitution variable. Vess supports fully symmetric substitution variable updates. That is to say, you can create, read, update, and delete substitution variables on your Essbase server just by changing the data in the proper table. So just to elaborate on my thoughts on the previous post, I think there are some nice use cases here. Of course, there’s nothing wrong with MaxL, but if you have an instance where you are dumping substitution variables via MaxL and scraping the ASCII art off of them in order to get to the real variable values, Vess offers a cleaner approach.
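
To make this concrete, here’s a minimal sketch of the full create/read/update/delete cycle against the server-wide VARS table that Vess exposes (the table layout is described in the posts below; the variable name is just an example):

-- Create a server-wide variable
INSERT INTO VESS_SCHEMA.VARS (NAME, VALUE) VALUES ('CurrMonth', 'Jan');

-- Update it in place; Vess pushes the change to the Essbase server
UPDATE VESS_SCHEMA.VARS SET VALUE = 'Feb' WHERE NAME = 'CurrMonth';

-- Read it back (no ASCII art to scrape)
SELECT NAME, VALUE FROM VESS_SCHEMA.VARS WHERE NAME = 'CurrMonth';

-- And delete it
DELETE FROM VESS_SCHEMA.VARS WHERE NAME = 'CurrMonth';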

As of now, Vess also provides a window into a whole slew of properties on various Essbase objects: server, application, and database. Here’s a view of some database properties (as shown in a generic JDBC GUI with Vess configured as a driver):

Using Vess to view Essbase database properties for Sample.Basic

There’s a use case here too. I know of more than a few people who are dumping database stats using MaxL (such as to monitor database/server stats over time). So instead of running a script, dumping to a file, and parsing that out (or whatever), you could slam the data directly into SQL (using ODI or just some tool that can execute SQL statements).
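
As a sketch of the idea – note that the Vess properties table name below is my assumption (not a confirmed name), and STATS_HISTORY is your own tracking table on the relational side:

-- Pull current stats from Vess (properties table name assumed)
SELECT NAME, VALUE FROM SAMPLE.BASIC_PROPERTIES;

-- Then land the rows in your own history table, via ODI or whatever
-- tool is bridging the Vess connection and your relational database
INSERT INTO STATS_HISTORY (SNAPSHOT_TIME, PROPERTY_NAME, PROPERTY_VALUE)
VALUES (SYSDATE, 'Average Clustering Ratio', '1.0');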

Next up, Vess models tables for the various dimensions in a cube. This is not unlike how ODI would create a model of a cube: one table per dimension. Here’s a view of some information from Sample/Basic:

Using Vess to view outline information

Outline functionality in Vess is not symmetric: you can read, but updates are disabled for now. The use case here is that you could pull outline information without needing ODI (or something else), and possibly replace the Essbase Outline Extractor in your automation. Since Vess is already a fully functional SQL database, you could pretty easily do some interesting things, like grabbing all of the level-0 members in an outline (“SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0”).
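
To make that concrete, a couple of quick sketches – only the LEVEL column is confirmed above, so the member/parent column names here are my assumptions about how Vess might model a dimension table:

-- Grab all level-0 members of the Market dimension
SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0;

-- A simple parent/child extract (column names assumed)
SELECT PARENT_NAME, MEMBER_NAME FROM SAMPLE.BASIC_DIM_MARKET;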

Vess also supports data loads to a cube (look mom, no load rules!). I wrote about this before too. Data reads are turned off for now (as in, you can INSERT but SELECT always returns nothing) – it’s a bit of an interesting issue to map this into a relational model.
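
As a sketch of what a load could look like – the fact table name and column layout here are my assumptions (presumably one column per dimension plus the data value):

-- Hypothetical fact table for Sample/Basic: one column per dimension plus the value
INSERT INTO SAMPLE.BASIC_DATA (YEAR, MEASURES, PRODUCT, MARKET, SCENARIO, AMOUNT)
VALUES ('Jan', 'Sales', '100-10', 'New York', 'Actual', 678);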

Lastly, I started writing up the ODI technology for Vess. I think this is an absolute win for Vess because it would mean that you could build a nice clean technology/Knowledge Module that doesn’t have to use a bunch of Jython to glue things together: you can treat everything as a vanilla JDBC database model. No idea when this could be ready but you can rest assured that I’ll blog about it.

Vess Updates Substitution Variables

A colleague of mine is running into an issue with substitution variables and was looking for a solution that he could use to sync values up. He thought maybe Vess would be a good fit. Vess, as I have blogged about before, is a “virtual” Essbase JDBC driver. Vess maps Essbase concepts and crams them into a typical database model. For example, Vess exposes tables that model substitution variables.

In the case of server-wide substitution variables, there is a “VARS” table that has two columns: NAME and VALUE. For each application, there is another table that contains four columns: NAME, VALUE, APPLICATION, DATABASE.

As a quick aside, it might seem a little odd to have separate tables. After all, this is notionally about the same as the Variables screen in EAS. Well, you have to flip your thinking a little. Don’t think of a variable as belonging only to either the server or a cube: think in terms of which variables are applicable to a given scope. In other words, if you ask Essbase which variables are applicable to the whole server, that’s the global variables only. If you ask Essbase which variables are applicable to a cube, then it’s the cube-, app-, AND server-specific variables. This is one of the reasons there are multiple tables to model the variables.

Getting back on track, given that we have these tables and we can treat them just like normal SQL tables, we can do some interesting things. Let’s say we want to create or update a variable specific to an app that exists in the global scope. We can do this in one line:


INSERT INTO SAMPLE.VARS
SELECT NAME, VALUE, 'Sample', 'Basic'
FROM VESS_SCHEMA.VARS WHERE NAME = 'Foo';

What’s going on here? In Vess, a schema named VESS_SCHEMA is presented for server-wide things (server-wide variables are in the VARS table in this schema, as shown above). The server VARS table only has the columns NAME and VALUE.

Each application on the Essbase server is modeled as its own schema. In this case, our favorite app – Sample – gets a schema named SAMPLE. This schema also contains a VARS table (containing columns NAME, VALUE, APPLICATION, and DATABASE).

Given these tables, it’s a simple matter of selecting the server variable with the name we want (in this case, a variable named ‘Foo’) and inserting it into the variables for the Sample app.

Of course, if we wanted to, we could alter the name using normal SQL (string truncation, substrings, etc.), or whatever. We can also delete variables, like this:


DELETE FROM SAMPLE.VARS WHERE DATABASE = 'Basic';
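
And to illustrate the string-manipulation idea from a moment ago, here’s a quick sketch that copies a server variable into the Sample app under a tweaked name:

-- Copy the server-wide 'Foo' variable into the Sample app as 'Foo_Copy'
INSERT INTO SAMPLE.VARS
SELECT NAME || '_Copy', VALUE, 'Sample', 'Basic'
FROM VESS_SCHEMA.VARS WHERE NAME = 'Foo';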

As I’ve said before, Vess continues to be an “interesting” proof of concept. As time permits I am filling out more and more functionality. At present, Vess models things like substitution variables and metadata you might see in MaxL or EAS (cube statistics, user sessions, etc.), can load data to cubes, and can do certain outline-related operations.

Vess is not available as a public download at this time but I have handed a few copies out to get feedback. I think Vess is just about good enough to be used in automation and other things. If you’re interested in using this in a production situation (automation or otherwise), please contact me to discuss support options.

Custom Drillbridge Reports feature

I have been waiting to write about this feature for over a year – and I finally can.

The newest version of Drillbridge – Drillbridge Enterprise 2.0 – includes an incredibly powerful new capability: the ability to plug in custom reports.

What are custom reports? Custom reports are new report types that can be plugged in to Drillbridge by developers, just like a calc script or a CDF is added to Essbase. Think of the most common Drillbridge report type: Drillbridge users know this as just a “Report” but under the hood, it’s actually a specific type of report – one that receives the POV from Smart View, Planning, or Financial Reporting and then executes a specially crafted Drillbridge query against a database and displays the results to the user.

What if we could stand on Drillbridge’s shoulders for help with interpreting the point of view, opening a database connection, pulling members from an Essbase outline, and more, but then completely customize what is shown to the user – all using just a little bit of custom code?

This is where custom report modules come in: a custom report module gets to leverage all of the power and flexibility of Drillbridge and its framework, but then completely take over what is shown to the user. Don’t just customize a column in a table: customize the entire display!

There are several compelling use cases I can think of off the top of my head that are well-served by custom modules:

  • Show images/thumbnails in the drill-through results with links to full-size copies
  • Call a web-service to pull data to display to the user
  • Generate custom page layouts

As an aside, for the past year I have been getting asked somewhat regularly, “Can I get the results directly in Excel?” To which I have said, “No, but it’s just one click away – and I don’t think you want to do that anyway.” What I didn’t say (but had in mind) was custom reports, where we could easily drop in a module that would show us images, a custom layout, or anything we could dream up – but more importantly, generate something that just doesn’t fit into our spreadsheet world. So not only is the data just one click away from Excel (as always), but we can now display absolutely anything we want to the user.

Sweet.

Do you remember Linked Reporting Objects (LROs)? This is them on steroids – and more. Any intersection in your cube is up for grabs to drill into something interesting. User drilling on Actuals? Let’s show them PDFs of the original invoices. User drilling on dynamic calc YTD members under Budget in an ASO database? Let’s generate a link to our internal website with budget info – whatever. How about we search Twitter for a certain hashtag related to one of our products… why not? The possibilities are endless. Not only are they endless – they are within reach.

One last thing: custom report modules also open the door for another oft-requested feature – generating custom links. It is now trivially easy to take the POV from a user, use the Drillbridge expression language to transform it into a URL (such as drilling to an internal site, an OBIEE page, or whatever), and forward the user to it.

Custom report modules (including the link forwarding report I just mentioned) are available right now in Drillbridge Enterprise 2.0. While Drillbridge Enterprise has a list of compelling features, this one is my absolute favorite.

If you’re feeling adventurous (and you’re a Java aficionado), you might check out a bare-bones custom report example on the Drillbridge wiki – and I’ll be posting some interesting examples in the near future.

Introducing Drillbridge Enterprise

I am very pleased to officially announce the launch of Drillbridge Enterprise. This new software offering represents the 2.0 version of Drillbridge and complements the existing free edition of Drillbridge.

The existing or “plain” version of Drillbridge remains a free product. This allows organizations to continue downloading and using Drillbridge to enhance the value of their investment in Essbase.

Drillbridge Enterprise is a paid and licensed version of Drillbridge that comes with enhanced features and capabilities; it is intended for organizations that have advanced integration needs as well as a need for software support. These advanced features include such things as automation integration tools, enhanced security, custom report modules, a Java SDK, a RESTful API, and more. It’s a really great release that is available immediately and is licensed through Saxifrage Systems LLC.

Moving forward, development will be focused on Drillbridge Enterprise, while the free version of Drillbridge (also known as just Drillbridge or as Drillbridge Community Edition) will see updates mostly focused on addressing one-off issues as needed.

Drillbridge Enterprise is built from the same solid code base as Drillbridge, which has proven to be robust, flexible, and easy to configure. In just the week after Kscope15 (where Drillbridge had a well-attended session), at least three different people independently downloaded, installed, and set up Drillbridge at their companies.

Clearly, people are hungry to enhance the value of their existing systems in a simple, non-invasive, and incremental way. I am absolutely astounded at the number of times that I get emails from people saying “It just works!”. It brings a smile to my face.

There are some truly incredible features available right now in Drillbridge Enterprise 2.0, it is competitively priced, and I can’t wait to get it into the hands of Essbase users around the globe. Please check out the Drillbridge website for more information, and as always, don’t hesitate to reach out to me with any questions.

Manually injecting rows to an ODI journal table

Earlier this year I did an absolute whirlwind crash course on ODI journalization. In that article I talked about the layout of tables that ODI uses to track changes in particular tables (for simple journalization with one table). You may recall that changed (inserted, updated, deleted) rows are tracked in this table. This is done by way of some metadata as well as the primary key of the monitored table.

Typically after your processing is done, the entries from the ODI journal table that were processed will be deleted. Occasionally, however, you may need to manually fix the ODI journal table. For me this sometimes involves “manually injecting” rows into the journal table, which forces rows to process that otherwise wouldn’t be processed because ODI doesn’t consider them to be new or updated. You can achieve this in a couple of ways.

The first way is to “touch” the rows in the monitored table so that the trigger installed on that table gets activated and inserts rows into the ODI journal table. You could perform some sort of dummy update on a row. This can work but isn’t always desirable.

Another technique, and the one I’ll talk about in this post, is to manually add rows to the journal table that contain the primary key of the row to be processed. The trick, then, is knowing what to fill in for the other columns.

Recall that the general format of the journal table is the subscriber name, a consumed flag, a flag for whether it’s an insert or a delete, the date the row went into the table, and the primary key of the row being journalized.

As an example, assuming the journal table is called J$TEST_CODE_RESULT (for a real table of TEST_CODE_RESULT), and also assuming a primary key of TEST_CODE_RESULT_ID (which in this case is an integer), we can add a row to the journal table like this:

INSERT INTO TEST_CODES.J$TEST_CODE_RESULT VALUES ('TEST_CODE_PROCESSING', 0, 'I', sysdate, 12345);

The first value is the JRN_SUBSCRIBER. This corresponds to the name of the subscriber set up in the ODI journalizing, and is also the same as the filter you use in the interface to get rows for just your particular subscriber. Next is the JRN_CONSUMED field, which we set to 0 (during processing, ODI will change this to 1 to indicate it’s working on that data), then the character ‘I’ (upper case i) to indicate the row was an insert or update, the current time (this assumes an Oracle database), and finally, the ID of the row we need to inject.

You can, of course, insert as many rows as you want, such as by writing an INSERT that gets its values from a SELECT. Every now and then, if you have a lot of rows to reprocess, you might find yourself with a table that holds the IDs; you can just insert them into the journal table and they’ll get picked up on the next run.
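
For instance, a sketch along those lines, where IDS_TO_REPROCESS is a hypothetical staging table holding the primary keys you want re-run:

-- Columns follow the J$ layout above: subscriber, consumed flag, I/D flag, date, primary key
INSERT INTO TEST_CODES.J$TEST_CODE_RESULT
SELECT 'TEST_CODE_PROCESSING', 0, 'I', sysdate, TEST_CODE_RESULT_ID
FROM IDS_TO_REPROCESS;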


More fun with Vess, the Essbase JDBC driver

All I can say about Vess is that it’s still… interesting. This is definitely one of the more complex things I’ve ever worked on, as well as one of the more elegant. I did a lot of plumbing work on how the driver operates, which is now enabling some interesting and rapid progress.

Here are some interesting things that you can do as of right now:

  1. Query list of applications and metadata
  2. Query list of cubes and metadata
  3. Query list of sessions (same as Sessions in EAS)
  4. Load data without a load rule by using a SQL INSERT statement
  5. List users
  6. List substitution variables

Some of the really fun things are actually the most subtle. Wherever possible, Vess tries to supply metadata for a modeled table that matches up with Essbase semantics or naming limits. For example, substitution variables are in a table where you have the variable name, value, application, and database. Each of these columns is a normal String column, but they have character limits that match the maximum Essbase lengths (320, 256, 30, and 30, respectively, taking Unicode into account).

The same goes for member names on the data load table. Also, a primary key or nullability constraint is set on a table wherever it makes sense. For example, substitution variables are assumed to be non-null, as are member names for an Essbase fact table, cube names, and more.

I have been using a free generic JDBC tool called Squirrel SQL to help facilitate testing. Squirrel supports any database for which you can supply a generic JDBC driver. Here are a few screenshots from testing out Vess when connected to Sample/Basic (list of cubes on server):

View list of cubes and metadata with Vess

List of active sessions on server:

Viewing list of active sessions on Essbase server using Vess

Some of the tables also support DELETE semantics. For example, deleting a row from the session table is equivalent to killing the session, and the same goes for substitution variables.

Lastly, all tables that are modeled in Vess are part of the SQL metadata store, so that tools that go in to ask for metadata (such as Squirrel) get it back. This is interesting/notable because theoretically you could already use Vess with ODI to perform data loads and some other things. I haven’t tried this yet, but theoretically I could add the drivers, reverse the metadata, and use one of the tables in an interface. So it’d be fairly trivial to update a substitution variable directly from SQL, perform a data load, and you could even kick off a calc script simply by invoking a procedure.
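
As a rough sketch of those last few ideas (the variables table follows the layout from the substitution variable posts above; the calc procedure name is purely hypothetical):

-- Update a substitution variable directly from SQL
UPDATE SAMPLE.VARS SET VALUE = 'Feb' WHERE NAME = 'CurrMonth';

-- Kick off a calc script by invoking a procedure (name is hypothetical)
CALL SAMPLE.RUN_CALC('Basic', 'CalcAll');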

In short, things are exciting in the Vess world. While many of you want to load data to Essbase cubes without a load rule, just as many are interested in pulling database metadata (things you normally dump with MaxL and scrape out), and this tool might just fill that need in a clean way. More to come later – I have some presentations to get ready for Kscope15 (although sadly none on this little technological flight of fancy…).

Essbase data loads with Windows integrated authentication

This is a followup to a post I made on the Network54 Essbase forum: Yes, you can use Windows integrated authentication (indirectly) to perform Essbase data loads. It’s not necessarily supported, but it works.

First you start off with the connection definition in the Windows ODBC connection manager. This is where you setup a normal ODBC connection to SQL Server:

ODBC connections in Windows to be used by Essbase data loads

One of the configuration screens is where you specify the credentials, like this:

Connecting via integrated authentication

The SQL Server connection configuration lets you specify how to connect to the database instance. I would guess that much of the time this is configured with a particular native SQL Server ID. Note that it’s also possible to connect using “Windows NT authentication”. You can select this option to try and have the local machine connect via its own credentials to the SQL Server instance. That is, the user running the Essbase service on the local machine would need to have access to the given SQL Server instance.

With the ODBC connection setup, you can reference it in a load rule as normal:

Essbase Data Load SQL Definition

Next when you go to test the connection/data load, just specify anything you want for the user name and password when it prompts you in this dialog box:

Essbase SQL Data Load Credential Dialog

I have gotten into the habit of typing in ‘dummy’. As Peter Nitschke unhelpfully points out, in this case I don’t need to document the automation author because it’s already documented (heyoooo). Pete, I know you read this, by the way – is that rash cleared up yet? You actually have to put in something, otherwise the load won’t work.

Similarly if you do a normal data load, you have to put something in:

Essbase Data Load from SQL Load Rule

Or else you’ll get this helpful error message:

The error when the username is not specified

Do note that the username and password are both discarded or otherwise unneeded, assuming the network credentials are sufficient to connect to the database. As a somewhat interesting aside, this seems to be a quirk in the Essbase code path that processes data loads. It guards against NULL (empty) values, and if it picks up one for the username or password, it errors out the load. If memory serves, you can get away with only filling in a username when testing in the data load rule SQL editor, but you need both when doing a data load. This is due to differences in how the EAS GUI verifies input. Also, if you write MaxL automation that relies on this Windows integrated authentication, you need to fill in some dummy or otherwise ignored values in the MaxL.

My guess is that Essbase actually tries using the username/password but when it is able to make a successful connection via the integrated authentication, the SQL Server driver just ignores any other connection efforts, so the “bad” username and password have no effect.

Apparently connecting this way is unsupported, but I have set this up and used it numerous times without any issue. If Oracle really wanted to polish this up and make integrated authentication a properly supported option, that’d be great – I’d start with some consistency on the EAS side of things, as well as an alternative MaxL syntax that lets you explicitly specify integrated authentication (or put another way, a MaxL syntax that doesn’t need a username/password in it).

Quadruple Backslashes

No, Quadruple Backslashes is not a college rock band (that I know of). But maybe it should be one. Anyway, ever see quadruple backslashes in a path in a batch file? Like this:

SET FILE=\\\\some-server\\path-to-app-folder\\app\\data.txt

What’s going on here? This is a UNC path, and the normal form is like this:

SET FILE=\\some-server\path-to-app-folder\app\data.txt

Let’s say we are doing a data import in a MaxL script:

import database Sample.Basic data
    from data_file "$FILE"
    using server rules_file "LdAll"
    on error write to "errors.txt";

In many scripting languages, backslashes are special. They are used to “escape” things inside of strings. What if you need a newline inside of a string? That’s \n. What if you want a tab? That’s \t. What if you want to print a backslash itself? You can’t just put a single backslash in, because then the interpreter thinks the next character is something special. So to write out just a backslash, you escape the backslash itself (so it’s \\).

To further complicate things, many scripting languages have a notion of interpolated and non-interpolated strings. That means that strings contained inside of double quotes (“This is a double quoted string!”) have their contents parsed/scanned for special characters. Strings in single quotes don’t (‘This is a single quoted string!’). So in the single-quoted string, we can sometimes (not always) get away with doing whatever we want with backslashes. But if we use the single quoted string we can’t stick variables in it.

So now we need a double quoted string so we can put variables in it, but this makes putting in backslashes for our UNC path a little complicated. No worries – just escape the backslashes. Returning to our quadruple-backslashed MaxL example that is surrounded with double quotes: MaxL parses the string contents (the stuff between the double quotes), and the four backslashes become two backslashes in the final string. Each pair of backslashes in the other parts of the UNC path gets translated into a single backslash.

The final resulting string has the two backslashes to start the UNC path and any other path delimiters are just single backslashes.

Normal Paths

If you are just loading a file from a normal, non-networked path (or it’s a mapped drive), and you’re on Windows, then I highly recommend just using forward slashes to delimit the path. For example, consider an input file at D:\Hyperion\data.txt. Your double quoted string would be D:\\Hyperion\\data.txt, which gets translated down to single backslashes. For the last decade or more, Windows has had awesome support for the more traditional forward slash Unix path separator, meaning you can just use D:/Hyperion/data.txt. The forward slash isn’t special at all, so you don’t need to escape it. You can’t use double forward slashes to denote a UNC path, though, so //server-name/path/to/file.txt will not work, thus requiring some backslash kung fu to get things working.