Introducing PBJ: The PBCS Java Client Project

I’d like to announce the forthcoming availability of a new open source project called PBJ (or PB&J if you want): a Java client for PBCS (Planning and Budgeting Cloud Service) that offers a clean, robust, and rapid way to work with the PBCS REST API in Java.

The PBCS REST API provides similar functionality to the EPM Automate tool and can perform such actions as exporting data, adding members to an outline, running business rules, and more. The REST API is an exciting development as Planning continues its march to the cloud, because it will allow integration from many languages and tools.

Consuming a REST API isn’t necessarily hard, but there are a lot of moving parts – connecting, managing credentials, invoking the right method, parsing the output, and then deciding what to do with it. Most of this is boilerplate code that can get verbose pretty quickly.
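
Just to illustrate, here is roughly what a single “list the applications” call might look like in plain Java with no library help. Note that the endpoint path and the identity-domain-prefixed username here are illustrative – consult the PBCS REST API documentation for the exact formats:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class RawPbcsCall {
    public static void main(String[] args) throws Exception {
        // Illustrative endpoint – check the PBCS REST docs for the real path
        URL url = new URL("https://myserver/HyperionPlanning/rest/v3/applications");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Credentials are typically the identity domain, username, and password
        String credentials = "mydomain.admin:password";
        conn.setRequestProperty("Authorization",
                "Basic " + Base64.getEncoder().encodeToString(credentials.getBytes("UTF-8")));
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON – and we still haven't parsed it
            }
        }
    }
}

And that’s before any error handling, JSON parsing, or retry logic.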

Therefore, I thought it would be nice if there were a high-quality library that could easily be used and reused. This is where the PBCS Java Client – PBJ – comes in: it is a freely available library that is released under the Apache Software License. The ASL is a very generous license that essentially means you can do anything you want with the code. My hope is that this library becomes the de facto client for using Java to work with the PBCS API.

As an example of how PBJ makes things simple, consider the following example of connecting to a server and running a business rule:

PbcsClient client = new PbcsClientImpl(server, identityDomain, username, password);
PbcsApplication app = client.getApplication("Vision");
app.launchBusinessRule("AggAll"); // the rule name here is just an example

Here’s a small example of checking the status of a job:

PbcsClient client = new PbcsClientImpl(server, identityDomain, username, password);
PbcsApplication app = client.getApplication(appName);
PbcsJobStatus jobStatus = app.getJobStatus(558);
System.out.println("Job status: " + jobStatus);

Three lines of code and we have connected to our PBCS server, grabbed a particular application, and invoked a business rule on it – and checking on a job is barely any more work. At the moment the API supports running business rules, exporting data, refreshing cubes, downloading files, and more.

PBJ is under active development and will be released in the near future. Contributions from others are entirely welcome. Not every PBCS feature will be supported on day one. Please let me know if you’d like to contribute to or test out the API.

Book Review: Developing Essbase Applications – Hybrid Techniques and Practices

I am very pleased to be able to review Developing Essbase Applications: Hybrid Techniques and Practices for you today. I can’t believe it has already been over three years since my review of the first Developing Essbase Applications book. As before, this book is a collection of in-depth chapters on various subjects, written by some of the best and brightest out there in the Essbase community. The author list is pared down slightly from before and contains some new but familiar names.

Again leading the cat-herding editing efforts is the venerable Cameron Lackpour, along with John Booth, Tim German, William Hodges, Mike Nader, Martin Neuliep, and Glenn Schwartzberg. Those of you who frequent OTN, the Network54 Essbase Forum, or conferences such as Kscope will recognize these names quite easily. I am quite honored to consider these authors my friends and colleagues.

The DEA reader will be treated to several compelling chapters on thoroughly contemporary Essbase topics, including the following:

  1. Exalytics
  2. Hybrid Essbase
  3. Young Person’s Guide to Essbase Cube Design
  4. Essbase Performance and Load Testing
  5. Utilizing SQL to Enhance Essbase
  6. Integrating OBIEE and Essbase
  7. Managing Spreadsheets with Dodeca
  8. Smart View

Essentially, the book takes a divide-and-conquer approach where each author tackles one or more of the chapters. Now, when it comes to programming books, of which I am a fairly avid reader, I am not a fan of the “cookbook” approach – wherein the book is a collection of disparate topics. In the programming book realm these books tend to read like a bunch of polished-up blog posts. For this Essbase book, though, I think the approach works wonderfully. None of the chapters feels like a glorified blog post. They are densely packed with useful information and, perhaps more usefully, with an incredible amount of insight and pointers derived from countless hours of effort on the part of the respective authors.

I remember talking to Cameron when the first book came out and I told him that it was a good book, but I wasn’t sure that it was a great book (in retrospect I think I was being overly critical). And I remember that he told me that if you can learn just one or two things from the book, it’d be worth it. And that notion really resonated with me.

Along those lines, the chapters on Hybrid Essbase, Performance and Load Testing, and Smart View are worth the price of admission on their own. The other chapters are also quite compelling, but your particular skill set will affect how much you get out of a given chapter. For instance, I have done my fair share of SQL work, so I found Glenn’s chapter on Utilizing SQL to be an interesting treatise on Essbase and SQL, but I didn’t personally get a lot out of it. That said, while I take familiarity with SQL for granted, I now know that SQL isn’t in every Essbase developer’s wheelhouse (but it should be!).

So, does this book pass the “did you learn one or two things from it?” test?

In a word: yes.

If you are at all serious about being an Essbase developer, I can easily and wholeheartedly recommend that you add this book to your collection. This is a no-brainer if you are a developer or a consultant. This is content from people at the top of their game who have put in countless hours compiling and writing on incredibly useful topics. If you gain one insight on Exalytics, Hybrid Essbase, Smart View, OBIEE, SQL, cube design, or performance testing, or learn something about the awesome Dodeca software, it will have been easily worth it.

If it were me, I might have spiced this book up by adding a Jason Jones chapter on ODI and Hyperion or some other interesting things, but I can’t fault them for that (next time, Lackpour, next time).

Nice job, guys. Now go buy this book.

Drillbridge vs. Essbase Studio

I get asked a lot about implementing drill-through with Drillbridge versus implementing with Essbase Studio. First of all, this question itself is kind of interesting to me – because many people look at Studio as a means to an end simply for achieving drill-through. Drill-through is one of the aspects of Essbase Studio, but certainly not the whole story.

Essbase Studio is a good tool. It’s the successor to Essbase Integration Services, which I was a huge fan of, and my blog articles from yesteryear continue to get a lot of web traffic, owing at least partly to there being very few EIS articles on the web.

In terms of implementing solutions with Essbase Studio, the most common scenario I see at organizations is that they want drill-through, and this alone drives the decision to use Essbase Studio. So the developer starts going through development: creating fact tables, dimension tables, foundational Essbase Studio elements, cube models, and deploying a cube.

Let’s take things a step further and think about the pieces involved with automating and managing the cube: some MaxL to deploy it via automation, ETL processes to load and update the dimension/fact tables, creating the drill-through reports with custom SQL, writing calc scripts, and more.

The point of all this isn’t to make Studio seem like a lot of work. It’s a reasonable amount of work if Essbase Studio solutions fit into your organization (more on that in a moment). I think it’s an unreasonable amount of work if you only want drill-through.

Implementing a solution with Studio just to get drill-through is what I sometimes call “backdoor drill-through” – basically overlaying an existing solution with Studio drill-through just so you can get drill-through, but not otherwise leveraging much of anything that Essbase Studio brings to the table.

That said, if your organization has a “strong” relational database model (e.g., a well-designed database for some system, with proper primary/foreign keys, constraints, and so on), and some other unit of the business takes care of updating it on a regular basis, I think that’s a really good use case for building out a cube with Essbase Studio. But why?

Again, it seems like most solutions built with Essbase Studio are built from tables and data created specifically for Studio – in other words, someone has to design ETL processes that load tables, from wherever, formatted specifically for Studio and the cube to be built from them. One of the problems with this approach, however, is that many of us are cube experts and not necessarily relational database experts. You don’t have to be a relational database expert to make use of Studio, but it won’t hurt – in fact, it can help quite a bit.

So again, I think Essbase Studio can be a great choice for spinning up multiple cubes when you are tapping into an existing relational model – not as much if you have to create that model yourself. Essbase Studio solutions should feel “natural”: you should be able to get away with the default data load code and stock drill-through code if you have designed everything correctly. All too often there is a custom data load and completely custom drill-through (which is often a sign that the data model is incorrectly built).

All this is why Drillbridge really shines in terms of drill-through ROI: you can keep your existing cube, automation, and administration exactly as they are. Your relational data can come from anywhere. You can build completely custom web reports, drill from drill reports to other reports, and more. Drill-through on those dynamic calc YTD members in your alternate Time hierarchy is super easy to support. And there is no 1,000-item limit on Oracle drill-to-bottom queries.

But best of all: fewer moving parts, rapid deployment, and your processes stay exactly the same. I think this really resonates with organizations around the world. The free edition of Drillbridge is now in production at over 30 companies (probably more), and there is a growing list of happy Drillbridge Enterprise customers. Even better: I get emails every single day from people who love Drillbridge and are enhancing the value of their Essbase solutions for their users.

Database stats and outline viewing with Vess

I had a little bit of time this weekend and dusted off Vess for some updates. I was able to greatly enhance the functionality and, perhaps more importantly, widen the range of use cases it can address.

I already wrote about a use case previously: using SQL to create a substitution variable. Vess supports fully symmetric substitution variable updates. That is to say, you can create, read, update, and delete substitution variables on your Essbase server just by changing the data in the proper table. So just to elaborate on my thoughts on the previous post, I think there are some nice use cases here. Of course, there’s nothing wrong with MaxL, but if you have an instance where you are dumping substitution variables via MaxL and scraping the ASCII art off of them in order to get to the real variable values, Vess offers a cleaner approach.
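
As a quick sketch of what this looks like in practice, here’s a plain JDBC fragment that updates a variable through Vess, using the server-wide variables table described further below. The JDBC URL format and the variable name/value are placeholders of mine (assume the usual java.sql imports):

// Placeholder URL - Vess's actual JDBC URL format may differ
String url = "jdbc:vess://essbase-server:1423";
try (Connection conn = DriverManager.getConnection(url, "admin", "password");
     PreparedStatement ps = conn.prepareStatement(
             "UPDATE VESS_SCHEMA.VARS SET VALUE = ? WHERE NAME = ?")) {
    ps.setString(1, "FY16");     // new value
    ps.setString(2, "CurrYear"); // variable name - illustrative
    ps.executeUpdate();          // the variable is updated on the Essbase server
}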

As of now, Vess also provides a window into a whole slew of properties on various Essbase objects: server, application, and database. Here’s a view of some database properties (as shown in a generic JDBC GUI with Vess configured as a driver):

Using Vess to view Essbase database properties for Sample.Basic

There’s a use case here too. I know of more than a few people who are dumping database stats using MaxL (such as to monitor database/server stats over time). So instead of running a script, dumping to a file, and parsing that out (or whatever), you could slam the data directly into SQL (using ODI or just some tool that can execute SQL statements).
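
As a rough sketch of that pattern in plain JDBC – both the Vess URL and the table/column names here are made up for illustration, since Vess’s actual property table names may differ (assume the usual java.sql imports):

// Read stats through Vess and append them to a history table in a relational DB
try (Connection vess = DriverManager.getConnection("jdbc:vess://essbase:1423", "admin", "pw");
     Connection db = DriverManager.getConnection("jdbc:oracle:thin:@dbhost:1521/orcl", "stats", "pw");
     Statement read = vess.createStatement();
     ResultSet rs = read.executeQuery("SELECT NAME, VALUE FROM SAMPLE.BASIC_PROPERTIES");
     PreparedStatement write = db.prepareStatement(
             "INSERT INTO ESSBASE_STATS_HISTORY (SNAPSHOT_DATE, PROPERTY, VALUE) " +
             "VALUES (SYSDATE, ?, ?)")) {
    while (rs.next()) {
        write.setString(1, rs.getString("NAME"));
        write.setString(2, rs.getString("VALUE"));
        write.executeUpdate();
    }
}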

Next up, Vess models tables for the various dimensions in a cube. This is not unlike how ODI would create a model of a cube: one table per dimension. Here’s a view of some information from Sample/Basic:

Using Vess to view outline information

Outline functionality in Vess is not symmetric: you can read, but updates are disabled for now. The use case here is that you could pull outline information without needing ODI (or something else, such as the Essbase Outline Extractor) in your automation. Since Vess is already a fully functional SQL database, you could pretty easily do some interesting things like grabbing all of the level-0 members in an outline (“SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0”).
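
Here’s what that level-0 query might look like from plain JDBC (again, the URL is a placeholder of mine; assume the usual java.sql imports):

try (Connection conn = DriverManager.getConnection("jdbc:vess://essbase:1423", "admin", "pw");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(
             "SELECT * FROM SAMPLE.BASIC_DIM_MARKET WHERE LEVEL = 0")) {
    while (rs.next()) {
        // Print the first column - the exact column names depend on Vess's model
        System.out.println(rs.getString(1));
    }
}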

Vess also supports data loads to a cube (look mom, no load rules!). I wrote about this before too. Data reads are turned off for now (as in, you can INSERT but SELECT always returns nothing) – mapping cube data into a relational model turns out to be a bit of an interesting issue.
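
As a sketch of what a load might look like – assuming Vess exposes a fact table with one column per dimension plus a value column (my guess at the naming, not necessarily Vess’s actual model):

try (Connection conn = DriverManager.getConnection("jdbc:vess://essbase:1423", "admin", "pw");
     PreparedStatement ps = conn.prepareStatement(
             "INSERT INTO SAMPLE.BASIC_DATA (YEAR, MEASURES, PRODUCT, MARKET, SCENARIO, DATA) " +
             "VALUES (?, ?, ?, ?, ?, ?)")) {
    ps.setString(1, "Jan");
    ps.setString(2, "Sales");
    ps.setString(3, "100-10");
    ps.setString(4, "New York");
    ps.setString(5, "Actual");
    ps.setDouble(6, 123.45);
    ps.executeUpdate(); // one cell loaded - no load rule involved
}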

Lastly, I started writing up the ODI technology for Vess. I think this is an absolute win for Vess because it would mean that you could build a nice clean technology/Knowledge Module that doesn’t have to use a bunch of Jython to glue things together: you can treat everything as a vanilla JDBC database model. No idea when this could be ready but you can rest assured that I’ll blog about it.

Vess Updates Substitution Variables

A colleague of mine is running into an issue with substitution variables and was looking for a solution that he could use to sync values up. He thought maybe Vess would be a good fit. Vess, as I have blogged about before, is a “virtual” Essbase JDBC driver. Vess maps Essbase concepts and crams them into a typical database model. For example, Vess exposes tables that model substitution variables.

In the case of server-wide substitution variables, there is a “VARS” table that has two columns: NAME and VALUE. For each application, there is another table that contains four columns: NAME, VALUE, APPLICATION, and DATABASE.

As a quick aside, it might seem a little odd to have separate tables. After all, these tables are notionally about the same thing as the Variables screen in EAS. Well, you have to flip your thinking a little. Don’t think of variables as belonging only to either the server or a cube: think in terms of which variables are applicable to a cube. In other words, if you ask Essbase which variables are applicable to the whole server, you get the global variables only. If you ask Essbase which variables are applicable to a cube, you get the cube-, app-, AND server-specific variables. This is one of the reasons there are multiple tables to model the variables.

Getting back on track, given that we have these tables and can treat them just like normal SQL tables, we can do some interesting things. Let’s say we have a variable in the global scope, and we want to create or update a corresponding variable specific to an app. We can do this in one statement:

INSERT INTO SAMPLE.VARS (NAME, VALUE, APPLICATION, DATABASE)
SELECT NAME, VALUE, 'Sample', 'Basic' FROM VESS_SCHEMA.VARS WHERE NAME = 'Foo'

What’s going on here? In Vess, a schema named VESS_SCHEMA is presented for server-wide things (server-wide variables live in the VARS table in this schema, as used above). The server VARS table only has the columns NAME and VALUE.

Each application on the Essbase server is modeled as its own schema. In this case, our favorite app – Sample – gets a schema named SAMPLE. This schema also contains a VARS table (containing columns NAME, VALUE, APPLICATION, and DATABASE).

Given these tables, it’s a simple matter of selecting the server variable with the name we want (in this case, a variable named ‘Foo’) and inserting it into the variables for the Sample app.

Of course, if we wanted to for some reason, we could alter the name using normal SQL (string truncation, substrings, etc.), or whatever. We can also delete variables, like this:

DELETE FROM SAMPLE.VARS WHERE NAME = 'Foo'

As I’ve said before, Vess continues to be an “interesting” proof of concept. As time permits I am filling out more and more functionality. At present, Vess models things like substitution variables and the metadata you might see in MaxL or EAS (cube statistics, user sessions, etc.), can load data to cubes, and can do certain outline-related operations.

Vess is not available as a public download at this time but I have handed a few copies out to get feedback. I think Vess is just about good enough to be used in automation and other things. If you’re interested in using this in a production situation (automation or otherwise), please contact me to discuss support options.

Custom Drillbridge Reports feature

I have been waiting to write about this feature for over a year – and I finally can.

The newest version of Drillbridge – Drillbridge Enterprise 2.0 – includes an incredibly powerful new capability: the ability to plug in custom reports.

What are custom reports? Custom reports are new report types that can be plugged in to Drillbridge by developers, just like a calc script or a CDF is added to Essbase. Think of the most common Drillbridge report type: Drillbridge users know this as just a “Report” but under the hood, it’s actually a specific type of report – one that receives the POV from Smart View, Planning, or Financial Reporting and then executes a specially crafted Drillbridge query against a database and displays the results to the user.

What if we could stand on Drillbridge’s shoulders for help with interpreting the point of view, opening a database connection, pulling members from an Essbase outline, and more, but then completely customize what is shown to the user – all using just a little bit of custom code?

This is where custom report modules come in: a custom report module gets to leverage all of the power and flexibility of Drillbridge and its framework, but then completely take over what is shown to the user. Don’t just customize a column in a table: customize the entire display!
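
To make that a little more concrete, a custom report module might implement an interface along these lines. To be clear, this is purely an illustrative sketch of the concept – the actual Drillbridge SDK names and signatures may differ:

import java.sql.Connection;
import java.util.Map;

// Purely illustrative - not the actual Drillbridge SDK interface
public interface CustomReportModule {

    // Drillbridge hands the module the POV from Smart View/Planning/FR and an
    // open database connection; the module returns whatever content it wants
    // displayed to the user.
    String render(Map<String, String> pointOfView, Connection connection);
}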

There are several compelling use cases I can think of off the top of my head that are well-served by custom modules:

  • Show images/thumbnails in the drill-through results with links to full-size copies
  • Call a web-service to pull data to display to the user
  • Generate custom page layouts

As an aside, for the past year I have somewhat regularly been getting asked the question, “Can I get the results directly in Excel?” To which I have said, “No, but it’s just one click away – and I don’t think you want to do that anyway.” What I didn’t say (but had in mind) was custom reports, where we could easily drop in a module that would show us images, a custom layout, or anything we could dream up – but more importantly, generate something that just does not fit into our spreadsheet world. So not only is data just one click away from Excel (as always), but we can now display absolutely anything we want to the user.

Do you remember Linked Reporting Objects (LROs)? This is them on steroids – and more. Any intersection in your cube is up for grabs to drill into something interesting. User drilling on Actuals? Let’s show them PDFs of the original invoices. User drilling on dynamic calc YTD members under Budget in an ASO database? Let’s generate a link to our internal website with budget info – whatever. How about we search Twitter for a certain hashtag related to one of our products… why not? The possibilities are endless. Not only are they endless – they are within reach.

One last thing: custom report modules also open the door for another oft-requested feature – generating custom links. It is now trivially easy to take the POV from a user, use the Drillbridge expression language to transform it into a URL (such as drilling to an internal site, an OBIEE page, or whatever), and forward the user to it.

Custom report modules (including the link forwarding report I just mentioned) are available right now in Drillbridge Enterprise 2.0. While Drillbridge Enterprise has a list of compelling features, this one is my absolute favorite.

If you’re feeling adventurous (and you’re a Java aficionado), you might check out a bare-bones custom report example on the Drillbridge wiki – and I’ll be posting some interesting examples in the near future.

Introducing Drillbridge Enterprise

I am very pleased to officially announce the launch of Drillbridge Enterprise. This new software offering represents the 2.0 version of Drillbridge and complements the existing free edition of Drillbridge.

The existing or “plain” version of Drillbridge continues to exist as a free product. This allows organizations to continue downloading and using Drillbridge to enhance the value of their investment in Essbase.

Drillbridge Enterprise is a paid, licensed version of Drillbridge that comes with enhanced features and capabilities, intended for organizations that have advanced integration needs as well as a need for software support. These advanced features include such things as automation integration tools, enhanced security, custom report modules, a Java SDK, a RESTful API, and more. It’s a really great release that is available immediately and is licensed through Saxifrage Systems LLC.

Moving forward, development will be focused on Drillbridge Enterprise, while the free version of Drillbridge (also known as just Drillbridge or as Drillbridge Community Edition) will see updates mostly focused on addressing one-off issues as needed.

Drillbridge Enterprise is built from the same solid code base that has proven so robust, flexible, and easy to configure. In the week after Kscope15 alone (where Drillbridge had a well-attended session), at least three different people independently downloaded, installed, and set up Drillbridge at their companies.

Clearly, people are hungry to enhance the value of their existing systems in a simple, non-invasive, and incremental way. I am absolutely astounded at the number of times I get emails from people saying “It just works!” It brings a smile to my face.

That said, there are some truly incredible features available right now in Drillbridge Enterprise 2.0, it is competitively priced, and I can’t wait to get it into the hands of Essbase users around the globe. Please check out the Drillbridge website for more information, and as always, don’t hesitate to reach out to me with any questions.

Manually injecting rows to an ODI journal table

Earlier this year I did an absolute whirlwind crash course on ODI journalization. In that article I talked about the layout of the tables that ODI uses to track changes in particular tables (for simple journalization with one table). You may recall that changed (inserted, updated, deleted) rows are tracked in the journal table by way of some metadata as well as the primary key of the monitored table.

Typically, after your processing is done, the entries from the ODI journal table that were processed are deleted. Occasionally, however, you may need to manually fix up the journal table. For me this sometimes means “manually injecting” rows into it, so that rows ODI doesn’t consider new or updated get processed anyway. You can achieve this in a couple of ways.

The first way is to “touch” the rows in the monitored table in some way so that the trigger installed on that table fires and inserts rows into the ODI journal table. You could perform some sort of dummy update on a row. This can work but isn’t always desirable.

Another technique, and the one I’ll talk about in this post, is to manually add rows to the journal table that contain the primary key of the row to be processed. The trick, then, is what to fill in for the other columns.

Recall that the general format of the journal table is the subscriber name, a consumed flag, the date the row went into the table, whether it’s an insert or a delete, and the primary key of the row being journalized.

As an example, assuming the journal table is called J$TEST_CODE_RESULT (for a real table of TEST_CODE_RESULT), and also assuming a primary key of TEST_CODE_RESULT_ID (in this case an integer), we can add a row to the journal table like this (the subscriber name and ID below are placeholders):

INSERT INTO J$TEST_CODE_RESULT (JRN_SUBSCRIBER, JRN_CONSUMED, JRN_FLAG, JRN_DATE, TEST_CODE_RESULT_ID)
VALUES ('MY_SUBSCRIBER', '0', 'I', SYSDATE, 12345)

The first value is the JRN_SUBSCRIBER. This corresponds to the name of the subscriber set up in the ODI journalizing and is the same as the filter you use in the interface to get rows for just your particular subscriber. Next is the JRN_CONSUMED field, which we set to 0 (during processing, ODI will change this to 1 to indicate it’s working on that data), then the JRN_FLAG character ‘I’ (uppercase i) to indicate the row was an insert or update, the JRN_DATE current time (SYSDATE here, assuming an Oracle database), and finally, the ID of the row we need to inject.

You can of course insert as many rows as you want, such as by writing an INSERT that gets its values from a SELECT. Every now and then, if you have a lot of rows to reprocess, you might find yourself with a table that holds the IDs; you can just insert them into the journal table and they’ll get picked up on the next run.
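
For example, a quick JDBC sketch of that bulk pattern (REPROCESS_IDS is a hypothetical work table holding the IDs, and MY_SUBSCRIBER a placeholder subscriber name; assume the usual java.sql imports):

try (Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@dbhost:1521/orcl", "odi", "pw");
     Statement stmt = conn.createStatement()) {
    int rows = stmt.executeUpdate(
            "INSERT INTO J$TEST_CODE_RESULT " +
            "(JRN_SUBSCRIBER, JRN_CONSUMED, JRN_FLAG, JRN_DATE, TEST_CODE_RESULT_ID) " +
            "SELECT 'MY_SUBSCRIBER', '0', 'I', SYSDATE, TEST_CODE_RESULT_ID FROM REPROCESS_IDS");
    System.out.println("Injected " + rows + " journal rows");
}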

More fun with Vess, the Essbase JDBC driver

All I can say about Vess is that it’s still… interesting. This is definitely one of the more complex things I’ve ever worked on, as well as one of the more elegant. I reworked a lot of the plumbing in the driver, which is now enabling some interesting and rapid progress.

Here are some interesting things that you can do as of right now:

  1. Query list of applications and metadata
  2. Query list of cubes and metadata
  3. Query list of sessions (same as Sessions in EAS)
  4. Load data without a load rule by using a SQL INSERT statement
  5. List users
  6. List substitution variables

Some of the really fun things are actually the most subtle. Wherever possible, Vess tries to supply metadata for a modeled table that matches up with Essbase semantics or naming limits. For example, substitution variables are in a table where you have the variable name, value, application, and database. Each of these columns is a normal String column, but with a character limit that matches the Essbase maximum (320, 256, 30, and 30, respectively, taking into account Unicode).

The same goes for member names on the data load table. Wherever it makes sense, a primary key or nullability constraint is also set on a table. For example, substitution variable names are assumed to be non-null, as are member names in an Essbase fact table, cube names, and more.
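
Because all of this comes through standard JDBC metadata, you can inspect it with plain DatabaseMetaData calls. A minimal sketch (the Vess JDBC URL is a placeholder of mine; assume the usual java.sql imports):

try (Connection conn = DriverManager.getConnection("jdbc:vess://essbase:1423", "admin", "pw")) {
    DatabaseMetaData md = conn.getMetaData();
    // List the columns of the server-wide variables table, with size and nullability
    try (ResultSet cols = md.getColumns(null, "VESS_SCHEMA", "VARS", null)) {
        while (cols.next()) {
            System.out.println(cols.getString("COLUMN_NAME") + " "
                    + cols.getString("TYPE_NAME") + "(" + cols.getInt("COLUMN_SIZE") + ")"
                    + " nullable=" + cols.getString("IS_NULLABLE"));
        }
    }
}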

I have been using a free generic JDBC tool called Squirrel SQL to help facilitate testing. Squirrel supports any database for which you can supply a generic JDBC driver. Here are a few screenshots from testing out Vess when connected to Sample/Basic (list of cubes on server):

View list of cubes and metadata with Vess

List of active sessions on server:

Viewing list of active sessions on Essbase server using Vess

Some of the tables also support DELETE semantics. For example, deleting a row from the session table is equivalent to killing the session; the same goes for deleting substitution variables.

Lastly, all tables that are modeled in Vess are part of the SQL metadata store, so tools that ask for metadata (such as Squirrel) get it back. This is interesting/notable because theoretically you could already use Vess with ODI to perform data loads and some other things. I haven’t tried this yet, but in theory I could add the driver, reverse the metadata, and use one of the tables in an interface. So it’d be fairly trivial to update a substitution variable directly from SQL, perform a data load, or even kick off a calc script simply by invoking a procedure.

In short, things are exciting in the Vess world. While many of you want to load data to Essbase cubes without a load rule, just as many are interested in pulling database metadata (things you normally dump with MaxL and scrape out), and this tool might just fill that need in a clean way. More to come later – I have some presentations to get ready for Kscope15 (although sadly none on this little technological flight of fancy…).