Oracle Open World 2016 Recap

As I mentioned a week or so ago, I made a last minute appearance at Oracle Open World this year. It was my first time attending and presenting at OOW. I actually didn’t catch too much of the conference as I only flew in on Wednesday and flew out on Thursday. Nevertheless, I had a bit of a whirlwind experience, but a very good one. While I hadn’t planned on it (I’m more of a Kscope guy), I am now looking forward to attending Open World next year.

As for the presentation I was part of, I think it went pretty well. Many thanks to Gabby Rubin of Oracle for coming up with the idea for the presentation and facilitating it. The presentation was on “Essbase Tools and Toys” and was meant to highlight, at a high level, some of the interesting things that folks such as myself are doing that involve the Essbase APIs or otherwise work with Essbase. The presentation discussed items created by me, Tim Tow, and Harry Gates. Additionally, Kumar Ramaiyer (also from Oracle) talked a bit about what’s coming with Essbase Cloud Service (EssCS).

I talked about several tools I have created over the years, and I think it was the first time I’d really tried to convey an incredibly succinct business reason for why each of the tools exists. So by way of a recap, I wanted to go over those tools here for those that are interested.

In total, I talked about these tools:

- Cubedata
- Camshaft
- Vess
- Hyperpipe
- Jessub
- Outline Export Parser
- Rejected Record Summary

All of the tools, with the exception of Vess, can be downloaded by finding the appropriate page under Projects in the menu for this website. Additionally, I fielded a few questions that came up (Drillbridge!), which I’ve recapped at the end.


Cubedata

Cubedata is a tool that generates large (or small) quantities of data that can be loaded to a cube to help test performance. The performance being tested can be load speed, calc speed, gauging how big the database will be/get, and checking whether different dense/sparse (or other) settings would fare better.

I created this tool because I couldn’t always work with production (sensitive) data in test, or I wanted to get a more comprehensive feel for how a cube would perform over the years. Automation, calc, and query performance have always been incredibly important to me. Many of the cubes I have designed have withstood the test of time and have performed diligently over the years, sometimes with very few changes needed. The test files generated with Cubedata played no small part in the development of some of these solutions.

To use Cubedata, all you need to do is specify lists of member names in some input files (one member per line), then optionally pass some other parameters to this command-line utility, such as the maximum number of data points to create, what range the values should fall between, and so on. Cubedata will take care of the rest. I have used it to generate in excess of 50GB of input data.
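To give a feel for what a generator like this does under the hood, here’s a minimal sketch: cross-join member lists from each dimension and attach a random value to each combination. All class, method, and parameter names here are illustrative — this is not Cubedata’s actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Random;

// Hypothetical sketch of a Cubedata-style generator: walk the cross product
// of the member lists like an odometer and attach a random value to each row.
public class FakeDataGenerator {

    /** Builds up to maxRows comma-separated records from the given member lists. */
    public static List<String> generate(List<List<String>> dims, int maxRows,
                                        double min, double max, long seed) {
        Random rng = new Random(seed);
        List<String> rows = new ArrayList<>();
        int[] idx = new int[dims.size()];
        outer:
        while (rows.size() < maxRows) {
            StringBuilder sb = new StringBuilder();
            for (int d = 0; d < dims.size(); d++) {
                sb.append(dims.get(d).get(idx[d])).append(',');
            }
            double value = min + (max - min) * rng.nextDouble();
            sb.append(String.format(Locale.ROOT, "%.2f", value));
            rows.add(sb.toString());
            // advance the "odometer" over the cross product
            for (int d = dims.size() - 1; ; d--) {
                if (d < 0) break outer;                  // exhausted every combination
                if (++idx[d] < dims.get(d).size()) break;
                idx[d] = 0;
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<List<String>> dims = List.of(
            List.of("Jan", "Feb"),
            List.of("Actual", "Budget"),
            List.of("Sales"));
        generate(dims, 10, 0, 100, 42L).forEach(System.out::println);
    }
}
```

The real tool adds plenty of knobs on top of this, but the core idea — member lists in, bounded random fact rows out — is the same.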

For bonus points, you might even use the Applied OLAP Next Generation Outline Extractor to pull members from your outlines, then feed those into Cubedata.


Camshaft

Camshaft runs an MDX query against an Essbase cube, then outputs the results to a text file. Many people use MaxL to run MDX queries and feed the results into a cube using a load rule. The bummer about this approach has traditionally been that MaxL puts a lot of junk in the output, so you end up using your load rule as a mini-ETL engine to parse out headers and other junk.

Modern versions of MaxL provide a little more flexibility for MDX output, as blogged about by Gary Crisci. That said, Camshaft gives you a lot of control over the output, and more importantly, keeps the output pretty clean.

Among other options, you can tell Camshaft how to round numbers, whether to show a header or not, how to indent/combine tuples, and more. For people with automation that needs to run an MDX script and load the results somewhere else (to a cube or SQL or otherwise), Camshaft can fit very neatly into the automation sequence and clean things up. Additionally, Camshaft is contained in a single Java JAR file that can be run with the java -jar command (no ugly classpath stuff to worry about!), so dropping it into an automation sequence is pretty clean.
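To illustrate the kind of post-processing this implies, here’s a sketch: each result row is a tuple of member names plus a data value, and the formatter controls rounding, the delimiter, and whether a header prints. This is not Camshaft’s real API — the names and options below are hypothetical.

```java
import java.util.List;
import java.util.Locale;

// Illustrative sketch of Camshaft-style MDX output cleanup: flatten each
// tuple into one delimited line with a predictable number format, so the
// downstream load rule (or SQL loader) has nothing to parse around.
public class MdxResultFormatter {

    public static String formatRow(List<String> members, double value,
                                   String delimiter, int decimals) {
        String num = String.format(Locale.ROOT, "%." + decimals + "f", value);
        return String.join(delimiter, members) + delimiter + num;
    }

    public static void main(String[] args) {
        boolean header = true;  // hypothetical "show header" option
        if (header) {
            System.out.println("Period\tScenario\tValue");
        }
        System.out.println(formatRow(List.of("Jan", "Actual"), 1234.5678, "\t", 2));
    }
}
```

The payoff is exactly what the text describes: output that’s clean enough to load directly, instead of spending load-rule columns on stripping MaxL’s decoration.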


Vess

Vess is a relative newcomer to the stable of tools, and at this stage it is still “highly experimental,” as I like to say (put another way, it’s fragile. Or ornery. Probably both).

Put simply, Vess is a JDBC driver for Essbase (the name is derived from “Virtual Essbase JDBC driver”). Vess presents a relational facade to Essbase. In a lot of ways, this is like plugging a round peg into a square hole (or, in more UML/Visio-ish terms, it’s like trying to turn a cube into a cylinder). Vess “models” the Essbase server as a bunch of relational tables. For example, the substitution variables on a server are available in a table with columns such as APPLICATION, DATABASE, NAME, and VALUE.

Vess also models the outline data as a table (similar to the output from the ODI RKM for Essbase or from the Outline Extractor) and the data in a cube itself. You can actually do a normal SQL INSERT to the Vess data table and Vess will load that data into the corresponding cube. Perhaps more usefully, Vess even lets you get at the data in a cube itself via a SQL query.
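To make the “relational facade” idea concrete, here’s a sketch of the kind of SQL you might issue through a driver like this. The table and column names echo the ones mentioned above, but the exact URL format, driver class, and schema are assumptions — consult Vess itself for the real ones. Without the driver on the classpath, this only builds the SQL strings.

```java
// Hypothetical sketch of querying and loading Essbase through a JDBC facade.
// Everything name-wise here (tables, URL) is illustrative, not Vess's actual schema.
public class VessSketch {

    public static String subVarQuery(String app, String db) {
        return "SELECT NAME, VALUE FROM SUBSTITUTION_VARIABLES "
             + "WHERE APPLICATION = '" + app + "' AND DATABASE = '" + db + "'";
    }

    public static String dataInsert(String table, String... values) {
        return "INSERT INTO " + table + " VALUES (" + String.join(", ", values) + ")";
    }

    public static void main(String[] args) {
        System.out.println(subVarQuery("Sample", "Basic"));
        System.out.println(dataInsert("SAMPLE_BASIC_DATA",
            "'Jan'", "'Sales'", "'Actual'", "'100-10'", "'New York'", "512"));
        // With the driver registered, the rest would be plain JDBC, e.g.:
        // try (Connection c = DriverManager.getConnection("jdbc:vess://server");
        //      Statement s = c.createStatement();
        //      ResultSet rs = s.executeQuery(subVarQuery("Sample", "Basic"))) { ... }
    }
}
```

The point is that once the facade exists, nothing Essbase-specific leaks through: it’s ordinary SQL over ordinary JDBC, which is why dropping the driver into JDBC-aware tools “just works.”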

This opens up a lot of interesting possibilities – such as being able to drop the Vess driver into any tool that works with JDBC, and have instant access to Essbase data. For example, this could mean dropping the driver into ODI to perform a data extract/load, into Data Visualizer, or some other tool.

Vess is a hobby right now but I have provided it to several people that want to play with it and provide feedback. I probably wouldn’t use Vess in production right now but nevertheless, to my knowledge there are some intrepid IT risk nightmares using it for certain things.


Hyperpipe

Hyperpipe runs a SQL command against a specified database (using a JDBC driver you provide, such as for Oracle, Microsoft SQL Server, MySQL, or whatever), then takes the results and loads them to a cube you specify. Given properly specified columns, Hyperpipe loads data without the need for a load rule.
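The core trick is treating each result-set row as “member columns followed by a data column” and emitting load-ready records directly. Here’s a minimal sketch of that mapping — the names are hypothetical, and in Hyperpipe itself the rows would of course come from a JDBC query rather than an in-memory list.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of the Hyperpipe idea: no load rule, just a direct
// row-to-record mapping where the last column is the data value.
public class SqlToCubePipe {

    /** Converts one row (last column = data value) into an Essbase-style record. */
    public static String toLoadRecord(List<Object> row) {
        int last = row.size() - 1;
        String members = row.subList(0, last).stream()
            .map(m -> "\"" + m + "\"")            // quote member names
            .collect(Collectors.joining(" "));
        return members + " " + row.get(last);
    }

    public static void main(String[] args) {
        List<Object> row1 = List.of("Jan", "Sales", "Actual", 512);
        List<Object> row2 = List.of("Feb", "Sales", "Actual", 601);
        for (List<Object> row : List.of(row1, row2)) {
            System.out.println(toLoadRecord(row));
        }
    }
}
```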


Jessub

Jessub is one of the earlier tools I threw over the fence to the public domain, and it has a very narrow but useful purpose: it provides a rich syntax for generating substitution variables. Jessub was created because many systems I worked on or improved used fragile or cumbersome hacks to generate and update substitution variables. Additionally, the vast majority of substitution variables that need to be updated via automation have to do with timing of one kind or another: the year, month/period, week, day, and so on. These are often clumsily (read: fragilely) generated from an almost indecipherable batch file expression (or worse, updated manually). Jessub makes it easy to specify a way to generate a variable based on the current year/date/time.
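The underlying idea is simple enough to sketch: derive the variable values from the current date with readable tokens instead of hand-rolled batch math. The token names below ({YEAR}, {MONTH_NAME}) are illustrative, not Jessub’s actual syntax.

```java
import java.time.LocalDate;

// Minimal sketch of Jessub-style variable generation: resolve date tokens
// in a template against a given "as of" date. Token syntax is hypothetical.
public class SubVarSketch {

    private static final String[] MONTHS = {
        "Jan", "Feb", "Mar", "Apr", "May", "Jun",
        "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };

    public static String resolve(String template, LocalDate date) {
        return template
            .replace("{YEAR}", "FY" + String.format("%02d", date.getYear() % 100))
            .replace("{MONTH_NAME}", MONTHS[date.getMonthValue() - 1]);
    }

    public static void main(String[] args) {
        // e.g. for an October 2016 run, CurrYear -> FY16 and CurrMonth -> Oct
        LocalDate asOf = LocalDate.of(2016, 10, 5);
        System.out.println("CurrYear  = " + resolve("{YEAR}", asOf));
        System.out.println("CurrMonth = " + resolve("{MONTH_NAME}", asOf));
    }
}
```

Compare that to decoding `%date:~10,4%` gymnastics in a batch file and the appeal is obvious.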

Outline Export Parser

The Outline Export Parser is a simple Java reader that can parse the outline export XML files that can be generated from MaxL in contemporary versions of Essbase. Parsing complex XML files might be beyond the reach of many admins who just want some “clean” output (such as just the member names) from such a file. In that case, the Outline Export Parser might be just the right tool: you can feed in the XML file and it’ll output just the member names (or more if you want), with optional indenting and some other flourishes.
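The parsing approach itself is straightforward to sketch with the JDK’s own XML support: walk the nested member elements and print one (optionally indented) name per line. Note that the element and attribute names below ("Member", "name") are assumptions for illustration — check the export file your Essbase version actually emits.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

// Sketch of an outline-export reader: recursively visit member elements and
// indent each name by its depth in the hierarchy.
public class OutlineXmlSketch {

    static void printMembers(Node node, int depth, StringBuilder out) {
        if (node.getNodeType() == Node.ELEMENT_NODE
                && "Member".equals(node.getNodeName())) {
            String name = ((Element) node).getAttribute("name");
            out.append("  ".repeat(depth)).append(name).append('\n');
            depth++;  // children of this member indent one level deeper
        }
        for (Node c = node.getFirstChild(); c != null; c = c.getNextSibling()) {
            printMembers(c, depth, out);
        }
    }

    public static String parse(String xml) throws Exception {
        Node root = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
            .getDocumentElement();
        StringBuilder out = new StringBuilder();
        printMembers(root, 0, out);
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<Outline><Member name='Year'>"
                   + "<Member name='Qtr1'><Member name='Jan'/></Member>"
                   + "</Member></Outline>";
        System.out.print(parse(xml));
    }
}
```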

Rejected Record Summary

The Rejected Record Summary program is not very creatively named. It processes a rejected record file, such as one from a MaxL import, then summarizes the causes of the rejects. It’s useful if you want to set up your automation to perform a data load (from a text file or SQL), capture the rejects, and, if there’s a problem, email the summary file generated by this program.
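The summarizing step amounts to grouping the error messages and counting how often each cause occurs. In the sketch below I’ve assumed error-message lines start with a `\\` prefix, as in typical Essbase reject files — treat the exact layout as an assumption and adjust for your version’s output.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of a rejected-record summarizer: tally each distinct error message,
// ignoring the raw data records that follow them in the file.
public class RejectSummarySketch {

    public static Map<String, Integer> summarize(List<String> lines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            if (line.startsWith("\\\\")) {                // error-message line
                String cause = line.substring(2).trim();
                counts.merge(cause, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> file = List.of(
            "\\\\ Member X100 Not Found In Database",
            "X100 Sales Actual 512",
            "\\\\ Member X100 Not Found In Database",
            "X100 COGS Actual 300");
        summarize(file).forEach((cause, n) ->
            System.out.println(n + "x " + cause));
    }
}
```

A two-line summary like “2x Member X100 Not Found In Database” is a lot friendlier in an alert email than a multi-megabyte reject file.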

Questions & Answers

The audience had a few questions regarding the things I talked about. From memory, here are some of the things that were asked:

What’s up with Drillbridge?

The short version is this (more coming in the near future!): Drillbridge was acquired by Applied OLAP back in March of this year. Its existing customers are continuing to receive support. The free edition of Drillbridge continues to live on as Drillbridge Community Edition. This version has always been and will stay free for companies to use as they wish, although it does not come with support or several features in the licensed version of Drillbridge. The licensed version of Drillbridge is now Drillbridge Plus. Drillbridge Plus comes with support from Applied OLAP, and many additional features over the free edition, including advanced reporting types, multi-report drilling, automation integration, report caching/paging, and more.

What’s the performance like with Camshaft?

The performance of Camshaft is essentially determined by how fast Essbase can serve up the MDX query that you run. If the query takes 30 seconds, then the Camshaft runtime will likely be about 31 seconds. Internally, Camshaft just uses the Essbase Java API to run the MDX query.

What’s the performance like with Vess?

Vess is a different story, performance-wise, than Camshaft. Vess (at the moment) doesn’t use MDX to facilitate data retrieves and queries. Rather, it uses the Essbase grid API along with a little magic to try and come up with the data it needs in the most performant way possible. In short, it’s highly variable. Sometimes it’s super fast, sometimes… not so much.
