JDBC and JNDI connections compared (with a Dodeca example)

Have you ever wondered what the difference between a JDBC and a JNDI connection is? If you’re familiar with just one of these, it’s most likely JDBC (and probably not JNDI).

JDBC connections come up often in the Oracle world (for good reason). It’s a standard model/framework for designing drivers that interact with relational databases. What this means for us in the Hyperion, Dodeca (and even Drillbridge!) world is that we often define connections by specifying JDBC parameters. This typically means a driver class name (like com.mysql.jdbc.Driver for a MySQL driver), a JDBC URL (a URL specifying a server and optionally a database/schema and other parameters), and credentials (username/password). So if you’ve poked around in your infrastructure much at all, there’s a good chance that you’ve come across a JDBC connection.

You may have even come across something called JNDI and even vaguely known it was sort of an alternate way to configure a connection but never really had to bother with it. I’ll spare you the acronym details, but think of JNDI as a way of organizing database connections (and other objects actually, but we don’t need to worry about that at the moment) such that instead of our app/system having to know the server name and credentials, it just asks “Hello, can I have the resource that was defined for me with name XYZ?”
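To make the contrast concrete, here’s a minimal sketch of both styles in Java. The server, schema, and credentials are made up, and the JNDI resource name echoes the “XYZ” above:

import java.sql.Connection;
import java.sql.DriverManager;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ConnectionStyles {

	// JDBC: the app itself knows the driver, URL, and credentials
	static Connection viaJdbc() throws Exception {
		Class.forName("com.mysql.jdbc.Driver"); // older drivers need explicit registration
		return DriverManager.getConnection(
				"jdbc:mysql://dbserver:3306/myschema", "user", "password");
	}

	// JNDI: the app just asks the container for a resource by name; the server
	// name and credentials live in the app server's configuration instead
	static Connection viaJndi() throws Exception {
		InitialContext ctx = new InitialContext();
		DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/XYZ");
		return ds.getConnection();
	}

}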

Continue Reading…

Deleting multiple files from PBCS using PBJ client

Earlier in the week, my archnemesis colleague Cameron Lackpour hosted a guest blog article by Chris Rothermel with a trick for deleting multiple files from PBCS using the epmautomate tool. The basic idea is that you can use the listfiles command to export the list of files to a temporary file, then use some batch scripting to iterate over every line in that file, then call epmautomate to delete the specific file. It’s a good example that will undoubtedly come in useful for many people.

Upon reading the article I thought it would be interesting to do the same thing, but using the PBJ library. The PBJ library is an open source, 100% Java library for interacting with PBCS via its REST API. It can easily be dropped into enterprise Java projects by including its dependency in your Maven configuration file (if you’re so inclined). The PBJ library is also used in at least one major piece of software: it’s the library that allows Drillbridge to perform upper-level drill from PBCS to a relational database.

That all said, one of the nice things about having a domain-specific library in a high-level language such as Java is that it is sometimes very easy and straightforward to implement functionality that doesn’t come out of the box. This doesn’t make it better than the batch scripting technique, just different.

Here’s the whole code file:

package com.jasonwjones.pbcs.misc;

import com.jasonwjones.pbcs.PbcsClient;
import com.jasonwjones.pbcs.PbcsClientFactory;
import com.jasonwjones.pbcs.TestHarness;
import com.jasonwjones.pbcs.interop.impl.ApplicationSnapshot;

public class DeleteAllFiles extends TestHarness {

	public static void main(String[] args) {
		// createConnection() comes from TestHarness and supplies the connection details
		PbcsClient client = new PbcsClientFactory().createClient(createConnection());

		// List every file on the PBCS instance and delete each one by name
		for (ApplicationSnapshot snapshot : client.listFiles()) {
			client.deleteFile(snapshot.getName());
		}
	}

}

Most of this is pretty standard boilerplate Java. You can see that after we set up a PbcsClient instance, we can very easily get a list of files (using listFiles()), and then iterate over those. To delete a file, we use the deleteFile(String filename) method and pass in the actual file name from the “snapshot” object (which also contains a little bit of other information).

That all said, as I’ve written before, the best tool for the job is the one that fits into your environment the best and is easiest to maintain. If you’re doing much in the way of batch scripting, then epmautomate is probably going to be the best fit. If you’re writing an enterprise app in Java, you’re going to absolutely want to use the PBJ library rather than try to execute shell commands.

Hyperion Parent Inferrer Updated (after four years!)

I needed the Hyperion Parent Inferrer functionality for an internal project I am working on. It didn’t quite do what I needed out of the box, so I updated things a bit. As quick background, the Hyperion Parent Inferrer is a simple one-off Java program/library I developed (apparently four years ago, wow) to parse indented data into an explicit parent/child file.

There are a few (apparently rare) cases where this is useful. In my case, I was modeling some hierarchical data and I find the indented format to be much easier on the eyes. Like so:

Time
 Q1
  January
  February
  March
 Q2
  April
  May
  June
 Q3
  July
  August
  September
 Q4
  October
  November
  December

But when it comes time to load into Essbase, clearly we need something more explicit. The Hyperion Parent Inferrer takes the preceding as input and outputs something like the following:

,Time
Time,Q1
Q1,January
Q1,February
Q1,March
Time,Q2
Q2,April
Q2,May
Q2,June
Time,Q3
Q3,July
Q3,August
Q3,September
Time,Q4
Q4,October
Q4,November
Q4,December

The program has been enhanced to allow for a custom indentation character (such as tabs) and to let you specify the text rendered when there is no parent (instead of null), along with a couple of other little cleanups.
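Just to illustrate the core idea, here’s a minimal sketch of the parent-inference logic. This is not the actual Hyperion Parent Inferrer code; it assumes one space per indent level and hard-codes the options mentioned above:

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class IndentedParentInferrer {

	// Infers parent/child pairs from indentation depth using a stack of ancestors
	public static void printParentChild(List<String> lines) {
		Deque<String> ancestors = new ArrayDeque<>();
		for (String line : lines) {
			// Count leading spaces to determine this member's depth
			int depth = 0;
			while (depth < line.length() && line.charAt(depth) == ' ') {
				depth++;
			}
			String member = line.substring(depth);

			// Pop back up the stack until the top is this member's parent
			while (ancestors.size() > depth) {
				ancestors.pop();
			}
			String parent = ancestors.peek(); // null for the root member
			System.out.println((parent == null ? "" : parent) + "," + member);
			ancestors.push(member);
		}
	}

	public static void main(String[] args) {
		printParentChild(Arrays.asList("Time", " Q1", "  January", "  February"));
	}

}

Running this prints the same comma-separated parent/child format shown above.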

Hyperion Parent Inferrer is free, open source (Apache Software License version 2), and can be run as a standalone command-line Java program or as a Java library that can be incorporated into a typical Java program. The updated code is available at the Hyperion Parent Inferrer GitHub page.

Essbase Renegade Members Revisited

For some reason the other day I was thinking “Whatever happened to that renegade members feature?” So I did some digging.

Renegade members, by the way, refers to this concept where instead of a data record being rejected, you can map it to some other member. Other names for this feature might have been “shovel members”, but renegade members sounds cooler. That said, it’s a feature with a cool name but an apparently terrible publicist.

Renegade members were blogged about as early as a few years ago, such as on Cameron’s blog (during the 2013 OpenWorld), in Russian (apparently), and even over at Rittman Mead’s blog (before Mark spent his days trying to get tea kettles to work with the internet, but I digress).

But there’s a curious lack of information on renegade members since then. There is, however, just enough information on the internet to piece this together. There’s a bit about renegade members in the official documentation. Just as important (for my purposes), there are two methods relating to renegade members in the Essbase JAPI Javadoc.

Continue Reading…

Running MDX queries through a JDBC driver (for fun?)

So there I am, sitting in front of the Alaska Airlines gate at Boston Logan airport, waiting for my flight home to Seattle. It’s not a particularly glamorous terminal – the divorce from Delta hasn’t been too kind to Alaska at BOS; Delta seems to have kept the house and kids while Alaska microwaves Lean Cuisine on a futon in its bachelor pad…

As I’m pondering why there are white rocking chairs in the terminal, my phone rings with a familiar name: Mr. Brian Marshall. We catch up and exchange pleasantries before pivoting over to more important matters (all things EPM of course!).

Brian: “So… Vess.”

Jason: “Oh boy…”

So we get to talking about accessing Essbase data through a Java database driver, à la Vess. And we get to talking about running MDX queries and dumping the output – à la Camshaft.

And as the talk goes on I end up saying something stupid like this: “You know what might work? Just pass an MDX query through the driver over to Essbase and map the output to a fake table… It’d be like an unholy combination of Vess and Camshaft. You could probably knock it out in a day or two.”

And at that moment I knew I wouldn’t be able to resist opening my laptop for the five-plus-hour flight home.

Continue Reading…

Oracle Open World 2016 Recap

As I mentioned a week or so ago, I made a last minute appearance at Oracle Open World this year. It was my first time attending and presenting at OOW. I actually didn’t catch too much of the conference as I only flew in on Wednesday and flew out on Thursday. Nevertheless, I had a bit of a whirlwind experience, but a very good one. While I hadn’t planned on it (I’m more of a Kscope guy), I am now looking forward to attending Open World next year.

As for the presentation I was part of, I think it went pretty well. Many thanks to Gabby Rubin of Oracle for coming up with the idea for the presentation and facilitating it. The presentation was on “Essbase Tools and Toys” and was meant to highlight, at a high level, some of the interesting things that folks such as myself are doing that involve the Essbase APIs or otherwise work with Essbase. The presentation discussed items created by me, Tim Tow, and Harry Gates. Additionally, Kumar Ramaiyer (also from Oracle) talked a bit about what’s coming with Essbase Cloud Service (EssCS).

Continue Reading…

Camshaft (Essbase MDX query tool) 1.0.2 released

Apparently I’m having quite the productive Friday, what with showing how easy it is to set up drill-through with Dodeca and announcing that I’m heading to Oracle Open World 2016 to contribute to a presentation on cool Essbase tools.

To these articles I’ll add that I just released a Camshaft point release. This release has a couple of fixes and enhancements. Thanks to André Märki and others for providing feedback.

This version of Camshaft fixes an issue where some data with many digits after the decimal would be rendered in scientific notation. Along with this fix I have added a new command-line switch, --maximum-fraction-digits (used like --maximum-fraction-digits=2), to set the maximum number of digits to render after the decimal.

Additionally, there was a bug with running a query from a file that is now fixed. You can now specify something like --query=somefile.mdx and Camshaft will look for the given file. If found, it’ll read its entire contents as an MDX query, then execute that. This option can help make command invocations with big gnarly MDX queries a little easier to manage.

Please keep that feedback coming and I’ll add enhancements/fixes to the best of my ability. I have some interesting Camshaft news coming in the near future that some people will really like!

As always the latest Camshaft documentation and download can be found linked from the Camshaft page.

Advanced integration with PBJ Java PBCS REST API library

This week’s blog posts are all about the upcoming Kscope16 conference and relate to the presentations I’m part of. This year I am co-presenting with Cameron Lackpour on on-premise Planning versus PBCS and talking about some different use cases. My particular focus for this presentation is how you might use the PBCS REST API with Java.

Over the last year I have put together a Java library that works with the PBCS REST API. It has the following characteristics:

  • Open Source (Apache Software License version 2.0)
  • Doesn’t depend on any Oracle libraries or code
  • High-quality, readable, fluent API

In my opinion, all of these goals have been met. Additionally, as of today, PBJ (PBCS Java Client) is available in Maven Central. What this means is that if you or your team programs in Java and use Maven for dependency management, you are just a few clicks away from being able to use this library.

For an overview of why you might want to use PBJ and how it compares to other scripting languages you might want to use instead (such as Groovy, Python, and more), come check out the presentation!

Today, however, I want to show how easy it is to incorporate PBJ into a Java program and do something quasi-practical. So the rest of this article will be oriented towards programmers, but for those of you that have employees or teammates that would be more likely to do the programming aspects of things, keep them in mind and send them a link.

First, let’s assume we’ve already created a new, empty Maven project in Eclipse. Your experience will vary if you use IntelliJ IDEA or some other IDE. In this example I happen to be using SpringSource Tool Suite (STS), which is pretty much the same as Eclipse.

[Screenshot: pbj-up-and-running-01-empty-project]

Next we need to open up our pom.xml (project definition file) to add a new dependency. You can do this manually by editing the XML file itself, but there’s a nice enough GUI in Eclipse that makes things even easier:

[Screenshot: pbj-up-and-running-02-pom-editor]
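If you’d rather edit the pom by hand, the dependency entry looks something like the following. Note that the groupId and artifactId here are illustrative guesses on my part – search Maven Central for PBJ to get the real coordinates:

<!-- Coordinates are illustrative; confirm against Maven Central -->
<dependency>
	<groupId>com.jasonwjones.pbcs</groupId>
	<artifactId>pbj-pbcs-client</artifactId>
	<version>1.0.1</version>
</dependency>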

Next we need to go find the PBJ library. As I mentioned earlier, PBJ is available in the global Maven Central repository. If you have Maven set to update its index periodically or upon startup of your IDE, then you should be up to date and can find the PBJ library. As of right now the version of PBJ is just 1.0.1.

[Screenshot: pbj-up-and-running-03-pbj-maven-dependency]

Select the library, click OK, and then save the file. The PBJ library and its dependencies are added to your project.

[Screenshot: pbj-up-and-running-04-pbj-added]

Now we can create a new main class to test things out. At this point it’s just life as normal for the Java developer:

[Screenshot: pbj-up-and-running-05-new-main-class]

Just for good measure (and tradition!) let’s put in some simple code to print out “Hello world” and run it. Note the output in the bottom middle pane:

[Screenshot: pbj-up-and-running-06-hello-world]

At this point we have the project set up, we have all of the necessary PBJ files (and some additional transitive dependencies), and we are ready to write some Java code that uses methods in the PBJ library. The code to write is shown below:

[Screenshot: pbj-up-and-running-07-refresh-cube]

In the code you can see the following happen (a rough reconstruction appears after this list):

  1. A connection details object is created
  2. A connection to PBCS is made
  3. The list of available apps is requested
  4. The list of available apps is printed
  5. A reference to a particular app (“Vision”) is obtained
  6. refreshCube() is called on the app reference to refresh the cube
  7. A message is printed after the cube refresh
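Since the code itself only appears as a screenshot above, here is a rough reconstruction as a sketch. The client setup mirrors the file-deletion example elsewhere on this page; the method names for listing apps, grabbing an app, and refreshing the cube (getApplications(), getApplication(), refreshCube()) are my assumptions based on the steps above, so check the PBJ source for the real signatures.

package com.jasonwjones.pbcs.misc;

import com.jasonwjones.pbcs.PbcsClient;
import com.jasonwjones.pbcs.PbcsClientFactory;
import com.jasonwjones.pbcs.TestHarness;

public class RefreshCube extends TestHarness {

	public static void main(String[] args) {
		// Steps 1-2: build the connection details (via TestHarness) and connect
		PbcsClient client = new PbcsClientFactory().createClient(createConnection());

		// Steps 3-4: ask for the list of available apps and print them
		// (getApplications()/getName() are assumed method names)
		client.getApplications().forEach(app -> System.out.println(app.getName()));

		// Steps 5-6: get a reference to the Vision app and refresh the cube
		client.getApplication("Vision").refreshCube();

		// Step 7: print a message after the cube refresh
		System.out.println("Cube refresh complete!");
	}

}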

It’s hard to imagine this code being much simpler. It might look like Greek if you’re not familiar with programming or Java, but to a Java programmer, this will be readily comprehensible and its intent obvious. PBJ supports most of the REST API – importing data, metadata, business rules, and more.

To see a specific use-case and hear wry commentary from myself and Cameron, please swing by our presentation. We’ll cover an example of PBJ (don’t worry, it’s higher level than this!) but more generally some facets of administration of on-prem versus PBCS will be discussed as well. I think the presentation will really appeal to many different user groups.

Vess Updates Substitution Variables

A colleague of mine is running into an issue with substitution variables and was looking for a solution that he could use to sync values up. He thought maybe Vess would be a good fit. Vess, as I have blogged about before, is a “virtual” Essbase JDBC driver. Vess maps Essbase concepts and crams them into a typical database model. For example, Vess exposes tables that model substitution variables.

In the case of server-wide substitution variables, there is a “VARS” table that has two columns: NAME and VALUE. For each application, there is another table that contains four columns: NAME, VALUE, APPLICATION, DATABASE.

As a quick aside, it might seem a little odd to have separate tables. After all, this table is notionally about the same as the Variables screen in EAS. Well, you have to kind of flip your thinking a little. Don’t think of variables as belonging only to either the server or a cube: think in terms of which variables are applicable to a cube. In other words, if you ask Essbase what variables are applicable to the whole server, then this would be the global variables only. If you ask Essbase what variables are applicable to a cube, then it’s the cube, app, AND the server-specific variables. This is one of the reasons there are multiple tables to model the variables.

Getting back on track, given that we have these tables and we can treat them just like normal SQL tables, we can do some interesting things. Let’s say we want to create or update a variable specific to an app that exists in the global scope. We can do this in one line:


INSERT INTO SAMPLE.VARS
SELECT NAME, VALUE, 'Sample', 'Basic'
FROM VESS_SCHEMA.VARS WHERE NAME = 'Foo';

What’s going on here? In Vess, a schema named VESS_SCHEMA is presented for server-wide things (server-wide variables are in the VARS table in this schema, as shown above). The server VARS table only has the columns NAME and VALUE.

Each application on the Essbase server is modeled as its own schema. In this case, our favorite app – Sample – gets a schema named SAMPLE. This schema also contains a VARS table (containing columns NAME, VALUE, APPLICATION, and DATABASE).

Given these tables, it’s a simple matter of selecting the server variable with the name we want (in this case, a variable named ‘Foo’) and inserting it into the variables for the Sample app.

Of course, if we wanted to for some reason, we could alter the name using normal SQL (string truncation, substrings, etc.), or whatever. We can also delete variables, such as this:

DELETE FROM SAMPLE.VARS WHERE DATABASE = 'Basic';

As I’ve said before, Vess continues to be an “interesting” proof of concept. As time permits I am filling out more and more functionality. At present, Vess models things like substitution variables and metadata you might see in MaxL or EAS (cube statistics, user sessions, etc.), can load data to cubes, and can do certain outline-related operations.

Vess is not available as a public download at this time but I have handed a few copies out to get feedback. I think Vess is just about good enough to be used in automation and other things. If you’re interested in using this in a production situation (automation or otherwise), please contact me to discuss support options.

Vess: a Virtual Essbase JDBC driver

Normally I finish some programming and then throw it over the wall to you guinea pigs, er, enthusiastic blog readers before I say anything about it (no vaporware here, nosiree!), but I thought I’d share something I’ve been playing with as a proof of concept. It’s sort of inspired by some of the ODI/Essbase integration I’ve been working with lately.

The Essbase/Hyperion knowledge modules in ODI are sort of different from typical knowledge modules because in a lot of ways they cram a round peg into a square hole. So you kind of get this relational facade over an Essbase cube in terms of loading and integrating data/metadata. Instead of relying on a normal JDBC driver, the Hyperion KMs more or less delegate out to some custom Java code that wraps the Essbase Java API. This isn’t a bad thing, it’s just how it works.

Unrelated to ODI, there are many situations where we are jumping through hoops to get data of some kind – not just cube data and metadata – out of Essbase. For example, you run a MaxL script to dump stats from a cube to a text file. Then you run some regular expression voodoo on it to get to the data you actually want.

Side note: parsing MaxL text output is surely amusing to the boffins at Oracle that designed it. I don’t think many people know this, but MaxL communicates with the Essbase server using a hacked up JDBC driver (it’s ingenious really). So when we parse the plusses and minuses and other ASCII crap off of some MaxL output, effectively what is happening is that a real dataset is coming back from the Essbase server, the MaxL interpreter is applying the extra work of prettying it up with some text art (the only thing missing is lime green letters from the 80’s), it gets dumped to a text file, and then what happens in so many places is that the text is reparsed into data.

To some extent the MaxL Perl and Essbasepy modules can be used to get MaxL data in a cleaner way. The Java API can definitely be used. Of course, I have no problem myself jumping in and coding up an Essbase Java API routine to pull some data down. But for people that aren’t as comfortable, stripping some info out of MaxL output offers a clear path with less resistance, so I can’t say I blame them.

So we have all of these instances where we can get data out of Essbase (not just actual cube data, but metrics from the server) using EAS, using MaxL, dumping MaxL to a text file, the C API, Java API, and so on. But it seems like a lot of these approaches have undesirable combinations of effort/quality/reward.

Let’s talk about JDBC for a moment. JDBC is the Java Database Connectivity model. Java is one of the most popular programming languages on the planet (perhaps not the sexiest language, but it’s getting better), and JDBC is the tried and true model for connecting to databases. Every popular database in the world has a JDBC driver – you can grab a generic SQL tool (I’ve been using the simple RazorSQL as of late), pop in the appropriate JDBC driver (such as for MySQL, Microsoft SQL Server, Oracle, DB2, whatever), connect, and start issuing SELECT statements to your heart’s content.

So, this all brings me to something interesting I’ve been playing with for a while: Vess – a virtual Essbase JDBC driver. Vess is a JDBC-compliant driver that can be dropped into any generic SQL tool that supports JDBC drivers (like RazorSQL), as well as used out of the box with ODI. Vess offers a JDBC-compliant URL for connecting to Essbase, such as the following:

jdbc:essbase://epm11123.corp.saxifrages.com:9000/Sample.Basic;mode=embedded

Vess then presents several tables, views, and stored procedures. For example, consider substitution variables. These are modeled in a table where the column types are String, String, String, String (the VARCHAR database type). So you can do this:

SELECT * FROM SYS.SUBSTITUTION_VARIABLES

And you get back a dataset kind of like this:

Sample, Basic, CurrMonth, Jan

(It’s not actually text with commas, it’s a normal dataset). There are similar tables with other system/cube information in them, such as database stats, connected users (the Sessions tab in EAS), and more. There are views for dimensions/members/aliases/UDAs. There are stored procedures for calling calc scripts (e.g. sp_exec_calc('CalcAll')). MDX queries can also be crammed into a normal JDBC ResultSet with a little bit of imagination.
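As a quick sketch of what this looks like from plain Java, here’s a tiny program that runs the query above (assuming the Vess driver is on the classpath and registers itself; the host and credential handling are guesses on my part):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VessQuery {

	public static void main(String[] args) throws Exception {
		// URL format from above; the host and credentials are placeholders
		String url = "jdbc:essbase://epm11123.corp.saxifrages.com:9000/Sample.Basic;mode=embedded";

		try (Connection conn = DriverManager.getConnection(url, "admin", "password");
				Statement stmt = conn.createStatement();
				ResultSet rs = stmt.executeQuery("SELECT * FROM SYS.SUBSTITUTION_VARIABLES")) {

			while (rs.next()) {
				// Four VARCHAR columns, per the description above: app, cube, name, value
				System.out.printf("%s, %s, %s, %s%n",
						rs.getString(1), rs.getString(2), rs.getString(3), rs.getString(4));
			}
		}
	}

}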

So, why bother?

Using a JDBC driver would let us easily (and cleanly) get to Essbase data and metadata using any generic database tool, as well as being able to drop the driver right into ODI and use it as part of a normal interface. Whereas ODI uses Jython to call custom Java classes, this could potentially simplify the layers a little bit and offer an interesting alternative. This could also potentially be used in places where MaxL output is being parsed out via black magic, obviate the need for writing custom Essbase JAPI programs, and cover some use cases for the outline extractor.

Status

As of right now, the framework of the driver is already built: you can drop it into a SQL tool such as the aforementioned RazorSQL and run a command like SELECT name FROM SYS.SUBSTITUTION_VARIABLES WHERE APP = 'Sample' and get the proper results back. As time permits I will be adding a few more gizmos. If there are any intrepid Java developers out there that would like to play with this, let me know and I can make this an open source project on GitHub that could be refined over time.