PBCS Scripting with Groovy using the PBJ REST API Library

I was talking to a colleague the other day who wants to do some scripting with PBCS using Groovy. Of course, since PBCS has a REST API, we can script against it with pretty much any modern language. There are even some excellent examples of scripting with PBCS using Groovy out there.

However, since Groovy runs on the JVM (Java Virtual Machine), we can actually leverage any existing Java library that we want – including the PBJ library, which provides a super clean domain-specific language for working with PBCS via its REST API. To make things nice and simple, PBJ can even be packaged as an “uber jar” – a self-contained JAR that bundles all of its dependency JARs. This makes things a little simpler to manage, especially in cases where PBJ is used in places like ODI.

For this example I’m going to take the PBJ uber jar, add it to a new Groovy project (in the IntelliJ IDE), and then write some code that connects, fetches the list of applications, and iterates over them to print out the list of jobs in each application.
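To give a flavor of where this ends up, here’s a minimal sketch of that Groovy code. Fair warning: the connection details are obviously fake, and the class and method names (PbcsClientFactory, applications, jobDefinitions) are written from memory as assumptions – consult the PBJ documentation and source for the exact API.

import com.jasonwjones.pbcs.*   // package name assumed – check the PBJ source

// Hypothetical connection details – substitute your own pod URL, identity
// domain, and credentials
def server         = 'https://planning-mydomain.pbcs.us2.oraclecloud.com'
def identityDomain = 'mydomain'
def username       = 'user@example.com'
def password       = 'SuperSecret'

// Assumed PBJ entry point: build a client from the connection details
def client = new PbcsClientFactory().createClient(server, identityDomain, username, password)

// Iterate over the applications and print the jobs in each one
client.applications.each { app ->
    println "Application: ${app.name}"
    app.jobDefinitions.each { job ->
        println "    Job: ${job.jobName} (${job.jobType})"
    }
}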

Continue Reading…

Happy New Year & Best Posts of 2017

My goodness – has another year gone by already? 2017 was busy, to say the least: trips/presentations/booths at Kscope17, Collaborate, and Oracle OpenWorld; new releases of Dodeca, the Dodeca Essbase Add-in for Excel, Drillbridge, and the Outline Extractor; an upgrade to Oracle ACE status; lots of internal development going on; and more.

Speaking of Kscope, did you submit an abstract for Kscope18 yet? The submission deadline is very quickly approaching.

In any case, I covered a lot of ground on the blog this year, and as with last year, I thought it would be fun to take a look at the best and most popular posts of 2017!

Continue Reading…

Evolution of Essbase: new URL-based drill-through showed up in 11.1.1.3

Continuing with the idea of getting insight into the Essbase feature set over time, as viewed through the lens of the Essbase Java API’s evolution, you can quite clearly see that open/URL-style drill-through (as opposed to classic LRO-based drill-through) showed up in version 11.1.1.3. In fact, Java API-wise, it’s pretty much the only thing that was added in that particular release, along with some ancillary drill-through methods/functionality in related classes.

Nearer to my heart: this is the exact functionality that paved the way for Drillbridge! Although it wasn’t available as a feature on day one, subsequent versions of Drillbridge gained the ability to automatically deploy drill-through definitions to a given cube, and it uses exactly these API methods to accomplish it.
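For the curious, here’s a rough sketch of what exercising this functionality looks like from Groovy against the Java API. The sign-on boilerplate follows the standard JAPI sample pattern; the drill-through call itself uses a hypothetical method name and payload standing in for the actual 11.1.1.3+ methods, so treat this as illustrative only and check the javadoc for the real signatures.

import com.essbase.api.session.IEssbase

// Standard JAPI sign-on boilerplate, per the samples that ship with Essbase
// (server names/credentials are placeholders)
IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION)
def domain = ess.signOn('admin', 'password', false, null, 'http://server:13080/aps/JAPI')
def olapServer = domain.getOlapServer('essbase01')
olapServer.connect()
def cube = olapServer.getApplication('Sample').getCube('Basic')

// Hypothetical: an XML document defining the drill-through URL and the
// drillable regions it applies to (format per the Essbase docs)
def definitionXml = new File('drill-def.xml').text

// Hypothetical method name – the real 11.1.1.3+ API exposes calls along
// these lines on the cube object for URL-based drill-through definitions
cube.createDrillThroughURL('MyDrillThrough', definitionXml)

olapServer.disconnect()
ess.signOff()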

Drillbridge as drill-through solution with CSV data and replacing Access

An interesting use-case came up with Drillbridge recently where drill-through is currently being “handled” with an Access database. I put the quotes around “handled” because the current solution requires the user to look at the current POV and then go fetch the corresponding data from an Access database. You might be thinking that this setup is horribly sub-optimal, but I wouldn’t characterize it as such. In my career on all sides of Hyperion – as a developer, a consultant, and a software vendor – I have seen this pattern (particularly variants involving Access) pop up again and again.

Access is often (perhaps all too often) the glue that binds finance solutions together, particularly in cases like this involving drill-through. It’s cheap, you can use it on the network simply by dropping the file onto a share drive, it gives you a quick and dirty GUI, and more. Many EPM projects I have been on involved a long list of deliverables, often including drill-through – and all too often, drill-through got cut due to budget and time constraints. And when it gets cut, sure, finance might have to go with a “quick and dirty” option like this built on Access.

Now, the request du jour: use Drillbridge to quickly implement true drill-through, where the data currently resides in an Access database. Several options come to mind:

  • JDBC-to-ODBC bridge to access the current Access database
  • Export Access data to relational database
  • Export to CSV and access via JDBC CSV reader
  • Read CSV dynamically using Drillbridge’s embedded database

I won’t bore you with an exhaustive discussion of the pros and cons of these options, but I will say that the JDBC/ODBC bridge was a non-starter from the get-go (for me), mostly because I looked into it for another project years ago and the general consensus from Sun/Oracle was a) don’t do that [anymore] and b) performance is not too great. Regarding exporting Access to a relational database: yes, that is closer to the ideal configuration, but if that were an easy/quick option in this case, we probably wouldn’t be on Access already (i.e., for whatever reason, finance didn’t have the time/patience to have the IT department stand up and manage a relational database, to say nothing of maintenance, ETL, and other concerns).

Next, while there are a handful of JDBC CSV readers out there, they seem to have their quirks and various unsupported features. And hey, as it turns out, Drillbridge’s embedded database actually ships with a pretty capable CSV reading facility that lets us essentially treat CSV files as tables. That sounds perfect – and as a bonus, there are no additional JDBC drivers to ship. So let’s focus on that option and how to set it up!
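As a quick preview of why this option is so appealing: Drillbridge’s embedded database is (to the best of my knowledge) H2, and H2 ships with a CSVREAD table function that lets you query a CSV file as though it were a table. Here’s a minimal standalone Groovy sketch of the concept – the file name and column names are invented for illustration, and it assumes the H2 JAR is on the classpath:

import groovy.sql.Sql

// Spin up a throwaway in-memory H2 database...
def sql = Sql.newInstance('jdbc:h2:mem:drilldemo', 'sa', '', 'org.h2.Driver')

// ...and query a CSV file directly with H2's built-in CSVREAD table
// function – no import/load step required (columns come from the header row)
sql.eachRow("SELECT * FROM CSVREAD('data/transactions.csv') WHERE ENTITY = '100'") { row ->
    println "${row.ENTITY}  ${row.ACCOUNT}  ${row.AMOUNT}"
}

sql.close()

Continue Reading…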

Speed up ASO SQL data loads by using multiple rules files

Just another quick post today about possibly speeding up data loads to an ASO database when loading from SQL. I got on a quick call with a former colleague who was looking to gain a little more performance on their load process to a massive ASO database, and the first thing that jumped out at me was that I recalled you can do parallel loads with some native MaxL syntax.

Here’s a quick example of the syntax:

import database $APPLICATION.$DATABASE data
    connect as $SQL_USER identified by $SQL_PW
    using multiple rules_file $RULE1, $RULE2, $RULE3, $RULE4, $RULE5
    to load_buffer_block starting with buffer_id 100
    on error write to "errors.txt";

Basically, you provide multiple rules files (each configured for your SQL data source, of course). The rules files are likely to be nearly identical to each other, but I suppose you might want to partition the data in some logical way across them to try to speed things up even more.

For example, let’s say that in the code above, we are loading five years of data from a relational database. We might then make it so that each rules file is pinned to one particular year by doing the following (sketched after the list):

  • Set the year in the data header
  • Remove that column from the list of SELECT columns
  • Put a filter/predicate in the WHERE clause on the query
  • Bonus points for using substitution variables in both the header definition and the WHERE clause
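To make that concrete, here’s a quick Groovy sketch that prints the query each of the five rules files would effectively embed – the same SELECT list in each one, with the year column dropped in favor of a header assignment and a matching predicate (the table and column names are invented for illustration):

// One query shape, five rules files – only the year predicate differs.
// fact_sales / fiscal_year are made-up names for illustration.
def years = ['FY2013', 'FY2014', 'FY2015', 'FY2016', 'FY2017']

years.eachWithIndex { year, i ->
    println "-- Rules file ${i + 1}: data load header sets Year = ${year}"
    println "SELECT entity, account, period, amount"
    println "FROM fact_sales"
    println "WHERE fiscal_year = '${year}'"
    println ""
}

With substitution variables in the mix, you’d point the header and the predicate at the same variable, so rolling the years forward becomes a one-place change.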

Performance in this particular use case went up substantially. It’s my understanding that data loads that were taking an hour are now cut down to 17 minutes. Your mileage may vary, of course.

Let’s Not Forget About Hybrid BSO

That said, I think this can be an effective strategy for squeezing performance out of an ASO cube that needs a smaller load window, when you don’t want to go changing a lot of the internals in play. If you’re doing new development, though, I strongly, strongly recommend using hybrid BSO (or rather, BSO with the cube configured properly so as to get the hybrid BSO performance benefits).

I have been seeing hybrid BSO cubes absolutely killing it in performance, what with their ability to leverage ASO technology for aggregates, plus massive calculation improvements owing to the smaller block sizes and indexes you get from having so many dynamic calc members in dimensions. And you of course get all of the classic/rich/awesome BSO functionality out of the box, like dynamic time series, expense tagging, time balance, and more. These were never very strong areas for ASO and often required a lot of non-optimal workarounds to keep users happy.

New webpage for Essbase Java API evolution

A fair bit of my job involves dealing with and building solutions around the Essbase Java API. For many years, the Java API has been the premier way to programmatically work with Essbase (compared to, say, the C and VB APIs, which have fallen out of favor). As part of this development work, it’s often important to see when (in terms of version) a certain class, method, interface, or other object was added, modified, removed, or deprecated.

As a bit of a side project, I have been working with a library for comparing Java JARs to each other (japicmp). By processing and interpreting the results of comparing just about every Essbase Java JAR – from 7.0.1, through the 9.x series, the various 11.x releases, and finally to 12.2.x – I have come up with something of a master table that shows all of these changes. You can view the initial results of the Essbase JAPI JAR evolution analysis. I’ll probably refresh this and enhance the output as new library versions become available or as additional insights prove useful.

Screenshot from the Essbase Java API evolution analyzer
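At its core, the analysis is just a japicmp comparison run pairwise across consecutive JAR versions. Here’s roughly what a single comparison looks like from Groovy, based on japicmp’s documented API (the JAR file names are placeholders, and the exact class/method names are worth double-checking against the japicmp version you pull in):

import japicmp.cmp.JApiCmpArchive
import japicmp.cmp.JarArchiveComparator
import japicmp.cmp.JarArchiveComparatorOptions

// Compare two adjacent versions of the Essbase Java API JAR (file names
// are placeholders) and print each class with its change status
// (NEW, MODIFIED, REMOVED, UNCHANGED)
def oldJar = new JApiCmpArchive(new File('ess_japi-11.1.2.3.jar'), '11.1.2.3')
def newJar = new JApiCmpArchive(new File('ess_japi-11.1.2.4.jar'), '11.1.2.4')

def comparator = new JarArchiveComparator(new JarArchiveComparatorOptions())
comparator.compare(oldJar, newJar).each { klass ->
    println "${klass.fullyQualifiedName}: ${klass.changeStatus}"
}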

Continue Reading…

Configuring Drillbridge with Financial Reporting Web Studio

Drillbridge works perfectly with Financial Reporting Web Studio – the successor to the desktop-based version of Financial Reporting (commonly called HFR or just FR). FR was stuck with a very archaic client (let’s just say it hails from around the Clinton administration), but it has been revamped for the future with a completely web-based interface. Based on my interactions with that interface, I think this product is best thought of as gap coverage for existing FR users: it’s not necessarily the place you want to do new development, especially given some of the other shifts/developments in the reporting ecosystem lately. My colleague Opal Alapat has posted some really great thoughts on FR and its place in this ever-changing world, which I encourage you to read.

In the meantime, there are countless current installs of FR that organizations need to support and perhaps transition to this newer incarnation. As before, Drillbridge works seamlessly to give you and your users advanced drill-through capabilities in Smart View, Hyperion Planning/PBCS, FR, and now FR Web. I found that the UI has a few quirks, but I’ll walk through a simple example and try to point those out along the way.

Continue Reading…

Milestone for PBJ (PBCS REST API Java Client) Project: first outside contribution

Just a quick note on a fun milestone for the PBJ project: the first code contribution from an outside developer has been merged into the codebase. This is one of the things I love about open source. The PBJ project has a very flexible license (Apache Software License 2.0) and as such is quite business-friendly.

Sometimes when an open source project doesn’t do what you need it to do at a given point in time, you have to roll up your sleeves and add some code yourself. And that’s exactly what one of the library’s users did, adding some new methods for downloading large files from PBCS. So there are a couple of new methods for handling that use-case – and now everyone gets to benefit from them. This is exactly what I envisioned when I created this project: a high-quality codebase with complete documentation, unit tests, and support for some of the exciting REST APIs being provided by modern Oracle technologies – and a chance to live out one of my favorite quotes: a rising tide lifts all boats.

PBCS has a REST API, now what?

Note: this article was originally written for an ODTUG publication, but it never wound up getting published. So I thought I would just post it here instead.

Oracle’s Planning and Budgeting Cloud Service – PBCS – is the first Hyperion product to get the full cloud treatment. In addition to Planning’s move to the cloud, it has picked up a couple of new tricks. One of these new features is a REST API. This article will give a quick background on REST APIs, some integration opportunities now available to PBCS users, and information on how the PBCS REST API can be easily used from Java.

For those who aren’t familiar, a REST API typically means a few things to developers. First of all, an API is an application programming interface: from a developer standpoint, an API gives us the ability to write programs that interact with another system in a well-defined way. In the case of PBCS, the API provides access to functionality such as refreshing a cube, launching a business rule, uploading files, getting member information, and more.
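To make that “interact in a well-defined way” part concrete, here’s a rough Groovy sketch of a raw REST call (no helper libraries) that lists the Planning applications. The URL path and the identitydomain.username credential convention reflect my recollection of the PBCS REST API docs, so verify them against the current documentation:

import groovy.json.JsonSlurper

// Hypothetical pod URL and credentials – substitute your own
def server = 'https://planning-mydomain.pbcs.us2.oraclecloud.com'
def auth   = 'mydomain.user@example.com:SuperSecret'.bytes.encodeBase64().toString()

// GET the list of applications using HTTP basic authentication
def conn = new URL("${server}/HyperionPlanning/rest/v3/applications").openConnection()
conn.setRequestProperty('Authorization', "Basic ${auth}")

// The response is JSON; the application list comes back in an "items" array
def result = new JsonSlurper().parse(conn.inputStream)
result.items.each { app -> println app.name }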

Continue Reading…

New Drillbridge Plus feature: fetch attributes!

Drillbridge Plus recently gained a new feature at the request of a customer. This one is kind of interesting and required a bit of deep thinking in terms of the best way to architect it. Here’s the deal: Smart View will let you drill through on a data value when your grid uses attribute dimensions, but it won’t pass the attribute associations as part of the drill request. And as it turns out, there are instances where it’d be useful to have that attribute member so you can use it to dial in the SQL query that Drillbridge creates and executes.

What to do? Ask Drillbridge to go fetch those attribute member values for you anyway! In this post I’m going to walk through a use-case showing off the new feature and how to set it up, and I’m also going to show off some recent debugging enhancements that are really useful and have been around for a while.

Let’s start. First, consider a normal Drillbridge report definition with a simple query:

A normal Drillbridge report definition (before adding attributes)

Continue Reading…