So I’m running for the ODTUG Board (Please Vote)

Hi all – I haven't been blogging as much as I would like to lately (there's a lot going on!), but I do want to write a quick note: I'm running for the ODTUG Board of Directors. You can check out my campaign statement, goals, and a short biography, along with those of the other fine candidates, on the ODTUG election page.

Briefly, I have been involved with ODTUG in one way or another for almost a decade and a half. It's an organization that I respect and look up to – and I would love the opportunity to bring my energy and skills to the cause of improving it even more.

If you are eligible to vote for the ODTUG board (i.e., you are a member in good standing) and believe I would be a good addition to it, then I ask that you please vote for me. Voting closes soon, so check your inbox for an email from "association voting" sent a week or so ago – it has the instructions and the unique ID you need to cast your vote.

Thank you!

A REST API Primer for EPM Users & Developers

There's a lot of excitement in the EPM world these days when it comes to REST APIs – and rightfully so. As a developer heavily invested in the EPM space, I am excited about the possibilities these new APIs offer now and what they will offer in the future. But all of this great new REST API stuff can be quite daunting: how does it work, why should you care, where does it fit into your overall architecture, and so on. With ODTUG's Kscope18 just around the corner, I thought it might be useful to write a primer – a crash course of sorts – for the EPM professional on what all this REST API business is about. Also be sure to check out one of my presentations at Kscope this year, as I will be discussing the OAC Essbase REST API: how to use it, what it does, and more.
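
To see just how little ceremony is involved, here's a minimal sketch in Java of what a REST call looks like: an HTTP request to a URL, usually with JSON coming back. The server address and credentials below are placeholders, and the exact route should come from the OAC Essbase REST API documentation rather than my memory:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class RestPrimer {
        public static void main(String[] args) throws Exception {
            // Placeholder server and route – substitute your own per the REST API docs
            String url = "https://myserver.example.com/essbase/rest/v1/applications";
            String auth = Base64.getEncoder().encodeToString("admin:password".getBytes());

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("Authorization", "Basic " + auth)
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            // The response body is just JSON text you can inspect or parse
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }

That's the whole trick: no SDK installs, no client libraries, just HTTP and JSON – which is a big part of why these APIs are so approachable.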

Kscope15 Presentation Preview: ODI Workhorses

The other day I mentioned my goals for attendees of my upcoming ODTUG presentation on Drillbridge. Today I'm going to talk about my goals for my presentation on Oracle Data Integrator (ODI).

Over the last few years I have presented on ODI a handful of times. My main presentation has highlighted a success story in which ODI was used to clean up and automate a pile of ETL jobs for a health services company that (as you can imagine) has tons of data flying around everywhere. That presentation was more of a high-level affair, where I talked generically about the benefits of ODI over the previous solution. Wanting to add a little more technical meat, I appended what started off as a small section at the end taking a look at just how ODI works under the covers.

While the "business" or high-level part of the presentation was all well and good, I found myself getting really excited to explain just how awesome the inner workings of ODI ETL jobs are, and what started out as a 10-minute flight of fancy into the lower depths of something technical has now been promoted, as it were, to a full-on presentation.

In other words, I am going to spend an entire presentation tearing apart a single ODI interface: explaining why its steps are sequenced the way they are, how to build idiomatic ODI interfaces, the effect of various options (journalization, query hints, delete all vs. truncate, etc.), update strategies, and more. I'm also going to marry up what is ostensibly an IT tool with various concepts from computer science, including the notion of idempotence (and why it's a good thing).
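
Since idempotence will come up, a quick illustration: an operation is idempotent if running it twice (or ten times) leaves you in the same state as running it once. A contrived Java/JDBC sketch – the table and column names are invented – shows the difference between an append-style load and an idempotent one:

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class IdempotenceDemo {
        // NOT idempotent: running this twice doubles up the rows for the period
        static void appendLoad(Connection conn, String period, double amount) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO sales_fact (period, amount) VALUES (?, ?)")) {
                ps.setString(1, period);
                ps.setDouble(2, amount);
                ps.executeUpdate();
            }
        }

        // Idempotent: clear the target slice first, so reruns converge on the same state
        static void replaceLoad(Connection conn, String period, double amount) throws Exception {
            try (PreparedStatement del = conn.prepareStatement(
                    "DELETE FROM sales_fact WHERE period = ?")) {
                del.setString(1, period);
                del.executeUpdate();
            }
            appendLoad(conn, period, amount);
        }
    }

The delete-then-insert shape of replaceLoad is, loosely speaking, the same pattern an ODI integration knowledge module generates on your behalf – and it's why options like delete all vs. truncate matter more than they might first appear.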

With any luck, attendees coming to this presentation will leave with a new or expanded understanding of how interfaces/mappings work, feel comfortable modifying knowledge modules, and pick up a trick or two for debugging ODI jobs. This will be a nuts-and-bolts technical deep dive. While it's ostensibly an advanced-content presentation, I believe that even people with only a cursory familiarity will benefit from it. If you haven't worked with ODI at all but are curious (and intrepid!) about what it can do, I think you'll benefit too. So to all of my pure-Hyperion colleagues who haven't dipped their toes in the ODI pool just yet: this is a great chance to hop on a different track and expand your horizons – I hope to see you there!

You’re going to love this.

I don’t usually talk about things I’m planning on doing or haven’t done yet, but I’m going to make an exception. I’m putting together a webinar for ODTUG (you are a member, right?) that I’ll be presenting in late October (October 28th to be exact, MARK YOUR CALENDARS!). The webinar will be on – you guessed it – Drillbridge (maybe I need to change this to Jason’s Drillbridge Blog…).

Thankfully, this won’t be my first presentation or webinar. Your humble author has presented on all manner of topics, including load rule optimization, Oracle Data Integrator tips and tricks, Dodeca, Essbase Web Services, and a few other things for good measure. I am delighted to report that this will be my first webinar on a piece of software I have created, though, so I’m pulling out all the stops and designing what I’m tentatively calling the Drillbridge WOW demo.

The overall webinar will be about an hour long, but what I want to do is have a segment where I race to deploy drill-through functionality to a cube and show off how powerful the Drillbridge features and expression language can be.

So here's what I'd like to do, all in one fell swoop: download and install Drillbridge, configure a datasource, define a report, deploy the definition to a cube, and use complex mappings and features along the way. The report itself will feature drill-to-bottom, where the lowest level of the dimension needs to be mapped (such as from Jan to '01'), and the other dimensions will need similar mappings, such as removing a prefix or otherwise altering the member from the point of view. On top of that, we'll flip on Smart Formatting with a different locale (French, anyone?), re-run the report, and then download it to Excel. And to top it all off, I'll do all of this in 10 minutes or less.
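
To give a flavor of the kinds of mappings I mean – expressed here as plain Java for illustration, not actual Drillbridge expression syntax – the logic amounts to something like this:

    import java.util.Map;

    public class MemberMapping {
        private static final Map<String, String> MONTHS = Map.ofEntries(
                Map.entry("Jan", "01"), Map.entry("Feb", "02"), Map.entry("Mar", "03"),
                Map.entry("Apr", "04"), Map.entry("May", "05"), Map.entry("Jun", "06"),
                Map.entry("Jul", "07"), Map.entry("Aug", "08"), Map.entry("Sep", "09"),
                Map.entry("Oct", "10"), Map.entry("Nov", "11"), Map.entry("Dec", "12"));

        // Map an Essbase month member like "Jan" to its relational key "01"
        static String mapPeriod(String member) {
            return MONTHS.getOrDefault(member, member);
        }

        // Strip a prefix from a point-of-view member, e.g. "Loc_100" -> "100"
        static String stripPrefix(String member, String prefix) {
            return member.startsWith(prefix) ? member.substring(prefix.length()) : member;
        }
    }

The whole point of the demo is that Drillbridge lets you express transformations like these declaratively in the report definition, instead of writing and deploying code like the above yourself.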

That’s right – 10 minutes (or less!). That’s crazy. To think, I once spent hundreds of hours achieving this same result using different tools.

Sound good? Sound crazy? Well, the good news, as I said, is that this functionality all exists already and is ready to go in 1.3.0. As I mentioned in an earlier post, I’m just cleaning things up and testing them to ensure that Drillbridge is as robust as possible before releasing this major release.

I'm pretty excited – but what excites me most is being able to help the larger Hyperion community provide drill-through to its users in an evolutionary, incremental way with this great little tool. Stay tuned for more.

Hyperion Essbase wish list: Import a compressed file

I thought this up while attending Dan Pressman's Kscope presentation, "How ASO Works and How to Design for Performance," a presentation that definitely appealed to my inner Hyperion geek. Dan did a crazy-deep dive on performance tuning, with particular respect to loading ASO. He had some pretty bangin' hardware to play with, too.

Long story short – and many of us have known this for a while – there are ways to format your Essbase load files so that they load faster. Basically, you are trying to make things easier on Essbase: stream in less data, don't repeat things you don't need to repeat, don't thrash blocks in and out of memory, and so on. That's all well and good.
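
To make "don't repeat things" concrete, here's a tiny sketch (assuming a BSO-style cube where Time is dense): the first layout restates every member on every row and visits the same block three times; the second touches it once.

    Slower – members repeated, same block visited per row:
        "100-10" "New York" "Jan" 812
        "100-10" "New York" "Feb" 840
        "100-10" "New York" "Mar" 899

    Faster – dense dimension across columns, one row per block:
        "100-10" "New York" 812 840 899

The right layout depends on your outline, of course, but the principle is the same: less redundant text to parse, and friendlier access patterns.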

The advent and proliferation of SSDs in the enterprise has done wonderful things for Hyperion performance by eliminating many of the performance quirks of rotational media and the penalties of fragmentation. But at the end of the day, we are still looking for ways to pump ever-increasing amounts of information into our cubes even faster than we did the day before.

For instances where we are loading a file that resides on the same machine as the Hyperion apps/cubes, or even across the network, I wonder what performance benefits, if any, there would be if we had the ability to import a zip file.

Zip files can get awesome compression on text files. They can also have their uncompressed contents streamed: in other words, it's not necessary to extract a zip file before you can read its contents (starting at the beginning). In theory, if you achieved moderate to decent compression on your zip file and handed that to Essbase (say, via a specialized import data MaxL command), you would save time on the disk-read side of the data load at the expense of some additional CPU usage. Many Essbase load operations are disk I/O bound anyway, so this seems like a reasonable tradeoff to make.
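
For the curious, streaming a zip's contents without extracting it first really is off-the-shelf – here's the idea using the standard java.util.zip classes that ship with Java:

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.InputStreamReader;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class StreamZip {
        public static void main(String[] args) throws Exception {
            try (ZipInputStream zin = new ZipInputStream(new FileInputStream("data.zip"))) {
                ZipEntry entry;
                while ((entry = zin.getNextEntry()) != null) {
                    System.out.println("Streaming " + entry.getName());
                    // Records decompress on the fly – no temp file, no full extraction
                    BufferedReader reader = new BufferedReader(new InputStreamReader(zin));
                    String line;
                    while ((line = reader.readLine()) != null) {
                        // ... hand each record to the load process here ...
                    }
                }
            }
        }
    }

The reads stop automatically at each entry boundary, so a loader could consume record after record exactly as it would from a plain text file.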

As an elaboration on the concept, multiple text files could be placed in the same zip file, perhaps with a "load manifest" or options on the load command, and Essbase would attempt to parallelize the data load to the extent it can. This would likely be an add-on feature once the basic support is in place. In all, you would need to augment the data load process with a zip file reader routine (off-the-shelf libraries for this are quite common), a couple of new import data MaxL variants, and an augmentation to the Java API. I suppose you could leave the MaxL command alone and just program the interpreter to look for a .zip extension and treat it accordingly, but it seems like the better choice would be to specifically indicate that the data load is from a compressed file.
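
A sketch of what the parallel variant's plumbing might look like, using ZipFile (which, unlike ZipInputStream, allows per-entry random access) and a stock thread pool – again, all off-the-shelf:

    import java.io.InputStream;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.zip.ZipFile;

    public class ParallelZipLoad {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            try (ZipFile zip = new ZipFile("data.zip")) {
                // One load task per file in the archive
                zip.stream().forEach(entry -> pool.submit(() -> {
                    try (InputStream in = zip.getInputStream(entry)) {
                        // ... feed this entry's stream to its own load thread ...
                        System.out.println("Loading " + entry.getName()
                                + " on " + Thread.currentThread().getName());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }));
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS); // keep the zip open until done
            }
        }
    }

In other words, the hard part of this wish-list feature isn't the zip handling at all – it's the Essbase-side work of accepting and coordinating multiple inbound streams.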

Of course, if you're loading just from SQL, this whole thing wouldn't apply to you. Loading data files may seem low-tech, but it's incredibly common, and oftentimes I prefer it since I have an exact text file to tie back to if need be, versus a possibly changing SQL data store (but that's a conversation for a different blog post). This feature would cater to the performance nuts out there – and if Kscope is any indication, there are plenty. I'd be curious to hear anyone's thoughts on this.

Kscope13 Session Thoughts: Joe Aultman sprinkles magic Groovy dust on the Essbase Java API

There were lots of great sessions this year, as always. I tried to get outside my comfort zone a little bit and take in some new content. Over the next few posts, I thought I'd shine a little light on a few of the sessions that were notable to me.

Fixing What’s Broken in the Essbase JAPI and Reaching New Heights with Groovy

As a programmer, this was right up my alley, naturally. The session was presented by Joe Aultman, who is using Groovy to do automation that would otherwise be done with the venerable Java API. Joe, if you're reading this: super cool presentation. In my mind, this presentation was about two things: issues with a particular API, and issues with a language – namely, a lot of "boilerplate" Java code.

As an aside, there is something of a renaissance happening in the Java world with respect to the JVM. As a computer science guy (go Dawgs!) and general-purpose programming nerd, I have been keeping abreast of Groovy and several other JVM languages, particularly Clojure. In a nutshell, what's going on is this: Sun (now Oracle, of course) created Java many years ago. Java runs inside the JVM, or Java Virtual Machine. Whereas with a language like C or C++ you might compile your source code into an executable meant to run on a particular system, in Java you always compile down to the same bytecode, which in turn runs on a JVM. In other words, in theory, as long as a JVM is available for a platform (Windows, Linux, OS X, etc.), you should be able to run the same old bytecode. Hence the original slogan: "Write once, run anywhere." In practice there were and are many quirks to this, but by and large the statement is true – which is why, generally speaking, you are able to download Java JAR and class files that work irrespective of the underlying operating system (rather than separate downloads for Linux, OS X, and so on).
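
The upshot in a few lines of code: compile this once with javac, and the resulting Hello.class runs unmodified anywhere a JVM exists.

    public class Hello {
        public static void main(String[] args) {
            // "javac Hello.java" produces Hello.class (JVM bytecode);
            // "java Hello" runs that same file on Windows, Linux, OS X...
            System.out.println("Hello from whatever JVM is running me");
        }
    }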

As it turns out, the JVM is useful for more than just Java. New languages that compile down to code that runs on the JVM are able to stand on the shoulders of giants and leverage an incredible amount of infrastructure that has already been written and battle-tested. Some of the more notable languages running on the JVM are Groovy, Scala (kind of a streamlined Java), Clojure (a Lisp dialect), and Jython (Python running on the JVM).

Getting back to Joe's presentation, I think it's fair to say that Joe is enamored with what's called "syntactic sugar" in the programming world: language features or inherent abilities that reduce the need for verbose boilerplate code. Groovy more or less delivers the goods in this regard. Furthermore, Joe has created some idiomatic Groovy enhancements specifically for the Essbase Java API that reduce the need to include "clutter code".
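
To make "boilerplate" concrete – using generic JDBC here as a stand-in, not Joe's code, since the shape of the problem is identical – the classic connect/use/clean-up dance looks like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class Boilerplate {
        public static void main(String[] args) {
            Connection conn = null;
            try {
                // any JDBC URL you have handy works; H2 in-memory shown here
                conn = DriverManager.getConnection("jdbc:h2:mem:demo");
                Statement stmt = conn.createStatement();
                stmt.execute("SELECT 1"); // the one line we actually care about
            } catch (Exception e) {
                e.printStackTrace(); // every call site repeats this handling
            } finally {
                if (conn != null) {
                    try { conn.close(); } catch (Exception e) { /* swallowed, as one does */ }
                }
            }
        }
    }

In Groovy, a closure-accepting helper (think of an idiom along the lines of withConnection { ... }) can own the try/catch/finally scaffolding once, so it disappears from every call site. That, plus things like optional semicolons and native list/map literals, is the sugar.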

I definitely liked what I saw, although as an apparently diehard Essbase Java API guy I don’t think I’ll be switching anytime soon (you’ll convert me yet, Joe!). If you didn’t make the presentation then definitely give his slides a look. Joe is working through the red tape in his legal department to be able to post his code under an open source license. Joe, if you haven’t picked a license yet, give me a call and I’ll help point you in the right direction if I can.

In any case, nice job on the presentation, man.

In other news, I have been working a little bit on the side on a wrapper for the Essbase Java API that repents for some of its sins and modernizes its usage a bit. This is tentatively called the Java Essbase Antikythera Layer (JEAL). If you use the Essbase Java API at all, you might really like how it simplifies your life. It's unfortunately a labor of love that I can't dedicate a lot of time to, so if you want to help, please drop me a line!
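
I won't spoil the details here, but to give a taste of the kind of modernization I mean (these names are purely illustrative – not the actual JEAL API): if the wrapper's session object implements AutoCloseable, the whole sign-on/sign-off ritual collapses into a try-with-resources block.

    // Hypothetical wrapper type for illustration only
    public class EssbaseSession implements AutoCloseable {
        public static EssbaseSession connect(String url, String user, String password) {
            // ... sign on via the underlying Essbase Java API ...
            return new EssbaseSession();
        }

        public void runCalc(String app, String cube, String script) {
            // ... delegate to the underlying API ...
        }

        @Override
        public void close() {
            // sign off / release the connection – called automatically by try-with-resources
        }

        public static void main(String[] args) {
            try (EssbaseSession session =
                    EssbaseSession.connect("http://server/aps/JAPI", "admin", "password")) {
                session.runCalc("Sample", "Basic", "CALC ALL;");
            } // no finally block, no null checks – close() is guaranteed to run
        }
    }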


Kscope13 Day 1 – Deep Thoughts, part 2 of 2

I thought I'd have a little more time to write during the conference, and yet here I am sitting at the airport after a long and eventful week. Well, I had good intentions, at least. For those interested, I have a few more thoughts to go along with part 1 of my recap of the first day.

Cross-pollination of ODTUG sessions as an indicator of broader convergence in the Oracle space

Although I didn't have a chance to attend them, Kscope this year featured some cross-pollination sessions where an EPM guy could see what it's like on the other side and an Oracle guy could see what it's like on the EPM side. I thought this was a really cool idea, but I also interpreted it another way: more so than at any other Kscope I've been to, there were sessions outside the EPM track proper that appealed to me. And this isn't necessarily because the scope of my interest has miraculously increased, either; it's because Essbase is being leveraged as the heart of other tools, and the ways our tools work and the ways we provide solutions to customers are converging. My prediction (one that is hardly insightful) is that the convergence continues to the point where the line between the different camps is almost non-existent.

Vanilla Essbase Shops Seem on the Decline

I was in a session where the presenter asked for a show of hands regarding who had Essbase, Planning, and other tools. One of the questions was "Who has just Essbase and nothing else?" – and given the sizable crowd, just a few hands went up. I can't say I was surprised, but I can say that I was… disappointed. I've been pretty vocal (though not on this blog, I suppose) about my qualms with the way Oracle bundles and sells Essbase and other products. To be succinct, I find it regrettable that we are operating in a context where Essbase is arbitrarily bundled with other products in such a way as to benefit Oracle's bottom line first and its customers' needs second. This is not to say that Essbase exists in a vacuum and that there are no other tools to go with it; rather, I would challenge Oracle to show that the current way of doing things is the best way, or even a good one.

WaMu is a Hyperion Customer

At some point during one of the presentations, I saw a slide with a list of dozens of company logos, including one for the now-defunct Washington Mutual. I had a brief conversation in my head with a fictitious Oracle marketing person about whether or not it makes sense to leave that logo on a customer slide. In any case, it's mildly amusing to think about.

“Finance is Still Stuck in Spreadsheets”

I heard this at some point during the conference. It's true, but I don't believe the negative connotation is necessary. My real takeaway is this: spreadsheets are ubiquitous and useful. Many of them evolve into complex tools with mazes of VLOOKUPs and byzantine logic. One wonders how much better these organizations might fare if they recognized their homegrown spreadsheet mazes evolving into something complex and unwieldy, and then had a tool available with lower barriers to entry than, say, Planning – one with less onerous administrative requirements and an economic model that makes sense for fewer than 25 or so users.

Closing Thoughts

These are just a few more high-level thoughts from my notes to go with part 1, rounding out my summary of the first day. As time permits, I'll post some thoughts on specific sessions – and even my own session!

A lot of you – an incredibly and surprisingly high number of you – came up to me and said that you read my blog. I really appreciate the kind comments. As I’ve mentioned before I just find this to be an increasingly quasi-therapeutic place to post my inane thoughts on whatever, which is reason enough to do it. The fact that some of you out there enjoy this is just icing on the cake. Please don’t be a stranger in the comments section.

Kscope13 Day 1 – Deep Thoughts, part 1 of 2

My previous post contained general thoughts on the conference, but nothing about the content. So I'll now share some high-level thoughts on what is going on. Oracle has respectfully requested that we not divulge some of the sensitive particulars and roadmappy stuff, so I'll gloss over that a bit and just say that maybe you should come to these things if you want to be in the cool kids' club. But I digress.

Essbase is in good hands

My first thought of the day was that anyone who feared that when Oracle bought Hyperion they would take Essbase out back and shoot it has nothing to worry about. Oracle is quite clearly putting tremendous resources into seemingly all facets of the product. It wouldn't have been a terrible strategy (though definitely not a good one) to put Hyperion on cruise control, throw a few resources at it to keep the lights on and then some, and leave it there. But Oracle quite obviously has some big thinkers – and perhaps more importantly, big thinkers that Get Shit Done – who zoomed out and strategized about how to effectively leverage, break down, take apart, and combine the good technology they bought into a comprehensive suite of tools.

Predictive Analytics & Exalytics

Years ago, a light bulb went off for me when I started to think of Essbase and multidimensional tools not merely as a way of seeing how an organization has performed, but as a way to predict how it will perform. To that end, Essbase is recognized as a critical tool for organizations looking ahead. For Oracle's part, they recognize this and are acting accordingly. Despite my interest in and recent exposure to big data and cloud computing, I haven't had a chance to touch the likes of Exalytics yet, and I haven't gone out of my way to get involved with it. But after hearing more about all that is going on with it and where it is headed, I am going to move it up on my priority list. Despite the title of this section, I'm not saying that anything to do with predictive analytics automatically involves Exalytics. I am saying, however, that if you want to model the future with many variables and dimensions, you need something that can crunch a crapload of data.

ADF Love

ADF is getting a ton of love. I am more of an Eclipse guy, and the tooling for ADF in Eclipse has left me wanting in the past; furthermore, I have tended to stick with other technology stacks (even within the Java ecosystem). But as a developer who does a lot in the Oracle world, I am going to give ADF a much, much stronger look in the upcoming year. There are some very interesting things going on with ADF Mobile that I was previously unaware of and that are worth a look. I have a bias towards native apps, as they fit my view of apps being crafted with a precision and fluidity that, at present, HTML5 can't quite seem to beat. However, it is very compelling to be able to easily deploy to multiple disparate platforms with one code base. I have some colleagues, though, who lament the need to "write once and fix everywhere," almost as if they were reliving the initial and somewhat dishonest promise of early JVMs ("write once, crash everywhere," or thereabouts).

Modules Everywhere – There’s a module for that!

There's no shortage of modules coming out of Oracle for this and that: Planning modules, modules for other things, and so on. These modules tend to be quite specific in nature, dealing with workforce, capex, tax management, and the like. Architecturally, it's good that these are modules, because from a design standpoint you don't want a huge monolithic product that tries to be all things to all people. Modules aren't a bad thing. I just find the juxtaposition between a generic platform and the domain-specific modules on top of it amusing, for technical and geeky reasons.

For example, think of a normal relational database server that only has the notion of numbers, strings, references to other columns, and so on. This generic database has no notion of taxes, employees, and whatnot. Those things can all be modeled within the technology itself, of course, and the database will happily comply. It's within the purview of the developer to use the technology to solve the problems and provide for the needs of the business – the database or the cube becomes the blank slate upon which we paint the solution, adding semantics to abstract concepts. Like I said, it's not a bad thing; I just find it a little amusing, since I'm a geek. You've undoubtedly heard "There's an app for that" – in my mind I picture the Oracle folks saying, "There's a module for that!"

LCM

What used to be a vengeful, impossible-to-please digital bag of spite has evolved into an essential tool in the toolbox. So, yeah. Yay Oracle.

Onward to Part 2

I am absolutely spent. I have a few more thoughts that I’ll wrap up tomorrow. If you read this and are at the conference, please say hi!