Vess Updates Substitution Variables

A colleague of mine is running into an issue with substitution variables and was looking for a solution that he could use to sync values up. He thought maybe Vess would be a good fit. Vess, as I have blogged about before, is a “virtual” Essbase JDBC driver. Vess maps Essbase concepts and crams them into a typical database model. For example, Vess exposes tables that model substitution variables.

In the case of server-wide substitution variables, there is a “VARS” table that has two columns: NAME and VALUE. For each application, there is another table that contains four columns: NAME, VALUE, APPLICATION, DATABASE.

As a quick aside, it might seem a little odd to have separate tables. After all, this is notionally about the same data as the Variables screen in EAS. Well, you have to flip your thinking a little. Don't think of a variable as belonging only to either the server or a cube: think in terms of which variables are applicable to a given scope. In other words, if you ask Essbase which variables are applicable to the whole server, you get the global variables only. If you ask Essbase which variables are applicable to a cube, you get the cube-, app-, AND server-specific variables. This is one of the reasons there are multiple tables to model the variables.

Getting back on track, given that we have these tables and we can treat them just like normal SQL tables, we can do some interesting things. Let’s say we want to create or update a variable specific to an app that exists in the global scope. We can do this in one line:


INSERT INTO SAMPLE.VARS
SELECT NAME, VALUE, 'Sample', 'Basic'
FROM VESS_SCHEMA.VARS WHERE NAME = 'Foo';

What’s going on here? In Vess, a schema named VESS_SCHEMA is presented for server-wide things (server wide variables are in the table VARS in this schema, as shown above). The server VARS table only has columns NAME and VALUE.

Each application on the Essbase sever is modeled as its own schema. In this case, our favorite app – Sample – gets a schema named SAMPLE. This schema also contains a VARS table (containing columns NAME, VALUE, APPLICATION, and DATABASE).

Given these tables, it's a simple matter of selecting the server variable with the name we want (in this case, a variable named 'Foo') and inserting it into the variables for the Sample app.

Of course, if we wanted to for some reason, we could alter the name using normal SQL (string truncation, substrings, etc.). We can also delete variables, like this:


DELETE FROM SAMPLE.VARS WHERE DATABASE = 'Basic';
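To round this out, here's what the same sync might look like from plain JDBC code. This is a minimal sketch: the JDBC URL format below is my own placeholder (Vess isn't publicly available, so the actual connection details may differ), but the SQL is exactly what we ran above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VessVariableSync {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL; the real Vess URL format may differ
        try (Connection conn = DriverManager.getConnection(
                "jdbc:vess://essbase-server:1423", "admin", "password");
             Statement stmt = conn.createStatement()) {

            // Copy the server-wide variable 'Foo' down to the Sample app
            stmt.executeUpdate(
                "INSERT INTO SAMPLE.VARS " +
                "SELECT NAME, VALUE, 'Sample', 'Basic' " +
                "FROM VESS_SCHEMA.VARS WHERE NAME = 'Foo'");

            // Show the variables now applicable to the Sample app
            try (ResultSet rs = stmt.executeQuery("SELECT NAME, VALUE FROM SAMPLE.VARS")) {
                while (rs.next()) {
                    System.out.println(rs.getString("NAME") + " = " + rs.getString("VALUE"));
                }
            }
        }
    }
}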

As I’ve said before, Vess continues to be an “interesting” proof of concept. As time permits I am filling out more and more functionality. At present, Vess models things like substitution variables, metadata you might see in MaxL or EAS (cube statistics, user sessions, etc), can load data to cubes, and can do certain outline related operations.

Vess is not available as a public download at this time but I have handed a few copies out to get feedback. I think Vess is just about good enough to be used in automation and other things. If you’re interested in using this in a production situation (automation or otherwise), please contact me to discuss support options.

Essbase data loads with Windows integrated authentication

This is a followup to a post I made on the Network54 Essbase forum: Yes, you can use Windows integrated authentication (indirectly) to perform Essbase data loads. It’s not necessarily supported, but it works.

First you start off with the connection definition in the Windows ODBC connection manager. This is where you set up a normal ODBC connection to SQL Server:

ODBC connections in Windows to be used by Essbase data loads

One of the configuration screens is where you specify the credentials, like this:

Connecting via integrated authentication

The SQL Server connection configuration lets you specify how to connect to the database instance. I would guess that much of the time this is configured with a particular native SQL Server ID. Note that it’s also possible to connect using “Windows NT authentication”. You can select this option to try and have the local machine connect via its own credentials to the SQL Server instance. That is, the user running the Essbase service on the local machine would need to have access to the given SQL Server instance.

With the ODBC connection setup, you can reference it in a load rule as normal:

Essbase Data Load SQL Definition

Next when you go to test the connection/data load, just specify anything you want for the user name and password when it prompts you in this dialog box:

Essbase SQL Data Load Credential Dialog

I have gotten into the habit of typing in 'dummy'. As Peter Nitschke unhelpfully points out, in this case I don't even need to document the automation author, because it's already documented (heyoooo). Pete, I know you read this, by the way – is that rash cleared up yet? You do actually have to put in something, otherwise the load won't work.

Similarly if you do a normal data load, you have to put something in:

Essbase Data Load from SQL Load Rule

Or else you’ll get this helpful error message:

The error when the username is not specified

Do note that the username and password are both discarded or otherwise unneeded assuming the network credentials are sufficient to connect to the database. As a somewhat interesting aside, this seems to be a quirk in the Essbase code path that processes the data loads. It guards against NULL (empty) values and if it picks up one or the other for username and password, it errors out the load. If memory serves, you can get away with only filling in a username in the data load rule SQL editor test but you need both when doing a data load. This is due to differences in how the EAS GUI is verifying data. Also, if you write MaxL automation that relies on this Windows integrated authentication, you need to fill in some dummy or otherwise ignored values in the MaxL.

My guess is that Essbase actually tries using the username/password but when it is able to make a successful connection via the integrated authentication, the SQL Server driver just ignores any other connection efforts, so the “bad” username and password have no effect.

Apparently connecting this way is unsupported, but I have set it up and used it numerous times without any issue. If Oracle really wanted to polish this up and make integrated authentication a properly supported option, that'd be great. I might start with some consistency on the EAS side of things, as well as an alternative MaxL syntax that lets you explicitly specify integrated authentication (put another way, a MaxL syntax that doesn't need a username/password in it).

Essbase Version Archaeology

I was digging through some of the Essbase Java API internals the other day, trying to work around an issue I was encountering with Vess, my Essbase JDBC driver. Along the way I came across a constants file with a fun history (of sorts) of Essbase versions.

Complex software packages will often track their version somehow. This is for many reasons, but a very common one is so the software has a way to gracefully handle version discrepancies. For example, some client software (perhaps the latest version of Smart View) may be working with an older version of the Essbase server. If there was some feature that came out that Smart View supports but isn’t available in the older version of Essbase that it’s connecting to, it might find this out by asking the server for its version. The client can then hide/disable the feature as needed.
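In code, a version gate like that can be as simple as a comparison against one of the constants from the file in question. Here's a trivial sketch – the constant value is real (it appears in the list later in this post), but the feature check itself is hypothetical:

public class FeatureGate {

    // From the constants file shown later in this post (11.1.2 = 725504)
    private static final int ESS_API_VERSION_11_1_2 = 725504;

    // Hypothetical check: enable some feature only when the server
    // reports a version of at least 11.1.2
    public static boolean supportsSomeFeature(int serverVersion) {
        return serverVersion >= ESS_API_VERSION_11_1_2;
    }
}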

That said, the Essbase Java API has a file that tracks various versions of Essbase. This shouldn’t necessarily be interpreted as a full version history, but when the Essbase developers need to make a change that could affect version compatibility, it will often result in a change to the internal version matrix.

In any case, the constants in the file tell kind of an interesting story. We can see a handful of code names in the variables, such as Gemini, Orion, Levi, Sunspot, Eclipse, Cromwell, and Beckett.

Other versions are just designated by a major/minor/patch/build combination, such as Essbase 6.2, 6.5, 6.5.1 (this is the version of Essbase I cut my teeth on!), 7.0.0, 7.1, 9.0.1, 9.2.0.2, 9.2.0.3, 9.2.1, 9.3.1 (one of my favorite versions), and 9.3.1.4.

Then the jump to version 11 happens, where we can see 11.1.1, 11.1.1.2.1, 11.1.1.3, 11.1.2, 11.1.2.1, 11.1.2.2, 11.1.2.2.1, 11.1.2.2.5, and 11.1.2.3.

Curiously, in the 11.x series, at some point a seemingly separate series of variables was introduced, for versions 11.1.2.2.1 PV0, 11.1.2.3 PV0, and 11.1.2.3 PV1. I'm not totally clear on what the PV is meant to indicate (patch version?).

At some point, someone also introduced a set of three variables to track the release/revision/patch that would apparently always be the current version of the API, but they were only ever set to major version 6, minor version 5, and patch 0. I think someone decided that this variable structure, while seemingly clean/useful, was untenable in the long haul.

Lastly, there are a couple of "versions" that coincide with specific features that came out or were otherwise necessitated by testing: ECLIPSE_ALPHA, ECLIPSE_ALPHA_II, Hybrid Analysis (HYBRIDANALYSIS), non-unique names (NONUNIQUENAME), UNICODE, and some sort of "parity" (PARITY) marker that has the same value as 11.1.2.1. An unqualified version variable seems to match up with the version of the library itself (I took this from an 11.1.2.3 JAR file, hence it matches the entry for 11.1.2.3).

I remember an Oracle product manager at some point indicating that the code names for Essbase versions were authors – hence Beckett and Cromwell – but I also distinctly remember hearing that one of these versions was codenamed "Tallyrand", which isn't reflected in this list. I'd be curious what some of the other code names were along the way, so if anyone out there knows, please send me an email and I can post them here for posterity (if nothing else).

Here’s the full list from the file from 11.1.2.3 for those that are curious:

public static final int ESS_API_VERSION = 725552;
public static final int ESS_API_VERSION_11_1_2_3 = 725552;
public static final int ESS_API_VERSION_11_1_2_2_5 = 725541;
public static final int ESS_API_VERSION_11_1_2_2_1 = 725537;
public static final int ESS_API_VERSION_11_1_2_2 = 725536;
public static final int ESS_API_VERSION_PARITY = 725520;
public static final int ESS_API_VERSION_11_1_2_1 = 725520;
public static final int ESS_API_VERSION_11_1_2 = 725504;
public static final int ESS_API_VERSION_11_1_1_3 = 725296;
public static final int ESS_API_VERSION_11_1_1_2_1 = 725281;
public static final int ESS_API_VERSION_11_1_1 = 725248;
public static final int ESS_API_VERSION_9314 = 602432;
public static final int ESS_API_VERSION_931 = 602368;
public static final int ESS_API_VERSION_BECKETT = 602112;
public static final int ESS_API_VERSION_921 = 598272;
public static final int ESS_API_VERSION_9203 = 598064;
public static final int ESS_API_VERSION_9202 = 598048;
public static final int ESS_API_VERSION_901 = 590080;
public static final int ESS_API_VERSION_NONUNIQUENAME = 589824;
public static final int ESS_API_VERSION_CROMWELL = 463360;
public static final int ESS_API_VERSION_71 = 462848;
public static final int ESS_API_VERSION_UNICODE = 458752;
public static final int ESS_API_VERSION_700 = 458752;
public static final int ESS_API_VERSION_651 = 413952;
public static final int ESS_API_VERSION_65 = 413696;
public static final int ESS_API_VERSION_HYBRIDANALYSIS = 413696;
public static final int ESS_API_VERSION_62 = 401408;
public static final int ESS_API_VERSION_ECLIPSE = 329472;
public static final int ESS_API_VERSION_ECLIPSE_ALPHA_II = 329216;
public static final int ESS_API_VERSION_ECLIPSE_ALPHA = 328960;
public static final int ESS_API_VERSION_SUNSPOT = 328192;
public static final int ESS_API_VERSION_LEVI = 327936;
public static final int ESS_API_VERSION_ORION = 327680;
public static final int ESS_API_VERSION_GEMINI = 262144;
public static final short ESS_API_RELEASE = 6;
public static final short ESS_API_REVISION = 5;
public static final short ESS_API_PATCH = 0;
public static final long ESS_API_VERSION_11_1_2_3_PV1 = 3116222111612928L;
public static final long ESS_API_VERSION_11_1_2_3_PV0 = 3116222111547392L;
public static final long ESS_API_VERSION_11_1_2_2_1_PV0 = 3116157687037952L;

Your daily Vess (Virtual Essbase JDBC driver) update

The other week I mentioned that I’d been playing around with the idea of a virtual JDBC driver for Essbase. Much to my amazement, quite a few people (relatively speaking…) expressed an interest in this project and even graciously offered to help test it.

To reiterate from last week, I still think that Vess is "interesting". It has also been one of the more complicated things I have tried to create. Implementing such a thing is really an exercise in pounding a round peg into a square hole (or pounding a cube into a cylinder…), requiring advanced knowledge of Java, the JDBC driver model, Essbase, the Essbase Java API, network programming, and more. I'm leveraging code from a half dozen projects I have written, some public and some not.

That said, if you ever wondered why Oracle wasn't nice enough to make a true JDBC driver for Essbase, it's because you have to be a little crazy to do it. And apparently I'm just the right kind of masochistic crazy.

Anyway, the driver works – amazingly. It’s not production ready and won’t be for some time, but you can view your substitution variables from a table and even load data to a cube by doing a normal SQL INSERT.

In fact, you can even read a CSV file on the fly and insert it to a cube with no load rule, like this:

INSERT INTO VESS_SCHEMA.SAMPLE_BASIC_DATA ("Year", "Measures", "Product", "Market", "Scenario", "DATA")
SELECT * FROM CSVREAD('/Users/jasonwjones/test/sample_basic_data.txt');

Crazy, eh? So all you load rule haters rejoice.

You can even use SQL transformations to adjust the data and load that into a cube from a remote JDBC table:

INSERT INTO VESS_SCHEMA.DRILLING_DRILLING_DATA ("Years", "Time", "Location", "Account", "DATA")
SELECT 'FY' || RIGHT(FISCAL_YEAR, 2) AS Years, 'Jan' AS Time, DIVISION || '-' || LOCATION AS LOCATION, '0' || ACCOUNT, AMOUNT FROM TRANS_SMALL;

What next?

A lot of the hard stuff is in place so the name of the game now would be to iterate and start to introduce a few more table representations of Essbase concepts, such as cube stats, server sessions, and some other interesting things. I’m still contemplating how you could layer in the outline editing model in a sensible way.

Getting back to this being “interesting” – I think it’s really interesting in a couple of ways. One is the idea of getting to use any off-the-shelf JDBC tool to be able to do things with an Essbase server. The other thing that is interesting is ODI (which I guess is interesting by way of the first reason, since databases/JDBC “fit” into ODI better than OLAP). The way Essbase works in ODI right now is that some custom Java libraries are used and glued together with some Jython scripts and Knowledge Module steps. In theory you could instead drop in the Vess driver as a generic SQL data source and do all of this with vanilla RKMs, LKMs, and IKMs. How cool would that be? To the extent that supporting Hyperion inside of ODI has been “problematic” for Oracle, you could perhaps sidestep it this way.

Anyway, I have a few folks that have valiantly offered to help test this thing out. If you have a spare cube, Java 1.8 (or greater), can download the free Squirrel SQL tool, and want to play with this, please hit me up, I would love some feedback.


Essbase Java API digging: data load from FTP?

I spend a lot of time working with the Essbase Java API, so I have become pretty familiar with it. I have used it to build a middle tier for a mobile reporting solution, a core part of the innovative Drillbridge drill-through solution for Essbase, Planning, and Financial Reporting, and countless other little apps. (In fact, my experience with Essbase, Java, ODI, and Jython all comes together in one of my upcoming endeavors, but more on that later…)

In any case, much like a trip to Costco, going through the Essbase Java API can be a bit of a treasure hunt at times. One such thing I came across recently had to do with the loadData() method on a cube object (IEssCube).

There are actually a few loadData() methods – in programming parlance, the method is overloaded. That is, there are multiple methods with the same name, but they differ in their argument types, so behind the scenes, Java is intelligently able to figure out which one to call. Method overloading is frequently done for programming convenience.

For example, an object might have a method getGreeting(String name) that takes a name and returns “Hello ” plus that name. Another method with the same name might be getGreeting(String name, Date time) and this returns a greeting that is customized by the time of day.
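As a quick sketch, here's what those two overloads might look like in actual Java (my own toy example, nothing from the Essbase API):

import java.util.Calendar;
import java.util.Date;

public class Greetings {

    // Simple version: just takes a name
    public static String getGreeting(String name) {
        return "Hello " + name;
    }

    // Overloaded version: same name, different parameters, so Java
    // automatically picks this one when a Date is also passed
    public static String getGreeting(String name, Date time) {
        Calendar calendar = Calendar.getInstance();
        calendar.setTime(time);
        int hour = calendar.get(Calendar.HOUR_OF_DAY);
        String timeOfDay = (hour < 12) ? "Good morning" : "Good evening";
        return timeOfDay + ", " + name;
    }

    public static void main(String[] args) {
        System.out.println(getGreeting("Jason"));
        System.out.println(getGreeting("Jason", new Date()));
    }
}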

The Essbase cube object in the Java API contains three loadData() methods, and one of them caught my eye: there's a loadData() method that takes a username and password. According to the docs, you can actually load a file over FTP (in which case the file name would be an FTP path, presumably with a server and maybe even prefixed with ftp://), and the username/password are the FTP username/password.
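For the curious, here's roughly what calling it might look like. The sign-on boilerplate is the standard JAPI pattern, but I want to be clear that the loadData() parameter list below is my guess from the Javadoc description – treat the argument order, the FTP path, and the credentials as assumptions, not gospel:

import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.session.IEssbase;

public class FtpLoadSketch {
    public static void main(String[] args) throws Exception {
        // Standard JAPI sign-on boilerplate (provider URL and server are placeholders)
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        IEssOlapServer olapServer = ess.signOn("admin", "password", false, null,
                "http://aps-server:13080/aps/JAPI", "essbase-server");
        IEssCube cube = olapServer.getApplication("Sample").getCube("Basic");

        // Hypothetical FTP path and credentials; per the docs, the file name
        // can apparently be an FTP location and the username/password are the
        // FTP credentials (this exact signature is assumed, not verified)
        cube.loadData("ftp://ftp-server/data/sample_basic.txt", false,
                "ftpUser", "ftpPassword");

        olapServer.disconnect();
        ess.signOff();
    }
}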

I thought this was kind of cool because it’s not something that’s ostensibly visible in EAS. So it could be something that’s buried in the API that is used behind the scenes, or maybe it was some plumbing done for PBCS. Maybe it has even been there forever. Like I said, I thought this was interesting… there are also a few other fun tidbits I’ve seen over the years in the API so I’ll try and point those out in the future. If you know of some please send them my way!

Fun with Unix join (on Windows too!)

Last week I mentioned that I was going to take a look at some interesting Unix utilities that you can also run on Windows by way of GnuWin (or some similar GNU on Windows library).

The first item that I want to look at is the “join” program. Join can perform operations very similar to a SQL join, except that it works with text files. Let’s say I have two text files, one named people, and one named colors. The contents of the people file is a unique ID followed by a name, and the colors file is everyone’s favorite color – where the columns are the person’s ID and then their favorite color.

Here’s the people file:

1 Jason 
2 Cameron 
3 Celvin 
4 Michael 
5 Opal 
6 Amy

Here’s the colors file:

1 Orange
2 Pink
3 Purple
4 Red
5 Blue
6 Hot Pink

As a very simple case, we can run join with just the names of these two files:

join people colors

And we’ll get this output:

1 Jason Orange
2 Cameron Pink
3 Celvin Purple
4 Michael Red
5 Opal Blue
6 Amy Hot Pink

In this case, the join program just assumed that the first column of each file was the key to match on, and joined accordingly. Also by default, the output was all of the columns from each file.

Let’s change things up and use a different people file, this time named people-small and containing fewer entries, like this:

1 Jason 
4 Michael 
5 Opal

Now if we run join with this file but the same colors file, like this:

join people-small colors

This will be our output:

1 Jason Orange
4 Michael Red
5 Opal Blue

If you come from the SQL world at all, you will notice that by default, join performed an inner join with respect to the data – that is, the three rows in the “left” data store (people-small) were paired up with the matching rows from the colors file, but the colors that didn’t match anything were just discarded. Similarly, if we run the full people file with a smaller colors file (named colors-small), we’ll see something similar. Consider this colors-small file:

2 Pink
3 Purple
6 Hot Pink

And running this command:

join people colors-small

Then we’ll get this:

2 Cameron Pink
3 Celvin Purple
6 Amy Hot Pink

Again, an inner join was performed, in that only matched up rows (from one side or another) were paired up and put into the output.

Note that this behavior (assuming the first column is the ID to match on, and performing an inner join) is just the default behavior for join. As with most Unix tools, join has a plethora of command-line switches that define its behavior. You can choose different join methods, fields, and more.

The Hyperion Connection

I wanted to mention this tool specifically because I think for many of us working with ostensibly relational data (in text files), it can come in handy every now and then. I actually had a perfect use case for this the other day where I just wanted to quickly load some data, but I didn't want to mess with ODI or deal with importing it to a database. I had to sort the records in the input file first, but after that the join program worked like a charm and generated a file that I could easily use with a load rule to load up to a cube.

Cool tool for generating data

I came across a really useful tool the other day for generating sample relational data. It's a web service called generatedata.com (inventively enough). It lets you define the columns, data types, and how to generate the data (such as random names, states, numbers, numbers in a range, and more), then generates the data for you. You can get the data in delimited form or even directly as INSERT statements for your database of choice. It looks like it generates 100 rows for free, and you need to pay for a premium service if you need more. Fortunately in my case I was good with 100 rows.

Of course, if you need to generate data for your cubes, you might want to use a tool designed specifically for doing so, such as the Cubedata tool I built some time ago (version 2 is in the works!).

Code This, Not That: Python string interpolation/concatenation

That title is a bit of a mouthful, ain’t it?

I often see VB code in FDM jobs that is creating SQL statements – concatenating things together, adding in parameters from variables, lots of quotes, single quotes, plus signs, underscores, and more all over the place. It’s hideous.

For example, check some of this garbage out:

strSQL = "INSERT INTO " & TEMP & " (ENTITY,ACCOUNT,COSTCENTER,AMOUNT) " & _
" SELECT Entity.label, ACCOUNT.label, Entity.label+'_CC'", & Period & _
" from " & DATABASE & " as FACT"

For the record, I’m not saying if you see this or write this that you’re a bad developer. What I will say, however, is that you and/or your predecessors were hobbled by a clunky programming language and now you can atone for your syntactic sins with Python (Jython).

Python offers some nice facilities for handling strings. If you are looking for a straight translation of the normal VBScript string concatenation, you could do this:

account = "170100"
select = "SELECT Amount FROM Transactions WHERE Account = " + account

Python uses the plus operator (+) instead of the ampersand (&) to concatenate strings. This is straightforward and works well for simple situations. Of course, things get a little harder to read when we need to surround our value with single quotes (since it’s a SQL query after all):

select = "SELECT Amount FROM Transactions WHERE Account = '" + account + "'"

This isn’t terribly hard to read but we’re on a slippery slope. One thing we can try is simple “printf” style string interpolation:

select = "SELECT Amount FROM Transactions WHERE Account = '%s'" % account

In this case we have a %s placeholder meaning that we will place a string in it. Then we are using the modulo (%) operator to place the value of account into the string. This probably looks funny if you’re not familiar. What’s going on with this syntax?

Remember how you used the ampersand in VB to concatenate strings, but in Python you use a normal plus? Well, the fact that we can use the plus in Python is due to operator overloading. This is a programming term that essentially means we can program in our own behavior for math operators. You're really used to this with integers (think 2 + 3 = 5), but maybe not so used to it for strings. It's a clever trick and a nice facility that lets us work with strings (or indeed, other types of objects) without having to resort to a lot of boilerplate code like newStr = StringUtilities.concatenate(query1, query2) or something similarly awkward.

Along those lines, whereas the modulo operator in math means “give me the remainder when dividing these two numbers” (such as 5 % 3 = 2), in Python this operator has been overloaded to provide convenient operations with strings (after all, how on earth would you ever perform modulo arithmetic on strings?). So when we write string1 % string2 (which is what we had above with account), Python calls the appropriate handler for the string object, which in this case is a fancy string replacement functionality.

This string replacement facility, as it turns out, is pretty handy – and this leads me to the final example of this post.

Python supports objects known as dictionaries. For those from the Java world, this is essentially the same as a Map (or more specifically, a HashMap). VBScript has these objects too, but they aren't part of the language itself and they have a clunky syntax. In Python we can make a dictionary simply, like this:

params = {
    "ACCOUNT" : "0170200",
    "PERIOD"  : 2
}
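Just to ground the Map analogy for those coming from Java, the rough equivalent of that dictionary would look something like this (nothing FDM-specific here, purely for comparison):

import java.util.HashMap;
import java.util.Map;

public class Params {
    public static void main(String[] args) {
        // Roughly the Java equivalent of the Python dictionary above
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("ACCOUNT", "0170200");
        params.put("PERIOD", 2);
        System.out.println(params);
    }
}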

Next, Python also lets us create multi-line strings (and we don’t even need the line continuation operator from VB where the lines end with & _). We tell Python we have a multi-line string simply by using three quotes:

select = """
    SELECT 
        Amount
    FROM
        Transactions
    WHERE
        Account = '%(ACCOUNT)s'
        AND Period = %(PERIOD)d
""" 

Note that this string has been parameterized with ACCOUNT and PERIOD placeholders. The syntax is %(KEYNAME)s for strings and %(KEYNAME)d for integers (there are other codes as well but these are the most common).

Now we have a multi-line string with placeholders that we can replace with values from a dictionary. We could print this out easily:

print select % params

And we get this code:

SELECT 
    Amount
FROM
    Transactions
WHERE
    Account = '0170200'
    AND Period = 2   

This works exceptionally well for crafting SQL queries (also, yes, I know about placeholders with ? that are specific to SQL query parameters, let’s save that for another post). The wins here are that you get to format the query how you want, you don’t have quotes and ampersands and plus symbols all over the place, it’s easy to read, easy to maintain, and even faster to write.

Remember, code for readability. Also keep in mind that if you are converting VB to Python/Jython, you are not supposed to slavishly translate line by line – you are translating the solution as whole.

Advanced Drillbridge tokens: Protecting against unexpected members

Here’s another quick tip for setting up Drillbridge tokens. I recently created a Drillbridge report that needs to map a member from the Cost Centers dimension over to multiple columns in a relational database table. For example, a cost center may have the format 111-222-333-44, and the code 222 corresponds to a column in a table, 333 corresponds to a column in a table, and 44 corresponds to a column in a table (these are all in the same table, by the way).

Because the incoming member on the POV comes across as a Java string, we can perform normal Java string operations on it. In this case we can easily extract the "222" by doing the following (assuming the cost center dimension is named "Cost Centers"):

#Cost_Centers.substring(4, 7)

Recall that in Java, string character offsets start at 0, and that the substring we extract does not include the character at the ending offset (this is all spelled out nicely in the Java String documentation). So effectively, in the above function we are saying "give me the substring of the #Cost_Centers variable from character 4 through character 7 (but not including 7)."

A problem with this approach is that members with fewer characters could potentially cause a Java StringIndexOutOfBoundsException. For example, a level one member or a parent to our cost center might be "Other". When the substring method is called on "Other", we are asking Java for characters that do not exist, and an exception is thrown.
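To make the failure concrete, here's a tiny standalone demonstration of both cases:

public class SubstringDemo {
    public static void main(String[] args) {
        String costCenter = "111-222-333-44";
        System.out.println(costCenter.substring(4, 7)); // prints "222"

        String parent = "Other";
        // Throws StringIndexOutOfBoundsException: the ending offset 7
        // is past the end of a five-character string
        System.out.println(parent.substring(4, 7));
    }
}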

I thought for a while about the best way to handle this. There are a lot of ways I could go in terms of the code, but I decided that the best approach is one that won't really affect the Drillbridge code much at all right now.

The decision to use a full-fledged expression language as the basis of Drillbridge token expressions has turned out to be quite fortuitous, and one of the things it affords us inside of Drillbridge expressions is the ability to write complex scripts that can handle this situation for us. One such way to guard against member names that are too short is to use the ternary operator.

Many programming languages support the ternary operator. It’s a compact way of representing an “if-else” construct. For example, consider this simple code:

if (stopped) {
    return "Red";
} else {
    return "Green";
}

In this case, the "stopped" variable is a boolean variable that is always either true or false. If the variable is true, our function returns the text "Red", and if the variable is false, our function returns the value "Green" (assume that the if block is contained within a function that returns a String value).

This construct is so prevalent in computer programming that many languages support a way to compactly write this, using the ternary operator:

String colorText = stopped ? "Red" : "Green";

In the above example, the variable stopped gets evaluated and if true, the whole expression will evaluate to “Red” and if false, it’s “Green”. Then the String variable named colorText will get the value.

We can use this exact same construct inside of a Drillbridge token expression, thanks to the powerful expression language being used. Now consider this enhanced query example:

SELECT * FROM TRANSACTIONS
WHERE
    SEGMENT2 = '{{
        "name":"Cost Centers", "expression" : "#Cost_Centers.length() >= 13 ? #Cost_Centers.substring(4, 7) : '~~ Cost Center name not long enough to parse ~~'", "sampleValue":"111-222-333-44"
    }}'

Now we are checking the value of an expression first – checking the length of the incoming cost center. If it is at least 13 characters, then the return value of the expression is safe to figure out using the substring method. If the member fails the length test, then we return an arbitrary string instead – one that we have determined cannot match anything at all in the database, so the query returns nothing.

This little trick can be used in situations where we need to parse out substrings from member names but might have some members that don't fit the format we need. Also, thanks to Drillbridge's inventive handling of multiple members with the drill-to-bottom feature, we can use this to effectively filter members from a list of members, such that we keep the ones we want and discard any others that we don't need (for example, it's not uncommon to have "non-conforming" member names for input members).