MDX Solve Order, SCOPE_ISOLATION and the Aggregate() function


Reblog from http://cwebbbi.wordpress.com/2014/10/14/mdx-solve-order-scope_isolation-and-the-aggregate-function/

Solve order in MDX is a mess. Back in the good old days of Analysis Services 2000 it was a difficult concept but at least comprehensible; unfortunately when Analysis Services 2005 was released a well-intentioned attempt at making it easier to work with in fact ended up making things much, much worse. In this post I’m going to summarise everything I know about solve order in MDX to try to make this complicated topic a little bit easier to understand.

If you’re an experienced MDXer, at this point you’ll probably lose interest because you think you know everything there is to know about solve order already. Up until two weeks ago that’s what I thought too, so even if you know everything I say in the first half of this post keep reading – there’s some new stuff at the end I’ve only just found out about.

Let’s start with a super-simple cube built from a single table, with two measures (Sales Amount and Cost Amount) and a Product dimension containing a single attribute hierarchy with two members (Apples and Oranges). Everything is built from the following table:

image

Solve Order and calculated members in the WITH clause

To understand what solve order is and how it can be manipulated, let’s start off looking at an example that uses only calculated members in the WITH clause of a query. Consider the following:

WITH

MEMBER [Measures].[Cost %] AS
DIVIDE([Measures].[Cost Amount],[Measures].[Sales Amount]),
FORMAT_STRING='0.0%'

MEMBER [Product].[Product].[Total Fruit] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]})

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES

There are two calculated members here:

  • Cost % divides Cost Amount by Sales Amount to find the percentage that costs make up of the sales amount
  • Total Fruit sums up the values for Apples and Oranges

The output of the query is as follows:

image

Solve order controls the order in which MDX calculations are evaluated when two or more of them overlap in the same cell. In this case Cost % and Total Fruit are both evaluated in the bottom right-hand cell; Total Fruit is calculated first, giving the values of 30 for Sales Amount and 21 for Cost Amount, and Cost % is calculated after that. The bottom right-hand cell is the only cell where these two calculations overlap, and the only cell where solve order is relevant in this query.

In this case, 70% is the value you would expect to get. However, you can control the solve order of calculations in the WITH clause by setting the SOLVE_ORDER property on each calculated member, like so:

WITH

MEMBER [Measures].[Cost %] AS
DIVIDE([Measures].[Cost Amount],[Measures].[Sales Amount]),
FORMAT_STRING='0.0%',
SOLVE_ORDER=1

MEMBER [Product].[Product].[Total Fruit] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]}),
SOLVE_ORDER=2

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES

image

Now the value in the bottom right-hand corner is 135% instead of 70%: Cost % is calculated first and Total Fruit second, so 60% + 75% = 135%. The SOLVE_ORDER property of a calculated member is an integer value, and the lower the SOLVE_ORDER value the earlier the calculation is evaluated, so giving Cost % a solve order of 1 and Total Fruit a solve order of 2 forces Cost % to be calculated first, even though in this case it gives what is clearly an ‘incorrect’ result.

Solve Order and calculated members defined on the cube

Things now get a bit more complicated. There’s a different way of controlling solve order if your calculations are defined on the cube itself: in this case, solve order is determined by the order in which the calculations appear on the Calculations tab. So if the Calculations tab of the Cube Editor contains the calculations in this order:

CREATE MEMBER CURRENTCUBE.[Measures].[Cost %] AS
DIVIDE([Measures].[Cost Amount],[Measures].[Sales Amount]),
FORMAT_STRING='0.0%';

CREATE MEMBER CURRENTCUBE.[Product].[Product].[Total Fruit] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]});

image

…and you run the following query:

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES

You get the incorrect result again:

image

…but if you change the order of the calculations so that Total Fruit comes first:

image

…and rerun the same query, you get the correct results:

image
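For reference, the reordered script on the Calculations tab would now look like this (the two definitions are unchanged, only their order is swapped):

CREATE MEMBER CURRENTCUBE.[Product].[Product].[Total Fruit] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]});

CREATE MEMBER CURRENTCUBE.[Measures].[Cost %] AS
DIVIDE([Measures].[Cost Amount],[Measures].[Sales Amount]),
FORMAT_STRING='0.0%';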

The SOLVE_ORDER property can also be used with calculations defined on the cube to override the effect of the order of calculations. So defining the following calculations on the cube:

CREATE MEMBER CURRENTCUBE.MEASURES.[Cost %] AS
DIVIDE([Measures].[Cost Amount], [Measures].[Sales Amount]),
FORMAT_STRING='PERCENT', SOLVE_ORDER=2;

CREATE MEMBER CURRENTCUBE.[Product].[Product].[Total Fruit] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]}), SOLVE_ORDER=1;

image

…means that, even though Total Fruit comes after Cost % on the Calculations tab, because it has a lower solve order set using the SOLVE_ORDER property it is evaluated before Cost % and the query still returns the correct value:

image

Solve order and calculations defined in the WITH clause and on the cube

What happens if some calculations are defined on the cube, and some are defined in the WITH clause of a query? By default, calculations defined on the cube always have a lower solve order than calculations defined in the WITH clause of a query; the SOLVE_ORDER property has no effect here. So if Total Fruit is defined in the WITH clause and Cost % on the cube, you get the incorrect result:

image

WITH

MEMBER [Product].[Product].[Total Fruit] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]})

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES

image

Of course, if Total Fruit is defined on the cube and Cost % is defined in the WITH clause you will get the correct answer. However, usually measures like Cost % are defined on the cube and it’s calculations like Total Fruit, which define custom groupings, that are defined on an ad hoc basis in the WITH clause. This is a problem.
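For completeness, that arrangement (Total Fruit defined on the cube, Cost % in the WITH clause) would look something like this, and returns the correct 70% in the bottom right-hand cell:

WITH

MEMBER [Measures].[Cost %] AS
DIVIDE([Measures].[Cost Amount],[Measures].[Sales Amount]),
FORMAT_STRING='0.0%'

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES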

The SCOPE_ISOLATION property

This default behaviour of calculations defined on the cube always having a lower solve order than calculations in the WITH clause can be overridden using the SCOPE_ISOLATION property. Setting SCOPE_ISOLATION=CUBE for a calculated member defined in the WITH clause will give that calculated member a lower solve order than any calculations defined on the cube. So, with Cost % still defined on the cube the following query now gives the correct results:

WITH

MEMBER [Product].[Product].[Total Fruit] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]}),
SCOPE_ISOLATION=CUBE

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES

image

The Aggregate() function

Using the MDX Aggregate() function (and in fact also the VisualTotals() function – but you probably won’t ever want to use it) inside a calculation has a similar effect to the SCOPE_ISOLATION property in that it forces a calculation to be evaluated at a lower solve order than anything else. Therefore, in the previous example, instead of using the SCOPE_ISOLATION property you can change the calculation to use the Aggregate() function instead of Sum() and get the correct results:

WITH

MEMBER [Product].[Product].[Total Fruit] AS
AGGREGATE({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]})

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES

image

The general rule is, therefore, whenever you are creating custom-grouping type calculated members like Total Fruit in the WITH clause of a query, to use the Aggregate() function rather than Sum(). The fact that Aggregate() takes into account the AggregateFunction property of each measure on the cube (so that distinct count, min and max measures are dealt with correctly) is another good reason to use it.
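For example, if the cube also contained a hypothetical measure [Measures].[Max Price] whose AggregateFunction was Max, a custom grouping built with Aggregate() would return the maximum of the Apples and Oranges values for that measure, where Sum() would incorrectly add them:

WITH

MEMBER [Product].[Product].[Total Fruit] AS
AGGREGATE({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]})

SELECT
{[Measures].[Max Price]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES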

Using the Aggregate() function in calculations defined on the cube has the same effect. Even when the Total Fruit calculated member is defined after Cost % on the Calculations tab, as here:

image

…so long as Total Fruit uses the Aggregate() function, running the test query gives the correct result:

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Total Fruit]}
ON ROWS
FROM SALES

image

There are some very interesting details about the way Aggregate() changes solve order though.

First of all, using the Aggregate() function in a calculated member doesn’t change the solve order of the whole calculation, just the part of the calculation that uses the Aggregate() function. With the following calculations defined on the cube:

CREATE MEMBER CURRENTCUBE.[Measures].[Cost %] AS
DIVIDE([Measures].[Cost Amount],[Measures].[Sales Amount]),
FORMAT_STRING='0.0%';

CREATE MEMBER CURRENTCUBE.[Product].[Product].[One Aggregate] AS
AGGREGATE({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]});

CREATE MEMBER CURRENTCUBE.[Product].[Product].[One Sum] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]});

CREATE MEMBER CURRENTCUBE.[Product].[Product].[Two Aggregates] AS
AGGREGATE({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]})
+
AGGREGATE({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]});

CREATE MEMBER CURRENTCUBE.[Product].[Product].[Two Sums] AS
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]})
+
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]});

CREATE MEMBER CURRENTCUBE.[Product].[Product].[One Aggregate One Sum] AS
AGGREGATE({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]})
+
SUM({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]});

…running the following query:

SELECT
{[Measures].[Sales Amount],
[Measures].[Cost Amount],
MEASURES.[Cost %]}
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[One Aggregate],
[Product].[Product].[One Sum],
[Product].[Product].[Two Aggregates],
[Product].[Product].[Two Sums],
[Product].[Product].[One Aggregate One Sum]}
ON ROWS
FROM SALES

…gives these results:

image

The value returned for the calculation [One Aggregate One Sum], which contains both an Aggregate() and a Sum(), shows that the value returned by Aggregate() is evaluated at a different solve order than the value returned by Sum(), even when they are inside the same calculated member.

Furthermore, in some very obscure cases the contents of the set passed to the Aggregate() function determine whether its special solve order behaviour happens or not. I don’t know for sure what all those cases are but I have seen this happen with time utility (aka date tool aka shell) dimensions. Here’s an example.

The demo cube I’ve been using in this post has been changed to add a new dimension, called Data Type, which has just one hierarchy with one member on it called Actuals; Data Type is a fairly standard time utility dimension. The Cost % calculation has also been changed so that it’s now a calculated member on the Data Type dimension, although it is still defined on the cube. Here’s its new definition:

CREATE MEMBER CURRENTCUBE.[Data Type].[Data Type].[Cost %] AS
DIVIDE(
([Measures].[Cost Amount],[Data Type].[Data Type].&[Actuals]),
([Measures].[Sales Amount],[Data Type].[Data Type].&[Actuals])),
FORMAT_STRING='0.0%';

Now if I run the following query:

WITH

MEMBER [Product].[Product].[Simple Set] AS
AGGREGATE({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]})

MEMBER [Product].[Product].[Nextmember Function Used] AS
AGGREGATE({[Product].[Product].&[Apples],
[Product].[Product].&[Apples].NEXTMEMBER})

MEMBER [Product].[Product].[Descendants Function Used] AS
AGGREGATE(DESCENDANTS({[Product].[Product].&[Apples],
[Product].[Product].&[Oranges]}))

MEMBER [Product].[Product].[Descendants Function Used Twice] AS
AGGREGATE({
DESCENDANTS([Product].[Product].&[Apples]),
DESCENDANTS([Product].[Product].&[Oranges])
})

MEMBER [Product].[Product].[Descendants Function Used Twice With Union] AS
AGGREGATE(
UNION(
DESCENDANTS([Product].[Product].&[Apples]),
DESCENDANTS([Product].[Product].&[Oranges])
))

SELECT
{[Measures].[Sales Amount]}
*
[Data Type].[Data Type].ALLMEMBERS
ON COLUMNS,
{[Product].[Product].&[Apples],
[Product].[Product].&[Oranges],
[Product].[Product].[Simple Set],
[Product].[Product].[Nextmember Function Used],
[Product].[Product].[Descendants Function Used],
[Product].[Product].[Descendants Function Used Twice],
[Product].[Product].[Descendants Function Used Twice With Union]}
ON ROWS
FROM [Sales With Data Type]

I get these results:

image

Note that for some of the calculations, the Aggregate() function results in a lower solve order in the way we’ve already seen, but not for all of them. Using the NextMember() function, or having two Descendants() functions without wrapping them in a Union() function, seems to stop SSAS assigning the calculation a lower solve order. Ugh. Luckily, though, I have only been able to replicate this with calculated members from two non-measures dimensions; if Cost % is a calculated measure Aggregate() always gives the lower solve order. Apparently this is something that SSAS does on purpose to try to recognise ‘visual total’-like calculated members and make them work the way you want automatically. This is definitely something to beware of if you are using time utility dimensions and calculations on other dimensions though, as it may result in incorrect values being displayed or performance problems if you’re not careful.

[Thanks to Gabi Münster for showing me how Aggregate() works with different sets and Marius Dumitru for confirming that this is intended behaviour]

Pentaho Data Integration scheduling with Jenkins


“As a System Administrator I need to find a scheduling solution for our Pentaho Data Integration Jobs.”
Reblog from http://opendevelopmentnotes.blogspot.com/2014/09/pentaho-data-integration-scheduling.html
Scheduling is a crucial task in all ETL and Data Integration processes. The scheduling options available in the community edition of Pentaho Data Integration (Kettle) basically rely on the operating system’s capabilities (cron on Linux, Task Scheduler on Windows), but there is at least one other free, open source and solid alternative for job scheduling: Jenkins.
Jenkins is a Continuous Integration tool, the de facto standard in Java projects, and it’s so extensible and easy to use that it does a perfect job of scheduling Jobs and Transformations developed in Kettle.
So let’s start building a (probably) production-ready scheduling solution.

System configuration

OS: Oracle Linux 6
PDI: 5.1.0.0
Java: 1.7
Jenkins: 1.5

Install Jenkins

Installing Jenkins on Linux is trivial: just run a few commands and within minutes you will have the system up and running.

#sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
#sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
#sudo yum install jenkins

At the end of the installation process you will have your Jenkins system ready to run.

Before starting Jenkins, verify that Java is installed by running:

#java -version

and if it’s not found on your system just install it with:

#sudo yum install java

Now it’s time to start Jenkins:

#sudo service jenkins start

Open your browser and go to the console page (by default Jenkins listens on port 8080).

Resolve port conflict

If you are not able to navigate to the web page check the log file:

#sudo cat /var/log/jenkins/jenkins.log

Probably there is a port conflict (in my case I was running another web application on the same machine).

Look at your config file:

#sudo nano /etc/sysconfig/jenkins

and change the default ports:

JENKINS_PORT="8082"

JENKINS_AJP_PORT="8011"
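After changing the ports, restart Jenkins so the new values take effect:

#sudo service jenkins restart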

Job example

Now that Jenkins is up and running it’s time to test a simple Job.

The transformation and job are self-explanatory:

Scheduling

Go to the Jenkins web console and click on New Item.
Give it a name and check the Freestyle project box.
Set the schedule (every minute, only to test the job).
Now fill in the Build section with the Kitchen command (see the sketch after these steps) and save the project.
Just wait one minute and look at the left side of the page, where you will find your Job running.
Click the build item and select Console Output. You will be able to see the main output of Kitchen.
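As a minimal sketch of those two settings (the PDI install path and job file below are assumptions, adjust them to your environment), the Build Triggers schedule uses cron syntax and the Build section runs Kitchen as an “Execute shell” step:

# Build Triggers -> Build periodically (every minute, for testing only)
* * * * *

# Build -> Execute shell
/opt/pentaho/data-integration/kitchen.sh -file=/opt/etl/jobs/daily_load.kjb -level=Basic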

CONCLUSION

Jenkins is a powerful tool and, even if it’s not its primary purpose, you can use it as your Enterprise Scheduler, taking advantage of all the options for executing, monitoring and managing your Kettle Jobs.
Explore all the features that Jenkins provides and build your own free, solid and open source scheduling solution.
Take advantage of the big Jenkins community in order to meet the most complex scheduling scenarios and, from time to time, if you find anything interesting, remember to give it back to the community.

Creating a connection to SAP HANA using Pentaho PDI


 

Reblog from http://scn.sap.com/community/developer-center/hana/blog/2014/09/04/creating-a-connection-to-sap-hana-using-pentaho-pdi

In this blog post we are going to learn how to create a HANA Database Connection within Pentaho PDI.

1) Go to the SAP HANA CLIENT installation path and copy “ngdbc.jar”

* You can get SAP HANA CLIENT & SAP HANA STUDIO from: https://hanadeveditionsapicl.hana.ondemand.com/hanadevedition/

 

1.png

2) Copy and paste the jar file to: <YourPentahoRootFolder>/data-integration/lib

2.png
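On Linux, for example, the copy could look like this (the client install path below is an assumption, adjust it to your system):

cp /usr/sap/hdbclient/ngdbc.jar <YourPentahoRootFolder>/data-integration/lib/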

3) Start Pentaho PDI and create a new Connection

* Make sure your JAVA_HOME environment variable is set correctly.

3.png

3_1.png

3_2.png

4) Create a transformation, then right-click on Database connections to create a new database connection

4.png

 

5) Select the “Generic Database” connection type and set Access to “Native (JDBC)”

 

5.png

6) Fill in the following parameters under Settings:

Connection Name: NAMEYOURCONNECTION

Custom Connection URL: jdbc:sap://YOUR_IP_ADDRESS:30015 (30015 is the JDBC port for instance 00; in general the pattern is 3<instance number>15)

Custom Driver Class Name: com.sap.db.jdbc.Driver

User Name: YOURHANAUSER

Password: YOURHANAPASSWORD

6.png

 

7) Test your connection.

7.png

How to create custom reports using data from MongoDB


Originally posted on Tech Ramblings:

The demonstration below shows how to visually create a report directly against data stored in MongoDB (with no coding required). The following topics are shown:

  1. Pentaho Data Integration tool is used to create a transformation that does the following:
    1. Connect to and query MongoDB.
    2. Query results are sorted.
    3. Sorted results are grouped.
  2. Pentaho Report Designer is used to visually create a report by using the data from a PDI transformation.


Oracle convert a string field with a list of elements in a set of rows


I will show one tricky way of creating a subquery to build a set of rows from a string field which contains a list of values separated by commas.

Given the example of a string field with the following content ‘A,B,C,D’. Using REGEXP_SUBSTR you can extract only one of the 4 matches (A, B, C, D) at a time: the regex [^,]+ matches any character sequence in the string which does not contain a comma.

If you run:

SELECT REGEXP_SUBSTR ('A,B,C,D','[^,]+') as set_of_rows
FROM   DUAL

you’ll get A.

and if you try running:

SELECT REGEXP_SUBSTR ('A,B,C,D','[^,]+',1,1) as set_of_rows
FROM   DUAL

you’ll also get A, only now we also passed two additional parameters: start looking at position 1 (which is the default), and return the 1st occurrence.

Now let’s run:

SELECT REGEXP_SUBSTR ('A,B,C,D','[^,]+',1,2) as set_of_rows
FROM   DUAL

this time we’ll get B (the 2nd occurrence), and using 3 as the last parameter will return C, and so on.

The use of a recursive CONNECT BY along with LEVEL makes sure you’ll receive all the relevant results (not necessarily in the original order, though!):

SELECT DISTINCT REGEXP_SUBSTR ('A,B,C,D','[^,]+',1,LEVEL) as set_of_rows
FROM   DUAL
CONNECT BY REGEXP_SUBSTR ('A,B,C,D','[^,]+',1,LEVEL) IS NOT NULL
ORDER BY 1

will return:

set_of_rows
A
B
C
D

which not only contains all 4 results, but also breaks them into separate rows in the result set, which makes it useful inside an IN() SQL clause (see the sketch below).

This query “abuses” the CONNECT BY functionality to generate rows in a query on DUAL. As long as the expression passed to CONNECT BY is true, it will generate a new row and increase the value of the pseudo-column LEVEL. LEVEL is then passed to REGEXP_SUBSTR to get the nth value when applying the regular expression.
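As a sketch of that IN() usage (the orders table and its status column are hypothetical):

SELECT *
FROM   orders
WHERE  status IN
       (SELECT DISTINCT REGEXP_SUBSTR ('A,B,C,D','[^,]+',1,LEVEL)
        FROM   DUAL
        CONNECT BY REGEXP_SUBSTR ('A,B,C,D','[^,]+',1,LEVEL) IS NOT NULL)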

World Cup Dashboard 2014 – in 15 minutes


anonymousbi:

Awesome dashboard, by the way.

Originally posted on Pentaho Business Analytics Blog:

image

Are you caught up in the World Cup craze? Two of my passions are English football and analytics (hence I’m an SE at Pentaho based in London). So when it came time for this year’s World Cup, naturally I combined my passions to analyse who is going to win and what makes a winning team?

It turns out that a Big Data Analytics team in Germany tried to predict the winners based on massive data sets. Thus far three of their top five predicted teams have faltered. So what went wrong? Is Big Data not accurate? Are analytics not the answer?

Fret not. Exploring their methodology, their analysis was based on only one source of data. At Pentaho, we believe that the strongest insights come from blended data. We don’t just connect to large data sets; we make connecting to all data easy, regardless of format or location.

So why is my little…
